Welcome, members of the oil and gas high-performance computing community! This time of year is always exciting for us as committee members, as it marks the culmination of many months of collaboration to bring together some of the world’s leaders in oil and gas and technology for the annual Rice Oil and Gas High-Performance Computing Conference, now in its 11th year.
As a committee, we agreed that a question relevant for this year’s conference is: What is the role of exascale, and how can it transform our industry? The race to deploy the first exascale supercomputer, a system capable of delivering 10^18 floating-point operations per second, is on. Major programs in China, Europe, Japan, and the United States are all focused on this goal. While an exascale machine is inevitable and will be realized in the near future (possibly by 2020 or sooner), being “first” earns only bragging rights and may fall short on more important criteria, such as programmability and usability by a broader community. Governments (yes, more than one) will likely build, buy, and deploy one or more of these systems. While technology trickle-down is inevitable, will there be a market for exascale systems that parallels petascale?
Some will argue that exascale is no different from petascale, but that is a hard argument to make when we consider the megatrends that have unfolded since the IBM Roadrunner at Los Alamos National Laboratory in New Mexico clocked in at just above 1 petaflop in 2008. Specifically, with the breakdown of Dennard scaling around 2006 (the end of “free performance”) and, at 51 years of age, the breakdown of Moore’s Law (the end of “ever cheaper transistors”), increasing “useful” application performance is ever more difficult and requires not only major capital and operational investments, but also the skills and talent to program the systems.
Assuming your company has the capital to buy such a system and the wallet to pay for its power, the more critical question is: How easy will it be to program and to port codes to this new system? For an exascale system to be a success and be useful to industry, we need not only the hardware, but also trained people and the right software tool chains and development environments that can scale to take advantage of what will undoubtedly be a heterogeneous architecture.
The ability to put these potentially disruptive technologies to good use in the oil and gas industry will require an investment in organizational capability and continued development of new skills. Well-trained, knowledgeable staff who can apply high-performance computing technologies at the exascale level to relevant problems will be a key competitive advantage for companies utilizing these technologies.
Here, the Oil and Gas High-Performance Computing Conference will continue to make a significant contribution by providing a forum for practitioners in the industry to connect with peers, technology providers and academia on topics specific to the oil and gas industry. This is an opportunity to explore questions surrounding the future exascale system, with an eye towards answering: What does an exascale system need to look like for it to be transformative for the energy industry?
The conference will be kicked off by Doug Kothe and later joined by Andrew Siegel, both experts from the US Exascale Computing Project. They will share where the project stands today, how to prepare for next-generation HPC architectures, and how to develop the tools and workforce for the road ahead.
In 2011, Scott Morton (Rice University, formerly at Hess), Henri Calandra (Total), and John Etgen (BP) shared a chart on the evolution of seismic depth imaging suggesting that the industry would need an exascale computer by 2020 to support new innovations in depth imaging. Scott will join a panel of experts to discuss what will be needed to make exascale relevant to industry.
Day two will be kicked off by Ahmed Hashmi, Global Head of Upstream Technology at BP, who will offer an insider’s perspective on the importance of HPC for the energy industry. Leading up to the poster session and networking reception, Kevin Kissell, Technical Director in the Office of the CTO at Google, will explore new paradigms for large-scale computing enabled by cloud platforms.
Where do we go from here? Now that exascale is no longer a faint dream on the horizon, we need to prepare, train, learn, develop, and seek opportunities to engage, so that exascale systems impact our industry as the advent of petascale computing did a decade ago. But don’t panic if you’re not ready for exascale computing: we have designed this year’s conference specifically to help educate and prepare you for it.
Best regards,
2018 OGHPC Committee