as published in HPCwire.
If you ever want to push your knowledge of supercomputing and science, go to the Energy High Performance Computing Conference, hosted annually by the Ken Kennedy Institute at Rice University in Houston. Any gathering focused on a specific vertical market hones the discussion to a specific set of workloads and innovations, and the energy industry does so with massively scalable HPC. Seven of the 50 most powerful supercomputers on the November 2022 Top500 list are at energy companies – and those are just the ones we know about.
There’s a good reason behind all that horsepower. Like many scientific endeavors, oil exploration is a pursuit that only gets more challenging each year. The next reservoir is harder to explore and extract than the previous one. Like hidden Easter eggs sought by enthusiastic children, the more obvious and accessible ones go first, and each successive one takes more effort to gather.
At previous iterations of this event, I’ve learned about successive innovations in the never-ending refinements of seismic modeling – Kirchhoff migration, subsalt depth imaging, multi-azimuth acquisition, etc. – to provide more accurate maps to long-buried treasure. As soon as I think I’ve got it, someone puts up another slide full of partial differential equations that represent the new New Math.
“The cycle time for the turnover of hot technologies is getting faster,” said John Etgen, Distinguished Advisor for Seismic Imaging at BP, in a day-one keynote entitled “What Really Matters to our Industry in the Area of HPC and the Era of Exascale.” Etgen predicted that at least one energy company would have an Exascale-class data center “within a year or two,” although it might not be publicly disclosed.
The newest application driving that level of scale was eventually revealed. During a second-morning Birds of a Feather session, ExxonMobil’s Charlie Fazzino called it out: “Elastic FWI is the big problem on the horizon.”
FWI in this context stands for Full Wavefield Inversion, a technique for combining multiple seismic wave types to create a higher-resolution subsurface image. It was described in greater detail (complete with slides full of partial differential equations) by Fazzino’s ExxonMobil colleague, Partha Routh. The crux of the most complicated mathematics was escaping local minima in a highly non-linear misfit landscape, so that the inversion still converges toward the global minimum.
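The local-versus-global-minimum problem can be seen in miniature with a toy example. This is not ExxonMobil’s algorithm – the `misfit` function, step sizes, and grid of starting models below are all invented for illustration – but it shows why a single gradient descent from a poor starting model stalls in a local minimum, while exploring multiple starting models (a crude stand-in for the multi-scale and frequency-continuation strategies used in real FWI) recovers the global one.

```python
import math

# Toy non-convex objective: a deep global minimum near x = 3.7
# surrounded by shallower local minima -- loosely analogous to an
# FWI misfit surface. Entirely invented for illustration.
def misfit(x):
    return (x - 4) ** 2 + 3 * math.sin(3 * x)

def gradient(x, h=1e-6):
    # Central-difference numerical gradient.
    return (misfit(x + h) - misfit(x - h)) / (2 * h)

def gradient_descent(x0, lr=0.01, steps=2000):
    x = x0
    for _ in range(steps):
        x -= lr * gradient(x)
    return x

# A single descent from a poor starting model stalls in a
# shallow local minimum near x = -0.13 (misfit ~ 15.9).
local = gradient_descent(x0=0.0)

# Multiple starting models: run descent from a grid of starts
# and keep whichever result has the lowest misfit.
starts = [i * 0.5 for i in range(-4, 21)]  # -2.0 .. 10.0
best = min((gradient_descent(s) for s in starts), key=misfit)
# `best` lands near x = 3.7, where misfit is about -2.9.
```

The grid of starting models here is deliberately naive; the point is only that escaping local minima requires more machinery than a single descent, which is what drives the computational cost Routh described.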
If the detailed mathematical explanation caused one’s attention to drift, Routh’s payoff slide contained a simple statement that would get any audience to sit up and take notes. “It saved us four wells,” said Routh, showing the visualization that led ExxonMobil to “deprioritize” four planned wells, while converting a fifth to a “high-grade opportunity.” At a cost upwards of $350 million per well, that’s a savings of more than a billion dollars, or if you like, about 1.4 Gigabucks. That kind of savings can fund an Exascale initiative.
New Energy, New HPC
The advancements in seismic imaging and reservoir simulation, though an evergreen topic at Rice, were only a starting point for the conversation. Every energy company in attendance expressed acute awareness of public and political pressure to look toward a future beyond fossil fuels, and the conference content has expanded to these new directions in energy. (The conference rebranded to change the words “Oil & Gas” in its name to “Energy” beginning in 2022.)
This year, an entire session was dedicated to the “Role of HPC in the Energy Transition,” presented by Samir Khanna of BP. Khanna outlined complex HPC simulation in green energy development. In one noteworthy example, he showed a “fully coupled model for floating offshore wind turbines,” combining the multiple models of wind, waves, current, and soil simulation. Carbon capture, utilization, and storage (CCUS) was another domain that came up in multiple sessions.
Furthermore, the nature of HPC itself is changing. Cloud computing has been a focus topic in past conferences, and it got its mentions this year. “Cloud computing has become a parameter,” said Etgen. But for all the enthusiasm around cloud in past years, it was a minor concern compared to the new frontier in HPC: artificial intelligence.
While none of the energy companies presented on machine learning use cases, Dan Stanzione of TACC gave a highly entertaining session in which he demonstrated the potential of ChatGPT for HPC-related tasks, such as converting segments of code between languages. Still, he pointed out that where it fails, it fails spectacularly, including an example in which it generated a false URL as a reference. “We’ve created a computer that can lie with confidence when it doesn’t know, just like a human can,” Stanzione quipped. He spoke similarly about the benefits – with limits – of exploring reduced precision as a means for improving performance. “I can’t do FP8,” he said. “At some point you’re just guessing.”
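Stanzione’s FP8 quip has a simple arithmetic basis: an 8-bit float format such as E4M3 carries only 3 explicit mantissa bits, so most values round a long way from their true magnitude. The sketch below is a toy model – `round_to_mantissa_bits` is an invented helper that simulates mantissa rounding only, ignoring the exponent-range limits and subnormals of real FP8/FP16/FP32 formats – but it shows how quickly precision evaporates.

```python
import math

def round_to_mantissa_bits(x, bits):
    """Round x to a float with `bits` explicit mantissa bits.
    Simplified model of reduced precision: mantissa rounding only,
    no exponent-range clipping or subnormal handling."""
    if x == 0:
        return 0.0
    m, e = math.frexp(x)          # x = m * 2**e, with 0.5 <= |m| < 1
    scale = 2.0 ** (bits + 1)     # 1 implicit bit + `bits` explicit bits
    return math.ldexp(round(m * scale) / scale, e)

value = math.pi
fp32_like = round_to_mantissa_bits(value, 23)  # ~FP32 mantissa width
fp16_like = round_to_mantissa_bits(value, 10)  # → 3.140625 (~0.03% error)
fp8_like  = round_to_mantissa_bits(value, 3)   # → 3.25 (~3.4% error)
```

A 3% relative error per value, compounded over millions of accumulations, is where “at some point you’re just guessing” comes from – which is why practical mixed-precision schemes keep accumulators at higher precision.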
Although there was clear enthusiasm for the future of AI in energy, the specific use cases were still underdeveloped. “It seems like machine learning is everywhere and nowhere,” said Diego Klahr, VP Computational Science and Engineering at TotalEnergies. “At scale, it’s mainly coupling with simulation.”
Fazzino was only somewhat more expansive, saying “There are some [ML] applications around interpretation.” Elizabeth L’Heureux, head of HPC at BP, admitted AI is in its early stages, a new tool in search of problems to solve.
In addition to Stanzione’s talk, two U.S. national labs were represented, offering a look at the forefront of scalability. Bronson Messer of Oak Ridge National Laboratory discussed the first exascale system on the Top500 list (setting aside Chinese advancements) in a talk titled “Frontier: The World’s First Exascale Supercomputer,” and Gary Grider of Los Alamos National Laboratory previewed the upcoming Venado supercomputer in his presentation, “LANL Platform Planning and Upgrade.” Both national lab speakers joined the energy company representatives on the panel at the second-day BOF.
Left to right: Gary Grider, LANL; Elizabeth L’Heureux, BP; Diego Klahr, TotalEnergies; Bronson Messer, ORNL; Charlie Fazzino, ExxonMobil
Remembering Scott Morton
The proceedings kicked off on a somber note, as the conference paused to acknowledge the contributions of Scott Morton, who passed away last September. Morton had a 30-year career in energy and HPC, both on the vendor side (Thinking Machines, Silicon Graphics, Cray Research) and the energy side (Shell, Hess), and he was a founding committee member of the Rice conference.
Keith Gray, the former head of HPC at BP who is now an advisor at Intel, led the remembrance, and he presented an honorary plaque to be delivered to Morton’s family. To commemorate Morton’s legacy, Gray announced the creation of the Scott Morton Memorial Graduate Fellowship to benefit graduate students’ work in the industry.
Keeping up with the Latest, Leaning into the Future
The Rice Energy HPC conference is a critical annual touchpoint for discussing the latest developments at the pinnacle of commercial supercomputing. But in case you missed it, fear not: Most of the presentations are being made available on the Rice Ken Kennedy Institute YouTube channel. More importantly, it is a conference for examining strategic directions, under both traditional and evolving definitions of energy and HPC.
The energy industry is embracing its future, and that future is just as dependent on supercomputing. New discoveries require New Math, and the Rice Energy HPC Conference is where you’ll get a first look at it.