The US Department of Energy awarded a total of over seven million node hours on the Summit supercomputer to 20 research teams.

Energy giant General Electric (GE) will be using one of the world’s most powerful supercomputers, IBM’s Summit, to run two new research projects that could boost the production of cleaner power. Last month, the US Department of Energy (DoE), which hosts Summit at Oak Ridge National Laboratory, awarded a total of over seven million node hours on the supercomputer to 20 research teams, two of which belong to GE Research.

The Summit supercomputing system is the second most powerful supercomputer in the world, behind the Fugaku supercomputer in Japan. Built by IBM, Summit boasts computing power equivalent to 70 million iPhone 11s, which scientists can leverage to run large computations such as simulating systems’ behavior or solving complex physics problems.

GE has now lifted the veil on the two projects that were selected by the DoE to run on Summit, and they will both address sticking points in the generation of renewable energy. 

One team, led by GE researcher Jing Li, received 240,000 node hours to advance research in the field of offshore wind power. Using the Summit supercomputer, Li hopes to run complex simulations to study new ways of controlling and operating offshore turbines to optimize wind production.

In particular, Li’s team will be looking at a wind phenomenon known as coastal low-level jets, which occur along many coastlines and can affect the performance and reliability of offshore wind turbines. Thanks to high-fidelity computational models, the researchers will simulate interactions between wind farms and coastal low-level jets, to inform future, more efficient designs for the farms. 

The findings will also be used to guide the DoE’s ExaWind project, which is designed to accelerate the US’s deployment of onshore and offshore wind plants. 

Doing so requires a precise understanding of the ways that natural wind phenomena interact with the built infrastructure. Simulating these interactions, however, comes at a large computational cost, due to the many factors at play. Most research projects are currently only able to predict the behavior of a small number of turbines. 

The ExaWind project is aiming to generate predictive simulations of wind farms with tens of megawatt-scale wind turbines dispersed over an area of many square kilometers with complex terrain – a computation that could involve simulations with up to 100 billion grid points.  
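
For a rough sense of why simulations at that scale call for leadership-class machines, here is a minimal back-of-envelope sketch in Python. The 100-billion-point figure comes from the ExaWind target above; the number of flow variables per grid point and the use of double precision are assumptions made purely for illustration.

```python
# Back-of-envelope estimate (illustrative only) of the memory needed just to
# hold one snapshot of the flow state for an ExaWind-scale simulation.

GRID_POINTS = 100e9        # 100 billion grid points (figure from the article)
VARS_PER_POINT = 5         # assumed: e.g. three velocity components, pressure, temperature
BYTES_PER_VALUE = 8        # assumed: 64-bit (double precision) floating point

snapshot_bytes = GRID_POINTS * VARS_PER_POINT * BYTES_PER_VALUE
print(f"One flow snapshot: {snapshot_bytes / 1e12:.0f} TB")  # roughly 4 TB
```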

The huge compute allocation granted to Li’s team on Summit is, therefore, a promising step towards meeting the ExaWind challenge.

GE researcher Michal Osusky was awarded 256,000 node hours on Summit for a separate research project, which focuses on applying machine-learning methods to improve the design of physical machines such as jet engines and power-generation turbines.

Combining machine learning and simulation, Osusky’s team could mimic real-world engines quickly and run virtual tests to verify designs faster than with conventional means. 

“These simulations would provide unprecedented insight into what’s happening in these complex machines, way beyond what is possible through today’s experimental tests,” said Osusky. “The hope is we can utilize a platform like this to accelerate the discovery and validation process for cleaner, more efficient engine designs that further promote our decarbonization goals.” 
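
To make the idea of pairing machine learning with simulation more concrete, here is a minimal illustrative sketch, not GE’s actual workflow: a cheap surrogate model is fitted to the outputs of an expensive simulation and then used to screen many candidate designs quickly. Every function and parameter below is hypothetical.

```python
import numpy as np

def expensive_simulation(design: np.ndarray) -> float:
    # Stand-in for a costly physics simulation of, say, turbine efficiency.
    return float(1.0 - 0.5 * (design[0] - 0.3) ** 2 - 0.2 * (design[1] - 0.7) ** 2)

rng = np.random.default_rng(0)
designs = rng.uniform(0, 1, size=(200, 2))               # sampled design parameters
outputs = np.array([expensive_simulation(d) for d in designs])

# Fit a quadratic least-squares surrogate on features [1, x1, x2, x1^2, x2^2, x1*x2].
X = np.column_stack([np.ones(len(designs)), designs, designs ** 2,
                     designs[:, 0] * designs[:, 1]])
coeffs, *_ = np.linalg.lstsq(X, outputs, rcond=None)

def surrogate(design) -> float:
    x1, x2 = design
    feats = np.array([1.0, x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
    return float(feats @ coeffs)

# Screen many candidate designs with the cheap surrogate instead of the simulator.
candidates = rng.uniform(0, 1, size=(100_000, 2))
best = candidates[np.argmax([surrogate(c) for c in candidates])]
print("Most promising candidate design:", best)
```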

ROOTING FOR AN EXASCALE FUTURE 

The Summit supercomputer, with its 200 petaflops of compute power, is likely to give Li’s and Osusky’s research efforts a significant push, but GE already has its eyes on even bigger systems.

The DoE is already investing in the next generation of supercomputing, known as exascale, which refers to systems capable of performing a quintillion (10^18) calculations per second. Oak Ridge National Laboratory is currently preparing to launch the US’s first exascale system, called Frontier.

Set to debut in 2021 and open to users in 2022, Frontier is a $600 million system expected to deliver 1.5 exaflops of performance, roughly seven and a half times the peak compute power of Summit. Eight research projects have already been selected to gain early access to the system. They range from simulating a Milky Way-like galaxy and studying the way that viruses enter host cells to probing the universal properties of turbulence.
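
As a quick sanity check on those scales, the 200-petaflop and 1.5-exaflop figures quoted above work out as follows; the snippet below is just unit arithmetic on the article’s numbers.

```python
# Unit arithmetic on the figures quoted in the article.
PETAFLOP = 1e15    # floating-point operations per second
EXAFLOP = 1e18     # a quintillion floating-point operations per second

summit_peak = 200 * PETAFLOP       # Summit, as quoted above
frontier_target = 1.5 * EXAFLOP    # Frontier's target performance

print(f"Frontier / Summit: {frontier_target / summit_peak:.1f}x")        # 7.5x
print(f"Calculations per second at 1.5 exaflops: {frontier_target:.1e}")  # 1.5e+18
```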

SEE: Fastest supercomputer in the UK is ready to go: Here’s what it’s going to do

Other laboratories across the country are also preparing for exascale. Argonne National Laboratory, for example, has partnered with HPE to deliver Polaris, a testbed system intended to prepare researchers and their codes for the lab’s upcoming exascale machine, Aurora.

And GE is keen to be part of the upgrade when exascale supercomputers come online. “Government agencies have collaborated with industry and academic partners to propel the computational science and engineering workforce and ecosystem from the gigascale of the 90s through terascale to today’s petascale and the imminent exascale – each leap in ‘scale’ 1,000 times the capability of the prior,” said Richard Arthur, senior director of computational methods research at GE Research. 

“The marvel of sustained exponential breakthroughs in hardware and software technologies, enduring decades, shapes computational modeling into a foundational instrument for scientific insights.” 

Some experts, however, have previously voiced doubts that the exascale revolution is anywhere near. Erich Strohmaier, one of the authors of the well-established Top500 list, which regularly ranks the 500 most powerful supercomputers in the world, recently predicted that a supercomputer capable of achieving one exaflop should not be expected before the second half of the 2020s, a forecast that some of his colleagues described as optimistic.

Source: ZDNet