Weather prediction, and climate modeling even more so, is famous for its imprecision. Yet both keep getting better at anticipating what the planet will do, thanks mainly to two advances: better predictive models and more computing power.
A recent paper from a team led by Daniel Klocke at Germany's Max Planck Institute describes what many in the climate modeling community see as a breakthrough: a model with nearly kilometer-scale resolution that combines weather forecasting and climate simulation.
Strictly speaking, the model's resolution is not exactly one kilometer; its grid spacing is about 1.25 kilometers.
At that level of detail, though, the distinction is largely academic. The model covers Earth's land and ocean surface with an estimated 336 million discrete cells, and pairs each one with an "atmospheric" cell stacked above it, for a total of 672 million elements to calculate.
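As a quick sanity check on those numbers, a back-of-envelope calculation (my own arithmetic, not a figure from the paper) shows how many cells of roughly that size it takes to tile the planet; the icosahedral grid's cells are not literal 1.25 km squares, which accounts for the small discrepancy.

```python
# Back-of-envelope estimate (not from the paper): tiling Earth's surface
# with cells roughly 1.25 km across.
EARTH_SURFACE_KM2 = 510_000_000   # approximate surface area of Earth
CELL_SPACING_KM = 1.25            # nominal grid spacing of the new model

cells = EARTH_SURFACE_KM2 / CELL_SPACING_KM**2
print(f"~{cells / 1e6:.0f} million surface cells")   # ~326 million, close to the ~336 million reported
```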
For each of these cells, the researchers ran a set of interconnected models of Earth's basic dynamic processes, divided into "fast" and "slow" systems.
The "fast" systems are the energy and water cycles, in other words, the ingredients of weather. Capturing them faithfully demands very fine resolution, exactly what the new model's 1.25 km grid provides.
To build the model, the team used the ICOsahedral Nonhydrostatic (ICON) model, developed jointly by the German Weather Service and the Max Planck Institute for Meteorology.
The "slow" processes, by contrast, include the carbon cycle and changes in the biosphere and ocean geochemistry. These unfold over years or decades, in stark contrast to the minutes it takes a thunderstorm to cross a single 1.25 km cell.
Tying the fast and slow processes together is the paper's real achievement, a point the authors themselves make. Conventional models that include systems this complex are typically only computationally feasible at resolutions of 40 km or coarser.
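To make the coupling concrete, here is a minimal sketch (my own illustration, not the authors' code) of the usual way such models reconcile the two time scales: the fast component is subcycled with a short time step inside each long step of the slow component. The time steps and toy physics are invented for illustration.

```python
# Illustrative subcycling loop (not the authors' code): many short "weather"
# steps run inside each long "carbon cycle" step.
FAST_DT = 120.0       # seconds per fast step (illustrative value)
SLOW_DT = 86_400.0    # seconds per slow step, i.e. one day (illustrative value)
FAST_STEPS_PER_SLOW = int(SLOW_DT / FAST_DT)

state = {"temperature_K": 288.0, "co2_ppm": 420.0}   # toy global means

def fast_step(s, dt):
    # stand-in for the energy and water cycles: relax toward a fixed value
    s["temperature_K"] += (288.0 - s["temperature_K"]) * dt / 3.6e3

def slow_step(s, dt):
    # stand-in for the carbon cycle: a slow drift in CO2 (~2.5 ppm per year)
    s["co2_ppm"] += 2.5 * dt / (365.25 * 86_400.0)

for day in range(365):
    for _ in range(FAST_STEPS_PER_SLOW):
        fast_step(state, FAST_DT)
    slow_step(state, SLOW_DT)

print(state)   # after one simulated year, CO2 has crept up by roughly 2.5 ppm
```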
Getting there took a combination of clever software engineering and cutting-edge processors. For readers interested in the computational details, the next few paragraphs look at the software and hardware involved.
Much of the underlying codebase was originally written in Fortran, and modernizing legacy Fortran dating from before the 1990s is a notoriously difficult job.

Over time, the original code had accumulated plenty of cruft that made it a poor fit for modern computing architectures. To get around this, the researchers turned to the Data-Centric Parallel Programming (DaCe) framework, which organizes a program around its data movement so it can be mapped efficiently onto modern systems.
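For a sense of what DaCe looks like in practice, here is a tiny, generic example using its Python front end. This is only a sketch of the framework's style under my own assumptions; the actual work involved translating far larger Fortran kernels from ICON, not toy functions like this one.

```python
# Minimal, generic DaCe example (not the ICON port): a numpy-style kernel is
# turned into a data-centric program that DaCe can optimize and compile for
# CPUs or GPUs.
import numpy as np
import dace

N = dace.symbol('N')   # symbolic array size, fixed when the program is called

@dace.program
def relax(field: dace.float64[N], target: dace.float64[N], rate: dace.float64):
    # simple pointwise update standing in for a real model kernel
    field += rate * (target - field)

field = np.zeros(1_000)
target = np.ones(1_000)
relax(field, target, 0.25)   # DaCe compiles the kernel on first call
print(field[:3])             # -> [0.25 0.25 0.25]
```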
Those modern systems were the JUPITER and Alps supercomputers, located in Germany and Switzerland respectively. Both are built around Nvidia's GH200 Grace Hopper superchips.
Each GH200 pairs a graphics processing unit (GPU) of the kind used for AI training (in this case the Hopper architecture) with a central processing unit (CPU) called Grace, which is based on the Arm architecture.
That split between processor types let the researchers assign the "fast" models, with their rapid update requirements, to the GPU, while the CPU ran the slower carbon-cycle models in parallel.
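Conceptually, the arrangement looks like the sketch below (my own illustration with placeholder functions, not the project's scheduling code): the fast physics and the slow carbon cycle are dispatched as concurrent tasks, mirroring how the GPU and CPU halves of each superchip are kept busy at the same time.

```python
# Conceptual illustration only: the fast and slow components are submitted as
# concurrent tasks, mirroring the GPU/CPU split described above.
from concurrent.futures import ThreadPoolExecutor

def fast_physics_block(n_steps):
    # placeholder for the GPU side: many short weather time steps
    return f"fast physics: {n_steps} steps advanced"

def carbon_cycle_step():
    # placeholder for the CPU side: one slow biogeochemistry update
    return "carbon cycle: 1 step advanced"

with ThreadPoolExecutor(max_workers=2) as pool:
    gpu_task = pool.submit(fast_physics_block, 720)
    cpu_task = pool.submit(carbon_cycle_step)
    print(gpu_task.result())
    print(cpu_task.result())
```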
By dividing up the work this way, the team used 20,480 GH200 superchips to simulate 145.7 days of climate in a single day of computing. The model tracked nearly 1 trillion "degrees of freedom," which in this context simply means the total number of values being calculated. It is easy to see why a supercomputer is needed.
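To put that throughput in perspective, a quick calculation of my own (the derived figures are not from the paper) shows how long a century-scale run would take at that rate:

```python
# My own arithmetic based on the reported rate of 145.7 simulated days per
# wall-clock day; the derived numbers are not from the paper.
SIM_DAYS_PER_WALL_DAY = 145.7

century_sim_days = 100 * 365.25
wall_days = century_sim_days / SIM_DAYS_PER_WALL_DAY
print(f"A 100-year simulation would need about {wall_days:.0f} wall-clock days "
      f"(~{wall_days / 30.44:.1f} months).")
```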
So don't expect models of this complexity to be running at your local weather station anytime soon.
Access to that much computing power is scarce, and the big technology companies are more likely to spend it squeezing more output from generative AI, whatever that means for progress in climate modeling.
Still, pulling off a computational feat this ambitious deserves recognition, and with luck, future developments will make simulations of this scale commonplace.
The research findings are accessible as a preprint on arXiv.
This article was originally published by Universe Today. Read the original article.

