|CCT Colloquium Series|
|An Inconvenient Question: Are We Going to Get the Algorithms and Computing Technology We Need to Make Critical Climate Predictions in Time?|
|Richard D. Loft, National Center for Atmospheric Research|
|Director, Technology Development, Computational and Information Systems Laboratory|
|Johnston Hall 338|
|February 06, 2009 - 11:00 am|
The anticipated availability of massively parallel petascale computers in the next few years offers the climate community a golden opportunity to dramatically advance our understanding of the Earth’s climate system and climate change, if these machines can be harnessed to the task. Unfortunately, the fit is not perfect. First, massively parallel systems impose stringent and unavoidable Amdahl’s-law requirements on application scalability. Second, the trade-off between resolution and integration rate, both critical factors in climate modeling, is severe. Third, the increasing complexity of petascale systems, e.g., in the number of cores per chip and the number of chips per system, heightens the tension between architectural trends and programmability. Finally, the size and complexity of climate applications make them difficult to port, adapt, and validate on new architectures; there is no single computational kernel in these models to optimize. This talk will discuss ongoing efforts within the DOE SciDAC and NSF PetaApps programs to both seize this important scientific opportunity and address the increased complexity of petascale systems. Efforts to develop lightweight, incremental, and beneficial scaling improvements in existing climate ocean, land, and sea-ice components will be demonstrated. Similar improvements for the atmosphere will be shown for the High-Order Method Modeling Environment (HOMME), a new dynamical core currently being evaluated within the Community Atmosphere Model (CAM). This progress has improved the scalability and performance of these components to the point that simulations coupling a 50 km atmospheric component to eddy-resolving ocean and sea-ice components are now being attempted at Lawrence Livermore National Laboratory, the National Energy Research Scientific Computing Center, the National Institute for Computational Sciences, and elsewhere.
Further gains are required, and may involve even more complex and far-reaching modifications to our algorithms as well as the use of exotic architectures.
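The Amdahl’s-law constraint mentioned in the abstract can be made concrete with a short sketch. The 1% serial fraction below is an illustrative assumption, not a figure from the talk; it shows why even a tiny non-parallelizable portion caps the usable scale of a petascale machine:

```python
def amdahl_speedup(serial_fraction: float, n_procs: int) -> float:
    """Amdahl's law: S(N) = 1 / (s + (1 - s) / N),
    where s is the serial (non-parallelizable) fraction of the work
    and N is the number of processors."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# With an (assumed) 1% serial fraction, speedup saturates near 100x
# no matter how many cores the system provides.
for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7} procs: {amdahl_speedup(0.01, n):6.1f}x")
```

This is why the abstract calls the scalability requirements "stringent and unavoidable": coupled climate models must drive their serial fraction toward zero before hundreds of thousands of cores pay off.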
Dr. Richard Loft is the Director for Technology Development in the Computational and Information Systems Laboratory of the National Center for Atmospheric Research (NCAR) in Boulder, Colorado. With a B.S. in Chemistry and an M.S. and a Ph.D. in Physics, Dr. Loft has been involved with massively parallel computing since joining Thinking Machines Corporation as an Application Engineer in 1989. Throughout his career he has contributed to the understanding and effective use of parallelism as applied to the National Science Foundation’s Grand Challenge simulations. He has made significant contributions to the design and performance of a variety of Earth System models, and he helped develop one of the first climate simulation schemes that coupled an atmosphere model with an ocean model. He contributed to a team that developed an efficient spectral-element-based primitive-equations dynamical core on the cubed-sphere. This core has evolved into the High Order Method Modeling Environment (HOMME). HOMME is currently being integrated with the CCSM Community Atmosphere Model (CAM) to transfer its capabilities to a broad community of climate researchers. The algorithmic aspects of this work were recognized with an honorable mention prize in the IEEE/ACM Gordon Bell competition at the Supercomputing 2001 conference. In 2005, Dr. Loft was co-PI on an NSF MRI grant that brought a 2,048-processor IBM Blue Gene/L system to the Colorado Front Range. The successful deployment and use of the Blue Gene/L system as a computational science research platform has led to 59 publications at last count. Since 2005, Dr. Loft has led NCAR’s participation in the NSF-funded TeraGrid project as resource provider principal investigator (RPPI), and he successfully deployed the IBM Blue Gene/L as a TeraGrid resource on August 1, 2007. In 2007, Dr. Loft established the Summer Internship in Parallel Computational Science (SIParCS) program at NCAR.
Each summer, the 10-week program brings upper-division undergraduates and first- and second-year graduate students into contact with practical, HPC-related applied mathematics and computational science problems derived from the mission and needs of NCAR’s Computational and Information Systems Laboratory.
|Refreshments will be served.|
|This lecture has a reception.|