Frontiers of Scientific Computing Lecture Series
Challenges for the Message-Passing Interface in the PetaFLOPS Era
William Gropp, Paul and Cynthia Saylor Professor, Department of Computer Science, University of Illinois
Johnston Hall 338
February 22, 2008 - 11:30 am
MPI has been a successful parallel programming model. The combination of performance, scalability, composability, and support for libraries has made it relatively easy to build complex parallel applications. Further, these applications scale well, with some already running on systems with over 128,000 processors. However, MPI is by no means the perfect parallel programming model. This talk will review the strengths of MPI with respect to other parallel programming models and discuss some of its weaknesses and limitations in the areas of performance, productivity, scalability, and interoperability. The impact of recent developments in computing, such as multicore (and manycore) processors, better networks, and global-view programming models, on both MPI and applications that use MPI will be covered, as well as lessons from the success of MPI that are relevant to future progress in parallel computing. The talk will conclude with a discussion of what extensions (or even changes) may be needed in MPI, and what issues should be addressed by combining MPI with other parallel programming models.
Speaker's Bio:
William Gropp is the Paul and Cynthia Saylor Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. After receiving his Ph.D. in Computer Science from Stanford University in 1982, he held the positions of assistant (1982-1988) and associate (1988-1990) professor in the Computer Science Department of Yale University. In 1990, he joined the Numerical Analysis group at Argonne National Laboratory, where he held the positions of Senior Scientist (1998-2007) and Associate Division Director (2000-2006). His research interests are in parallel computing, software for scientific computing, and numerical methods for partial differential equations. He is a co-author of "Using MPI: Portable Parallel Programming with the Message-Passing Interface" and a chapter author in the MPI-2 Forum. His current projects include the design and implementation of MPICH, a portable implementation of the MPI Message-Passing Standard; the design and implementation of PETSc, a parallel numerical library for PDEs; and research into programming models for parallel architectures.
A reception will follow the lecture.