Motivation:
Computer simulation is a primary approach for studying complex processes in virtually every field
of academic and industrial science and engineering. Communities in many disciplines
are developing cyberinfrastructures to enable new scientific discovery on current distributed
terascale resources, and to prepare for petascale computing.
The numerical relativity community has faced computational issues for years. Einstein's equations
are among the most complex formulations in physics; solving them for realistic astrophysical scenarios requires a blend of theoretical physics, astronomy, nuclear and particle physics, computational
mathematics, parallel computing, software engineering, and beyond. Single code modules, e.g.,
for locating a black hole horizon or extracting gravitational waves, may take one group several
person-years to develop, and are often either unavailable to other groups or incompatible with
their code bases, slowing progress.
One critical area of numerical relativity research is the simulation of strongly gravitating systems,
such as binary black holes. The merger of two black holes releases a tremendous amount of gravitational energy
and is one of the most promising sources of gravitational waves in the universe. Such waves from
stellar-mass black holes are on the verge of being directly observed with newly operating ground-based
laser interferometric detectors (LIGO, GEO600, VIRGO), while future space-based observatories
(LISA) are planned for observing galactic-mass black holes. Numerical modeling of the waves
produced by these binaries is crucial for these efforts.
However, black hole binaries involve only vacuum spacetimes; a deep understanding of other binary
systems, including combinations of black holes, neutron stars, and quark stars, as well as core-collapse
supernovae and gamma-ray bursts, is needed for a complete understanding of observable sources of gravitational waves.
Modeling these systems is much more complex, requiring a complete treatment of general relativistic (GR)
hydrodynamics, complex equations of state, MHD, radiation transport, etc., with greater computational and
collaboration challenges. Such problems will require scalable petascale computations, new algorithms,
closer cooperation between scientists from different disciplines, and a software infrastructure
framework that facilitates these activities.
Project:
XiRel will develop the basis for a next-generation infrastructure for numerical relativity, building on
existing efforts in Cactus and Carpet, on experience developing and using Cactus infrastructure
for numerical relativity, and on collaborations with new enabling technologies. XiRel has three core thrusts:
• Next Generation Infrastructure for Numerical Relativity: The central goal of XiRel is the
development of a highly scalable, efficient, and accurate adaptive mesh refinement (AMR)
layer based on the existing Carpet driver, fully integrated and supported in Cactus and
optimized for numerical relativity. XiRel will enhance current technologies to meet current
science needs while working toward longer-term goals for petascale machines, with minimal
changes required to properly constructed physics modules.
By researching new algorithms and implementations for automatic choice of required accuracy,
dynamic load balancing, and task spawning to offload selected tasks, we will: (i) dramatically
improve the scaling of Carpet and numerical relativity codes for computation on at least 1,000
processors, (ii) enable automatic grid hierarchy adaptation and dynamic load distribution
on architectures with deep communication hierarchies, and (iii) collaborate with and provide
expertise to other projects to incorporate and use new highly scalable mesh refinement drivers;
all in anticipation of solving high-resolution, petascale-sized problems deployed on 100,000
processors.
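To make the load-distribution idea above concrete, the following is a minimal sketch of cost-based balancing of AMR grid blocks across processors, using the classic longest-processing-time (LPT) greedy heuristic. The function name, cost model, and data layout are illustrative assumptions for exposition only; Carpet's actual distribution algorithm is considerably more sophisticated.

```python
# Illustrative sketch only: greedily assign AMR blocks (by estimated cost,
# e.g. grid-point count) to the least-loaded processor. Not Carpet's scheme.
import heapq

def balance(block_costs, nprocs):
    """Assign each block index to a processor, largest blocks first (LPT)."""
    heap = [(0, p) for p in range(nprocs)]  # (current load, processor id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(nprocs)}
    for i, cost in sorted(enumerate(block_costs), key=lambda x: -x[1]):
        load, p = heapq.heappop(heap)       # least-loaded processor so far
        assignment[p].append(i)
        heapq.heappush(heap, (load + cost, p))
    return assignment

# Example: six refinement blocks of varying cost distributed over 3 processors.
blocks = [40, 10, 30, 20, 50, 25]
assignment = balance(blocks, 3)
loads = {p: sum(blocks[i] for i in ids) for p, ids in assignment.items()}
```

In a real AMR driver this rebalancing would run whenever the grid hierarchy changes, and would also weigh communication cost between neighboring blocks, not just per-block work.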
• Einstein Toolkit: XiRel will investigate the development of a community-oriented toolkit for
numerical relativity, drawing on experience with the CactusEinstein set of thorns for Cactus.
