The evolution of the Universe is described by Einstein’s equations of general relativity. These equations are very hard to solve, so cosmologists typically use approximations to deal with the clumpiness, or inhomogeneity, of the Universe. Two separate groups have, for the first time, simulated a clumpy universe using fully relativistic numerical methods. This is an important step towards improving the accuracy of cosmological models of galaxy formation and growth.
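For reference, Einstein’s equations relate the curvature of spacetime to its matter and energy content. In compact form,

$$G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu},$$

where $G_{\mu\nu}$ encodes the geometry and $T_{\mu\nu}$ the distribution of matter and energy. These are ten coupled, nonlinear partial differential equations, which is why exact solutions exist only for highly symmetric, idealized universes.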

Researchers have generally dealt with the inhomogeneity of the Universe using N-body simulations based on Newton’s theory of gravity. These methods have given good fits to data from cosmic microwave background observations and galaxy surveys, but doubts have lingered over their predictions because they neglect some effects of general relativity.
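The article itself contains no code, but the essence of a Newtonian N-body method fits in a few lines. Below is a minimal, illustrative Python sketch of point masses evolved with a leapfrog integrator; the particle count, units, time step, and softening length are arbitrary choices for demonstration, not values from any of the simulations discussed.

```python
import numpy as np

G = 1.0          # gravitational constant in simulation units
SOFTENING = 0.1  # avoids the force singularity at zero separation

def accelerations(pos, mass):
    """Pairwise Newtonian accelerations for all particles."""
    # pos: (N, 3) positions; mass: (N,) masses
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]   # r_j - r_i
    dist2 = np.sum(diff**2, axis=-1) + SOFTENING**2
    inv_r3 = dist2**-1.5
    np.fill_diagonal(inv_r3, 0.0)                          # no self-force
    return G * np.sum(mass[np.newaxis, :, np.newaxis] * diff
                      * inv_r3[:, :, np.newaxis], axis=1)

def leapfrog_step(pos, vel, mass, dt):
    """One kick-drift-kick update of positions and velocities."""
    vel = vel + 0.5 * dt * accelerations(pos, mass)  # half kick
    pos = pos + dt * vel                             # drift
    vel = vel + 0.5 * dt * accelerations(pos, mass)  # half kick
    return pos, vel

# Evolve a small random cloud of equal-mass particles.
rng = np.random.default_rng(0)
pos = rng.standard_normal((64, 3))
vel = np.zeros((64, 3))
mass = np.ones(64)
for _ in range(100):
    pos, vel = leapfrog_step(pos, vel, mass, dt=0.01)
```

Production cosmological codes add the expansion of the background universe, periodic boundary conditions, and fast approximations (tree or particle-mesh methods) in place of the O(N²) force sum shown here.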

James Mertens and Glenn Starkman of Case Western Reserve University and Tom Giblin of Kenyon College, both in Ohio, have taken numerical-relativity methods developed for black holes and other compact objects and applied them to an inhomogeneous Universe. They start with a “toy universe” containing a distribution of matter density perturbations consistent with observations and let it evolve according to general relativity. They find localized differences in the evolution compared with models that are not fully relativistic.
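A full relativistic evolution of the spacetime metric is far more than a snippet can show, but the first step described above — seeding a toy universe with a statistical distribution of density perturbations — can be illustrated. The sketch below draws a Gaussian random density field from a power-law power spectrum on a periodic grid; the grid size, box length, spectral index, and amplitude are hypothetical choices, not the initial conditions actually used in the study.

```python
import numpy as np

N = 64        # grid points per side
L = 100.0     # box size in arbitrary length units
n_s = 1.0     # spectral index of the assumed power-law spectrum
A = 1e-4      # amplitude (sets the overall perturbation strength)

# Angular wavenumber magnitude |k| on the FFT grid.
k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
kmag = np.sqrt(kx**2 + ky**2 + kz**2)

# Power spectrum P(k) ~ A * k^n_s, with the k = 0 (mean) mode zeroed.
power = np.zeros_like(kmag)
nonzero = kmag > 0
power[nonzero] = A * kmag[nonzero]**n_s

# Color white Gaussian noise by sqrt(P(k)) in Fourier space, then
# transform back; the real part gives the density contrast field.
rng = np.random.default_rng(42)
white = rng.standard_normal((N, N, N))
delta_k = np.fft.fftn(white) * np.sqrt(power)
delta = np.real(np.fft.ifftn(delta_k))

print("rms density contrast:", delta.std())
```

In a fully relativistic code, a field like this also sets the initial spacetime curvature through the Einstein constraint equations, a coupling that the Newtonian treatment omits.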
