Ever since computers took shape — first filling rooms, then office desks, then pockets — they have been designed by human minds. Over the years, plenty of people have asked: What would happen if computers designed themselves?

Someday soon, an intelligent computer might create a machine far more powerful than itself. That new computer would likely make another, even more powerful, and so on. Machine intelligence would ride an exponential upward curve, attaining heights of cognition inconceivable to humans. This, broadly speaking, is the singularity.

The term dates back more than 50 years, to a time when scientists were just beginning to tinker with binary code and the circuitry that made basic computing possible. Even then, the singularity was a formidable proposition. Superintelligent computers might leap from breakthrough to breakthrough, from nanotechnology to immersive virtual reality to superluminal space travel. Rather than being left behind with our puny, cell-based brains, humans might merge with AI, augmenting our brains with circuits, or even digitally uploading our minds to outlive our bodies. The result would be a supercharged humanity, capable of thinking at the speed of light and free of biological concerns.

Philosopher Nick Bostrom thinks this halcyon world could bring a new age entirely. “It might be that, in this world, we would all be more like children in a giant Disneyland — maintained not by humans, but by these machines that we have created,” says Bostrom, the director of Oxford University’s Future of Humanity Institute and the author of Superintelligence: Paths, Dangers, Strategies.
