AGI is the new AI, promoted by tech leaders and AI experts alike, all promising its imminent arrival, for better or for worse. Anyone frightened by Elon Musk’s warning that “AGI poses a grave threat to humanity, perhaps the greatest existential threat we face today,” should first study the evolution of AGI from science fiction to real-world fiction.

The term AGI was coined in 2007, when a collection of essays on the subject was published. The book, titled Artificial General Intelligence, was co-edited by Ben Goertzel and Cassio Pennachin. In their introduction, they provided a definition:

“AGI is, loosely speaking, AI systems that possess a reasonable degree of self-understanding and autonomous self-control, and have the ability to solve a variety of complex problems in a variety of contexts, and to learn to solve new problems that they didn’t know about at the time of their creation.”

For Goertzel and Pennachin, the rationale for “christening” AGI was to distinguish it from “run-of-the-mill ‘artificial intelligence’ research,” as AGI is “explicitly focused on engineering general intelligence in the short term.”

In 2007, “run-of-the-mill” research focused on narrow challenges and AI programs of the time could only “generalize within their limited context.” While “work on AGI has gotten a bit of a bad reputation,” according to Goertzel and Pennachin, “AGI appears by all known science to be quite possible. Like nanotechnology, it is ‘merely an engineering problem’, though certainly a very difficult one.”

Goertzel and Pennachin consider AGI merely an engineering challenge because “we know that general intelligence is possible, in the sense that humans – particular configurations of atoms – display it. We just need to analyze these atom configurations in detail and replicate them in the computer.”