Sam Altman is among the most vocal supporters of artificial intelligence, but is also leading calls to regulate it. He outlines his vision of a very uncertain future

When I meet Sam Altman, the chief executive of the AI research laboratory OpenAI, he is in the middle of a world tour. He is preaching that the very AI systems he and his competitors are building could pose an existential risk to the future of humanity – unless governments work together now to establish guardrails that ensure responsible development over the coming decade.

In the subsequent days, he and hundreds of tech leaders – including the scientists and “godfathers of AI” Geoffrey Hinton and Yoshua Bengio, as well as Google DeepMind’s CEO, Demis Hassabis – put out a statement saying that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”. It is an all-out effort to convince world leaders that they are serious when they say that “AI risk” needs concerted international action.

It must be an interesting position to be in – Altman, 38, is the daddy of AI chatbot ChatGPT, after all, and is leading the charge to create “artificial general intelligence”, or AGI, an AI system capable of tackling any task a human can achieve. Where “AI” is bandied about to describe anything more complex than a Tamagotchi, AGI is the real thing: the human-level intelligence of stories such as Her, Star Trek, Terminator, 2001: A Space Odyssey and Battlestar Galactica.
