Are you worried that artificial intelligence and humans will go to war? AI experts are. In 2023, a group of elite thinkers signed onto the Center for AI Safety's statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

In a survey published in 2024, 38% to 51% of top-tier AI researchers assigned a probability of at least 10% to advanced AI leading to outcomes as bad as human extinction.

The worry is not about the Large Language Models (LLMs) of today, which are essentially huge autocomplete machines, but about Artificial General Intelligence (AGI): still-hypothetical long-term planning agents that can substitute for human labor across a wide range of society's economic systems.

On their own, they could design systems, deploy a wide range of resources and plan towards complex goals. Such AIs could be enormously useful in the human world, performing and optimizing industrial and agricultural production and many other functions humans need to thrive. We hope these AGIs will be friendly to mankind and Earth, but there is no guarantee.

Advanced AIs could develop goals that seem strange to us but that, by their own reasoning, are beneficial in ways we do not understand.

Depending on who is developing the AI (cough, highly technical engineers, cough), it may take little notice of our cultural, historical and shared human values. It might recursively improve itself, develop goals we don't understand, and coerce humans into assisting it.

With such thoughts in mind, Simon Goldstein of the University of Hong Kong analyzed the possibility that AIs and humans will enter into violent conflict and pose a catastrophic risk to humanity. His paper is published in the journal AI & Society.
