The artificial intelligence hype machine has hit a fever pitch, and it’s starting to cause some weird headaches for everybody.

Ever since OpenAI launched ChatGPT late last year, AI has been at the center of America’s discussions about scientific progress, social change, economic disruption, education, heck, even the future of porn. With its pivotal cultural role, however, has come a fair amount of bullshit. Or, rather, an inability for the average listener to tell whether what they’re hearing qualifies as bullshit or is, in fact, accurate information about a bold new technology.

A stark example of this popped up this week with a viral news story that swiftly imploded. During a defense conference hosted in London, Colonel Tucker “Cinco” Hamilton, the chief of AI test and operations with the USAF, told a very interesting story about a recent “simulated test” involving an AI-equipped drone. Hamilton told the conference’s audience that, during the course of the simulation, the purpose of which was to train the software to target enemy missile installations, the AI program went rogue, rebelled against its operator, and proceeded to “kill” him. Hamilton said:

“We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

In other words: Hamilton seemed to be saying the USAF had effectively turned a corner and put us squarely in the territory of dystopian nightmare, a world where the government was busy training powerful AI software that, someday, would surely go rogue and kill us all.
