A few months ago, I was working on an article about oceans across the solar system. Having read my fill about oceans of water, I turned to Google for a quick refresher on oceans made of other stuff, such as liquid hydrocarbons. For better or worse, I searched “oceans in the solar system not water.” I was looking for a reliable link, maybe from NASA. Instead, Google’s AI Overviews feature served up Enceladus as one suggestion. This Saturn moon is famous for its subsurface sea — of saltwater. I shut my laptop in frustration.

That’s one small example of how AI fails. Arvind Narayanan and Sayash Kapoor collect dozens of others in their new book, AI Snake Oil — many with consequences far more concerning than irking one science journalist. They write about AI tools that purport to predict academic success, the likelihood someone will commit a crime, disease risk, civil wars and welfare fraud (SN: 2/20/18). Along the way, the authors weave in many other issues with AI, covering misinformation, a lack of consent for images and other training data, false copyright claims, deepfakes, privacy and the reinforcement of social inequities (SN: 10/24/19). They address whether we should be afraid of AI, concluding: “We should be far more concerned about what people will do with AI than with what AI will do on its own.”