Right now artificial intelligence is everywhere. When you write a document, you’ll probably be asked whether you need your “AI assistant.” Open a PDF and you might be asked whether you want an AI to provide you with a summary. But if you have used ChatGPT or similar programs, you’re probably familiar with a certain problem—it makes stuff up, causing people to view things it says with suspicion.

It has become common to describe these errors as “hallucinations.” But talking about ChatGPT this way is misleading and potentially damaging. Instead, we should call its output what it is: bullshit.

We don’t say this lightly. Among philosophers, “bullshit” has a specialist meaning, one popularized by the late American philosopher Harry Frankfurt. When someone bullshits, they’re not telling the truth, but they’re also not really lying. What characterizes the bullshitter, Frankfurt said, is that they just don’t care whether what they say is true. ChatGPT and its peers cannot care, and they are instead, in a technical sense, bullshit machines.

We can easily see why this is true and why it matters. Last year, for example, a lawyer found himself in hot water after using ChatGPT to research a legal brief. Unfortunately, ChatGPT had supplied fictitious case citations: the cases it cited simply did not exist.

This isn’t rare or anomalous. To understand why, it’s worth thinking a bit about how these programs work. OpenAI’s ChatGPT, Google’s Gemini chatbot and Meta’s Llama all work in structurally similar ways. At their core is an LLM—a large language model. These models all make predictions about language. Given some input, the model predicts what should come next, or what an appropriate response would be. It does so through an analysis of enormous amounts of text (its “training data”). In ChatGPT’s case, the initial training data included billions of pages of text from the Internet.
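The prediction step can be illustrated with a deliberately tiny stand-in for an LLM: a bigram model that picks the statistically most likely next word. The corpus and code below are purely illustrative, not how ChatGPT is actually built (real models use neural networks over tokens, not raw word counts), but the point carries over: such a model is optimized to produce plausible continuations, with no mechanism for checking whether those continuations are true.

```python
from collections import Counter, defaultdict

# Toy "training data" (illustrative only, not a real corpus).
corpus = (
    "the court cited the case . "
    "the court dismissed the case . "
    "the lawyer cited the case ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most common word after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))    # "case" follows "the" most often here
print(predict_next("cited"))  # "cited" is always followed by "the"
```

The model will happily continue any prompt with whatever its counts favor, whether or not the resulting sentence describes a real case. Scaled up enormously, that is the same indifference to truth the article describes.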

