The text-generating program ChatGPT, developed by artificial intelligence research company OpenAI, can write about many subjects in a variety of prose styles, and even in verse. It can also opine about itself. When we asked ChatGPT to generate a Scientific American editorial, it produced a thoughtful-sounding essay.

ChatGPT functions much like any chatbot. Users go to the OpenAI website, type in a query, or “prompt,” such as “Suggest some prompts to test out a chatbot,” and quickly receive an AI-generated response. The program produces its answers through text prediction: its underlying model was trained on a vast corpus of human writing available online, which allows it to predict which word is most likely to follow the ones before it, building its reply one word at a time until the result reads like the work of a reasoning entity. Despite sounding more sophisticated and realistic than perhaps any language model before it, ChatGPT cannot actually think for itself, and it can produce falsehoods and illogical statements that merely look reasonable.
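
The principle is easy to demonstrate in miniature. The sketch below builds a toy next-word predictor from a handful of sentences and then generates text one predicted word at a time. ChatGPT’s actual model is a vastly larger neural network trained on far more text, so this is only an illustration of the underlying idea; the corpus and function names here are made up for the example, not anything from OpenAI’s code.

```python
# Toy illustration of next-word prediction: count which words follow which,
# then generate text one word at a time. A real large language model uses a
# neural network over long contexts, but the word-by-word loop is the same idea.
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Tally how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Sample a plausible next word, weighted by observed frequency."""
    words, counts = zip(*following[word].items())
    return random.choices(words, weights=counts, k=1)[0]

# Generate a short continuation, one predicted word at a time.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Even this tiny model produces locally fluent strings like “the dog sat on the mat” without any notion of what a dog or a mat is, which is the same reason a far more capable predictor can sound confident while being wrong.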

That said, when we further tested the chatbot by asking it to explain some of its own flaws, it provided coherent answers. Many of the bot’s responses were long and repetitive, though, so we edited them for length. And please take their accuracy with a grain of salt: ChatGPT is capable of spouting incorrect information with apparent confidence. Our prompts and the program’s shortened responses are below.