OpenAI's large language models (LLMs) are trained on a vast array of datasets, pulling information from the internet's dustiest, cobweb-covered corners.

But what if such a model were instead trained on the dark web — the internet's seedy underbelly, where sites can be hosted without their operators' identities being public or even traceable by law enforcement? A team of South Korean researchers did just that, creating an AI model dubbed DarkBERT to index some of the sketchiest domains on the internet.
