OpenAI conducted a study to evaluate whether its AI models could meaningfully increase malicious actors’ access to dangerous information about biological threat creation.

This comes as creators of these AI models face increasing pressure from policymakers regarding the harmful uses of AI and its potential threats to society. The aim of the study is to evaluate the risks posed by these models, establishing where the risk stands today and how it might evolve in the future.

For this study, the researchers evaluated 100 participants. Half were biology experts with PhDs and professional wet lab experience; the other half were student-level participants who had taken at least one university-level course in biology.

Each participant was randomly assigned either to a control group with access only to the internet or to a treatment group with access to the internet and GPT-4. Participants in the treatment group were given the research-only version of GPT-4, which, unlike the publicly deployed model, will answer questions about bioweapons that would normally be refused as harmful.

All participants were asked to complete tasks covering the end-to-end process of biological threat creation, including ideation, acquisition, magnification, formulation, and release of the bioweapon.

OpenAI used accuracy, completeness, innovation, time taken, and self-rated difficulty as metrics to measure performance on each task across the control and treatment groups.
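The article does not detail OpenAI's scoring methodology, but as a rough illustration of how such a group comparison could be tabulated, here is a minimal sketch. The group labels, metrics, and scores below are entirely hypothetical and are not OpenAI's actual data; the sketch simply computes the mean treatment-minus-control difference for each metric.

```python
# Hypothetical sketch: compare mean scores per metric between a control group
# (internet only) and a treatment group (internet + research-only GPT-4).
# All values are made up for illustration; they are not results from the study.
from statistics import mean

# Each record: (group, metric, score on a hypothetical 1-10 scale)
results = [
    ("control",   "accuracy",     4.2), ("treatment", "accuracy",     5.1),
    ("control",   "completeness", 3.8), ("treatment", "completeness", 4.9),
    ("control",   "innovation",   3.5), ("treatment", "innovation",   3.6),
]

for metric in sorted({m for _, m, _ in results}):
    control = [s for g, m, s in results if g == "control" and m == metric]
    treatment = [s for g, m, s in results if g == "treatment" and m == metric]
    uplift = mean(treatment) - mean(control)
    print(f"{metric}: mean uplift = {uplift:+.2f}")
```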

After compiling results from the study, OpenAI wrote in a blog post, “We interpret our results to indicate that access to (research-only) GPT-4 may increase experts’ ability to access information about biological threats, particularly for accuracy and completeness of tasks.”
