Could proteins designed by artificial intelligence (AI) ever be used as bioweapons? In the hope of heading off this possibility — as well as the prospect of burdensome government regulation — researchers today launched an initiative calling for the safe and ethical use of protein design.
“The potential benefits of protein design [AI] far exceed the dangers at this point,” says David Baker, a computational biophysicist at the University of Washington in Seattle, who is part of the voluntary initiative. Dozens of other scientists applying AI to biological design have signed the initiative’s list of commitments.
“It’s a good start. I’ll be signing it,” says Mark Dybul, a global health policy specialist at Georgetown University in Washington DC who led a 2023 report on AI and biosecurity for the think tank Helena in Los Angeles, California. But he also thinks that “we need government action and rules, and not just voluntary guidance”.
The initiative comes on the heels of reports from US Congress, think tanks and other organizations exploring the possibility that AI tools — ranging from protein-structure prediction networks such as AlphaFold to large language models such as the one that powers ChatGPT — could make it easier to develop biological weapons, including new toxins or highly transmissible viruses.