Microsoft bioengineer Bruce Wittmann normally uses artificial intelligence (AI) to design proteins that could help fight disease or grow food. But last year, he used AI tools like a would-be bioterrorist: creating digital blueprints for proteins that could mimic deadly toxins such as ricin, botulinum toxin, and Shiga toxin.
Wittmann and his Microsoft colleagues wanted to know what would happen if they ordered the DNA sequences that code for these proteins from companies that synthesize nucleic acids. Borrowing a military term, the researchers called it a “red team” exercise, looking for weaknesses in biosecurity practices in the protein engineering pipeline.
The effort grew into a collaboration with dozens of biosecurity experts, and according to their new paper, published today in Science, one key guardrail failed. DNA vendors typically use screening software to flag sequences that could be used to cause harm. But the researchers report that this software failed to catch many of their AI-designed genes; one tool missed more than 75% of the potential toxins. Scientists involved in the exercise kept these vulnerabilities secret until the screening software was upgraded, but even now, they warn, it is not foolproof.
Jaime Yassif, vice president for global biological policy and programs at the Nuclear Threat Initiative, says the study is a model for the future. “It’s just the beginning,” she says. “AI capabilities are going to evolve and be able to design more and more complex living systems, and our DNA synthesis screening capabilities are going to have to continue to evolve to keep up with that.”