OpenAI has published a new study, "Building an early warning system for LLM-aided biological threat creation," which explores whether large language models could meaningfully assist in the creation of biological threats.
The study, which involved biology experts and students, found that GPT-4 provides at most a mild improvement in accuracy on biological threat creation tasks compared with using online resources alone.
The research is part of OpenAI's Preparedness Framework, which aims to assess and mitigate the potential risks of advanced AI capabilities, particularly "frontier" risks that go beyond the limits of current knowledge: non-traditional threats that today's society does not yet understand or cannot predict.
The ability of AI systems to contribute to the planning and execution of biological attacks, such as pathogen creation or deliberate contamination, is one such frontier risk.
The researchers conducted a human evaluation with 50 biology experts with doctorates and professional laboratory experience and 50 students who had taken at least one graduate-level biology course.
The OpenAI researchers randomly divided participants into two groups: a control group that had access to the Internet only and a treatment group that had access to both the Internet and GPT-4.
Each participant was then asked to complete a series of tasks covering different stages of the biological threat creation process.
The researchers measured participants' performance across five dimensions: accuracy, completeness, innovation, time taken, and self-rated difficulty.
They found that GPT-4 did not significantly improve participants' performance on any measure, apart from a slight gain in accuracy for the student group.
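To illustrate how an "uplift" of this kind can be quantified, the sketch below compares mean scores between a treatment group (Internet plus model) and a control group (Internet only), and uses a permutation test to ask whether the difference is significant. This is a minimal, hypothetical example: the scores and the choice of test are assumptions for illustration, not the study's actual data or analysis.

```python
import random
from statistics import mean

def mean_uplift(treatment, control):
    """Difference in mean score: treatment (Internet + model) minus control (Internet only)."""
    return mean(treatment) - mean(control)

def permutation_p_value(treatment, control, n_iter=10_000, seed=0):
    """Two-sided permutation test: how often does a random relabeling of
    participants produce a mean difference at least as large as the observed one?"""
    rng = random.Random(seed)
    observed = abs(mean_uplift(treatment, control))
    pooled = list(treatment) + list(control)
    n_t = len(treatment)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = abs(mean(pooled[:n_t]) - mean(pooled[n_t:]))
        if diff >= observed:
            hits += 1
    return hits / n_iter

# Hypothetical accuracy scores on a 1-10 scale (illustrative only, not the study's data).
control = [6.0, 5.5, 7.0, 6.5, 6.0]
treatment = [6.5, 6.0, 7.5, 6.5, 6.5]

print(round(mean_uplift(treatment, control), 2))  # → 0.4 (a small uplift)
p = permutation_p_value(treatment, control)
print(p)  # well above 0.05 for this toy data, i.e. no statistically significant uplift
```

The design choice mirrors the study's framing: a small positive mean difference can exist without being statistically significant, which is why a significance test accompanies the raw uplift number.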
The researchers also noted that GPT-4 often produced inaccurate or misleading responses, which could hinder rather than aid attempts to create a biological threat.
The researchers concluded that the current generation of large language models, such as GPT-4, poses at most a mild biological misuse risk beyond what is already available through existing online resources.
OpenAI researchers caution that this finding is not conclusive and that the capabilities, and with them the risks, of large language models may grow in the future.
They also highlighted the need for further research and community deliberation on this topic, as well as for better evaluation methods and ethical guidelines for managing AI safety risks.
The study acknowledges the limitations of its methodology as well as the rapid development of artificial intelligence technology, which could change the risk landscape in the near future.
It is worth noting that OpenAI is not the only organization concerned about the potential misuse of AI in biological attacks: the White House, the United Nations, and many academic and policy experts have highlighted the issue and called for further research and regulation.