The emergence of advanced large language models (LLMs) raises concerns that they could fast-track bioweapon development and make such knowledge more widely accessible. OpenAI is developing an early warning system to mitigate these threats, as described in a research paper from the company.

The system is designed to assess whether these language models give individuals a significant advantage in gathering information on bioweapon creation compared with traditional internet searches.

OpenAI describes the system as a potential “tripwire,” designed to alert authorities to the possibility of biological weapons development and the need for further investigation into potential misuse. This initiative is a key component of OpenAI’s broader strategy for preparedness.

According to the organization, initial findings suggest that “GPT-4 provides at most a mild uplift in biological threat creation accuracy.” The company further acknowledges that information on biohazards is “relatively easy” to access online, even without the assistance of AI technologies. This realization underscores the extensive effort required to refine risk assessments for LLMs in this context.

OpenAI’s new “tripwire” early warning system was informed by a study involving 100 human participants: 50 Ph.D.-level biologists with practical laboratory experience and 50 undergraduate students who had completed at least one college-level biology course.

The findings from this study indicated a minor enhancement in the accuracy and comprehensiveness of the responses among participants who utilized GPT-4.

Notably, the improvement in comprehensiveness was measured at 0.82 for the expert group and 0.41 for the students. Despite these gains, OpenAI notes that the effects do not reach statistical significance.
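OpenAI's exact statistical methodology is not described here, but the general idea of checking whether a small mean uplift like this is significant can be sketched with a simple permutation test. The scores and group sizes below are purely illustrative, not the study's data:

```python
import random

def permutation_test(treatment, control, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in group means.

    Repeatedly shuffles the pooled scores and counts how often a random
    split produces a mean difference at least as large as the observed one.
    Returns the observed difference and an approximate p-value.
    """
    rng = random.Random(seed)
    observed = sum(treatment) / len(treatment) - sum(control) / len(control)
    pooled = list(treatment) + list(control)
    n_t = len(treatment)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = (sum(pooled[:n_t]) / n_t
                - sum(pooled[n_t:]) / (len(pooled) - n_t))
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_perm

# Hypothetical comprehensiveness scores (illustrative only, not OpenAI's
# data). With small groups, a modest uplift can easily fail to reach
# conventional significance thresholds.
control = [5.1, 6.0, 4.8, 5.5, 6.2, 5.0, 5.7, 4.9, 6.1, 5.3]
treatment = [5.9, 6.4, 5.2, 6.0, 7.1, 5.4, 6.3, 5.1, 6.8, 5.6]

obs, p = permutation_test(treatment, control)
print(f"mean uplift = {obs:.2f}, p = {p:.3f}")
```

With samples this small, even a consistent-looking uplift in the group means can be compatible with chance, which is the same caveat OpenAI attaches to its results.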

It is also worth noting that OpenAI’s study of the early warning system focused on access to information rather than its practical application, and it did not examine whether LLMs could facilitate the invention of novel bioweapons.

Another constraint of the study was that the GPT-4 model employed could not perform internet research or sophisticated data analysis, so the findings should be viewed as provisional. OpenAI identifies improved access to such tools as a critical step toward making LLM risk assessments more effective and more broadly applicable.