Responsible AI in biotechnology: balancing discovery, innovation and biosecurity risks

EDITED BY
Segaran P. Pillai, United States Department of Health and Human Services, United States

CORRESPONDENCE
Nicole E. Wheeler, [email protected]

RECEIVED 30 November 2024
ACCEPTED 03 January 2025
PUBLISHED 05 February 2025

CITATION
Wheeler NE (2025) Responsible AI in biotechnology: balancing discovery, innovation and biosecurity risks. Front. Bioeng. Biotechnol. 13:1537471. doi: 10.3389/fbioe.2025.1537471

COPYRIGHT
© 2025 Wheeler. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

The integration of artificial intelligence (AI) in protein design presents unparalleled opportunities for innovation in bioengineering and biotechnology. However, it also raises significant biosecurity concerns. This review examines the changing landscape of bioweapon risks, the dual-use potential of AI-driven bioengineering tools, and the necessary safeguards to prevent misuse while fostering innovation. It highlights emerging policy frameworks, technical safeguards, and community responses aimed at mitigating risks and enabling responsible development and application of AI in protein design.

KEYWORDS
artificial intelligence, biotechnology, AI safety, protein design and engineering, synthetic biology

Introduction

The convergence of artificial intelligence (AI) and biotechnology is rapidly transforming the landscape of scientific research, promising groundbreaking advancements in medicine, agriculture, and environmental science (OECD, 2023; AI Policy Perspectives, 2024). However, these same tools also present unique biosecurity challenges. The ability of AI to accelerate drug discovery and design novel proteins can, if misused, lower the barriers to developing biological weapons with unprecedented precision and potency (Carter et al., 2023; Sandbrink, 2023; Drexel and Withers, 2024).

This dual-use potential—where innovations designed for beneficial purposes may also enable harm—demands urgent attention from the biotechnology community. This review explores the history and evolving threat model of bioweapons development, outlines specific concerns raised in light of AI, highlights key mitigations and safeguards already in place, and identifies actionable pathways for biotechnologists to engage with this critical issue. By addressing these risks, the field can ensure that the transformative power of AI is harnessed responsibly, minimising dangers while maximising its potential for good.
et al., 2024) have significantly contributed to structural biology by accurately predicting protein 3D structures and complementing experimental methods. Similarly, ESM3 has excelled in protein sequence analysis, helping to bridge the gap between sequence, structure and function (Hayes et al., 2024), and other deep learning models have made further progress helping to address the major scientific challenge of protein function prediction (Bileschi et al., 2022). These tools collectively address one of the central challenges of modern biology: understanding DNA and protein functions and their roles in biological life, marking a new era of AI-enabled scientific discovery.

The integration of artificial intelligence (AI) into bioengineering is helping to transform biological design into a systematic engineering discipline, sparking rapid progress and innovation. AI-driven tools are revolutionising protein design, unlocking opportunities in therapeutics, diagnostics, and synthetic biology. For instance, tools like RFDiffusion (Watson et al., 2023) and Chroma (Ingraham et al., 2023) allow the creation of proteins with desired structures and properties, while DiffDock (Corso et al., 2022) significantly improves the prediction of protein-ligand binding interactions. Tools like DeepBind (Alipanahi et al., 2015) accurately predict protein binding to DNA and RNA, AlphaProteo (Zambaldi et al., 2024) allows the design of novel binders, and AI tools are being used to engineer proteins with greater stability and functionality (Sumida et al., 2024). These innovations tackle long-standing challenges such as engineering enzymes for industrial applications and combating antimicrobial resistance, while also paving the way for transformative advancements in personalised medicine, biomanufacturing, and environmental sustainability. By harnessing the power of AI, bioengineering can strive to achieve levels of precision and reproducibility akin to traditional engineering disciplines, heralding a new era of scientific discovery and application.

AI and robotic scientists represent a cutting-edge application of AI in biotechnology and other scientific fields (OECD, 2023). These systems automate and accelerate the scientific discovery process, from hypothesis generation to experimentation and data analysis. One notable example is Adam, a robot scientist designed to autonomously identify gene functions in yeast, pioneering the integration of AI with laboratory automation (Sparkes et al., 2010). Building on Adam's success, Eve was developed to accelerate drug discovery and has identified existing compounds with potential applications for treating neglected tropical diseases (Williams et al., 2015). In metabolic engineering, automated systems have shown significant promise in optimizing biological processes. For instance, a study by HamediRad et al. (2019) demonstrated the use of automated tools to enhance experimental success rates and improve yields in the production of valuable biomolecules. Additionally, digital AI systems like the data-to-paper platform (Ifargan et al., 2024) independently analyze large datasets to identify priority findings, providing researchers with actionable insights and reducing time spent on data interpretation.

AI is revolutionizing how scientists generate, process, and disseminate knowledge, fundamentally transforming the research landscape. Its applications span the entire scientific workflow, from gathering and annotating data to modeling complex systems and devising solutions to some of humanity's most pressing challenges (AI Policy Perspectives, 2024). One striking example of the transformative impact of AI is the recognition of its contributions to protein science through the Nobel Prize in Chemistry. The developers of AlphaFold and Rosetta were awarded the prize for their groundbreaking work in understanding and designing proteins, highlighting how AI tools are enabling researchers to solve problems previously thought insurmountable (Nature, 2024). This recognition underscores the pivotal role AI is playing across disciplines, driving progress not only in protein science but also in areas like climate modeling, materials science, and personalised medicine. By enhancing the speed, scale, and precision of scientific inquiry, AI is unlocking new frontiers of knowledge and reshaping the future of innovation.

The dual-use dilemma: risks in AI-driven bioengineering

The transformative power of AI in bioengineering comes with inherent risks, particularly due to the dual-use nature of biotechnology—where tools intended for beneficial purposes can also be exploited for malicious ends (National Research Council US, 2007). The incorporation of AI amplifies these concerns by lowering some technical barriers to advanced bioengineering, potentially enabling misuse by malicious actors (Carter et al., 2023; Drexel and Withers, 2024). Historical precedents, such as the development and deployment of biological weapons during the 20th century, underscore the potentially catastrophic consequences of biotechnological misuse.

Recognising these risks, the international community has initiated frameworks such as the Biological Weapons Convention (BWC) (UNODA, 2024) to establish norms against the misuse of biotechnology. Recent efforts, including global AI safety summits, have sought to extend these principles to the intersection of AI and biosecurity. However, these initiatives face significant challenges, including inadequate funding, weak enforcement mechanisms, and the rapid pace of technological advancements (Cropper et al., 2023).

This review delves into the evolving landscape of biosecurity risks associated with AI-powered protein design. It examines the dual-use potential of these tools, evaluates existing and proposed safeguards, and highlights actionable roles for scientists. By addressing these challenges head-on, we can build a secure and resilient ecosystem that maximises the benefits of AI-driven bioengineering while safeguarding against its misuse.

The changing biorisk landscape

The landscape of biological risks has transformed dramatically over the past century, driven by scientific advancements, changing geopolitical contexts, and the emergence of disruptive technologies like artificial intelligence (AI). Historically, the development and deployment of biological weapons were constrained by significant technical and logistical barriers (Ben Ouagrham-Gormley, 2014; Revill and Jefferson, 2014). However, there are concerns that the growing accessibility of cutting-edge biotechnological tools, particularly those powered by AI, has begun to erode these barriers (Carter et al., 2023; Sandbrink, 2023; Drexel and Withers, 2024). AI-driven applications in synthetic biology,
protein design, and genetic engineering have not only accelerated legitimate scientific progress but also expanded the potential for misuse, enabling actors with limited expertise to pursue sophisticated biological capabilities.

This section examines the evolution of biological risks, beginning with the historical context of bioweapons development and the global responses to these threats. It then explores the contemporary challenges posed by the convergence of AI and biotechnology, focusing on how these technologies are reshaping the threat landscape. Finally, it assesses the implications of AI-driven advancements for biosecurity, highlighting the need for proactive measures to address the dual-use nature of these powerful tools.

Historical context: bioweapons development and challenges

Biological weapons have posed a persistent threat since their initial development in the 20th century. During World War II, several nations invested substantial resources into bioweapons programs, but technical and logistical hurdles often precluded successful deployment. These challenges included the selection of pathogenic strains, difficulties in scaling up production, and ensuring properties such as heat stability and effective dispersal under operational conditions (Ben Ouagrham-Gormley, 2014).

While large-scale bioweapons programs have become less common, sporadic incidents highlight the enduring risks associated with these technologies. Notable examples include the Amerithrax attacks of 2001, where anthrax spores were mailed to targets in the United States, causing public panic and five deaths (Rasko et al., 2011). Another instance occurred in 1984, when a religious commune deliberately contaminated salad bars with Salmonella in an Oregon town, resulting in 751 cases of food poisoning (Török et al., 1997).

These incidents underscore the challenges of identifying and attributing bioweapon use, as well as the widespread public fear such attacks can provoke. Notably, they have primarily involved naturally occurring agents or those subjected to relatively minor modifications, such as engineering drug resistance. However, the future of biological and toxin weapons may deviate significantly from these historical patterns. Advances in biotechnology and artificial intelligence raise concerns that future threats could involve entirely novel agents designed for specific characteristics, such as enhanced transmissibility or pathogenicity, targeted effects on particular populations, or resistance to existing detection and countermeasure systems or existing immunity (Sandbrink, 2023; Drexel and Withers, 2024; Pannu et al., 2024). Such developments could render traditional preparedness strategies insufficient, highlighting the urgent need for proactive measures and adaptable biosecurity frameworks to address these emerging risks.

Emerging challenges in the contemporary landscape

The modern era is witnessing a renewed awareness of the possibilities of biological warfare in light of rapid advancements in biotechnology, artificial intelligence, and shifting geopolitical dynamics (Juling, 2023; Brent et al., 2024; Berg and Kappler, 2024). These developments expand the scope and sophistication of potential threats, elevating biological weapons to a central concern in contemporary security discussions.

The rise of hybrid warfare—characterised by the integration of conventional military tactics with unconventional methods—further complicates the biological risk landscape. A biological weapons attack, for example, could be coordinated with cyberattacks targeting health infrastructure, undermining emergency response efforts, or paired with disinformation campaigns to sow public panic and distrust (Smith, 2019; Chatham House–International Affairs Think Tank, 2024). These strategies could amplify the impact of a biological assault, rendering traditional mitigation measures inadequate and necessitating more comprehensive, integrated security frameworks.

Challenges in detecting and attributing malicious use

Detecting malicious intent in the development of biological weapons presents unique challenges, particularly when compared to other weapons of mass destruction. The raw materials and technologies required for bioweapons often overlap substantially with those used in legitimate fields, such as medical research, public health, and agriculture, complicating efforts to distinguish misuse from beneficial applications (Koblentz, 2009). AI is also likely to enable the design of novel sequences with pathogenic or toxic functions, challenging existing frameworks for detecting threats based on similarity to historical hazards (US National Science and Technology Council, 2024; U.S. HHS, 2023). Advances in laboratory automation and the self-replicating nature of biological agents further exacerbate these challenges, enabling rapid scale-up with minimal infrastructure. Additionally, the democratisation of synthetic biology tools, which are increasingly accessible to a global audience, reduces the technical and logistical barriers to bioweapon development (Lee et al., 2023). Together, these factors underscore the need for robust biosecurity frameworks to mitigate the dual-use risks of biotechnology (WHO, 2022a).

The role of AI in shaping the biorisk landscape

AI has emerged as a potentially transformative factor reshaping the scope and sophistication of biological threats (Carter et al., 2023; Sandbrink, 2023; Drexel and Withers, 2024). In protein design and bioengineering, AI-driven tools could streamline the creation of bioweapons by enabling the design of proteins with tailored properties, such as enhanced heat stability, solubility, or binding specificity—traits that could increase their efficacy as weapons (Drexel and Withers, 2024; Watson et al., 2023; Sumida et al., 2024). For instance, AI can be used to develop novel proteins capable of binding to specific targets (Zambaldi et al., 2024), a capability with potential applications for toxins and biologics in military contexts. Beyond biological design, AI-powered systems like chatbots can enhance logistical aspects of bioweapons
lower floating-point operations per second (FLOP) threshold compared to other AI applications, reflecting their heightened dual-use concerns. These initiatives mark significant progress in balancing the opportunities of AI-enabled biotechnology with the critical need for global biosecurity.

Major AI companies are taking proactive steps to assess and mitigate the risks associated with their technologies. Participation in collaborative platforms like the Frontier Model Forum (Frontier Model Forum, 2024c) and the AIxBio Global Forum (NTIbio, 2024), which bring together key stakeholders to develop guidelines and promote the safe deployment of tools, has become a critical way for AI companies to share learnings and best practices and to inform their internal policies (Frontier Model Forum, 2024a; Frontier Model Forum, 2024b). Companies like OpenAI (OpenAI, 2024), Meta (Dubey et al., 2024), Anthropic (Anthropic, 2024) and DeepMind (Grin et al., 2024) have also commissioned comprehensive safety evaluations for their models, often including red-teaming exercises designed to uncover vulnerabilities and identify potential misuse. However, these efforts demand careful oversight to prevent accidental disclosure of sensitive findings or unintended misuse. Some companies have placed their models behind an interface that allows them to control and monitor access, but others have released their models fully into the public domain, precluding the implementation of robust governance mechanisms. A new industry is forming around providing risk assessments and safety evaluations for AI and biotechnology applications. While best practices for this sector are still being developed, there is growing demand for experts in biotechnology to design robust safety protocols and capability benchmarks.

Academic and research community efforts

The academic community has increasingly recognised its responsibility to mitigate biosecurity risks while continuing to advance scientific discovery. One significant step has been the protein design community's issuance of a statement on the responsible use of AI in biodesign, which underscores the importance of ethical considerations in deploying these powerful tools (Responsible AI x Biodesign, 2024). In parallel, some research funders now require applicants to submit statements detailing how potential dual-use applications of their work, including bioweapons risks, will be mitigated (Wellcome, 2024). The Biofunders Compact has further encouraged bioscience and biotechnology funders to make public commitments to integrating biosecurity and biosafety into their funding decisions, promoting accountability and transparency (The Nuclear Threat Initiative. NTI, 2024b). Academic contributions to defensive measures have also been notable. Research into methods for detecting genetic engineering (Wang et al., 2019; Alley et al., 2020) and attributing the origins of engineered organisms (Wang et al., 2021) has advanced significantly. Progress in DNA synthesis screening technologies (Godbold et al., 2021; Wheeler et al., 2024) and microbial forensics (Inglis, 2024; Tripathi et al., 2024) further illustrates the research community's proactive role in addressing dual-use risks. The academic community faces particular challenges in securely sharing sensitive results on the safety and security of AI capabilities without the inappropriate proliferation of dual-use information. Collectively, these efforts exemplify how the academic community is balancing the imperative of innovation with the need to ensure global biosecurity.

While significant progress has been made in addressing the risks associated with AI in biotechnology, notable challenges remain. The demand for scientific expertise to support policymakers and international bodies, such as the Biological Weapons Convention (BWC) (The InterAcademy Partnership, 2024), continues to grow, placing pressure on the availability of knowledgeable advisors. Policymakers and the scientific community must also navigate the delicate balance between fostering innovation and implementing necessary regulations. Despite these challenges, the current landscape offers significant opportunities. Strengthening the biosecurity framework can foster interdisciplinary collaboration between AI developers, biologists, and policymakers, creating a more unified approach to managing dual-use risks.

Safeguards for mitigating risks: current state, opportunities and challenges

Safeguards to mitigate risks in AI and biotechnology are evolving, yet significant gaps persist. While a range of potential safeguards have been proposed, such as refusal mechanisms, tiered access controls, and enhanced monitoring systems (The Nuclear Threat Initiative. NTI, 2024a), their application in AI tools specific to biotechnology remains largely uncharted. Many of these tools operate in a dual-use space, where their capabilities can advance both beneficial applications, like therapeutic development, and potential misuse, such as bioweapon design, creating substantial challenges in mitigating risks of misuse while harnessing their benefits.

Data controls

Some model developers have taken proactive steps to withhold data they deem risky, such as certain pathogen genomes, from AI models. Examples include the exclusion of sensitive datasets in tools like ESM3 (Hayes et al., 2024) and Evo (Nguyen et al., 2024). However, the open nature of many AI models has allowed fine-tuning with restricted data, potentially undermining these precautions (PathoLM, 2024; Workman and LatchBio, 2024). Policy proposals to limit access to future pathogen genome data have also emerged (Carter et al., 2023; Maxmen, 2021), aiming to preempt misuse. These proposals are not entirely new (Committee on Genomics Databases for Bioterrorism Threat Agents and Board on Life Sciences, 2004), but have gained renewed urgency in the context of AI's rapid development and the increasing democratisation of biotechnology. While these proposals have garnered support in some policy circles, they have been met with criticism from experts who warn that restrictions could impede
scientific progress and global collaboration (Committee on Genomics Databases for Bioterrorism Threat Agents and Board on Life Sciences, 2004). The World Health Organization (WHO) emphasises that sharing pathogen genome data is crucial for preventing, detecting, and responding to epidemics and pandemics, as well as for monitoring endemic diseases and tracking antimicrobial resistance (WHO, 2022b). Moreover, some experts question whether excluding pathogen data from AI training would significantly limit the development of concerning capabilities, noting that design methodologies often rely on unrelated or widely accessible data (The Nuclear Threat Initiative. NTI, 2024a). This ongoing debate reflects the complex balance between biosecurity and the need for scientific openness.

Built-in safeguards in AI tools

Built-in safeguards are a critical component of risk mitigation strategies for general-purpose AI tools, designed to prevent misuse and ensure responsible deployment. These safeguards include mechanisms such as refusal systems that block harmful or unethical requests and hard-coded constraints to limit specific high-risk functionalities. However, the implementation and effectiveness of these measures vary significantly across AI domains. For large language models (LLMs), extensive safeguards have been adopted. Systems such as ChatGPT (OpenAI, 2024) and Gemini (Google Cloud, 2024) feature refusal mechanisms that prevent responses to potentially harmful queries, including instructions for creating weapons or performing unsafe chemical reactions. Yet even these safeguards can be stripped out if sufficiently open access is provided to the models, such as through the open release of model weights.

In contrast, the adoption of built-in safeguards for biotechnology-focused AI tools remains underdeveloped, despite their dual-use potential (The Nuclear Threat Initiative. NTI, 2024a). For example, AlphaFold3 was an early adopter of experimental refusal mechanisms to block misuse, but these efforts were preliminary and highlighted the challenges of balancing functionality with security (Grin et al., 2024). The appropriateness of implementing refusal mechanisms in biological design tools remains a complex and underexplored issue. Unlike in large language models, where refusals can effectively block queries with clear malicious intent, biological design often resides in a dual-use space. Many therapeutic and diagnostic efforts inherently involve predictions and designs that overlap with weaponisation potential. For instance, the development of treatments for infectious diseases may require modelling pathogenic structures or engineering highly specific proteins, which could also be repurposed for harmful applications (Thadani et al., 2023). This overlap makes it challenging to delineate legitimate use cases from misuse solely through automated refusal systems (The Nuclear Threat Initiative. NTI, 2024a). Moreover, refusal mechanisms, if overly restrictive, could impede critical research, such as designing countermeasures for bioterrorism or creating synthetic vaccines. Balancing the need for accuracy and functionality in therapeutic and diagnostic efforts with biosecurity safeguards will require a nuanced approach, combining automated mechanisms with human oversight and contextual analysis, which will inevitably raise difficult trade-offs in promoting beneficial science, preventing harm, and financing the resources required for safety measures. This highlights the importance of further investigation into refusal mechanisms tailored to the unique challenges of biological design tools.

Managed access frameworks

Open-weight models are highly valued by academic and research communities for their ability to foster transparency, reproducibility, and innovation. By allowing researchers to replicate studies, extend existing work, and democratise advanced technologies, these models have become indispensable tools in scientific progress. However, their open-access nature introduces significant risks of misuse, particularly in fields like biotechnology where dual-use applications can enable harmful purposes (The Nuclear Threat Initiative. NTI, 2024a).

Efforts to restrict access to open-weight models frequently encounter resistance from the academic and open-source communities. Advocates of open access argue that transparency is essential for scientific integrity, promoting collaboration, and accelerating innovation (Anonymous, 2024). This tension is further compounded by the rapid emergence of open-source versions of closed models, driven by demand for their functionality (Callaway, 2024). As a result, restricting access often proves challenging, as it may lead to the proliferation of unofficial and less-regulated alternatives.

Managed access frameworks offer a potential solution to these challenges by enabling the controlled distribution of AI tools while maintaining some degree of accessibility. Platforms like Together AI (2024) and Huggingface (2024) provide repositories and APIs that balance openness with accountability by requiring users to comply with ethical guidelines or community standards. Kaggle (2024), a popular platform for data science competitions, exemplifies another managed-access approach by providing datasets, models and computational resources in a controlled environment. These frameworks promote responsible use while still democratising AI tools. However, managed-access frameworks come with caveats. They would demand substantial resources for implementation and oversight, including providing cloud computing resources, monitoring usage, vetting users, and enforcing compliance. A key component of managed-access frameworks for biosecurity may involve Know Your Customer (KYC) processes, which are widely used in industries like finance to verify user identities. Applying KYC principles to AI access might include identity verification, institutional affiliation checks, and risk assessments of intended use. Such measures could leverage existing frameworks and technologies to enhance security while preserving accessibility. Using ORCIDs (Open Researcher and Contributor IDs) (ORCID, 2024) as part of a managed-access framework offers an efficient and scalable way to implement KYC principles in the life sciences. ORCIDs provide a persistent, unique identifier for researchers, allowing platforms to verify users' identities and affiliations while minimising administrative burdens. By integrating ORCIDs into access protocols, AI developers and data providers could ensure that only authenticated researchers with credible affiliations gain access to sensitive tools or datasets. This approach leverages an
existing, widely adopted system, reducing barriers to implementation while enhancing accountability and traceability in the use of dual-use technologies.

Without adequate resources, these managed access systems risk being bypassed or failing to address security concerns comprehensively, and even with sufficient resources, they cannot eliminate the possibility of an "insider threat" who passes the various vetting requirements but still has nefarious intent.

Evaluations and red teaming

Effective evaluations and red-teaming efforts are vital for advancing AI applications in biotechnology responsibly, yet the current landscape is hindered by inconsistent benchmarks, making it challenging to measure progress or compare capabilities across tools. Treaty commitments (UNODA, 2024) and ethical considerations often restrict direct assessments of harmful applications, leaving a critical gap in standardised testing protocols and evidence-based evaluations. Benign proxy tasks can shed light on capabilities of concern and the effectiveness of safeguards while avoiding the creation of harmful products (RAND, 2024). However, there is still ongoing debate about which proxy tasks are most effective for assessing risks related to biological weapons.

DiEuliis et al. (2024) highlight the need for robust and multifaceted approaches to evaluating biosecurity risks. Experts in biotechnology are needed to engage in red-teaming exercises, where participants simulate misuse scenarios to uncover vulnerabilities and inform mitigation strategies. They are also needed to feed into ongoing, dynamic assessments of factors like technological readiness, accessibility, and the expertise required for potential exploitation of a tool. A particularly pressing need exists for benchmarks to evaluate design capabilities. Reliable metrics for assessing the efficacy and safety of AI-driven design tools are scarce, leaving gaps in understanding how these systems might be leveraged for dual-use purposes and presenting opportunities for research and engagement from the biotechnology community.

Capture of design metadata

Capturing and standardising activity from DNA and protein sequence design tools offers a significant opportunity to enhance both the traceability and accountability of biological work. By cataloging the design process, including the steps taken and decisions made, researchers can create a transparent audit trail that not only strengthens biosecurity but also fosters trust and collaboration within the scientific community (The Nuclear Threat Initiative. NTI, 2024a). Such audit trails would be invaluable for DNA synthesis providers, enabling them to better assess the intent behind novel sequences submitted for synthesis. By standardising the capture of such metadata, the field could promote best practices in DNA and protein engineering, creating a foundation for collaborative innovation while mitigating risks of misuse. This approach aligns with efforts to balance the need for transparency and openness in research with the imperative of safeguarding against the dual-use potential of emerging biotechnologies.

Current limitations and pathways for enhancement

The dual-use nature of AI in biotechnology raises urgent and complex questions about the effectiveness and consequences of proposed safeguards. For instance, could refusal mechanisms effectively block harmful applications without obstructing critical research? Might managed access models balance the need for security with the imperative of fostering open collaboration? Furthermore, what are the logistical and financial implications of implementing these safeguards at scale, especially for smaller institutions or researchers in resource-constrained settings? Addressing these challenges will require a concerted effort involving rigorous testing of safeguard mechanisms, broad stakeholder engagement, and the creation of context-specific frameworks tailored to the unique risks and opportunities in biotechnology. Central to these efforts is the active participation of the biotechnology research community. Researchers must help to ground risk assessments in demonstrated and realistic future capabilities, ensuring that mitigation strategies are both effective at reducing risks and compatible with enabling beneficial research. By striking this balance, the community can help ensure that AI in biotechnology advances responsibly, maximising its potential for global good while minimising the risk of misuse.

Discussion

The dual-use nature of AI in biotechnology underscores the delicate balance between fostering innovation and implementing safeguards. Over-regulation risks stifling progress, particularly in areas like therapeutic discovery and synthetic biology, where access to advanced tools can drive breakthroughs. On the other hand, insufficient safeguards leave the door open to potential misuse, from bioweapon development to accidental creation of harmful agents. Striking this balance requires policies that are flexible enough to adapt to evolving technologies while robust enough to address emerging threats. A proactive, interdisciplinary approach is essential to address the challenges posed by AI in biotechnology. Engagement between AI researchers, bioengineers, and security experts can foster a deeper understanding of dual-use risks and enable the development of practical safeguards.

A proactive approach, modelled on the success of the Asilomar
understanding the design process, providers could more effectively Conference on recombinant DNA (Grace, 2015), can set the stage
evaluate potential risks and ensure compliance with biosecurity for responsible innovation in AI-powered biotechnology. Involving
standards. Additionally, these records could serve as an diverse stakeholders early in the conversation ensures that safety and
important resource for publishing the methods used in scientific ethical concerns are addressed without stifling progress. Unlike early
research, offering reproducibility and clarity in peer-reviewed genetic engineering efforts, current AI applications must prioritise
studies. Moreover, sharing standardised data on sequence design addressing dual-use risks, including potential misuse for
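The call above for benchmarks and reliable metrics for design tools can be made concrete with a small sketch of a benign proxy task. The metric below (sequence recovery against benign reference proteins) and every name in it are illustrative assumptions for exposition, not an established benchmark from the literature:

```python
# Illustrative sketch only: a benign proxy metric for design-tool capability.
# "benign_target_1" and the toy sequences are hypothetical placeholders.

def sequence_recovery(designed: str, reference: str) -> float:
    """Fraction of positions where a redesigned sequence matches the
    reference -- one commonly used benign proxy for design capability."""
    if not reference or len(designed) != len(reference):
        raise ValueError("sequences must be non-empty and equal length")
    matches = sum(a == b for a, b in zip(designed, reference))
    return matches / len(reference)

def benchmark(tool_outputs: dict[str, str], references: dict[str, str]) -> float:
    """Mean recovery across a panel of benign reference proteins."""
    scores = [sequence_recovery(tool_outputs[name], references[name])
              for name in references]
    return sum(scores) / len(scores)

refs = {"benign_target_1": "MKTAYIAKQR"}
outs = {"benign_target_1": "MKTAYIAKQA"}  # hypothetical model output
print(f"mean recovery: {benchmark(outs, refs):.2f}")  # prints "mean recovery: 0.90"
```

A real evaluation suite would combine many such proxy metrics with curated task panels and expert red-teaming rather than reducing capability to a single score.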
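The audit trail proposed under "Capture of design metadata" can be sketched as a tamper-evident log in which each design-step record includes a hash covering its predecessor. This is a minimal sketch of one possible structure, not a description of any existing tool; the tool names and event fields are hypothetical:

```python
import hashlib
import json
import time

def append_record(trail: list, event: dict) -> dict:
    """Append a design-step record whose hash covers the previous record,
    so retroactive edits to the trail become detectable."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    record = {
        "timestamp": time.time(),
        "event": event,        # e.g. tool name, parameters, sequence edits
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    trail.append(record)
    return record

def verify(trail: list) -> bool:
    """Recompute the hash chain and confirm no record was altered."""
    prev = "0" * 64
    for rec in trail:
        body = {k: rec[k] for k in ("timestamp", "event", "prev_hash")}
        payload = json.dumps(body, sort_keys=True).encode()
        if rec["prev_hash"] != prev or hashlib.sha256(payload).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

trail = []
append_record(trail, {"tool": "designer-v1", "step": "generate", "n_designs": 10})
append_record(trail, {"tool": "designer-v1", "step": "filter", "kept": 3})
print(verify(trail))                      # True
trail[0]["event"]["n_designs"] = 10_000   # tampering breaks the chain
print(verify(trail))                      # False
```

Because each record's hash covers the previous record, editing any earlier step invalidates every later hash, which is what would make such a trail useful to a synthesis provider auditing the history behind a submitted sequence.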
References
AI Policy Perspectives (2024). A new golden age of discovery. Available at: https://www.aipolicyperspectives.com/p/a-new-golden-age-of-discovery (Accessed November 30, 2024).

AI Safety Summit (2023). Capabilities and risks from frontier AI. United Kingdom: DSIT. Available at: https://assets.publishing.service.gov.uk/media/65395abae6c968000daa9b25/frontier-ai-capabilities-risks-report.pdf.

AISI (2024). The AI Safety Institute (AISI). Available at: https://www.aisi.gov.uk/ (Accessed November 30, 2024).

AISI Japan (2024). AISI Japan – AI Safety Institute. Available at: https://aisi.go.jp/ (Accessed November 30, 2024).

Alipanahi, B., Delong, A., Weirauch, M. T., and Frey, B. J. (2015). Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning. Nat. Biotechnol. 33 (8), 831–838. doi:10.1038/nbt.3300

Alley, E. C., Turpin, M., Liu, A. B., Kulp-McDowall, T., Swett, J., Edison, R., et al. (2020). A machine learning toolkit for genetic engineering attribution to facilitate biosecurity. Nat. Commun. 11 (1), 6293. doi:10.1038/s41467-020-19612-0

Anonymous (2024). AlphaFold3 – why did Nature publish it without its code? Nature 629 (8013), 728. doi:10.1038/d41586-024-01463-0

Anthropic (2024). A new initiative for developing third-party model evaluations. Available at: https://www.anthropic.com/news/a-new-initiative-for-developing-third-party-model-evaluations (Accessed November 30, 2024).

Baker, D., and Church, G. (2024). Protein design meets biosecurity. Science 383 (6681), 349. doi:10.1126/science.ado1671

Ben Ouagrham-Gormley, S. (2014). Barriers to bioweapons: the challenges of expertise and organization for weapons development. Ithaca, NY: Cornell University Press. Available at: https://academic.oup.com/cornell-scholarship-online/book/16600 (Accessed September 29, 2024).

Berg, F., and Kappler, S. (2024). "Future biological and chemical weapons," in Ciottone's disaster medicine (Elsevier), 520–530. doi:10.1016/B978-0-323-80932-0.00083-5

Bileschi, M. L., Belanger, D., Bryant, D. H., Sanderson, T., Carter, B., Sculley, D., et al. (2022). Using deep learning to annotate the protein universe. Nat. Biotechnol. 40 (6), 932–937. doi:10.1038/s41587-021-01179-w

Brent, R., Greg McKelvey, T., and Matheny, A. J. (2024). The new bioweapons: how synthetic biology could destabilize the world. Essays 103. Available at: https://www.foreignaffairs.com/world/new-bioweapons-covid-biology.

Callaway, E. (2024). Who will make AlphaFold3 open source? Scientists race to crack AI model. Nature 630 (8015), 14–15. doi:10.1038/d41586-024-01555-x

Carter, S., Wheeler, N. E., Chwalek, S., Isaac, C., and Yassif, J. M. (2023). The convergence of artificial intelligence and the life sciences. United States: NTI | bio.

Chatham House – International Affairs Think Tank (2024). Russian cyber and information warfare in practice. Available at: https://www.chathamhouse.org/2023/12/russian-cyber-and-information-warfare-practice/04-information-confrontation-human-effects (Accessed November 30, 2024).

Cheng, J., Novati, G., Pan, J., Bycroft, C., Žemgulytė, A., Applebaum, T., et al. (2023). Accurate proteome-wide missense variant effect prediction with AlphaMissense. Science 381 (6664), eadg7492. doi:10.1126/science.adg7492

CLTR (2024). The near-term impact of AI on biological misuse. Available at: https://www.longtermresilience.org/reports/the-near-term-impact-of-ai-on-biological-misuse/ (Accessed October 14, 2024).

Committee on Genomics Databases for Bioterrorism Threat Agents, Board on Life Sciences, Division on Earth and Life Studies, National Research Council, National Academy of Sciences (2004). Seeking security: pathogens, open access, and genome databases. Washington, DC: National Academies Press, 88. Available at: https://nap.nationalacademies.org/download/11087 (Accessed November 30, 2024).

Corso, G., Stärk, H., Jing, B., Barzilay, R., and Jaakkola, T. (2022). DiffDock: diffusion steps, twists, and turns for molecular docking. arXiv [q-bio.BM]. doi:10.48550/arXiv.2210.01776

Cropper, N. R., Rath, S., Teo, R. J. C., Warmbrod, K. L., and Lancaster, M. J. (2023). A modular-incremental approach to improving compliance verification with the biological weapons convention. Health Secur. 21 (5), 421–427. doi:10.1089/hs.2023.0078

DiEuliis, D., Imperiale, M. J., and Berger, K. M. (2024). Biosecurity assessments for emerging transdisciplinary biotechnologies: revisiting biodefense in an age of synthetic biology. Appl. Biosaf. 29 (3), 123–132. doi:10.1089/apb.2024.0005

Drexel, B., and Withers, C. (2024). AI and the evolution of biological national security risks: capabilities, thresholds, and interventions. Available at: https://s3.us-east-1.amazonaws.com/files.cnas.org/documents/AIBiologicalRisk_2024_Final.pdf.

Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., et al. (2024). The Llama 3 herd of models. arXiv [cs.AI]. doi:10.48550/arXiv.2407.21783

Frazer, J., Notin, P., Dias, M., Gomez, A., Min, J. K., Brock, K., et al. (2021). Disease variant prediction with deep generative models of evolutionary data. Nature 599 (7883), 91–95. doi:10.1038/s41586-021-04043-8

Frontier Model Forum (2024a). Issue brief: early best practices for frontier AI safety evaluations. Available at: https://www.frontiermodelforum.org/updates/early-best-practices-for-frontier-ai-safety-evaluations/ (Accessed November 30, 2024).

Frontier Model Forum (2024b). Issue brief: foundational security practices. Available at: https://www.frontiermodelforum.org/updates/issue-brief-foundational-security-practices/ (Accessed November 30, 2024).

Frontier Model Forum (2024c). Progress update: advancing frontier AI safety in 2024 and beyond. Available at: https://www.frontiermodelforum.org/updates/progress-update-advancing-frontier-ai-safety-in-2024-and-beyond/ (Accessed October 15, 2024).

Godbold, G. D., Kappell, A. D., LeSassier, D. S., Treangen, T. J., and Ternus, K. L. (2021). Categorizing sequences of concern by function to better assess mechanisms of microbial pathogenesis. Infect. Immun. 15, e0033421. doi:10.1128/IAI.00334-21

Google Cloud (2024). Gemini for Google Cloud and responsible AI. Available at: https://cloud.google.com/gemini/docs/discover/responsible-ai (Accessed November 30, 2024).

GOV.UK (2023). About the AI safety summit 2023. Available at: https://www.gov.uk/government/topical-events/ai-safety-summit-2023/about (Accessed November 30, 2024).

GOV.UK (2024a). New commitment to deepen work on severe AI risks concludes AI Seoul Summit. Department for Science, Innovation and Technology. Available at: https://www.gov.uk/government/news/new-commitment-to-deepen-work-on-severe-ai-risks-concludes-ai-seoul-summit (Accessed November 30, 2024).

GOV.UK (2024b). UK screening guidance on synthetic nucleic acids for users and providers. Available at: https://www.gov.uk/government/publications/uk-screening-guidance-on-synthetic-nucleic-acids/uk-screening-guidance-on-synthetic-nucleic-acids-for-users-and-providers (Accessed November 30, 2024).

Grace, K. (2015). The Asilomar conference: a case study in risk mitigation. Berkeley, CA: Machine Intelligence Research Institute. Available at: https://intelligence.org/files/TheAsilomarConference.pdf.

Grin, C., Howard, H., Paterson, A., Swanson, N., Bloxwich, D., Jumper, J., et al. (2024). Our approach to biosecurity for AlphaFold 3. Available at: https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/alphafold-3-predicts-the-structure-and-interactions-of-all-lifes-molecules/Our-approach-to-biosecurity-for-AlphaFold-3-08052024.

HamediRad, M., Chao, R., Weisberg, S., Lian, J., Sinha, S., and Zhao, H. (2019). Towards a fully automated algorithm driven platform for biosystems design. Nat. Commun. 10 (1), 5150. doi:10.1038/s41467-019-13189-z

Hayes, T., Rao, R., Akin, H., Sofroniew, N. J., Oktay, D., Lin, Z., et al. (2024). Simulating 500 million years of evolution with a language model. bioRxiv. doi:10.1101/2024.07.01.600583

Huggingface (2024). The AI community building the future. Available at: https://huggingface.co/ (Accessed November 30, 2024).

Ifargan, T., Hafner, L., Kern, M., Alcalay, O., and Kishony, R. (2024). Autonomous LLM-driven research from data to human-verifiable research papers. arXiv [q-bio.OT]. doi:10.48550/arXiv.2404.17605

Inglis, T. J. J. (2024). A systematic approach to microbial forensics. J. Med. Microbiol. 73 (2), 001802. doi:10.1099/jmm.0.001802

Ingraham, J. B., Baranov, M., Costello, Z., Barber, K. W., Wang, W., Ismail, A., et al. (2023). Illuminating protein space with a programmable generative model. Nature 623 (7989), 1070–1078. doi:10.1038/s41586-023-06728-8

Juling, D. (2023). Future bioterror and biowarfare threats for NATO's armed forces until 2030. J. Adv. Mil. Stud. 14 (1), 118–143. doi:10.21140/mcuj.20231401005

Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., et al. (2021). Highly accurate protein structure prediction with AlphaFold. Nature 596 (7873), 583–589. doi:10.1038/s41586-021-03819-2

Kaggle (2024). Your machine learning and data science community. Available at: https://www.kaggle.com/ (Accessed November 30, 2024).

Koblentz, G. D. (2009). Living weapons: biological warfare and international security. Ithaca, NY: Cornell University Press. Available at: https://www.jstor.org/stable/10.7591/j.ctt7z9s0 (Accessed November 30, 2024).

Krishna, R., Wang, J., Ahern, W., Sturmfels, P., Venkatesh, P., Kalvet, I., et al. (2024). Generalized biomolecular modeling and design with RoseTTAFold All-Atom. Science 384 (6693), eadl2528. doi:10.1126/science.adl2528

Lee, D. H., Kim, H., Sung, B. H., Cho, B. K., and Lee, S. G. (2023). Biofoundries: bridging automation and biomanufacturing in synthetic biology. Biotechnol. Bioprocess Eng. 28 (6), 892–904. doi:10.1007/s12257-023-0226-x

Maxmen, A. (2021). Why some researchers oppose unrestricted sharing of coronavirus genome data. Nature 593 (7858), 176–177. doi:10.1038/d41586-021-01194-6

Mouton, C. A., Lucas, C., and Guest, E. (2024). The operational risks of AI in large-scale biological attacks: a red-team approach. United States: RAND. Available at: https://www.rand.org/content/dam/rand/pubs/research_reports/RRA2900/RRA2977-2/RAND_RRA2977-2.pdf.

National Research Council (US) (2007). "Committee on a new government-university partnership for science, security. Biosecurity and dual-use research in the life sciences," in Science and security in a post 9/11 world: a report based on regional discussions between the science and security communities (Washington, DC: National Academies Press (US)). Available at: https://www.ncbi.nlm.nih.gov/books/NBK11496/ (Accessed October 21, 2024).

Nature (2024). AI pioneers win 2024 Nobel prizes. Nat. Mach. Intell. 6 (11), 1271. Available at: https://www.nature.com/articles/s42256-024-00945-0 (Accessed November 30, 2024).

Nguyen, E., Poli, M., Durrant, M. G., Thomas, A. W., Kang, B., Sullivan, J., et al. (2024). Sequence modeling and design from molecular to genome scale with Evo. bioRxiv. doi:10.1101/2024.02.27.582234

NIST (2023). U.S. Artificial Intelligence Safety Institute. United States: NIST. Available at: https://www.nist.gov/aisi (Accessed November 30, 2024).

NTIbio (2024). AIxBio Global Forum structure and goals. Available at: https://www.nti.org/wp-content/uploads/2024/07/AI_Bio-Global-Forum-Structure-and-Goals_White-Paper.pdf.

OECD (2023). Artificial intelligence in science: challenges, opportunities and the future of research. Paris: OECD. Available at: https://www.oecd-ilibrary.org/science-and-technology/artificial-intelligence-in-science_a8d820bd-en (Accessed January 6, 2024).

OpenAI (2024). OpenAI o1 system card. Available at: https://assets.ctfassets.net/kftzwdyauwt9/67qJD51Aur3eIc96iOfeOP/71551c3d223cd97e591aa89567306912/o1_system_card.pdf.

ORCID (2024). ORCID's Global Participation Fund. Available at: https://orcid.org/ (Accessed October 21, 2024).

Pannu, J., Bloomfield, D., Zhu, A., MacKnight, R., Gomes, G., Cicero, A., et al. (2024). Prioritizing high-consequence biological capabilities in evaluations of artificial intelligence models. arXiv [cs.CY]. doi:10.48550/arXiv.2407.13059

PathoLM (2024). Identifying pathogenicity from the DNA sequence through the genome foundation model. doi:10.48550/arXiv.2406.13133

Phuong, M., Aitchison, M., Catt, E., Cogan, S., Kaskasoli, A., Krakovna, V., et al. (2024). Evaluating frontier models for dangerous capabilities. arXiv [cs.LG]. doi:10.48550/arXiv.2403.13793

Poplin, R., Chang, P. C., Alexander, D., Schwartz, S., Colthurst, T., Ku, A., et al. (2018). A universal SNP and small-indel variant caller using deep neural networks. Nat. Biotechnol. 36 (10), 983–987. doi:10.1038/nbt.4235

RAND (2024). On the responsible development and use of chem-bio AI models. Available at: https://www.rand.org/content/dam/rand/pubs/perspectives/PEA3600/PEA3674-1/RAND_PEA3674-1.pdf.

Rasko, D. A., Worsham, P. L., Abshire, T. G., Stanley, S. T., Bannan, J. D., Wilson, M. R., et al. (2011). Bacillus anthracis comparative genome analysis in support of the Amerithrax investigation. Proc. Natl. Acad. Sci. U. S. A. 108 (12), 5027–5032. doi:10.1073/pnas.1016657108

Responsible AI x Biodesign (2024). Community values, guiding principles, and commitments for the responsible development of AI for protein design. Available at: https://responsiblebiodesign.ai/ (Accessed September 29, 2024).

Revill, J., and Jefferson, C. (2014). Tacit knowledge and the biological weapons regime. Sci. Public Policy 41 (5), 597–610. doi:10.1093/scipol/sct090

Sandbrink, J. B. (2023). Artificial intelligence and biological misuse: differentiating risks of language models and biological design tools. arXiv [cs.CY]. doi:10.48550/arXiv.2306.13952

Shaping Europe's digital future (2024). European AI Office. Available at: https://digital-strategy.ec.europa.eu/en/policies/ai-office#ecl-inpage-tasks-of-the-ai-office (Accessed November 30, 2024).

Smith, H. (2019). Countering hybrid threats. Democracy 95 (2), 255–77.

Soice, E. H., Rocha, R., Cordova, K., Specter, M., and Esvelt, K. M. (2023). Can large language models democratize access to dual-use biotechnology? arXiv [cs.CY]. doi:10.48550/arXiv.2306.03809

Sparkes, A., King, R. D., Aubrey, W., Benway, M., Byrne, E., Clare, A., et al. (2010). An integrated laboratory robotic system for autonomous discovery of gene function. J. Lab. Autom. 15 (1), 33–40. doi:10.1016/j.jala.2009.10.001

Sumida, K. H., Núñez-Franco, R., Kalvet, I., Pellock, S. J., Wicky, B. I. M., Milles, L. F., et al. (2024). Improving protein expression, stability, and function with ProteinMPNN. J. Am. Chem. Soc. 146 (3), 2054–2061. doi:10.1021/jacs.3c10941

Thadani, N. N., Gurev, S., Notin, P., Youssef, N., Rollins, N. J., Ritter, D., et al. (2023). Learning from prepandemic data to forecast viral escape. Nature 622 (7984), 818–825. doi:10.1038/s41586-023-06617-0

The InterAcademy Partnership (IAP) (2024). Proof of concept meeting on a BWC scientific advisory body: procedural report. Available at: https://www.interacademies.org/publication/bwc-proof-concept-procedural-report (Accessed November 30, 2024).

The Nuclear Threat Initiative (NTI) (2024a). Developing guardrails for AI biodesign tools. Available at: https://www.nti.org/analysis/articles/developing-guardrails-for-ai-biodesign-tools/ (Accessed November 30, 2024).

The Nuclear Threat Initiative (NTI) (2024b). International Bio Funders Compact. Available at: https://www.nti.org/about/programs-projects/project/bio-funders-compact/ (Accessed November 30, 2024).

The White House (2023). Executive order on the safe, secure, and trustworthy development and use of artificial intelligence. Available at: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/ (Accessed November 24, 2023).

Together AI (2024). Together AI – the AI acceleration cloud: fast inference, fine-tuning and training. Available at: https://www.together.ai/ (Accessed November 30, 2024).

Török, T. J., Tauxe, R. V., Wise, R. P., Livengood, J. R., Sokolow, R., Mauvais, S., et al. (1997). A large community outbreak of salmonellosis caused by intentional contamination of restaurant salad bars. JAMA 278 (5), 389. doi:10.1001/jama.1997.03550050051033

Tripathi, P., Render, R., Nidhi, S., and Tripathi, V. (2024). Microbial genomics: a potential toolkit for forensic investigations. Forensic Sci. Med. Pathol. doi:10.1007/s12024-024-00830-7

UNODA (2024). Biological weapons – UNODA. Available at: https://disarmament.unoda.org/biological-weapons/ (Accessed November 30, 2024).

U.S. Department of Homeland Security (2024). Fact sheet and report: DHS advances efforts to reduce the risks at the intersection of artificial intelligence and chemical, biological, radiological, and nuclear (CBRN) threats. Available at: https://www.dhs.gov/publication/fact-sheet-and-report-dhs-advances-efforts-reduce-risks-intersection-artificial (Accessed November 30, 2024).

U.S. HHS (2023). Screening framework guidance for providers and users of synthetic nucleic acids. Available at: https://aspr.hhs.gov/legal/synna/Documents/SynNA-Guidance-2023.pdf (Accessed November 24, 2023).

US National Science and Technology Council (2024). Framework for nucleic acid synthesis screening. Available at: https://www.whitehouse.gov/wp-content/uploads/2024/04/Nucleic-Acid_Synthesis_Screening_Framework.pdf.

Wang, Q., Elworth, R. A. L., Liu, T. R., and Treangen, T. J. (2019). Faster pan-genome construction for efficient differentiation of naturally occurring and engineered plasmids with plaster. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.WABI.2019.19

Wang, Q., Kille, B., Liu, T. R., Elworth, R. A. L., and Treangen, T. J. (2021). PlasmidHawk improves lab of origin prediction of engineered plasmids using sequence alignment. Nat. Commun. 12 (1), 1167. doi:10.1038/s41467-021-21180-w

Watson, J. L., Juergens, D., Bennett, N. R., Trippe, B. L., Yim, J., Eisenach, H. E., et al. (2023). De novo design of protein structure and function with RFdiffusion. Nature 620 (7976), 1089–1100. doi:10.1038/s41586-023-06415-8

Wellcome (2024). Managing risks of research misuse. Available at: https://wellcome.org/grant-funding/guidance/policies-grant-conditions/managing-risks-research-misuse (Accessed November 30, 2024).

Wheeler, N. E., Bartling, C., Carter, S. R., Clore, A., Diggans, J., Flyangolts, K., et al. (2024). Progress and prospects for a nucleic acid screening test set. Appl. Biosaf. 29, 133–141. doi:10.1089/apb.2023.0033

WHO (2022a). Global guidance framework for the responsible use of the life sciences: mitigating biorisks and governing dual-use research. Geneva: WHO. Available at: https://play.google.com/store/books/details?id=vUiKEAAAQBAJ.

WHO (2022b). WHO guiding principles for pathogen genome data sharing. Geneva: World Health Organization. Available at: https://www.who.int/publications/i/item/9789240061743 (Accessed November 30, 2024).

Williams, K., Bilsland, E., Sparkes, A., Aubrey, W., Young, M., Soldatova, L. N., et al. (2015). Cheaper faster drug development validated by the repositioning of drugs against neglected tropical diseases. J. R. Soc. Interface 12 (104), 20141289. doi:10.1098/rsif.2014.1289

Workman, K., and LatchBio (2024). Engineering AAVs with Evo and AlphaFold. Available at: https://blog.latch.bio/p/engineering-aavs-with-evo-and-alphafold (Accessed October 14, 2024).

Zambaldi, V., La, D., Chu, A. E., Patani, H., Danson, A. E., Kwan, T. O. C., et al. (2024). De novo design of high-affinity protein binders with AlphaProteo. arXiv [q-bio.BM]. doi:10.48550/arXiv.2409.08022