Artificial Intelligence
NIH promotes the safe and responsible use of AI in biomedical research through programs that support the development and use of algorithms and models for research, contribute to AI-ready datasets that accelerate discovery, and encourage multi-disciplinary partnerships that drive transparency, privacy, and equity.
Artificial Intelligence in Research: Policy Considerations and Guidance
Advancements in artificial intelligence (AI) are spurring tremendous progress in medical research to enhance human health and longevity. To that end, NIH has a robust system of policies and practices that guide stakeholders across the biomedical and behavioral research ecosystem. While AI may not be explicitly mentioned, NIH’s policy framework is designed to responsibly guide and govern advancing science and emerging technologies, including development and use of AI technologies in research.
The policies, best practices, and regulations listed below reflect this framework and should be considered before, during, and after development and use of AI in research. This is not an exhaustive list of all policies and requirements that may apply to any NIH-supported research project but can serve as a guide for the research community.
Please note: Unauthorized data disclosures violate several of the policies listed below. Investigators should be aware that research data used as input to or as training data for AI systems could be unintentionally disclosed if that data is sent to an AI provider external to NIH.
Research Participant Protections
The following establish expectations and best practices for protecting the welfare, privacy, and autonomy of research participants. The ethical considerations embedded in these policies, regulations, and best practices (e.g., privacy) address key issues relevant to the development and use of AI in research. In adhering to them, investigators can mitigate potential harms and inequities arising from the use and development of AI.
Protection of Human Subjects (45 CFR 46): Outlines basic provisions for Institutional Review Boards, informed consent, and assurance of compliance for NIH-supported research involving human participants and their data, including considerations of risks and benefits.
For clinical investigations that are also regulated by the Food and Drug Administration, see:
21 CFR 50 Protection of Human Subjects
21 CFR 56 Institutional Review Boards
Certificates of Confidentiality: Prohibits the disclosure of identifiable, sensitive research information to anyone not connected to the research except when the participant consents or in a few other specific situations.
NIH Information about Protecting Privacy When Sharing Human Research Participant Data: Provides a set of principles and best practices for protecting the privacy of human research participants when sharing data in NIH-supported research. (Issued under the NIH Data Management and Sharing policy.)
NIH Informed Consent for Secondary Research with Data and Biospecimens: Provides points to consider, instructions for use, and optional sample language that is designed for informed consent documents for research studies that include plans to store and share collected data and biospecimens for future use.
Data Management and Sharing
The following seek to maximize the responsible management and sharing of scientific data while ensuring that researchers consider how the privacy, rights, and confidentiality of human research participants will be protected. Increasing the availability of data through data sharing allows for more accurate development and use of AI models. These policies help ensure that investigators remain good stewards of data used in or produced by AI models.
NIH Data Management & Sharing (DMS) Policy: Establishes the requirement to submit a DMS Plan and comply with NIH-approved plans. In addition, NIH Institutes, Centers, and Offices can request additional or specific information be included within the plan to support programmatic priorities or to expand the utility of the scientific data generated from the research. Also see DMS Policy Frequently Asked Questions.
Responsible Management and Sharing of American Indian/Alaska Native (AI/AN) Participant Data: Describes considerations and best practices for the responsible and respectful management and sharing of AI/AN participant data under the DMS Policy.
NIH Genomic Data Sharing Policy: Promotes and facilitates responsible sharing of large-scale genomic data generated with NIH funds. Also see Genomic Data Sharing Frequently Asked Questions.
Health Information Privacy
The Health Insurance Portability and Accountability Act (HIPAA) helps protect the privacy and security of health data used in research, including research involving AI, thereby fostering trust in healthcare research activities.
HIPAA Privacy Rule: Establishes the conditions under which protected health information may be used or disclosed by covered entities for research purposes.
Licensing, Intellectual Property, & Technology Transfer
The following establish guidance, expectations, and best practices related to intellectual property and software sharing. They complement NIH’s data sharing initiatives, delineate investigator rights under the SBIR and STTR programs, and provide USPTO guidance on AI-related inventions. While many are not specific to AI, the policies and programs below are relevant to investigators who have developed software and source code under NIH research grants or who intend to commercialize their NIH-supported research products, including those related to development and use of AI.
NIH Best Practices for Sharing Research Software: Best practices for sharing research software and source code in a free and open format.
NIH Small Business Innovation Research (SBIR) & Small Business Technology Transfer (STTR): Unique policies and approaches may apply in the context of NIH's SBIR and STTR programs. For example, recipients may retain the rights to data generated during the performance of an SBIR or STTR award.
NIH Research Tools Policy: NIH expects funding recipients to appropriately disseminate and allow open access to research tools developed with NIH funding.
US Patent and Trademark Office information about AI: Provides AI-related patent resources and important information concerning AI IP policy.
Peer Review
The following clarifies NIH’s stance on the use of generative AI tools during peer review.
NOT-OD-23-149: Informs the extramural community that the NIH prohibits NIH scientific peer reviewers from using natural language processors, large language models, or other generative AI technologies for analyzing and formulating peer review critiques for grant applications and R&D contract proposals. Also see Open Mike blog on Using AI in Peer Review Is a Breach of Confidentiality.
Biosecurity and Biosafety
The following establish and are part of a comprehensive biosecurity and biosafety oversight system. Research funded by NIH, including research using tools and technologies enabled or informed by AI, falls under this oversight framework. While some of these policies do not explicitly address AI, they still apply to the development and use of AI in research involving biological agents, toxins, or nucleic acid molecules if such research involves physical experiments covered under these policies.
United States Government Policy for Oversight of Life Sciences Dual Use Research of Concern: Describes practices and procedures to ensure that dual use research of concern (DURC) is identified at the institutional level and risk mitigation measures are implemented as necessary for U.S. Government-funded research. DURC is “life sciences research that, based on current understanding, can be reasonably anticipated to provide knowledge, information, products, or technologies that could be directly misapplied to pose a significant threat with broad potential consequences to public health and safety, agricultural crops and other plants, animals, the environment, materiel, or national security.” The United States Government Policy for Institutional Oversight of Life Sciences Dual Use Research of Concern complements the aforementioned policy and addresses institutional oversight of DURC, which includes policies, practices, and procedures to ensure DURC is identified and risk mitigation measures are implemented, where applicable.
HHS Framework for Guiding Funding Decisions about Proposed Research Involving Enhanced Potential Pandemic Pathogens (HHS P3CO Framework): Guides Department of Health and Human Services funding decisions on individual proposed research that is reasonably anticipated to create, transfer, or use enhanced potential pandemic pathogens (ePPP). ePPP research is research that “may be reasonably anticipated to create, transfer or use potential pandemic pathogens resulting from the enhancement of a pathogen’s transmissibility and/or virulence in humans.” The HHS P3CO Framework is responsive to and in accordance with the Recommended Policy Guidance for Departmental Development of Review Mechanisms for Potential Pandemic Pathogen Care and Oversight issued in 2017 by the White House Office of Science and Technology Policy.
United States Government Policy for Oversight of Dual Use Research of Concern and Pathogens with Enhanced Pandemic Potential: On May 6, 2024, the White House Office of Science and Technology Policy released this new policy along with associated Implementation Guidance. This will supersede the DURC and P3CO policy frameworks on May 6, 2025. It provides a unified federal oversight framework for conducting and managing certain types of federally funded life sciences research on biological agents and toxins that have the potential to pose risks to public health, agriculture, food security, economic security, or national security. The policy “encourages institutional oversight of in silico research, regardless of funding source, that could result in the development of potential dual-use computational models directly enabling the design of a [pathogen with enhanced pandemic potential] or a novel biological agent or toxin.”
NIH Guidelines for Research Involving Recombinant or Synthetic Nucleic Acid Molecules: Establish safety practices and containment procedures for institutions that receive NIH funding for “basic and clinical research involving recombinant or synthetic nucleic acid molecules, including the creation and use of organisms and viruses containing recombinant or synthetic nucleic acid molecules.”
Resources
- Use of Generative AI in Peer Review FAQs (NIH Office of Extramural Research)
- NIH Office of Data Science Strategy
- US Department of Health and Human Services Artificial Intelligence Use Cases Inventory
- Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
- PCAST Report to the President – Supercharging Research: Harnessing Artificial Intelligence to Meet Global Challenges
- NIH STRIDES Initiative
For regulatory questions related to AI, see: