Open Loop’s first policy prototyping program in the United States

Meta’s Open Loop program is excited to have launched its first policy prototyping research program in the United States, focused on testing the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) 1.0 to ensure that it is clear, implementable, and effective at helping companies identify and manage risks arising from generative AI.

This program gives participating companies the opportunity to learn about NIST's AI RMF, and to understand how it can be applied to managing risks associated with developing and deploying generative AI systems. At the same time, the program will gather evidence on current practices and provide valuable insights and feedback to NIST, which can inform future iterations of the RMF.

Read the first phase report now!

This report presents the findings and recommendations from the first phase of the Open Loop US program on Generative AI Risk Management, launched in November 2023 in partnership with Accenture. The first phase, which ran from January to April 2024 and involved 40 companies, focused on two topics that are key to generative AI risk management and of particular interest to NIST: AI red-teaming and synthetic content risk mitigation.

Through desk research, interviews, surveys and workshops, we investigated:

  • how companies currently approach or plan AI red-teaming and/or synthetic content risk management efforts.
  • the key challenges to efficient and successful implementation of AI red-teaming and/or synthetic content risk management.
  • how the NIST AI RMF can be leveraged to resolve those challenges, enhance efficiencies, and support cross-value-chain collaboration.

Main findings and recommendations

The first phase of the Open Loop US program found that companies acknowledge the importance of risk management for emerging generative AI technologies and prioritize red-teaming and synthetic content risk practices to build trust, comply with regulations, and manage these risks. Challenges remain, however, especially for smaller companies, which often have limited resources and need more support to understand and address risks. Specifically, they need clearer guidance from NIST on risk categories, mitigation techniques, and evaluation methods. Open-source tools and collaboration platforms are also seen as crucial to establishing and maintaining best practices. Finally, while beyond NIST's scope, workforce training and budget limitations are key factors in successful AI risk management and should be taken into consideration when formulating practical guidance.

The program's findings resulted in several notable recommendations, including:


Understanding generative AI risks and AI actors

Develop a taxonomy of generative AI risks and harms that takes into account risks specific to certain sectors.

Define the different roles and possible risk management activities across the generative AI value chain.

Continue leveraging and driving the development of existing taxonomies and terminology (e.g., the EU-US TTC).


Flexible, interoperable guidance

Provide flexible, modularized guidance to AI actors on AI red teaming and synthetic content authentication within a broader risk management system.

Provide guidance on prioritizing risks and on communicating about risk management (governance artifacts), which is especially valuable for smaller, less-resourced companies. Categorizing systems and defining the scope of (bounded) assessments supports prioritization.


A community of practice and knowledge sharing

Support and enable the creation and sustainability of communities of practice and dedicated peer-to-peer exchange fora.

Source and produce case studies on an ongoing basis.

Leverage existing frameworks and initiatives.

Support open-source tooling and recommend a taxonomy of tools (e.g., the OECD Tools Catalogue).

Open Loop Program structure

Leveraging the policy prototyping methodology developed by Open Loop, the program incorporates both qualitative and quantitative testing methods across two phases:


  • Phase 1: Kick-off + capacity building session; Synthetic Content Risk Workshop; Generative AI Red Teaming Workshop
  • Phase 2: Survey + interviews on generative AI governance and adoption of the NIST AI RMF

Open Loop will produce two policy reports containing initial recommendations for product teams and policymakers, aimed at supporting effective AI governance and risk management measures.

Program Objectives

Leverage collaborative policy prototyping (testing proposed, hypothetical or real policy guidance within a structured program) methodologies to enable cohort members to apply and provide feedback on the NIST AI RMF in order to inform its future iterations as a practical tool for managing AI-related risks.

Inform the practical application of the NIST AI RMF among a diverse group of developers and users of Generative AI products and services, by unlocking insights, showcasing best practices and lessons learned, and by pinpointing gaps and opportunities.

Facilitate exchanges of ideas and solutions among AI companies, experts, and policymakers to drive the evolution of responsible and accountable AI practice.

The participating companies

AI companies across various sectors joined the program, including AI startups, AI risk and assurance companies, and established multinational enterprises. Individual participants represented a diverse range of expertise, from senior-level decision-makers to individuals involved in the operational aspects of safety, compliance, and technology development.

As a participant, you will:

Provide feedback on how the NIST AI RMF helps companies integrate trustworthiness considerations into the development, deployment, and assessment of AI products, services, and systems, to better manage risks to individuals, organizations, and society associated with artificial intelligence.

Gain insights from peers, industry leaders, and policy experts during a series of collaborative discussions and deep dive workshops on specific AI topics.

Inform potential future iterations of the NIST AI RMF to foster innovative policies and effective implementation, considering the evolving AI landscape.

Showcase your company’s approaches to AI governance and share experiences in developing and implementing the AI RMF and/or other risk management frameworks.

Network with a vibrant community of responsible AI practitioners, thought leaders and AI companies.

Influence future approaches to (international) AI policy beyond the United States.

Expression of interest

To express your interest and learn more about getting involved as a participating company or expert in future Open Loop programs, please fill out the short application form below. 

By filling in your details you consent to receiving email updates about the Open Loop US program.


About Open Loop

Meta’s Open Loop is a global program that connects policymakers and technology companies to help develop effective and evidence-based policies around AI and other emerging technologies. 

Through experimental governance methods, Meta’s Open Loop members co-create policy prototypes and test new or existing approaches to policy, guidance frameworks, regulations, and laws. These multi-stakeholder efforts improve the quality of rulemaking processes by ensuring that new guidance and regulation aimed at emerging technology are effective and implementable.

Open Loop has been running theme-specific programs to operationalize trustworthy AI across multiple verticals, such as Transparency and Explainability in Singapore and Mexico, and Human-Centered AI with an emphasis on stakeholder engagement in India. Beyond AI, we are also testing a playbook to promote the adoption of Privacy Enhancing Technologies in Brazil and Uruguay.


Be a part of Open Loop’s first US program