
OPEN LOOP

LAUNCHING IN 2023

Open Loop’s first policy prototyping program in the United States

Meta’s Open Loop program is excited to have launched its first policy prototyping research program in the United States, which is focused on testing the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) 1.0 to ensure that it is clear, implementable and effective at helping companies to identify and manage risks arising from generative AI.

This program gives participating companies the opportunity to learn about NIST's AI RMF, and to understand how it can be applied to managing risks associated with developing and deploying generative AI systems. At the same time, the program will gather evidence on current practices and provide valuable insights and feedback to NIST, which can inform future iterations of the RMF.

Open Loop program structure

Leveraging the policy prototyping methodology developed by Open Loop, the program incorporates both qualitative and quantitative testing methods across two phases:

PHASE 1

Kick-off + capacity building session
Generative AI Transparency Workshop
Generative AI Safety Workshop

PHASE 2

Survey + interviews
Generative AI governance
Adoption of the NIST AI RMF

Program Objectives

The Open Loop US program aims to:

Leverage collaborative policy prototyping methodologies (testing proposed, hypothetical, or real policy guidance within a structured program) to enable cohort members to apply the NIST AI RMF and provide feedback that informs its future iterations as a practical tool for managing AI-related risks.

Inform the practical application of the NIST AI RMF among a diverse group of developers and users of Generative AI products and services by surfacing insights, showcasing best practices and lessons learned, and pinpointing gaps and opportunities.

Facilitate exchanges of ideas and solutions among AI companies, experts, and policymakers to drive the evolution of responsible and accountable AI practices.

Call for AI companies to join the program

AI companies across various sectors are encouraged to join the program. If your company is operating in the United States and developing or deploying Generative AI solutions, or if you’re considering doing so, you're an ideal candidate for participation. Join the program and stand alongside your peers in demonstrating your commitment to creating safe and trustworthy generative AI systems and products!

As a participant, you will:

Provide feedback on how the NIST AI RMF helps companies integrate trustworthiness considerations into the development, deployment, and assessment of AI products, services, and systems, to better manage risks to individuals, organizations, and society associated with artificial intelligence.

Gain insights from peers, industry leaders, and policy experts during a series of collaborative discussions and deep dive workshops on specific AI topics.

Inform potential future iterations of the NIST AI RMF to foster innovative policies and effective implementation in light of the evolving AI landscape.

Showcase your company’s approaches to AI governance and share experiences in developing and implementing the AI RMF and/or other risk management frameworks.

Network with a vibrant community of responsible AI practitioners, thought leaders and AI companies.

Expression of interest

We invite AI companies based and operating in the US to join the Open Loop US Program. By participating, you become part of a select cohort of AI leaders committed to co-creating and improving emerging AI policy solutions. You will work with industry leaders, experts, and policymakers to drive evidence-based, inclusive, and innovative governance models. 

To express your interest and learn more about getting involved as a participating company or observer, please fill out the short application form below. We will follow up over email shortly.

The first phase will commence in early 2024. We will release an interim report with preliminary findings. A final report with key learnings and recommendations arising from the program will be published at the end of the program.

Please select your preferred role as part of the Open Loop US program consortium (participating company or observer). By filling in your details and selecting a role, you consent to receiving email updates about the Open Loop US program.

About Open Loop

Meta’s Open Loop is a global program that connects policymakers and technology companies to help develop effective and evidence-based policies around AI and other emerging technologies. 

Through experimental governance methods, Meta’s Open Loop members co-create policy prototypes and test new or existing approaches to policy, guidance frameworks, regulations, and laws. These multi-stakeholder efforts improve the quality of rulemaking processes by ensuring that new guidance and regulation aimed at emerging technology are effective and implementable.

Open Loop has been running theme-specific programs to operationalize trustworthy AI across multiple verticals, such as Transparency and Explainability in Singapore and Mexico, and Human-Centered AI with an emphasis on stakeholder engagement in India. Beyond AI, we are also testing a playbook to promote the adoption of Privacy-Enhancing Technologies in Brazil and Uruguay.

Frequently asked questions

What does the program involve? 

The program consists of online meetings and surveys over the course of Q1 and Q2 2024. If you would like to join the program as a participating company, we ask you to commit to attending at least one online “deep dive” workshop session in Q1 and to completing a multi-part online survey in Q2. This equates to approximately 1-2 hours of online work per month for two AI leaders within your organization who drive and implement Generative AI or AI risk management at your company. Your company’s structure will determine which “functions” these people are drawn from, but they should be individuals with practical, product-level experience implementing risk management at your company.

Are there any participation requirements?

Your company will need to:

  • Identify and enable one or more employees to participate in the deep dive workshops and structured online survey as per the above outline.
  • Be operating (delivering products or services) in the United States of America, or planning to expand into this market within the next 12 months.
  • Be developing generative AI models, or be planning to use them within your product stack in the next 12 months (on your product roadmap).

Can I / my company participate anonymously?

We understand that it can be complicated, particularly in large organizations, to gather all of the approvals needed to participate in an open program such as this. To facilitate the inclusion of as many companies as possible in this research, the entire program will be conducted under the Chatham House Rule.

In practice this means:

  • Companies and individuals have the option of participating in the research without having their name or their company’s name published. All of the responses we receive during the workshops (deep dives), interviews and questionnaires will be aggregated, de-identified and synthesized, so that no particular quote or insight is connected to any individual person.
  • We will seek explicit permission before disclosing a participant’s name or their company’s name in cases where we believe it would benefit our research and reporting. You and your company are free to decline this request.

Who should join the program from my company?

The ideal company representative leads or is responsible for AI risk management efforts within their organization, whether they sit in a policy, legal, or product team.

Be a part of Open Loop’s first US program