LAUNCHING IN 2023
Meta’s Open Loop program is excited to launch its first policy prototyping program in the United States, focused on the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) 1.0. The program will give consortium participants the opportunity to explore how the NIST AI RMF could help manage risks when developing and deploying Generative AI systems. At the same time, the program will seek to provide valuable insights and feedback to NIST as it works on future iterations of the RMF.
When NIST announced the AI RMF in January 2023, it included a call for suggestions from industry and the broader community to improve the accompanying playbook. The Open Loop program will bring participants together to explore how the framework is being used in practice by companies as a voluntary framework in the US.
Leveraging the policy prototyping methodology developed by Open Loop, in partnership with Accenture, the program incorporates both qualitative and quantitative testing methodologies across two phases. The program aims to:
Leverage collaborative policy prototyping methodologies (testing proposed, hypothetical, or real policy guidance within a structured program) to enable cohort members to apply and provide feedback on the NIST AI RMF, informing its future iterations as a practical tool for managing AI-related risks.
Inform the practical application of the NIST AI RMF among a diverse group of developers and users of Generative AI products and services, by unlocking insights, showcasing best practices and lessons learned, and by pinpointing gaps and opportunities.
Facilitate exchanges of ideas and solutions among AI companies, experts, and policymakers to drive the evolution of responsible and accountable AI practices.
AI companies across various sectors — particularly startups and small and medium-size enterprises (SMEs) — are encouraged to join the program. If your company is developing or deploying Generative AI solutions that impact different industries, or if you’re considering doing so, you're an ideal candidate for participation.
Provide feedback on how the NIST AI RMF helps companies integrate trustworthiness considerations into the development, deployment, and assessment of AI products, services, and systems, to better manage the risks that artificial intelligence poses to individuals, organizations, and society.
Gain insights from peers, industry leaders, and policy experts during a series of collaborative discussions and deep dive workshops on specific AI topics.
Inform potential future iterations of the NIST AI RMF to foster innovative policies and effective implementation, considering the evolving AI landscape.
Showcase your company’s approaches to AI governance and share experiences in developing and implementing the AI RMF and/or other risk management frameworks.
Network with a vibrant community of responsible AI practitioners, thought leaders and AI companies.
We invite AI companies based and operating in the US to join the Open Loop US Program. By participating, you become part of a select cohort of AI leaders committed to co-creating and improving emerging AI policy solutions. You will work with industry leaders, experts, and policymakers to drive evidence-based, inclusive, and innovative governance models.
To express your interest and learn more about getting involved as a participating company or observer, please fill out the short application form below. We will follow up over email shortly.
The program will commence in November 2023. We will release an interim report with preliminary findings. A final report with key learnings and recommendations arising from the program will be published at the end of the program.
The launch of the program will take place on November 28th in Washington D.C. More information to follow.
Meta’s Open Loop is a global program that connects policymakers and technology companies to help develop effective and evidence-based policies around AI and other emerging technologies.
Through experimental governance methods, Meta’s Open Loop members co-create policy prototypes and test new or existing approaches to policy, guidance frameworks, regulations, and laws. These multi-stakeholder efforts improve the quality of rulemaking processes by ensuring that new guidance and regulation aimed at emerging technology are effective and implementable.
Open Loop has been running theme-specific programs to operationalize trustworthy AI across multiple verticals, such as Transparency and Explainability in Singapore and Mexico, and Human-Centered AI with an emphasis on stakeholder engagement in India. Beyond AI, we are also testing a playbook to promote the adoption of Privacy-Enhancing Technologies in Brazil and Uruguay.
Meta’s Open Loop program has partnered with Accenture and will work closely with other prominent industry players and organizations in the US. Our collaborative efforts extend to experts from international organizations, NIST, civil society organizations, academia, and more. Each of these partners contributes to the program's comprehensive knowledge base and holistic approach. Through these strategic partnerships, we aim to collectively drive the advancement of AI risk management and foster a well-rounded understanding of responsible AI practices.
Should you have any questions, please feel free to get in touch.
The program consists of both in-person events and online meetings, and kicks off formally at our Launch Event on November 28th, 2023, in Washington, D.C. If you want to join the program as a participating company, we ask you to commit to attending at least 2 of the 3 in-person events, held in either Washington, D.C. or San Francisco, and to completing the weekly survey questions provided to you over a span of 6 weeks in Q2 of next year. This equates to approximately 1-2 hours of online work per month for 1-2 AI leaders within your organization who drive Generative AI or AI risk management.
Your company will need to:
The ideal company representative leads or is responsible for AI risk management efforts within their organization, whether from policy, legal, or product teams.