
The Responsible AI Institute (RAI Institute), a non-profit dedicated to facilitating the responsible use of AI worldwide, has launched the AI Policy Template to help businesses develop their own enterprise-wide responsible AI policies and governance.

The template was developed with RAI Institute’s deep expertise and understanding of emerging business and assurance environments for AI use cases and is informed by evolving global and local standards, including the NIST AI Risk Management Framework (RMF) and ISO/IEC 42001.

AI technology adoption is already widespread, with 97% of organisations actively engaging with AI and 74% already incorporating generative AI (GenAI) technologies in production. However, responsible AI frameworks consistently lag behind the speed of innovation. Amidst the surge of complex AI solutions, businesses struggle with executing a critical step in their AI journeys: developing and establishing AI guardrails and ethical policies within their organisation. Notably, 74% of organisations admit they still lack a comprehensive, organisation-wide approach to responsible AI, and only 44% of companies using AI are developing ethical AI policies.

To address this, RAI Institute’s AI Policy Template is an industry-agnostic and detailed “plug-and-play” policy document that organisations can adapt to quickly establish foundational responsible AI policies aligned with their business needs and risks. The Template also helps enterprises calibrate organisational policies according to their objectives to procure, develop, use and/or sell AI systems.

BUSINESS PRIORITY & ETHICAL IMPERATIVE

“RAI Institute recognises that the development of responsible AI management processes is both an ethical imperative and a strategic business priority,” said Hadassah Drukarch, Director of Policy and Delivery at the RAI Institute. “Our new AI Policy Template provides organisations with a comprehensive framework for establishing robust internal AI governance policies and practices from the ground up that align with evolving global and local AI policies and recommendations. Our team looks forward to iterating on the current version of the Template with members and plans to release an updated version in late August.”

Leaders can utilise the AI Policy Template to compose their organisation’s AI principles, objectives and overall management strategy. The Template also covers common policy and governance functions and processes, including data management, risk management and procurement, enabling enterprises to more readily integrate AI-specific guidance from the NIST AI Risk Management Framework (RMF) and ISO/IEC 42001 into their existing corporate policy structures. The policy elements in the Template align with RAI Institute’s Organisational Maturity Assessment framework and cover most of RAI Institute’s recommendations for building baseline responsible AI organisational maturity. According to RAI Institute and other authoritative sources, organisations that use the Template, either directly or as inspiration, early in their responsible AI journey can accelerate their progress towards a fully developed and effective RAI organisational strategy.

RESPONSIBLE AI GUIDELINES

The template is available inside the RAI Institute’s recently launched Responsible AI Hub, a comprehensive portal for individual and corporate members. The RAI Hub gives members access to cutting-edge assessments to benchmark their responsible AI maturity, in-depth guidebooks to help them navigate the evolving AI governance landscape, as well as curated educational resources to keep members apprised of the latest AI regulations and policies.

The RAI Institute team will be accepting feedback from members on the current version of the AI Policy Template until the end of July. Visit this page to download the AI Policy Template.

Founded in 2016, Responsible AI Institute (RAI Institute) is a global, member-driven non-profit dedicated to enabling successful responsible AI efforts in organisations.
