In the swiftly changing landscape of technology, organizations are increasingly relying on AI-driven solutions to streamline operations and elevate customer experiences. While this integration of AI offers many advantages, it also introduces a new array of risks that demand careful navigation. Your organization’s general security risks may already be assessed and attested to in SOC 2 or ISO/IEC 27001 reports, but the risks associated with your organization’s use of AI may not be, since they encompass many new domains, such as ethical quandaries, the need for transparency in model training and development, and the preservation of the integrity of AI-generated output. As organizations continue to adopt AI as a strategy to enable their service delivery, conventional customer assurance mechanisms like SOC 2 attestation reports and ISO/IEC 27001 certifications may need to be supplemented to demonstrate controlled and responsible use of AI to your customers, stakeholders, and business partners.
Could my existing SOC 2 attestation or ISO/IEC 27001 certification be used to demonstrate controlled use of AI?
While you may have already obtained attestation reports on your IT infrastructure and applications, the distinctive nature of AI introduces complexities that may extend beyond the scope of these existing reports. Generally speaking, if you are considering or already utilizing AI to enhance your current service offering, a supplemental attestation report may be needed to demonstrate a robust and secure implementation of AI to your clients and customers going forward.
Enter ISO/IEC 42001:2023, published in December 2023, which introduces the concept of the AI Management System (AIMS) and bridges a crucial gap in the technological landscape. This standard equips organizations with the tools to navigate the intricate nuances of AI while fostering trust and accountability. Though there is some overlap with general security frameworks (such as SOC 2 and ISO/IEC 27001), ISO/IEC 42001:2023 certification enables businesses to demonstrate their commitment to responsible AI practices while helping build a trustworthy AI ecosystem.
Why is ISO/IEC 42001:2023 important and what are its goals?
ISO/IEC 42001:2023 is unique in providing a framework for managing the risks and opportunities that accompany AI. It strikes a balance between innovation and governance, enabling your organization to make well-informed decisions about the development and deployment of AI systems. The standard emphasizes responsible AI development and champions transparency, traceability, and reliability in AI systems across the many industries involved in crafting, delivering, or utilizing AI-based products and services.
ISO/IEC 42001:2023 was designed with the wide array of unique risks facing organizations that integrate AI technology into their service offerings in mind. These risks include, but are not limited to: the complexity of the environment in which your AI system operates, which can affect the system’s ability to behave predictably; the inability to provide interested parties with accurate information, which reduces the trustworthiness of your organization; and the effects of transferring trained machine learning models between different systems within your organization.
How could the new ISO/IEC 42001:2023 certification help demonstrate responsible AI development and use?
With such complex risks at hand, the central feature of the ISO/IEC 42001:2023 standard is the AIMS, much as the long-standing ISO/IEC 27001 centers on the Information Security Management System (ISMS) for information security controls. The AIMS comprises interrelated elements aimed at establishing policies, objectives, and processes pertinent to the responsible deployment, provision, or utilization of your AI systems. In short, this framework was created for organizations that want to attest to responsible use of AI, giving their customers peace of mind regarding a technology already under intense public scrutiny.
The key guidance introduced in ISO/IEC 42001:2023 is Annex A, which sets out 38 controls organized under nine control objectives, designed to guide the creation and maintenance of an AIMS aligned with organizational goals and ethical standards. The standard describes the Annex as a “reference for meeting organizational objectives and addressing risks related to the design and operation of AI systems.” The list of controls is not a checklist, and not all of the controls are required to be used; each organization can tailor the controls it applies based on its risk and impact assessments. For example, an organization that only consumes third-party AI services may scope its controls differently from one that develops models in-house. Annex B provides further information to support the implementation of the controls in Annex A.
What does the ISO/IEC 42001:2023 certification audit address?
The ISO/IEC 42001:2023 certification is a valuable indicator that your organization understands the importance of responsible AI governance and data protection within the context of AI security. The areas that form part of the audit are as follows:
- A.2 Policies related to AI – Does the organization have policies that provide management direction and support for AI systems according to business requirements?
- A.3 Internal organization – Does the organization establish accountability to uphold its responsible approach for the implementation, operation, and management of AI systems?
- A.4 Resources for AI systems – Does the organization account for the resources (including AI system components and assets) of the AI system in order to fully understand and address risks and impacts?
- A.5 Assessing impacts of AI systems – Does the organization assess AI system impacts to individuals or groups of individuals, or both, and societies affected by the AI system throughout its life cycle?
- A.6 AI system life cycle
- A.6.1 Management guidance for AI system development – Does the organization identify and document objectives and implement processes for the responsible design and development of AI systems?
- A.6.2 AI system life cycle – Has the organization defined the criteria and requirements for each stage of the AI system life cycle?
- A.7 Data for AI systems – Does the organization understand the role and impacts of data in AI systems in the application and development, provision, or use of AI systems throughout their life cycles?
- A.8 Information for interested parties of AI systems – Does the organization have processes to help ensure that relevant interested parties have the necessary information to understand and assess the risks and their impacts (both positive and negative)?
- A.9 Use of AI systems – Does the organization use AI systems responsibly and in accordance with organizational policies?
- A.10 Third-party and customer relationships – Does the organization have processes to help ensure that it understands its responsibilities, remains accountable, and apportions risks appropriately when third parties are involved at any stage of the AI system life cycle?
By upholding transparency in AI decision-making processes, an ISO/IEC 42001:2023 implementation can pave the way for building trust and fostering accountability in your AI endeavors, ultimately propelling responsible and ethical AI development and usage not only within your own industry but throughout the world.
What other resources are there?
In navigating the complexities of AI integration, it is essential to explore additional resources that complement and enhance your understanding of responsible AI management. Among these, a comprehensive risk assessment stands as a cornerstone of any IT compliance audit, including audits conducted under ISO/IEC 42001:2023. Annex C of ISO/IEC 42001:2023 provides guidance on AI-related risk sources, along with definitions and references to other related standards.
Another resource that can complement an ISO/IEC 42001:2023 implementation is the NIST AI Risk Management Framework (AI RMF). Released on January 26, 2023, this risk management framework from the National Institute of Standards and Technology (NIST) offers a robust structure for managing risks associated with AI, organized around four core functions: Govern, Map, Measure, and Manage. Designed for voluntary use, the AI RMF aims to elevate trustworthiness considerations in the design, development, deployment, and evaluation of AI products, services, and systems. The NIST AI RMF can help guide an organization through the risk assessment that must be documented and that will be addressed as part of the ISO/IEC 42001:2023 certification audit.
We are here to help you!
As certified ISO standards auditors, the AARC-360 team invites you to reach out for further insights, guidance, or collaboration opportunities on your AI journey, including obtaining ISO/IEC 42001:2023 certification. Whether you are seeking consultation or certification, or simply have questions about navigating the AI compliance landscape, our team stands ready to help you foster responsible AI practices and drive innovation with integrity.
Please contact us today as you embark on your next steps towards realizing the full potential of AI while upholding trust and accountability in your endeavors.
Joseph Thorin (Manager, AARC-360)