The ISO/IEC AI Standards
Section 1: The Philosophical and Structural Foundation of AI Standardization
The rapid proliferation of Artificial Intelligence (AI) has created a global imperative for standardized frameworks that can foster innovation while managing profound risks. In response, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) established a central body, ISO/IEC JTC 1/SC 42, in 2017 to lead global AI standardization efforts (15, 17). The philosophy and structure of this committee reveal a deliberate and strategic approach to AI governance, positioning it not merely as a creator of technical specifications, but as a global integrator for a complex technological and societal shift.
The Holistic Ecosystem Approach
The foundational philosophy guiding SC 42 is a "holistic ecosystem approach" (3, 6). This principle recognizes that AI cannot be standardized in a vacuum. It necessitates a framework that looks beyond pure technology capability to integrate non-technical requirements, including business and regulatory needs, application domain context, and pressing ethical and societal concerns (6, 16). This approach is designed to provide the "glue" between high-level principles and concrete technical requirements, thereby accelerating responsible technology adoption (16). The committee's mandate is not only to develop standards but also to serve as the primary proponent for AI standardization and to provide guidance to all other ISO and IEC committees developing AI applications, underscoring its central coordinating role (1, 2).
This broad perspective is a direct acknowledgment that AI is a horizontal, enabling technology that impacts nearly every technical and societal domain, rather than a siloed IT vertical. The formal designation of SC 42 as a "systems integration entity" is a strategic choice reflecting this reality (2, 5). Its primary function is to standardize the integration of AI into existing systems, industries, and societal frameworks in a responsible manner. This foundational philosophy explains why the resulting portfolio of standards is so comprehensive, spanning from fundamental terminology and ethics to certifiable management systems.
A Structure Reflecting Philosophy
The organizational structure of SC 42 directly mirrors its ecosystem philosophy, with five core Working Groups (WGs) and several joint working groups, each addressing a critical component of the AI landscape (1, 6, 14):
- WG 1: Foundational standards, covering core concepts, terminology, and management system requirements
- WG 2: Data, covering data quality, the data life cycle, and big data analytics
- WG 3: Trustworthiness, covering risk management, robustness, bias, and safety
- WG 4: Use cases and applications, collecting and analyzing real-world deployments
- WG 5: Computational approaches and computational characteristics of AI systems
This structure is supported by a global and inclusive development process. Based on a "one country, one vote" system, SC 42 involves active participation from over 60 countries, with more than a third classified as developing nations (3, 5). This consensus-driven model ensures that the resulting standards are universally applicable and not biased toward a single regional perspective. In a world where AI regulations are emerging rapidly, this global standardization effort serves as a proactive strategy to mitigate regulatory fragmentation. By building a consensus-based international framework, ISO/IEC provides a globally recognized common ground that can inform and harmonize national legislation, offering a stable, predictable pathway for organizations to demonstrate due diligence and responsible governance (12, 11).
Section 2: Foundational Standards - Establishing a Common Global Language
Effective governance and interoperability in a complex field like AI depend on a shared, unambiguous understanding of its core concepts and structures. Recognizing this, SC 42 has published two bedrock standards that serve as the essential prerequisite for the entire AI standardization ecosystem. These documents provide the common language and conceptual map upon which all higher-level frameworks for risk, management, and trustworthiness are built.
ISO/IEC 22989: Artificial intelligence concepts and terminology
Published in 2022, ISO/IEC 22989 establishes a common global vocabulary for AI (18). Its primary purpose is to enable clear and consistent communication among the diverse stakeholders involved in the AI lifecycle, from technical developers and data scientists to business leaders, regulators, and auditors (18, 19). The standard defines over 110 key terms, creating a unified lexicon that reduces ambiguity and fosters a shared understanding (19).
The scope of the terminology is comprehensive. It covers foundational concepts such as "AI system," "machine learning," and "training data," providing precise definitions that anchor technical discussions (18, 20). Critically, it also defines the terminology for key AI properties essential for building trust, including "explainability" (the extent to which an AI system's mechanics can be explained in human terms), "robustness" (the ability to maintain performance under varying conditions), and "transparency" (ensuring information about the system is available to stakeholders) (18, 20). This standard is explicitly designed to underpin all other AI standards from SC 42, serving as a normative reference for documents on risk management, data quality, and management systems (19, 20).
ISO/IEC 23053: Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)
While ISO/IEC 22989 defines the words, ISO/IEC 23053, also published in 2022, provides the "map" by establishing a standardized framework for describing a generic AI system that uses machine learning (21, 22). This standard is applicable to any organization, regardless of size or sector, that is implementing or using AI systems (21).
The framework decomposes a typical AI/ML system into its logical functional blocks and core components, creating a common reference architecture. This conceptual model helps stakeholders visualize and understand the entire AI ecosystem and its constituent parts (21). Key components identified and described by the framework include:
- The data that feeds the system, including training, validation, test, and production data
- The machine learning pipeline, spanning data acquisition, pre-processing, model training, and model evaluation
- The ML model itself, the trained artifact that encodes the learned patterns
- The deployed system functions that apply the model to new inputs to produce outputs
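To make the reference-architecture idea concrete, the sketch below wires toy versions of such functional blocks together in Python. The stage names and logic are illustrative assumptions for this article, not the standard's normative definitions.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Dataset:
    features: List[List[float]]
    labels: List[int]

def acquire_data() -> Dataset:
    # Data acquisition block: pull raw records from a source system.
    return Dataset(features=[[0.1, 0.2], [0.9, 0.8]], labels=[0, 1])

def preprocess(data: Dataset) -> Dataset:
    # Pre-processing block: cleaning, normalization, feature engineering.
    scaled = [[x * 10 for x in row] for row in data.features]
    return Dataset(features=scaled, labels=data.labels)

def train(data: Dataset) -> Callable[[List[float]], int]:
    # Model training block: returns a trained model (here, a trivial rule).
    threshold = sum(sum(row) for row in data.features) / len(data.features)
    return lambda row: int(sum(row) > threshold)

def evaluate(model: Callable[[List[float]], int], data: Dataset) -> float:
    # Model evaluation block: accuracy on labeled data.
    correct = sum(model(f) == y for f, y in zip(data.features, data.labels))
    return correct / len(data.labels)

# Wiring the blocks together mirrors the pipeline view of an ML system.
data = preprocess(acquire_data())
model = train(data)
print(f"accuracy={evaluate(model, data):.2f}")
```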
Together, these two foundational standards are the non-negotiable first step toward any meaningful form of AI governance. Without a common vocabulary (ISO/IEC 22989) and a shared architectural model (ISO/IEC 23053), higher-level concepts like risk assessment and management systems would be built on ambiguous and shifting foundations, rendering them ineffective. They create the stable, shared understanding of "what we are talking about" before other standards can address "what we should do about it."
Section 3: The Core of Governance - The AI Management System
While foundational standards provide the language and maps for AI, ISO/IEC 42001 provides the operational engine for governance. Published in December 2023, it is the world's first international, certifiable management system standard for Artificial Intelligence (7, 10). It offers a structured, systematic, and auditable framework for organizations to govern their development, provision, or use of AI systems responsibly.
ISO/IEC 42001: AI management system (AIMS)
The core purpose of ISO/IEC 42001 is to specify the requirements for establishing, implementing, maintaining, and continually improving an AI Management System (AIMS) (7, 8). It is a voluntary standard designed to be applicable to any organization, regardless of size, type, or nature, that interacts with AI-based products or services (7). Its primary objective is to help organizations manage AI-related risks and opportunities, balancing innovation with robust governance (7, 10).
The standard is structured to align with other prominent ISO management system standards, such as ISO/IEC 27001 for information security, following the high-level structure known as Annex SL (7, 9). This facilitates the integration of an AIMS with an organization's existing governance processes, such as those for security, privacy, or quality management (8, 9). Like its counterparts, it is built on the Plan-Do-Check-Act (PDCA) model, a cycle that promotes continuous improvement (11).
Key requirements of the standard mandate that an organization:
- Understand its internal and external context and define the scope of the AIMS
- Demonstrate leadership commitment and establish an AI policy with clear roles and responsibilities
- Plan for and conduct AI risk assessments and AI system impact assessments
- Implement operational controls for the responsible development, provision, or use of AI systems
- Monitor and measure performance, conduct internal audits and management reviews, and continually improve the AIMS
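As a rough illustration of how such requirements can be tracked operationally, the following sketch models a hypothetical AIMS requirement register. The clause labels, fields, and readiness metric are invented for illustration and are not taken from ISO/IEC 42001.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PLANNED = "planned"
    IMPLEMENTED = "implemented"
    AUDITED = "audited"

@dataclass
class AIMSRequirement:
    clause: str
    description: str
    status: Status = Status.PLANNED
    evidence: list = field(default_factory=list)

requirements = [
    AIMSRequirement("Context", "Define AIMS scope and interested parties"),
    AIMSRequirement("Leadership", "Publish AI policy and assign roles"),
    AIMSRequirement("Planning", "Perform AI risk and impact assessments"),
    AIMSRequirement("Operation", "Apply controls across the AI life cycle"),
    AIMSRequirement("Improvement", "Run audits, reviews, corrective actions"),
]

def audit_readiness(reqs: list) -> float:
    # Fraction of requirements with recorded evidence: a crude readiness proxy.
    return sum(bool(r.evidence) for r in reqs) / len(reqs)

requirements[0].evidence.append("scope-statement-v1.pdf")
requirements[0].status = Status.IMPLEMENTED
print(f"readiness={audit_readiness(requirements):.0%}")
```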
The value of ISO/IEC 42001 is significantly amplified by its certifiability. Achieving certification from an accredited third party provides tangible assurance to customers, regulators, investors, and the public that an organization has effectively implemented a robust system for responsible AI governance (3, 10). This serves as a powerful tool to build trust, demonstrate due diligence, gain a competitive advantage, and prepare for emerging regulations like the EU AI Act (8, 12). The standard functions as an operationalization engine, translating high-level principles of fairness, transparency, and accountability into the concrete, documented, and auditable business processes required for day-to-day operational reality.
Section 4: The Pillars of Trustworthy AI
An effective AI Management System, as defined by ISO/IEC 42001, relies on a deep understanding of the multifaceted nature of AI trustworthiness. To support this, SC 42 has developed a suite of standards and technical reports that provide detailed guidance on the specific components of trust. These documents function as pillars, offering specialized knowledge that informs the risk assessments and controls within an AIMS. They follow a deliberate "drill-down" structure, moving from broad principles to highly specific technical guidance.
ISO/IEC 23894: Guidance on risk management
This standard provides guidance on managing risks specifically related to AI, complementing and tailoring general enterprise risk management frameworks like ISO 31000 (23). It recognizes that AI introduces unique challenges stemming from its ability to learn, adapt, and make autonomous decisions (23). The standard guides organizations through a process of identifying, assessing, and treating AI-specific risks across the entire system lifecycle, from planning and data collection to deployment and retirement (23). It provides informative annexes detailing common AI-related objectives and risk sources, such as data quality issues, model transparency failures, algorithmic bias, and security vulnerabilities (23).
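To illustrate what such a process might look like in practice, here is a minimal risk-register sketch. The likelihood-times-severity scoring is a common risk-management convention assumed for illustration; ISO/IEC 23894 does not prescribe this particular scheme.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    source: str           # e.g., a risk source of the kind the annexes list
    lifecycle_stage: str  # where in the AI life cycle the risk arises
    likelihood: int       # 1 (rare) .. 5 (almost certain)
    severity: int         # 1 (negligible) .. 5 (critical)
    treatment: str = "untreated"

    @property
    def level(self) -> int:
        # Illustrative scoring convention, not mandated by the standard.
        return self.likelihood * self.severity

register = [
    AIRisk("data quality issues", "data collection", 4, 3),
    AIRisk("algorithmic bias", "model training", 3, 5),
    AIRisk("model drift", "operation", 4, 4),
]

# Treat the highest-rated risks first: a simple risk-based prioritization.
for risk in sorted(register, key=lambda r: r.level, reverse=True):
    if risk.level >= 12:
        risk.treatment = "mitigate"
    print(f"{risk.source:22s} stage={risk.lifecycle_stage:16s} "
          f"level={risk.level:2d} treatment={risk.treatment}")
```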
ISO/IEC TR 24028: Overview of trustworthiness in artificial intelligence
This technical report serves as the high-level conceptual framework for the entire topic of AI trustworthiness (25). It provides an overview of the key attributes that constitute a trustworthy system, creating a shared understanding of this complex and often abstract concept. The report surveys and defines critical properties, including:
- Reliability, availability, and resilience
- Accuracy, robustness, and safety
- Security and privacy
- Transparency, explainability, and controllability
- Accountability
ISO/IEC TR 24027: Bias in AI systems and AI aided decision making
Drilling down into one of the most significant risks to trustworthiness, this technical report provides focused guidance on identifying and mitigating unwanted bias (28, 29). It describes the primary sources of bias, categorizing them as human cognitive bias (prejudices of developers), data bias (unrepresentative or flawed datasets), and engineering bias (technical choices that inadvertently favor certain outcomes) (28). The report details quantitative metrics for assessing bias, such as demographic parity (ensuring outcomes are independent of sensitive attributes) and equalized odds (ensuring error rates are equal across different groups) (28). Finally, it outlines mitigation strategies that can be applied throughout the AI lifecycle, from data collection and model development to post-deployment monitoring (28, 29).
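The two metrics mentioned above are straightforward to compute. The following sketch evaluates demographic parity and equalized odds gaps on synthetic data; in practice one would use audited datasets and established fairness toolkits rather than this toy example.

```python
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)   # sensitive attribute (0 or 1)
y_true = rng.integers(0, 2, size=1000)  # actual outcomes
y_pred = (rng.random(1000) < 0.5 + 0.1 * group).astype(int)  # biased model

def demographic_parity_gap(y_pred, group):
    # Difference in positive-prediction rates between groups; 0 is parity.
    rates = [y_pred[group == g].mean() for g in (0, 1)]
    return abs(rates[0] - rates[1])

def equalized_odds_gap(y_true, y_pred, group):
    # Max difference in true-positive and false-positive rates across groups.
    gaps = []
    for outcome in (0, 1):  # FPR when outcome=0, TPR when outcome=1
        mask = y_true == outcome
        rates = [y_pred[mask & (group == g)].mean() for g in (0, 1)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
print(f"equalized odds gap:     {equalized_odds_gap(y_true, y_pred, group):.3f}")
```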
ISO/IEC 24029-2: Assessment of the robustness of neural networks — Part 2: Methodology for the use of formal methods
This standard addresses the technical challenge of verifying the robustness of neural networks, which are often considered "black boxes" (41). It specifies a methodology for using formal methods—rigorous mathematical and logical tools—to prove that a neural network satisfies specific properties (41). The process involves creating a mathematical abstraction of the network, defining robustness attributes in logical formulas (e.g., "for any input within this range, the output will not change by more than ε"), and using verification tools to formally prove or disprove the property (41). This provides a much higher level of assurance than traditional statistical testing, which is critical for high-stakes applications.
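A heavily simplified flavor of this idea is interval bound propagation, which computes guaranteed output bounds for all inputs within a box around a nominal input. The sketch below uses arbitrary illustrative weights on a tiny ReLU network; production formal-methods tools are far more sophisticated than this.

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    # Interval arithmetic for y = W @ x + b using the sign split of W.
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_bounds(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Tiny two-layer ReLU network with fixed (illustrative) weights.
W1, b1 = np.array([[1.0, -2.0], [0.5, 1.0]]), np.array([0.1, -0.1])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])

x = np.array([0.5, 0.5])
eps = 0.05  # perturbation radius around the nominal input
lo, hi = x - eps, x + eps
lo, hi = relu_bounds(*affine_bounds(lo, hi, W1, b1))
lo, hi = affine_bounds(lo, hi, W2, b2)

# For every input inside the box, the true output is guaranteed to lie
# inside this interval; a narrow interval proves the robustness property.
print(f"certified output range: [{lo[0]:.3f}, {hi[0]:.3f}]")
```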
ISO/IEC TR 5469: Functional safety and AI systems
For safety-critical domains like autonomous vehicles or medical devices, this technical report provides guidance on integrating AI into functionally safe systems (39, 40). It addresses the inherent tension between the non-deterministic nature of some AI models and the stringent predictability requirements of functional safety standards like IEC 61508 (40). The report discusses AI-specific risk factors, such as the lack of a complete a priori specification and the potential for model drift over time (39). It proposes mitigation measures, including architectural patterns like using a safe back-up function (a non-AI system that takes over if the AI fails), supervision (a safety monitor that constrains the AI's outputs), and redundancy with diversity (using multiple, different AI models to reduce the likelihood of common-mode failures) (39, 40).
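The supervision and safe back-up patterns can be sketched in a few lines: a deterministic monitor checks the AI's proposed command against a safe envelope and hands control to a simple, verifiable fallback when the check fails. All limits and functions below are hypothetical illustrations, not requirements drawn from IEC 61508 or ISO/IEC TR 5469.

```python
SAFE_MIN, SAFE_MAX = 0.0, 100.0  # hypothetical actuator limits

def ai_controller(sensor_value: float) -> float:
    # Stand-in for a learned controller whose output is not fully predictable.
    return sensor_value * 1.8 + 5.0

def safe_backup(sensor_value: float) -> float:
    # Simple, verifiable non-AI control law used when the AI is overridden.
    return min(max(sensor_value, SAFE_MIN), SAFE_MAX)

def supervised_control(sensor_value: float) -> float:
    proposal = ai_controller(sensor_value)
    if SAFE_MIN <= proposal <= SAFE_MAX:
        return proposal               # AI output is inside the safe envelope
    return safe_backup(sensor_value)  # otherwise the back-up takes over

for reading in (10.0, 80.0):
    print(f"sensor={reading:5.1f} -> command={supervised_control(reading):6.1f}")
```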
Section 5: The Data Imperative - Quality, Lifecycle, and Analytics
The ISO/IEC standards ecosystem is built on the fundamental premise that the trustworthiness, reliability, and performance of most AI systems are inextricably linked to the quality and governance of the data they consume. Recognizing data as a primary source of both value and risk, SC 42 has developed a comprehensive suite of standards dedicated to data management. This rigorous focus on data provides the tools to address the root cause of many AI failures, such as bias and lack of robustness, rather than merely treating symptoms at the model level.
ISO/IEC 5259 series: Data quality for analytics and machine learning (ML)
This multi-part standard provides a detailed framework for managing data quality specifically in the context of AI and ML (33). It adapts and extends the foundational ISO/IEC 25012 data quality model to address the unique demands of AI applications (30, 32). The series is designed to be a comprehensive toolkit, with individual parts covering the entire spectrum of data quality management:
- Part 1: Overview, terminology, and examples
- Part 2: Data quality measures
- Part 3: Data quality management requirements and guidelines
- Part 4: Data quality process framework
- Part 5: Data quality governance framework
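As a flavor of the quantitative side of this series, the sketch below computes two common data quality measures, completeness and duplicate rate, on a toy dataset. These are conventional example measures chosen for illustration, not the normative measures defined in ISO/IEC 5259-2.

```python
records = [
    {"age": 34, "income": 52000, "label": 1},
    {"age": None, "income": 48000, "label": 0},
    {"age": 29, "income": None, "label": 0},
    {"age": 34, "income": 52000, "label": 1},  # exact duplicate of record 0
]

def completeness(rows, fields):
    # Share of non-missing values across the listed fields.
    total = len(rows) * len(fields)
    present = sum(row[f] is not None for row in rows for f in fields)
    return present / total

def duplicate_rate(rows):
    # Share of rows that repeat an earlier row exactly.
    seen, dupes = set(), 0
    for row in rows:
        key = tuple(sorted(row.items()))
        dupes += key in seen
        seen.add(key)
    return dupes / len(rows)

print(f"completeness:   {completeness(records, ['age', 'income']):.2f}")
print(f"duplicate rate: {duplicate_rate(records):.2f}")
```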
ISO/IEC 8183: Data life cycle framework
This standard defines a structured framework for managing data processing throughout the entire AI system life cycle (34). It provides a clear, ten-stage model that maps the journey of data from its inception to its eventual retirement, ensuring that governance and quality are considered at every step (34).
This lifecycle perspective helps organizations avoid common pitfalls like poor data quality and compliance violations by providing a roadmap for effective data management and establishing clear control points at each phase of the AI system's existence (34, 35).
ISO/IEC 24668: Process management framework for big data analytics
This standard provides a framework for managing the processes required to effectively leverage big data analytics across an organization (42, 43). It is designed to help organizations structure their analytics practices, ensure different functional groups can interplay effectively, and assess their process capabilities (42, 44). The framework specifies several key process categories, including organization stakeholder processes, competency development processes, data management processes, analytics development processes, and technology integration processes (42). By providing a structured approach based on global best practices, the standard helps organizations improve decision-making, reduce errors, and gain a competitive advantage from their data assets (42).
Section 6: Implementation in Practice - Lifecycle Processes and Use Cases
To bridge the gap between abstract principles and concrete implementation, the ISO/IEC AI standards portfolio includes documents that provide practical guidance on the end-to-end development process and draw lessons from real-world applications. These standards are designed to integrate responsible AI practices into established engineering workflows and ensure that the standardization process itself remains grounded in market realities.
ISO/IEC 5338: AI system life cycle processes
This standard defines a comprehensive set of processes for describing the life cycle of AI systems, particularly those based on machine learning and heuristic systems (37). A key feature of this standard is its pragmatic approach of extending existing, widely adopted international standards for systems engineering (ISO/IEC/IEEE 15288) and software engineering (ISO/IEC/IEEE 12207) rather than creating a new framework from scratch (37, 38). This strategy is designed to lower the barrier to adoption by allowing organizations to build upon their mature development processes, tooling, and talent.
ISO/IEC 5338 categorizes its processes into three types, reflecting this evolutionary approach (38):
- Unchanged processes, adopted as-is from ISO/IEC/IEEE 15288 and 12207 because they apply equally to AI systems
- Modified processes, adapted from existing systems and software engineering practice to account for AI-specific characteristics such as data dependency and learned behavior
- New, AI-specific processes that have no direct counterpart in traditional engineering, such as those addressing data engineering and continuous model validation
By encouraging organizations to evolve their existing System Development Life Cycle (SDLC) instead of creating a separate "AI development" silo, the standard promotes efficiency, better adoption of AI, and mutual understanding among all stakeholders.
ISO/IEC TR 24030: Use cases
This technical report serves a unique and critical role within the standardization ecosystem by providing a curated collection of representative AI use cases from a diverse range of application domains, including healthcare, finance, manufacturing, and agriculture (36). The 2024 edition of the report includes 81 in-operation use cases selected from a larger pool of submissions (36).
The purpose of this document is not to set requirements, but to fulfill several strategic functions. It illustrates the practical applicability of AI standardization work, provides concrete examples for other committees to reference, and helps identify new technical requirements emerging from the market (36). The collection and analysis of these use cases create a vital feedback loop that keeps the standardization process relevant and market-driven. It is a two-way street: the use cases demonstrate how existing standards are being applied in the real world, and, more importantly, they reveal challenges, gaps, and new requirements that can then feed back into the development of future standards. This formalized mechanism ensures that ISO/IEC standards do not become purely academic exercises but remain connected to the practical needs and evolving challenges of the global AI industry.
Section 7: Conclusion
The portfolio of standards for Artificial Intelligence developed by ISO/IEC JTC 1/SC 42 represents a comprehensive, deliberate, and deeply interconnected ecosystem. It is not a disparate collection of documents but a strategically designed framework guided by a philosophy of holistic and responsible systems integration. This ecosystem provides a flexible yet robust toolkit for organizations of all sizes to navigate the complexities of AI innovation, governance, and compliance in a globally recognized manner.
The framework is anchored by the certifiable ISO/IEC 42001 AI Management System, which functions as the central operational hub for implementing and demonstrating responsible AI governance. This core is supported and informed by a series of interlocking standards that address every critical facet of the AI landscape:
- Foundational standards (ISO/IEC 22989 and 23053), which establish the common vocabulary and reference architecture
- Trustworthiness standards (ISO/IEC 23894, TR 24028, TR 24027, 24029-2, and TR 5469), which address risk, bias, robustness, and functional safety
- Data standards (the ISO/IEC 5259 series, 8183, and 24668), which govern data quality, the data life cycle, and analytics processes
- Implementation standards (ISO/IEC 5338 and TR 24030), which embed responsible practices into engineering workflows and keep the ecosystem grounded in real-world use cases
This modular and hierarchical structure allows organizations to adopt and apply the standards in a way that is tailored to their specific context, scale, and risk profile. By providing a clear, consensus-based path toward responsible AI, the ISO/IEC standards ecosystem serves as a critical enabler of trust, a facilitator of global trade, and a foundational pillar for the future of AI regulation and innovation. Pharma Best Practices will cover highlights of each standard in the next series of articles.