The ISO/IEC AI Standards

Section 1: The Philosophical and Structural Foundation of AI Standardisation

The rapid proliferation of Artificial Intelligence (AI) has created a global imperative for standardized frameworks that can foster innovation while managing profound risks. In response, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) established a central body, ISO/IEC JTC 1/SC 42, in 2017 to lead global AI standardization efforts (15, 17). The philosophy and structure of this committee reveal a deliberate and strategic approach to AI governance, positioning it not merely as a creator of technical specifications, but as a global integrator for a complex technological and societal shift.

The Holistic Ecosystem Approach

The foundational philosophy guiding SC 42 is a "holistic ecosystem approach" (3, 6). This principle recognizes that AI cannot be standardized in a vacuum. It necessitates a framework that looks beyond pure technology capability to integrate non-technical requirements, including business and regulatory needs, application domain context, and pressing ethical and societal concerns (6, 16). This approach is designed to provide the "glue" between high-level principles and concrete technical requirements, thereby accelerating responsible technology adoption (16). The committee's mandate is not only to develop standards but also to serve as the primary proponent for AI standardization and to provide guidance to all other ISO and IEC committees developing AI applications, underscoring its central coordinating role (1, 2).

This broad perspective is a direct acknowledgment that AI is a horizontal, enabling technology that impacts nearly every technical and societal domain, rather than a siloed IT vertical. The formal designation of SC 42 as a "systems integration entity" is a strategic choice reflecting this reality (2, 5). Its primary function is to standardize the integration of AI into existing systems, industries, and societal frameworks in a responsible manner. This foundational philosophy explains why the resulting portfolio of standards is so comprehensive, spanning from fundamental terminology and ethics to certifiable management systems.

A Structure Reflecting Philosophy

The organizational structure of SC 42 directly mirrors its ecosystem philosophy, with five core Working Groups (WGs) and several joint working groups, each addressing a critical component of the AI landscape (1, 6, 14):

  • WG 1: Foundational standards is tasked with creating the common language and conceptual frameworks necessary for coherent global dialogue.
  • WG 2: Data (formerly Big Data) addresses the quality, governance, and management of data, the essential fuel for most modern AI systems.
  • WG 3: Trustworthiness focuses on the core ethical and performance pillars, including bias, fairness, robustness, and explainability.
  • WG 4: Use cases and applications grounds the committee's work in real-world context, collecting and analyzing practical implementations of AI.
  • WG 5: Computational approaches and characteristics of AI systems examines the underlying technical methods and performance metrics.

This structure is supported by a global and inclusive development process. Based on a "one country, one vote" system, SC 42 involves active participation from over 60 countries, with more than a third classified as developing nations (3, 5). This consensus-driven model ensures that the resulting standards are universally applicable and not biased toward a single regional perspective. In a world where AI regulations are emerging rapidly, this global standardization effort serves as a proactive strategy to mitigate regulatory fragmentation. By building a consensus-based international framework, ISO/IEC provides a globally recognized common ground that can inform and harmonize national legislation, offering a stable, predictable pathway for organizations to demonstrate due diligence and responsible governance (12, 11).

Section 2: Foundational Standards - Establishing a Common Global Language

Effective governance and interoperability in a complex field like AI depend on a shared, unambiguous understanding of its core concepts and structures. Recognizing this, SC 42 has published two bedrock standards that serve as the essential prerequisite for the entire AI standardization ecosystem. These documents provide the common language and conceptual map upon which all higher-level frameworks for risk, management, and trustworthiness are built.

ISO/IEC 22989: Artificial intelligence concepts and terminology

Published in 2022, ISO/IEC 22989 establishes a common global vocabulary for AI (18). Its primary purpose is to enable clear and consistent communication among the diverse stakeholders involved in the AI lifecycle, from technical developers and data scientists to business leaders, regulators, and auditors (18, 19). The standard defines over 110 key terms, creating a unified lexicon that reduces ambiguity and fosters a shared understanding (19).

The scope of the terminology is comprehensive. It covers foundational concepts such as "AI system," "machine learning," and "training data," providing precise definitions that anchor technical discussions (18, 20). Critically, it also defines the terminology for key AI properties essential for building trust, including "explainability" (the extent to which an AI system's mechanics can be explained in human terms), "robustness" (the ability to maintain performance under varying conditions), and "transparency" (ensuring information about the system is available to stakeholders) (18, 20). This standard is explicitly designed to underpin all other AI standards from SC 42, serving as a normative reference for documents on risk management, data quality, and management systems (19, 20).

ISO/IEC 23053: Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)

While ISO/IEC 22989 defines the words, ISO/IEC 23053, also published in 2022, provides the "map" by establishing a standardized framework for describing a generic AI system that uses machine learning (21, 22). This standard is applicable to any organization, regardless of size or sector, that is implementing or using AI systems (21).

The framework decomposes a typical AI/ML system into its logical functional blocks and core components, creating a common reference architecture. This conceptual model helps stakeholders visualize and understand the entire AI ecosystem and its constituent parts (21). Key components identified and described by the framework include:

  • Data Management Components: Responsible for the collection, preparation, and processing of data used to train and operate models.
  • Model Development Components: Covering the processes and tools used to create, train, and validate ML models.
  • Deployment and Integration Components: Focusing on the implementation of AI systems within existing organizational infrastructures.
  • Monitoring and Governance Components: Establishing mechanisms for ongoing oversight, performance tracking, and compliance management (21).

Together, these two foundational standards are the non-negotiable first step toward any meaningful form of AI governance. Without a common vocabulary (ISO/IEC 22989) and a shared architectural model (ISO/IEC 23053), higher-level concepts like risk assessment and management systems would be built on ambiguous and shifting foundations, rendering them ineffective. They create the stable, shared understanding of "what we are talking about" before other standards can address "what we should do about it."

Section 3: The Core of Governance - The AI Management System

While foundational standards provide the language and maps for AI, ISO/IEC 42001 provides the operational engine for governance. Published in December 2023, it is the world's first international, certifiable management system standard for Artificial Intelligence (7, 10). It offers a structured, systematic, and auditable framework for organizations to govern their development, provision, or use of AI systems responsibly.

ISO/IEC 42001: AI management system (AIMS)

The core purpose of ISO/IEC 42001 is to specify the requirements for establishing, implementing, maintaining, and continually improving an AI Management System (AIMS) (7, 8). It is a voluntary standard designed to be applicable to any organization, regardless of size, type, or nature, that interacts with AI-based products or services (7). Its primary objective is to help organizations manage AI-related risks and opportunities, balancing innovation with robust governance (7, 10).

The standard is structured to align with other prominent ISO management system standards, such as ISO/IEC 27001 for information security, following the high-level structure known as Annex SL (7, 9). This facilitates the integration of an AIMS with an organization's existing governance processes, such as those for security, privacy, or quality management (8, 9). Like its counterparts, it is built on the Plan-Do-Check-Act (PDCA) model, a cycle that promotes continuous improvement (11).

Key requirements of the standard mandate that an organization:

  • Define its context and identify stakeholders: Understand the internal and external factors influencing its AI systems and the needs of interested parties (9).
  • Demonstrate leadership and establish an AI policy: Top management must commit to the AIMS and create a policy that provides a framework for setting AI objectives (9, 11).
  • Conduct AI risk and impact assessments: Systematically identify, analyze, and treat risks associated with AI, including potential impacts on individuals and society (8, 11). This includes addressing issues like bias, data protection, and accountability (8, 11).
  • Manage the AI system lifecycle: Implement controls and processes throughout the entire lifecycle, from requirements and data acquisition through to model training, deployment, monitoring, and decommissioning (8, 12).
  • Provide support and resources: Allocate necessary resources, ensure personnel are competent, and promote awareness of the AIMS and responsible AI practices (7, 12).
  • Monitor, measure, and audit: Continuously evaluate the performance of the AIMS and the AI systems it governs through monitoring and internal audits (11).

The value of ISO/IEC 42001 is significantly amplified by its certifiability. Achieving certification from an accredited third party provides tangible assurance to customers, regulators, investors, and the public that an organization has effectively implemented a robust system for responsible AI governance (3, 10). This serves as a powerful tool to build trust, demonstrate due diligence, gain a competitive advantage, and prepare for emerging regulations like the EU AI Act (8, 12). The standard functions as an operationalization engine, translating high-level principles of fairness, transparency, and accountability into the concrete, documented, and auditable business processes required for day-to-day operational reality.

Section 4: The Pillars of Trustworthy AI

An effective AI Management System, as defined by ISO/IEC 42001, relies on a deep understanding of the multifaceted nature of AI trustworthiness. To support this, SC 42 has developed a suite of standards and technical reports that provide detailed guidance on the specific components of trust. These documents function as pillars, offering specialized knowledge that informs the risk assessments and controls within an AIMS. They follow a deliberate "drill-down" structure, moving from broad principles to highly specific technical guidance.

ISO/IEC 23894: Guidance on risk management

This standard provides guidance on managing risks specifically related to AI, complementing and tailoring general enterprise risk management frameworks like ISO 31000 (23). It recognizes that AI introduces unique challenges stemming from its ability to learn, adapt, and make autonomous decisions (23). The standard guides organizations through a process of identifying, assessing, and treating AI-specific risks across the entire system lifecycle, from planning and data collection to deployment and retirement (23). It provides informative annexes detailing common AI-related objectives and risk sources, such as data quality issues, model transparency failures, algorithmic bias, and security vulnerabilities (23).

ISO/IEC TR 24028: Overview of trustworthiness in artificial intelligence

This technical report serves as the high-level conceptual framework for the entire topic of AI trustworthiness (25). It provides an overview of the key attributes that constitute a trustworthy system, creating a shared understanding of this complex and often abstract concept. The report surveys and defines critical properties, including:

  • Reliability, Resilience, and Robustness: The ability of the system to perform consistently and maintain its integrity under varying or adverse conditions (25, 26).
  • Transparency and Explainability: The property of making information about the system available and ensuring its decisions can be understood by humans (25, 26).
  • Accountability and Ethics: Ensuring compliance with legal and ethical norms, including fairness and human oversight (25).
  • Safety, Security, and Privacy: Protecting against harm, unauthorized access, and data misuse (25, 27).

The report also discusses common vulnerabilities, threats, and potential mitigation measures, acting as a foundational guide for organizations beginning to structure their approach to AI trust (26, 27).

ISO/IEC TR 24027: Bias in AI systems and AI aided decision making

Drilling down into one of the most significant risks to trustworthiness, this technical report provides focused guidance on identifying and mitigating unwanted bias (28, 29). It describes the primary sources of bias, categorizing them as human cognitive bias (prejudices of developers), data bias (unrepresentative or flawed datasets), and engineering bias (technical choices that inadvertently favor certain outcomes) (28). The report details quantitative metrics for assessing bias, such as demographic parity (ensuring outcomes are independent of sensitive attributes) and equalized odds (ensuring error rates are equal across different groups) (28). Finally, it outlines mitigation strategies that can be applied throughout the AI lifecycle, from data collection and model development to post-deployment monitoring (28, 29).
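To make these metrics concrete, the sketch below computes a demographic parity difference and an equalized odds difference for binary predictions across two groups. The function names and the two-group simplification are illustrative assumptions for this article, not definitions taken from the report:

```python
def demographic_parity_diff(preds, groups):
    """Absolute gap in positive-prediction rate between two groups.
    preds: 0/1 predictions; groups: a group label per instance."""
    rate = {}
    for g in set(groups):
        sel = [p for p, gg in zip(preds, groups) if gg == g]
        rate[g] = sum(sel) / len(sel)
    a, b = rate.values()  # simplification: assumes exactly two groups
    return abs(a - b)


def equalized_odds_diff(preds, labels, groups):
    """Largest gap across two groups in true-positive or false-positive rate."""
    def rates(g):
        tp = fp = pos = neg = 0
        for p, y, gg in zip(preds, labels, groups):
            if gg != g:
                continue
            if y == 1:
                pos += 1
                tp += p
            else:
                neg += 1
                fp += p
        # assumes each group has both positive and negative instances
        return tp / pos, fp / neg

    gs = sorted(set(groups))
    (tpr0, fpr0), (tpr1, fpr1) = rates(gs[0]), rates(gs[1])
    return max(abs(tpr0 - tpr1), abs(fpr0 - fpr1))
```

In practice a system can satisfy one metric while violating the other, which is why the report treats metric selection as a context-dependent decision rather than prescribing a single measure.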

ISO/IEC 24029-2: Assessment of the robustness of neural networks — Part 2: Methodology for the use of formal methods

This standard addresses the technical challenge of verifying the robustness of neural networks, which are often considered "black boxes" (41). It specifies a methodology for using formal methods—rigorous mathematical and logical tools—to prove that a neural network satisfies specific properties (41). The process involves creating a mathematical abstraction of the network, expressing robustness properties as logical formulas (e.g., "for any input within this range, the output will not change by more than a given bound ε"), and using verification tools to formally prove or disprove the property (41). This provides a much higher level of assurance than traditional statistical testing, which is critical for high-stakes applications.
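One concrete technique in this family is interval bound propagation: given box bounds on the input, it derives guaranteed (if conservative) bounds on every output of a ReLU network. The sketch below is a deliberately simplified illustration of that idea using plain list-based weights; it is not the methodology text of ISO/IEC 24029-2 itself:

```python
def interval_forward(layers, lo, hi):
    """Propagate the input box [lo, hi] through affine layers with ReLU
    between them, returning sound element-wise output bounds.
    layers: list of (W, b), where W is a list of weight rows."""
    for idx, (W, b) in enumerate(layers):
        new_lo, new_hi = [], []
        for row, bias in zip(W, b):
            l = h = bias
            for w, xl, xh in zip(row, lo, hi):
                # a positive weight preserves interval order; a negative one flips it
                l += w * (xl if w >= 0 else xh)
                h += w * (xh if w >= 0 else xl)
            new_lo.append(l)
            new_hi.append(h)
        lo, hi = new_lo, new_hi
        if idx < len(layers) - 1:  # ReLU on hidden layers only
            lo = [max(0.0, v) for v in lo]
            hi = [max(0.0, v) for v in hi]
    return lo, hi
```

If the proven output interval stays within the required range, the robustness property holds for every input in the box, not merely for the finite set of points a statistical test would sample.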

ISO/IEC TR 5469: Functional safety and AI systems

For safety-critical domains like autonomous vehicles or medical devices, this technical report provides guidance on integrating AI into functionally safe systems (39, 40). It addresses the inherent tension between the non-deterministic nature of some AI models and the stringent predictability requirements of functional safety standards like IEC 61508 (40). The report discusses AI-specific risk factors, such as the lack of a complete a-priori specification and the potential for model drift over time (39). It proposes mitigation measures, including architectural patterns like using a safe back-up function (a non-AI system that takes over if the AI fails), supervision (a safety monitor that constrains the AI's outputs), and redundancy with diversity (using multiple, different AI models to reduce the likelihood of common-mode failures) (39, 40).

Section 5: The Data Imperative - Quality, Lifecycle, and Analytics

The ISO/IEC standards ecosystem is built on the fundamental premise that the trustworthiness, reliability, and performance of most AI systems are inextricably linked to the quality and governance of the data they consume. Recognizing data as a primary source of both value and risk, SC 42 has developed a comprehensive suite of standards dedicated to data management. This rigorous focus on data provides the tools to address the root cause of many AI failures, such as bias and lack of robustness, rather than merely treating symptoms at the model level.

ISO/IEC 5259 series: Data quality for analytics and machine learning (ML)

This multi-part standard provides a detailed framework for managing data quality specifically in the context of AI and ML (33). It adapts and extends the foundational ISO/IEC 25012 data quality model to address the unique demands of AI applications (30, 32). The series is designed to be a comprehensive toolkit, with individual parts covering the entire spectrum of data quality management:

  • Part 1: Overview, terminology, and examples establishes the foundational concepts and shared language for the series (33).
  • Part 2: Data quality measures provides specific, quantifiable metrics for assessing data quality characteristics like accuracy, completeness, consistency, and credibility (30, 32).
  • Part 3: Data quality management process specifies requirements for establishing, implementing, and continually improving a data quality management process. This part is certifiable, allowing organizations to demonstrate their commitment to data quality (33).
  • Part 4: Data quality process framework outlines standardized processes for tasks like data labeling and validation, tailored for different ML approaches (supervised, unsupervised, etc.) (31).
  • Part 5: Data quality governance framework provides guidance on establishing governance structures to direct and oversee data quality initiatives (31, 33).
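Measures in the spirit of Part 2, such as completeness and consistency, can be computed quite directly. The following is a minimal sketch with invented function names and definitions chosen for illustration; it does not reproduce the standard's normative measures:

```python
def completeness(records, fields):
    """Fraction of required field values that are present
    (treating None and the empty string as missing)."""
    total = len(records) * len(fields)
    filled = sum(
        1 for r in records for f in fields
        if r.get(f) not in (None, "")
    )
    return filled / total if total else 1.0


def consistency(records, rule):
    """Fraction of records satisfying a cross-field rule (a predicate)."""
    return sum(1 for r in records if rule(r)) / len(records)
```

Tracking such measures over time, rather than as a one-off check, is what Part 3's certifiable management process is designed to institutionalize.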

ISO/IEC 8183: Data life cycle framework

This standard defines a structured framework for managing data processing throughout the entire AI system life cycle (34). It provides a clear, ten-stage model that maps the journey of data from its inception to its eventual retirement, ensuring that governance and quality are considered at every step (34). The ten stages are:

  1. Idea Conception
  2. Business Requirements
  3. Data Planning
  4. Data Acquisition
  5. Data Preparation
  6. Building a Model
  7. System Deployment
  8. System Operation
  9. Data Decommissioning
  10. System Decommissioning

This lifecycle perspective helps organizations avoid common pitfalls like poor data quality and compliance violations by providing a roadmap for effective data management and establishing clear control points at each phase of the AI system's existence (34, 35).
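For tooling purposes, the ten stages above could be represented as an ordered enumeration so that data-processing activities can be tagged and sequence violations flagged. This is a hypothetical sketch for illustration, not part of ISO/IEC 8183:

```python
from enum import IntEnum


class DataLifecycleStage(IntEnum):
    """The ten data life cycle stages, in order (illustrative encoding)."""
    IDEA_CONCEPTION = 1
    BUSINESS_REQUIREMENTS = 2
    DATA_PLANNING = 3
    DATA_ACQUISITION = 4
    DATA_PREPARATION = 5
    BUILDING_A_MODEL = 6
    SYSTEM_DEPLOYMENT = 7
    SYSTEM_OPERATION = 8
    DATA_DECOMMISSIONING = 9
    SYSTEM_DECOMMISSIONING = 10


def is_forward_transition(current, nxt):
    """True if nxt is a later stage than current (no moving backwards)."""
    return nxt > current
```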

ISO/IEC 24668: Process management framework for big data analytics

This standard provides a framework for managing the processes required to effectively leverage big data analytics across an organization (42, 43). It is designed to help organizations structure their analytics practices, enable different functional groups to work together effectively, and assess their process capabilities (42, 44). The framework specifies several key process categories, including organization stakeholder processes, competency development processes, data management processes, analytics development processes, and technology integration processes (42). By providing a structured approach based on global best practices, the standard helps organizations improve decision-making, reduce errors, and gain a competitive advantage from their data assets (42).

Section 6: Implementation in Practice - Lifecycle Processes and Use Cases

To bridge the gap between abstract principles and concrete implementation, the ISO/IEC AI standards portfolio includes documents that provide practical guidance on the end-to-end development process and draw lessons from real-world applications. These standards are designed to integrate responsible AI practices into established engineering workflows and ensure that the standardization process itself remains grounded in market realities.

ISO/IEC 5338: AI system life cycle processes

This standard defines a comprehensive set of processes for describing the life cycle of AI systems, particularly those based on machine learning and heuristic systems (37). A key feature of this standard is its pragmatic approach of extending existing, widely adopted international standards for systems engineering (ISO/IEC/IEEE 15288) and software engineering (ISO/IEC/IEEE 12207) rather than creating a new framework from scratch (37, 38). This strategy is designed to lower the barrier to adoption by allowing organizations to build upon their mature development processes, tooling, and talent.

ISO/IEC 5338 categorizes its processes into three types, reflecting this evolutionary approach (38):

  • Generic Processes: These are identical to the processes found in traditional system and software lifecycle standards.
  • Modified Processes: These are existing processes that have been adapted with AI-specific particularities, such as adding considerations for model retraining in the maintenance process.
  • AI-Specific Processes: These are new processes introduced to address unique characteristics of AI systems, such as a process for continuous validation to monitor for model drift after deployment (38).

By encouraging organizations to evolve their existing System Development Life Cycle (SDLC) instead of creating a separate "AI development" silo, the standard promotes efficiency, better adoption of AI, and mutual understanding among all stakeholders.
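The AI-specific continuous-validation process mentioned above can be approximated in tooling as a sliding-window performance monitor. The class and method names here are illustrative assumptions, and real drift detection would typically combine such monitors with statistical tests:

```python
from collections import deque


class ContinuousValidator:
    """Sliding-window accuracy monitor: flags possible model drift when
    accuracy over the most recent window falls below a threshold."""

    def __init__(self, window=100, threshold=0.9):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def record(self, prediction, actual):
        """Log whether a production prediction matched the observed outcome."""
        self.window.append(prediction == actual)

    def accuracy(self):
        return sum(self.window) / len(self.window) if self.window else 1.0

    def drift_suspected(self):
        # only judge once the window is full, to avoid noisy early alarms
        return (len(self.window) == self.window.maxlen
                and self.accuracy() < self.threshold)
```

A monitor like this would sit in the system-operation stage, feeding alerts back into the modified maintenance process that triggers retraining.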

ISO/IEC TR 24030: Use cases

This technical report serves a unique and critical role within the standardization ecosystem by providing a curated collection of representative AI use cases from a diverse range of application domains, including healthcare, finance, manufacturing, and agriculture (36). The 2024 edition of the report includes 81 in-operation use cases selected from a larger pool of submissions (36).

The purpose of this document is not to set requirements, but to fulfill several strategic functions. It illustrates the practical applicability of AI standardization work, provides concrete examples for other committees to reference, and helps identify new technical requirements emerging from the market (36). The collection and analysis of these use cases create a vital feedback loop that keeps the standardization process relevant and market-driven. It is a two-way street: the use cases demonstrate how existing standards are being applied in the real world, and, more importantly, they reveal challenges, gaps, and new requirements that can then feed back into the development of future standards. This formalized mechanism ensures that ISO/IEC standards do not become purely academic exercises but remain connected to the practical needs and evolving challenges of the global AI industry.

Section 7: Conclusion

The portfolio of standards for Artificial Intelligence developed by ISO/IEC JTC 1/SC 42 represents a comprehensive, deliberate, and deeply interconnected ecosystem. It is not a disparate collection of documents but a strategically designed framework guided by a philosophy of holistic and responsible systems integration. This ecosystem provides a flexible yet robust toolkit for organizations of all sizes to navigate the complexities of AI innovation, governance, and compliance in a globally recognized manner.

The framework is anchored by the certifiable ISO/IEC 42001 AI Management System, which functions as the central operational hub for implementing and demonstrating responsible AI governance. This core is supported and informed by a series of interlocking standards that address every critical facet of the AI landscape:

  • Foundational standards (ISO/IEC 22989 and 23053) establish the essential common language and conceptual architecture, ensuring all stakeholders can communicate and collaborate effectively.
  • A suite of trustworthiness standards (including ISO/IEC 23894 on risk, TR 24028 on trustworthiness principles, TR 24027 on bias, and TR 5469 on functional safety) provides deep-dive guidance on identifying, assessing, and mitigating the unique risks posed by AI systems.
  • Rigorous data governance frameworks (the ISO/IEC 5259 series on data quality and ISO/IEC 8183 on the data lifecycle) address the root source of many AI challenges, building trust from the ground up by ensuring the integrity of the data that fuels AI.
  • Practical implementation guidance (ISO/IEC 5338 on lifecycle processes and TR 24030 on use cases) bridges the gap between theory and practice, integrating AI development into established engineering disciplines and ensuring the standards remain relevant to real-world applications.

This modular and hierarchical structure allows organizations to adopt and apply the standards in a way that is tailored to their specific context, scale, and risk profile. By providing a clear, consensus-based path toward responsible AI, the ISO/IEC standards ecosystem serves as a critical enabler of trust, a facilitator of global trade, and a foundational pillar for the future of AI regulation and innovation. Pharma Best Practices will cover highlights of each standard in the next series of articles.

Section 8: Citations

  1. Wikipedia. ISO/IEC JTC 1/SC 42. https://en.wikipedia.org/wiki/ISO/IEC_JTC_1/SC_42
  2. JTC 1 Information Site. SC 42 - Artificial Intelligence. https://jtc1info.org/sd-2-history/jtc1-subcommittees/sc-42/
  3. UNESCO. How ISO and IEC are developing international standards for the responsible adoption of AI. https://www.unesco.org/ethics-ai/en/articles/how-iso-and-iec-are-developing-international-standards-responsible-adoption-ai
  4. American National Standards Institute (ANSI). ISO/IEC JTC 1 Information Technology. https://www.ansi.org/iso/ansi-activities/iso-iec-jtc-1-information-technology
  5. Scribd. ISO/IEC JTC 1/SC 42. https://www.scribd.com/document/809288947/ISO-IEC-JTC-1-SC-42
  6. International Telecommunication Union (ITU). ISO/IEC JTC1 SC 42 Keynote by Wael Diab. https://www.itu.int/en/ITU-T/extcoop/ai-data-commons/Documents/ISO_IEC%20JTC1%20SC%2042%20Keynote_Wael%20Diab.pdf
  7. Prompt Security. Understanding the ISO/IEC 42001. https://www.prompt.security/blog/understanding-the-iso-iec-42001
  8. ISMS.online. Understanding ISO 42001 and Its Importance. https://www.isms.online/iso-42001/
  9. Wolf & Company, P.C. Implementing ISO standards for quality management of AI systems. https://www.wolfandco.com/resources/white-paper/implementing-iso-standards-quality-management-ai-systems/
  10. Deloitte. ISO 42001 standard for AI governance and risk management. https://www.deloitte.com/us/en/services/consulting/articles/iso-42001-standard-ai-governance-risk-management.html
  11. KPMG. ISO/IEC 42001: The new global standard for AI governance. https://kpmg.com/ch/en/insights/artificial-intelligence/iso-iec-42001.html
  12. A-LIGN. Understanding ISO 42001: The New AI Management Systems Standard. https://www.a-lign.com/articles/understanding-iso-42001
  13. International Telecommunication Union (ITU). ISO/IEC 24030:2019(E) Information technology — Artificial Intelligence (AI) — Use cases. https://www.itu.int/en/ITU-T/focusgroups/ai4h/Documents/all/FGAI4H-H-025-A02.pdf
  14. iTeh Standards. ISO/IEC JTC 1/SC 42 - Artificial intelligence. https://standards.iteh.ai/catalog/tc/iso/a8b53a70-2bb4-40a8-abf1-f42dde4432c5/iso-iec-jtc-1-sc-42
  15. AISECTraining. ISO/IEC JTC 1/SC 42: The Global Hub for AI Standardization. https://www.aisectraining.com/iso-iec-jtc1-sc42
  16. ETSI. Overview of ISO/IEC JTC 1/SC 42 Artificial Intelligence. https://docbox.etsi.org/Workshop/2024/02_ETSIAICONFERENCE/S03_STANDARDIZ_AIACT_LEGALFMK/AI_Overview_ISO_SC42_DIAB_Wael.pdf
  17. United Language Group. Regulating Big Data with ISO/IEC JTC 1/SC 42. https://www.unitedlanguagegroup.com/blog/technology/regulating-big-data-sc-42-artificial-intelligence
  18. Nemko. ISO/IEC 22989: Standardized Terminology for Artificial Intelligence. https://digital.nemko.com/standards/iso-iec-22989
  19. What is AI?. ISO/IEC 22989: A Comprehensive Overview of AI Concepts and Terminology. https://www.whatisai.info/blog/isoiec-22989-overview-of-ai-concepts/
  20. arc42. ISO/IEC 22989: Artificial intelligence — concepts and terminology. https://quality.arc42.org/standards/iso-iec-22989
  21. Nemko. ISO/IEC 23053: AI Systems Framework for Machine Learning. https://digital.nemko.com/standards/iso-iec-23053
  22. ANSI Blog. Why Should Organizations Adhere to Standards for AI?. https://blog.ansi.org/ansi/why-should-organizations-adhere-to-ai-standards/
  23. Stendard. A Comprehensive Guide to ISO/IEC 23894:2023 for AI Risk Management. https://stendard.com/en-sg/blog/iso-23894/
  24. IT-Certs. Artificial Intelligence Risk Management (ISO/IEC 23894) Lead Implementer. https://www.itcerts.ca/certification-programs/artificial-intelligence-risk-management-iso-iec-23894-lead-implementer/
  25. Pacific Certification. ISO/IEC TR 24028:2020 – A Guide to Trustworthiness in AI. https://pacificcert.com/iso-iec-tr-24028-2020-artificial-intelligence/
  26. Standards New Zealand. SNZ TR ISO/IEC 24028:2025. https://www.standards.govt.nz/shop/snz-tr-isoiec-240282025
  27. OECD.AI. ISO/IEC TR 24028:2020 - Information technology - Artificial intelligence - Overview of trustworthiness in artificial intelligence. https://oecd.ai/en/catalogue/tools/isoiec-tr-240282020-information-technology-artificial-intelligence-overview-of-trustworthiness-in-artificial-intelligence
  28. Nemko. ISO/IEC TR 24027: A Comprehensive Approach to Bias Mitigation in AI Systems. https://digital.nemko.com/standards/iso-iec-24027
  29. iTeh Standards. ISO/IEC TR 24027:2021 Preview. https://cdn.standards.iteh.ai/samples/77607/c0664994eace4bd597db80bb10c18dec/ISO-IEC-TR-24027-2021.pdf
  30. ISO25000.com. ISO/IEC 5259. https://iso25000.com/index.php/en/iso-25000-standards/iso-5259
  31. Nemko. ISO/IEC 5259-4: Data Quality Processes for Machine Learning. https://digital.nemko.com/standards/iso-iec-5259-4
  32. CEUR Workshop Proceedings. Data Quality in the Age of Artificial Intelligence: The Role of ISO/IEC 5259. https://ceur-ws.org/Vol-3916/paper_03.pdf
  33. SGS. ISO/IEC 5259-3 Certification – Artificial Intelligence (AI) – Data Quality Management for Analytics and Machine Learning (ML). https://www.sgs.com/en-us/services/iso-iec-5259-3-certification-artificial-intelligence-ai-data-quality-management-for-analytics
  34. Nemko. ISO/IEC 8183: Data Lifecycle Framework for AI Systems. https://digital.nemko.com/standards/iso-iec-8183?hsLang=en
  35. NATO. Data Quality Framework. https://www.nato.int/cps/en/natohq/official_texts_237308.htm
  36. ANSI Webstore. ISO/IEC TR 24030:2024 Preview. https://webstore.ansi.org/preview-pages/ISO/preview_ISO+IEC+TR+24030-2024.pdf
  37. Pillar Security. Embracing Security in AI: Unpacking the New ISO/IEC 5338 Standard. https://www.pillar.security/blog/embracing-security-in-ai-unpacking-the-new-iso-iec-5338-standard
  38. JTC 1 Information Site. Introduction of AI system life cycle processes. https://jtc1info.org/wp-content/uploads/2023/12/ISO_IEC_AI_Workshop_yuchang_cheng.pdf
  39. DIN Media. ISO/IEC TR 5469. https://www.dinmedia.de/en/technical-rule/iso-iec-tr-5469/376739328
  40. JTC 1 Information Site. Functional Safety and AI technologies. https://jtc1info.org/wp-content/uploads/2023/06/Riccardo_Mariani-Functional_Safety_and_AI_technologies.pdf
  41. SZSTR. ISO/IEC TR 24029-2:2023 AI Neural Network Robustness Assessment. https://www.szstr.com/aiq-en/iso24029
  42. IEC. New international standard provides process framework for managing big data analytics. https://www.iec.ch/blog/new-international-standard-provides-process-framework-managing-big-data-analytics
  43. ResearchGate. The spider diagram of standard ISO/IEC 24668. https://www.researchgate.net/figure/The-spider-diagram-of-standard-ISO-IEC-24668-showing-the-values-of-the-suitability-index_fig4_353706160
  44. Big Data Framework. An overview of the Big Data Framework. https://www.bigdataframework.org/an-overview-of-the-big-data-framework/
