On August 1, 2024, the European Union's AI Act came into force, bringing in new regulations that will shape how AI technologies are developed and used within the E.U., with far-reaching implications for U.S. businesses. The AI Act represents a significant shift in how artificial intelligence is regulated within the European Union, setting standards to ensure that AI systems are ethical, transparent, and aligned with fundamental rights. This new regulatory landscape demands careful attention from U.S. companies that operate in the E.U. or work with E.U. partners. Compliance is not just about avoiding penalties; it's an opportunity to strengthen your business by building trust and demonstrating a commitment to ethical AI practices. This guide provides a detailed look at the key steps to navigate the AI Act and how your business can turn compliance into a competitive advantage.

🔍 Comprehensive AI Audit: Begin by thoroughly auditing your AI systems to identify those under the AI Act's jurisdiction. This involves documenting how each AI application functions, mapping its data flows, and ensuring you understand the regulatory requirements that apply (a minimal inventory sketch follows this post).

🛡️ Understanding Risk Levels: The AI Act categorizes AI systems into four risk levels: minimal, limited, high, and unacceptable. Your business needs to classify each AI application accurately to determine the necessary compliance measures; those deemed high-risk require the most stringent controls.

📋 Implementing Robust Compliance Measures: For high-risk AI applications, detailed compliance protocols are crucial. These include regular testing for fairness and accuracy, ensuring transparency in AI-driven decisions, and providing clear information to users about how their data is used.

👥 Establishing a Dedicated Compliance Team: Create a specialized team to manage AI compliance efforts. This team should regularly review AI systems, update protocols in line with evolving regulations, and ensure that all staff are trained on the AI Act's requirements.

🌍 Leveraging Compliance as a Competitive Advantage: Compliance with the AI Act can enhance your business's reputation by building trust with customers and partners. By prioritizing transparency, security, and ethical AI practices, your company can stand out as a leader in responsible AI use, fostering stronger relationships and driving long-term success.

#AI #AIACT #Compliance #EthicalAI #EURegulations #AIRegulation #TechCompliance #ArtificialIntelligence #BusinessStrategy #Innovation
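To make the audit step above concrete, here is a minimal sketch of what one entry in an AI-system inventory might capture. The schema and every field name are illustrative assumptions; the AI Act does not prescribe any particular format.

```python
# Hypothetical AI-system inventory entry for the audit step above.
# All field names are illustrative; the AI Act prescribes no such schema.
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    name: str                    # internal system identifier
    purpose: str                 # what the system does and for whom
    risk_level: str              # "minimal" | "limited" | "high" | "unacceptable"
    data_sources: list = field(default_factory=list)  # inputs and data flows
    eu_exposure: bool = False    # used in the EU or affecting EU users?
    owner: str = ""              # accountable team for compliance follow-up

# Example: a hiring tool would likely land in the high-risk bucket (Annex III).
entry = AIInventoryEntry(
    name="resume-screener-v2",
    purpose="Ranks job applicants for recruiters",
    risk_level="high",
    data_sources=["ATS records", "candidate CVs"],
    eu_exposure=True,
    owner="People Analytics",
)
print(entry)
```

Even a flat record like this makes the later steps (risk classification, documentation, audits) much easier to run across a whole portfolio of systems.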
How the EU AI Act Regulates AI Development
Explore top LinkedIn content from expert professionals.
-
This report provides the first comprehensive analysis of how the EU AI Act regulates AI agents, increasingly autonomous AI systems that can directly impact real-world environments. Our three primary findings are:

1. The AI Act imposes requirements on the general-purpose AI (GPAI) models underlying AI agents (Ch. V) and on the agent systems themselves (Ch. III). We assume most agents rely on GPAI models with systemic risk (GPAISR). Accordingly, the applicability of various AI Act provisions depends on (a) whether agents proliferate systemic risks under Ch. V (Art. 55), and (b) whether they can be classified as high-risk systems under Ch. III. We find that (a) generally holds, requiring providers of GPAISRs to assess and mitigate systemic risks from AI agents. However, it is less clear whether AI agents will in all cases qualify as (b) high-risk AI systems, as this depends on the agent's specific use case. When built on GPAI models, AI agents should be considered high-risk GPAI systems, unless the GPAI model provider deliberately excluded high-risk uses from the intended purposes for which the model may be used.

2. Managing agent risks effectively requires governance along the entire value chain. The governance of AI agents illustrates the "many hands problem", where accountability is obscured due to the unclear allocation of responsibility across a multi-stakeholder value chain. We show how requirements must be distributed along the value chain, accounting for the various asymmetries between actors, such as the superior resources and expertise of model providers and the context-specific information available to downstream system providers and deployers. In general, model providers must build the fundamental infrastructure, system providers must adapt these tools to their specific contexts, and deployers must adhere to and apply these rules during operation.

3. The AI Act governs AI agents through four primary pillars: risk assessment, transparency tools, technical deployment controls, and human oversight. We derive these complementary pillars by conducting an integrative review of the AI governance literature and mapping the results onto the EU AI Act. Underlying these pillars, we identify 10 sub-measures for which we note specific requirements along the value chain, presenting an interdependent view of the obligations on GPAISR providers, system providers, and system deployers.

By Amin Oueslati, Robin Staes-Polet at The Future Society

Read: https://lnkd.in/e6865zWq
-
The EU just said "no brakes" on AI regulation. Despite heavy pushback from tech giants like Apple, Meta, and Airbus, the EU pressed forward last week with its General-Purpose AI Code of Practice. Here's what's coming:

→ General-purpose AI systems (think GPT, Gemini, Claude) need to comply by August 2, 2025.
→ High-risk systems (biometrics, hiring tools, critical infrastructure) must meet regulations by 2026.
→ Legacy and embedded tech systems will have to comply by 2027.

If you're a Chief Data Officer, here's what should be on your radar:

1. Data Governance & Risk Assessment: Clearly map your data flows, perform thorough risk assessments similar to those required under GDPR, and carefully document your decisions for audits.

2. Data Quality & Bias Mitigation: Ensure your data is high-quality, representative, and transparently sourced. Responsibly manage sensitive data to mitigate biases effectively.

3. Transparency & Accountability: Be ready to trace and explain AI-driven decisions. Maintain detailed logs (a minimal log-entry sketch follows this post) and collaborate closely with legal and compliance teams to streamline processes.

4. Oversight & Ethical Frameworks: Implement human oversight for critical AI decisions, regularly review and test systems to catch issues early, and actively foster internal AI ethics education.

These new regulations won't stop at Europe's borders. Like GDPR, they're likely to set global benchmarks for responsible AI usage. We're entering a phase where embedding governance directly into how organizations innovate, experiment, and deploy data and AI technologies will be essential.
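On the traceability point above, here is a minimal sketch of the kind of decision log an AI-assisted workflow could emit. The record format and field names are assumptions for illustration; neither the AI Act nor the Code of Practice mandates this exact structure.

```python
# Illustrative audit-log record for one AI-assisted decision.
# The schema is hypothetical; align a real one with legal/compliance teams.
import json
from datetime import datetime, timezone

def log_ai_decision(system, model_version, inputs_ref, decision, human_reviewer=None):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,                  # which AI system made/assisted the call
        "model_version": model_version,    # ties the decision to an auditable model
        "inputs_ref": inputs_ref,          # pointer to stored inputs, not raw PII
        "decision": decision,
        "human_reviewer": human_reviewer,  # None flags a fully automated decision
    }
    return json.dumps(record)

print(log_ai_decision("loan-triage", "2025-07-rc1", "s3://decision-inputs/abc123",
                      "route_to_manual_review", human_reviewer="j.doe"))
```

Storing a pointer to inputs rather than the raw data keeps the log auditable without turning it into a second personal-data store.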
-
Understanding AI Compliance: Key Insights from the COMPL-AI Framework ⬇️

As AI models become increasingly embedded in daily life, ensuring they align with ethical and regulatory standards is critical. The COMPL-AI framework dives into how Large Language Models (LLMs) measure up to the EU's AI Act, offering an in-depth look at AI compliance challenges.

✅ Ethical Standards: The framework translates the EU AI Act's 6 ethical principles—robustness, privacy, transparency, fairness, safety, and environmental sustainability—into actionable criteria for evaluating AI models.

✅ Model Evaluation: COMPL-AI benchmarks 12 major LLMs and identifies substantial gaps in areas like robustness and fairness, revealing that current models often prioritize capabilities over compliance (a toy scoring sketch follows this post).

✅ Robustness & Fairness: Many LLMs show vulnerabilities in robustness and fairness, with significant risks of bias and performance issues under real-world conditions.

✅ Privacy & Transparency Gaps: The study notes a lack of transparency and privacy safeguards in several models, highlighting concerns about data security and responsible handling of user information.

✅ Path to Safer AI: COMPL-AI offers a roadmap to align LLMs with regulatory standards, encouraging development that not only enhances capabilities but also meets ethical and safety requirements.

Why is this important?

➡️ The COMPL-AI framework is crucial because it provides a structured, measurable way to assess whether large language models (LLMs) meet the ethical and regulatory standards set by the EU's AI Act, whose first requirements take effect in February 2025.

➡️ As AI is increasingly used in critical areas like healthcare, finance, and public services, ensuring these systems are robust, fair, private, and transparent becomes essential for user trust and societal impact. COMPL-AI highlights existing gaps in compliance, such as biases and privacy concerns, and offers a roadmap for AI developers to address these issues.

➡️ By focusing on compliance, the framework not only promotes safer and more ethical AI but also helps align technology with legal standards, preparing companies for future regulations and supporting the development of trustworthy AI systems.

How ready are we?
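To make the evaluation idea tangible, here is a toy sketch that turns per-principle benchmark scores into a gap report. The six principle names come from the post; the scores and the 0.75 threshold are invented and do not reproduce COMPL-AI's actual benchmarks or scoring.

```python
# Toy gap report over the six EU AI Act principles named above.
# Scores and the 0.75 threshold are invented; COMPL-AI defines its own
# benchmarks and scoring, which this sketch does not reproduce.
PRINCIPLES = ["robustness", "privacy", "transparency",
              "fairness", "safety", "environmental sustainability"]

def compliance_report(scores, threshold=0.75):
    for principle in PRINCIPLES:
        score = scores.get(principle)
        if score is None:
            print(f"{principle:30s}  --   no benchmark result")
            continue
        status = "ok" if score >= threshold else "GAP"
        print(f"{principle:30s} {score:.2f}  {status}")

# Invented numbers echoing the post's finding: robustness and fairness lag.
compliance_report({
    "robustness": 0.61, "privacy": 0.70, "transparency": 0.55,
    "fairness": 0.64, "safety": 0.82, "environmental sustainability": 0.77,
})
```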
-
In a new article, Tea Mustać, Co-Author of the new book "AI Act Compact" and co-host of the podcast RegINTL: Decoding AI together with Peter Hense 🇺🇦🇮🇱, provides an overview of Risk Categorization of AI Systems under the EU AI Act, comparing "skipping risk categorization or not giving it the attention it deserves" to "forgetting to check your parachute before skydiving - it'll work out fine… unless it doesn't. Knowing your AI's risk level isn't just about ticking boxes; it's about keeping your innovation (and business) alive."

Article "The AI Act Series: Risk Categorization" on AI Advances: https://lnkd.in/gzhjixyS

Find the infographic below in the comments to her latest blog post on the topic: https://lnkd.in/g-5wEH8T

* * *

AI risk categorization is a significant regulatory challenge, not just a simple task to be checked off. To comply with the EU AI Act, one needs to understand the risk classification rules laid out in Article 6(2). If you are a provider of a potential high-risk AI system, follow these steps (a code sketch of this decision logic follows this post):

1: Assess the AI System:
--> The AI system is high-risk if it is used as a safety component of, or is itself, a product covered by EU safety legislation in Annex I ("List of Union Harmonisation Legislation") and required to undergo a third-party conformity assessment under these laws.
--> The AI system is high-risk if it is listed in Annex III ("High-Risk AI Systems Referred to in Article 6(2)"). These include AI systems used in critical sectors such as biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and the administration of justice.
--> An AI system is always considered high-risk if it profiles individuals, i.e. automated processing of personal data to assess various aspects of a person's life, such as work performance, economic situation, health, preferences, interests, reliability, behaviour, location or movement.

2: Exceptions to High-Risk Classification: An AI system in the Annex III categories may not be considered high-risk if it:
- Performs a simple procedural task.
- Aids human decision-making without replacing it.
- Identifies patterns or deviations in decisions as a supplementary tool that includes human review.

* * *

Before launching a high-risk AI system on the EU market or putting it into service:
1) Document an assessment explaining its risk level, especially if it doesn't meet the high-risk criteria of Annex III.
2) Implement essential compliance measures, including data governance and transparency, and provide detailed documentation on the AI's capabilities and limitations.
3) Prepare for and complete any required third-party conformity assessments for safety-critical uses.

See: https://lnkd.in/gzhjixyS in AI Advances
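Here is a rough sketch of the Article 6(2)/6(3) decision logic described above, written as a classification function. The flags and their names are illustrative simplifications; a real determination requires reading Annex I and Annex III against the concrete use case, with legal review.

```python
# Simplified sketch of the Article 6(2)/6(3) classification described above.
# Flags are illustrative; a real assessment needs legal review of the Annexes.
from dataclasses import dataclass

@dataclass
class AISystem:
    annex_i_safety_component: bool = False   # safety component of an Annex I product
    third_party_conformity: bool = False     # Annex I law requires third-party assessment
    annex_iii_use_case: bool = False         # listed area: biometrics, hiring, etc.
    profiles_individuals: bool = False       # automated profiling of natural persons
    procedural_task_only: bool = False       # Art. 6(3) carve-outs below
    aids_human_decision: bool = False
    reviewed_pattern_detection: bool = False

def classify(s: AISystem) -> str:
    if s.annex_i_safety_component and s.third_party_conformity:
        return "high-risk (Annex I route)"
    if s.annex_iii_use_case:
        if s.profiles_individuals:
            return "high-risk (profiling: no carve-out applies)"
        if s.procedural_task_only or s.aids_human_decision or s.reviewed_pattern_detection:
            return "not high-risk (document the Art. 6(3) assessment)"
        return "high-risk (Annex III route)"
    return "not high-risk under Article 6(2)"

print(classify(AISystem(annex_iii_use_case=True, profiles_individuals=True)))
# -> high-risk (profiling: no carve-out applies)
```

Note how the carve-out branch still returns "document the assessment": under the steps above, concluding a system is not high-risk is itself a decision you must be able to show your work for.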
-
A GRC leader at a $5B revenue global fintech company asked me this about AI governance frameworks: "Do we start with the EU AI Act first or do we do all three [AI Act, ISO/IEC 42001, and NIST AI RMF] together?"

Here's how I think of each:

1. EU AI Act

Adopted in 2024, the European Union (EU) AI Act forbids:
-> Inference of non-obvious traits from biometrics
-> Real-time biometric identification in public
-> Criminal profiling not based on criminal behavior
-> Purposefully manipulative or deceptive systems
-> Inferring emotions in school/workplace
-> Blanket facial image collection
-> Social scoring

It heavily regulates AI systems:
-> Intended to be used as a safety component; and
-> Underlying products already EU-regulated

as well as AI systems involved in:
-> Criminal behavior risk assessment
-> Education admissions/decisions
-> Job recruitment/advertisement
-> Exam cheating identification
-> Public benefit decisions
-> Emergency call routing
-> Migration and asylum
-> Election management
-> Critical infrastructure
-> Health/life insurance
-> Law enforcement

Fines can be up to 35,000,000 Euros or 7% of worldwide annual revenue, whichever is higher. So ignoring the EU AI Act's requirements can be costly.

It's mandatory for anyone qualifying (according to the AI Act) as a:
-> Provider
-> Deployer
-> Importer
-> Distributor
-> Product Manufacturer
-> Authorized Representative

2. ISO/IEC 42001:2023

Published by the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) in December 2023, ISO 42001 requires building an AI management system (AIMS) to measure and treat risks to:
-> Safety
-> Privacy
-> Security
-> Health and welfare
-> Societal disruption
-> Environmental impact

An external auditor can certify this. Also, compliance with a "harmonised standard" of the EU AI Act, which ISO 42001 may become, gives you a presumption of conformity with some AI Act provisions.

But ISO 42001 is not a silver bullet. A U.S.-based company offering facial recognition for public places could be ISO 42001 certified but banned from operating in the EU. In any case, it's one of the few ways a third party can bless your AI governance program.

It's best for:
-> AI-powered B2B startups
-> Companies training on customer data
-> Heavily-regulated enterprises (healthcare/finance)

3. NIST AI RMF

The National Institute of Standards and Technology (NIST) Artificial Intelligence (AI) Risk Management Framework (RMF) launched in January 2023. ISO 42001 also names it as a reference document.

The AI RMF has four functions:
-> Map
-> Measure
-> Manage
-> Govern

These lay out best practices at a high level. But like all NIST standards, there is no way to be "certified." Still, because of NIST's credibility and the fact that it was the first major AI framework published, using the AI RMF is a good way for any company to build trust.

BOTTOM LINE

Stack AI frameworks to meet:
-> Regulatory requirements
-> Customer demands
-> Risk profile

How are you doing it?
-
The EU draft AI Code of Practice could affect open models and small developers in unintended ways. The draft Code (which outlines how model developers can comply with the AI Act):

1. Defines systemic risk too broadly. An open model with systemic risk is not exempt from the AI Act, and a lot depends on how the EU defines systemic risk.* However, the Code endorses an impossibly nebulous list of risks, including: persuasion, "loss of trust in media", "large-scale discrimination", and "oversimplification of knowledge". Yet these are not model-layer risks, and they aren't amenable to precise evaluation. Any 2B model can be easily tuned into a disinformation parrot or spambot. We need to be careful about lifting regulatory language from ethics literature.

2. Envisions pre-deployment audits. Developers of these models must submit to "independent testing" and file a safety report before deployment. But the Act did not mandate third-party testing or reporting before deployment. The Code would prevent an open release until the AI Office and "appropriate third party evaluators" have finished their work.

3. Requires developers to test the unforeseeable. Developers must test not just "reasonably foreseeable" applications but also applications that expose the model's "maximum potential" for systemic risk. It's a costly and indeterminate obligation that means testing for possible risks—not just foreseeable or probable risks. And it becomes more difficult and expensive in an open source context, where developers can modify or integrate the model in ways that aren't possible in a paywalled API environment.

4. Doesn't clarify the urgent obligations. All developers need to understand how to comply with, e.g., opt-outs. The Code defers the question, requiring developers to "make best efforts" with "widely used standards". But there are still no widely used standards (especially for text data), and developers are already training models that will be subject to the Act. If it's unclear how to comply, that exposes all developers to potential litigation—especially those with open datasets or auditable models.

5. Requires developers to draw a line in the sand. Developers must identify conditions under which they would pause, withdraw, or delete models. This isn't the first attempt to crystallize risk thresholds (see e.g. WH or SB1047), and the Code doesn't mandate a specific threshold. But if regulators disagree, or if thresholds vary widely—as they certainly will—that could trigger future intervention that adversely impacts open models.

To be clear, no one expects the EU to enforce the Code in a plainly ridiculous or adverse way. I know from experience the AI Office is led by good people who value open innovation in Europe. But the unintended effects of well-meaning rules can be significant. We should use this opportunity to get it right from day one.

* Still TBD whether the threshold will be 1E25 FLOP alone, or include other criteria, e.g. audiovisual risks like NCII (a back-of-envelope FLOP check follows this post).
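On the footnote's 1E25 FLOP question, here is a quick back-of-envelope check using the common approximation that dense-transformer training compute is about 6 × parameters × tokens. The model size and token count below are hypothetical, chosen only to show the arithmetic.

```python
# Back-of-envelope training-compute check against a 1e25 FLOP threshold.
# The 6*N*D rule of thumb and the example numbers are rough illustrations.
def training_flop(params: float, tokens: float) -> float:
    return 6 * params * tokens  # common dense-transformer approximation

THRESHOLD = 1e25  # the draft systemic-risk line discussed in the footnote

flop = training_flop(params=70e9, tokens=15e12)  # hypothetical 70B model, 15T tokens
side = "at/above" if flop >= THRESHOLD else "below"
print(f"{flop:.2e} FLOP -> {side} the 1e25 threshold")
# prints: 6.30e+24 FLOP -> below the 1e25 threshold
```

Under this approximation, even a sizable open model can sit just under the line, which is why the "FLOP alone vs. other criteria" question matters so much for open developers.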
-
https://lnkd.in/g5ir6w57

The European Union has adopted the AI Act as its first comprehensive legal framework specifically for AI, published in the Official Journal on July 12, 2024, and in force since August 1, 2024. The Act is designed to ensure the safe and trustworthy deployment of AI across various sectors, including healthcare, by setting harmonized rules for AI systems in the EU market.

1️⃣ Scope and Application: The AI Act applies to all AI system providers and deployers within the EU, including those based outside the EU if their AI outputs are used in the Union. It covers a wide range of AI systems, including general-purpose models and high-risk applications, with specific regulations for each category.

2️⃣ Risk-Based Classification: The Act classifies AI systems based on their risk levels. High-risk AI systems, especially in healthcare, face stringent requirements and oversight, while general-purpose AI models have additional transparency obligations. Prohibited AI practices include manipulative or deceptive uses, though certain medical applications are exempt.

3️⃣ Innovation and Compliance: To support innovation, the AI Act includes provisions like regulatory sandboxes for testing AI systems and exemptions for open-source AI models unless they pose systemic risks. High-risk AI systems must comply with both the AI Act and relevant sector-specific regulations, like the Medical Device Regulation (MDR) and the In Vitro Diagnostic Medical Device Regulation (IVDR).

4️⃣ Global Impact and Challenges: The AI Act may influence global AI regulation by setting high standards, and its implementation within existing sector-specific regulations could create complexities. The evolving nature of AI technology necessitates ongoing updates to the regulatory framework to balance innovation with safety and fairness.
-
HERE WE GO! It's now February 2, 2025, which means that the first requirements under the EU AI Act are officially in force.

1. The following AI systems are now prohibited (I'm oversimplifying of course, so for a deeper dive see Art. 5 AI Act ➡️ https://lnkd.in/en_im5UU):
- Predictive Policing Based on Profiling
- Social Scoring
- Exploitation of Vulnerabilities (age, disability, social/economic situations)
- Manipulative/Deceptive (Subliminal) Techniques
- Untargeted Facial Recognition Databases (think Clearview)
- Emotion Recognition (Workplace and Educational Institutions)
- Biometric Categorisation
- Real-Time Remote Biometric Identification for Law Enforcement

Non-compliance will trigger significant fines, plus AI systems can potentially be taken off the EU market. This also applies to businesses operating outside the EU as long as the model output is used in the EU or affects EU users.

2. AI literacy requirements kick in (see Art. 4 of the AI Act). Providers and deployers of AI systems shall take measures to ensure a "sufficient level of AI literacy" among their staff and others using AI systems on their behalf. There is no single list of AI literacy requirements to follow, so each organization should develop and tailor its AI literacy program depending on the level of technical knowledge, experience, and education of staff, the context in which AI systems are used, and the users of those AI systems (a toy role-based training matrix follows this post).

AI literacy, like AI governance, isn't just a box you check once. It is an ongoing commitment that must evolve along with the changes in technology and regulation.
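As one way to operationalize the tailoring idea in point 2, here is a toy role-based training matrix. The roles and module names are invented examples, not anything Article 4 specifies.

```python
# Toy role-based AI literacy matrix; roles and modules are invented examples.
LITERACY_PLAN = {
    "engineering": ["AI Act overview", "high-risk obligations", "model documentation"],
    "hr":          ["AI Act overview", "prohibited practices", "bias in hiring tools"],
    "leadership":  ["AI Act overview", "fines and liability", "governance oversight"],
}

def modules_for(role: str) -> list:
    # Everyone gets at least the baseline overview module.
    return LITERACY_PLAN.get(role, ["AI Act overview"])

print(modules_for("hr"))
```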
-
The EU AI Act just made some AI systems ILLEGAL, and tech giants are already pivoting.

As of February 2025, the EU AI Act's first prohibitions have officially kicked in - and we're seeing the impact ripple through the tech world.

→ In September last year, Meta suspended future AI model releases in Europe due to regulatory concerns.
→ DeepSeek AI — which kicked off the Nvidia $593B selloff last Monday — just got COMPLETELY BLOCKED in Italy over data protection issues.
→ Giants like Google and SAP are expressing fears around this slowing down innovation.

Here's what's now banned under the world's first major AI law:
❌ Cognitive manipulation – AI designed to exploit vulnerabilities (e.g., AI toys & apps influencing children's behavior). AMEN!
❌ Real-time biometric surveillance – No more live facial recognition in public spaces
❌ Biometric categorization – AI can't classify people based on race, gender, or personal traits
❌ Social scoring – No AI-driven ranking of individuals based on behavior or socioeconomic status

And these rules have teeth! Companies violating them could face fines of up to €35 million or 7% of global revenue — whichever is higher (see the quick calculation after this post).

But this also raises tough questions:
1. Will this stifle AI innovation? Could strict regulations slow down progress?
2. Is the definition of "unacceptable risk" too broad or too narrow? Could transformative beneficial AI get caught in the crossfire?
3. How will enforcement play out? Who decides when AI crosses the line?

The AI Wild West isn't over yet… but we're heading there. Businesses must adapt or risk being locked out of the EU market.

Is this the right move, or is the EU going too far? What's your take?

#EU #AI #innovation
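To spell out the "whichever is higher" mechanics, here is a two-line calculation. The €35M and 7% figures come from the post; the example revenue is made up.

```python
# "Whichever is higher" fine ceiling from the post: EUR 35M vs 7% of revenue.
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

print(f"{max_fine_eur(5_000_000_000):,.0f}")  # 5B revenue -> 350,000,000 (7% wins)
```

For any company with more than €500M in worldwide annual revenue, the 7% prong dominates, which is why large firms can't treat the €35M figure as the cap.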