The Artificial Intelligence Act, endorsed by the European Parliament yesterday, sets a global precedent by intertwining AI development with fundamental rights, environmental sustainability, and innovation. Below are the key takeaways:

Banned Applications: Certain AI applications would be prohibited due to their potential threat to citizens' rights. These include:
- Biometric categorization and the untargeted scraping of images for facial recognition databases.
- Emotion recognition in workplaces and educational institutions.
- Social scoring and predictive policing based solely on profiling.
- AI that manipulates behavior or exploits vulnerabilities.

Law Enforcement Exemptions: Use of real-time biometric identification (RBI) systems by law enforcement is mostly prohibited, with exceptions under strictly regulated circumstances, such as searching for missing persons or preventing terrorist attacks.

Obligations for High-Risk Systems: High-risk AI systems, which could significantly impact health, safety, and fundamental rights, must meet stringent requirements, including risk assessment, transparency, accuracy, and human oversight.

Transparency Requirements: General-purpose AI systems must adhere to transparency norms, including compliance with EU copyright law and the publication of training data summaries.

Innovation and SME Support: The act encourages innovation through regulatory sandboxes and real-world testing environments, particularly benefiting SMEs and start-ups developing innovative AI technologies.

Next Steps: Pending a final legal review and formal endorsement by the Council, the regulation will enter into force 20 days after publication in the Official Journal, with phased applicability for different provisions ranging from 6 to 36 months after entry into force.

It will be interesting to watch this unfold and the potential impact on other nations as they consider regulation. #aiethics #responsibleai #airegulation https://lnkd.in/e8dh7yPb
Key Changes in EU Tech Regulations
-
On August 1, 2024, the European Union's AI Act came into force, bringing in new regulations that will shape how AI technologies are developed and used within the EU, with far-reaching implications for U.S. businesses. The AI Act represents a significant shift in how artificial intelligence is regulated within the European Union, setting standards to ensure that AI systems are ethical, transparent, and aligned with fundamental rights. For U.S. companies that operate in the EU or work with EU partners, this new regulatory landscape demands careful attention. Compliance is not just about avoiding penalties; it's an opportunity to strengthen your business by building trust and demonstrating a commitment to ethical AI practices. This guide provides a detailed look at the key steps to navigate the AI Act and how your business can turn compliance into a competitive advantage.

🔍 Comprehensive AI Audit: Begin with a thorough audit of your AI systems to identify those under the AI Act's jurisdiction. This involves documenting how each AI application functions, mapping its data flows, and making sure you understand the regulatory requirements that apply.

🛡️ Understanding Risk Levels: The AI Act categorizes AI systems into four risk levels: minimal, limited, high, and unacceptable. Your business needs to classify each AI application accurately to determine the necessary compliance measures; systems deemed high-risk require the most stringent controls.

📋 Implementing Robust Compliance Measures: For high-risk AI applications, detailed compliance protocols are crucial. These include regular testing for fairness and accuracy, ensuring transparency in AI-driven decisions, and providing clear information to users about how their data is used.

👥 Establishing a Dedicated Compliance Team: Create a specialized team to manage AI compliance efforts. This team should regularly review AI systems, update protocols in line with evolving regulations, and ensure that all staff are trained on the AI Act's requirements.

🌍 Leveraging Compliance as a Competitive Advantage: Compliance with the AI Act can enhance your business's reputation by building trust with customers and partners. By prioritizing transparency, security, and ethical AI practices, your company can stand out as a leader in responsible AI use, fostering stronger relationships and driving long-term success.

#AI #AIACT #Compliance #EthicalAI #EURegulations #AIRegulation #TechCompliance #ArtificialIntelligence #BusinessStrategy #Innovation
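[Editor's illustration] As a concrete starting point for the audit and classification steps above, here is a minimal Python sketch of an internal AI-system inventory mapped to the Act's four risk tiers. Every name, field, and action list here is an illustrative assumption, not language from the Act itself; real classifications need legal review.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    """The four EU AI Act risk tiers named in the post above."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI-system inventory."""
    name: str
    purpose: str
    risk_level: RiskLevel
    data_sources: list[str] = field(default_factory=list)
    eu_exposure: bool = False  # offered to, or affecting, users in the EU?

def compliance_actions(record: AISystemRecord) -> list[str]:
    """Map a classified system to illustrative follow-up work.

    These actions paraphrase the compliance themes in the post;
    they are not an exhaustive statement of legal obligations.
    """
    if not record.eu_exposure:
        return ["Monitor for future EU exposure"]
    if record.risk_level is RiskLevel.UNACCEPTABLE:
        return ["Decommission: practice prohibited under the AI Act"]
    if record.risk_level is RiskLevel.HIGH:
        return [
            "Risk management and documentation",
            "Regular fairness and accuracy testing",
            "Human oversight and transparency measures",
        ]
    if record.risk_level is RiskLevel.LIMITED:
        return ["User-facing transparency disclosures"]
    return ["No mandatory obligations; follow internal AI policy"]

# Example: a CV-screening tool, commonly cited as a high-risk use case.
screener = AISystemRecord(
    name="cv-screener",
    purpose="Ranks job applicants",
    risk_level=RiskLevel.HIGH,
    data_sources=["ATS exports"],
    eu_exposure=True,
)
print(compliance_actions(screener))
```

Even a sketch like this makes the audit concrete: once every system is a record with a risk tier and an EU-exposure flag, the compliance backlog falls out mechanically.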
-
The EU just said "no brakes" on AI regulation. Despite heavy pushback from tech giants like Apple, Meta, and Airbus, the EU pressed forward last week with its General-Purpose AI Code of Practice. Here's what's coming:

→ General-purpose AI systems (think GPT, Gemini, Claude) need to comply by August 2, 2025.
→ High-risk systems (biometrics, hiring tools, critical infrastructure) must meet regulations by 2026.
→ Legacy and embedded tech systems will have to comply by 2027.

If you're a Chief Data Officer, here's what should be on your radar:

1. Data Governance & Risk Assessment: Clearly map your data flows, perform thorough risk assessments similar to those required under GDPR, and carefully document your decisions for audits.
2. Data Quality & Bias Mitigation: Ensure your data is high-quality, representative, and transparently sourced. Responsibly manage sensitive data to mitigate biases effectively.
3. Transparency & Accountability: Be ready to trace and explain AI-driven decisions. Maintain detailed logs and collaborate closely with legal and compliance teams to streamline processes.
4. Oversight & Ethical Frameworks: Implement human oversight for critical AI decisions, regularly review and test systems to catch issues early, and actively foster internal AI ethics education.

These new regulations won't stop at Europe's borders. Like GDPR, they're likely to set global benchmarks for responsible AI usage. We're entering a phase where embedding governance directly into how organizations innovate, experiment, and deploy data and AI technologies will be essential.
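[Editor's illustration] To make point 3 (traceable, explainable AI decisions) concrete, here is a minimal sketch of an append-only decision log in Python. It assumes a JSON-lines file and SHA-256 input hashing; all identifiers (model IDs, file names, field names) are invented for illustration, and a production log would also need retention policies, access control, and legal review.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_decision(log_path: str, model_id: str, model_version: str,
                    inputs: dict, output: str,
                    human_reviewer: str | None = None) -> str:
    """Append one AI decision to a JSON-lines audit log; return its record ID.

    Hashing the inputs lets auditors verify what the model saw without
    storing raw personal data in the log itself.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "human_reviewer": human_reviewer,
    }
    # Derive a stable ID from the record contents for cross-referencing.
    record["record_id"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()[:16]
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["record_id"]

# Example: log a hypothetical loan pre-screening decision with a named overseer.
rid = log_ai_decision(
    "decisions.jsonl",
    model_id="credit-prescreen",
    model_version="2025-07-01",
    inputs={"applicant_id": "A-123", "features": [0.2, 0.7]},
    output="refer_to_human",
    human_reviewer="analyst_42",
)
print("logged", rid)
```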
-
The EU AI Act isn't theory anymore — it's live law. And for medical AI teams, it just became a business-critical mandate. If your AI product powers diagnostics, clinical decision support, or imaging, you're now officially building a high-risk AI system in the EU. What does that mean?

⚖️ Article 9 — Risk Management System
Every model update must link to a live, auditable risk register. Tools like Arterys Cardio AI (Arterys was acquired by Tempus AI) automate cardiac function metrics. They must now log how model updates impact critical endpoints like ejection fraction.

⚖️ Article 10 — Data Governance & Integrity
Your datasets must be transparent in origin, version, and bias handling. PathAI Diagnostics faced public scrutiny for dataset bias, highlighting why traceable data governance is now non-negotiable.

⚖️ Article 72 — Post-Market Monitoring & Control
AI drift after deployment isn't just a risk — it's a regulatory obligation. npj Digital Medicine (a Nature Portfolio journal) has published cases of radiology AI tools flagged for post-deployment drift. Continuous monitoring and risk logging are mandatory under Article 72.

At lensai.tech, we make this real for medical AI teams:
- Risk logs tied to model updates and Jira tasks
- Data governance linked with Confluence and MLflow
- Post-market evidence generation built into your dev workflow

Why this matters: 76% of AI startups fail audits due to lack of traceability. EU AI Act penalties can reach €35M or 7% of global revenue, whichever is higher.

Want to know how the EU AI Act impacts your AI product? Tag your product below — I'll share a practical white paper breaking it all down.
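[Editor's illustration] For teams starting from zero, a risk register tied to model versions can be as simple as a structured, append-only file kept under version control. Here is a minimal Python sketch along those lines; the dataclass fields and example values are illustrative assumptions, not a format prescribed by the Act or by any vendor mentioned above.

```python
import json
from dataclasses import asdict, dataclass
from datetime import date

@dataclass
class RiskLogEntry:
    """A risk-register entry linked to one model release.

    Mirrors the Article 9 idea that every model update should map to a
    reviewable risk record; all field names here are illustrative.
    """
    model_version: str
    release_date: str
    change_summary: str
    affected_endpoints: list[str]  # e.g. clinical metrics the change can shift
    hazard: str
    mitigation: str
    residual_risk: str             # "low" / "medium" / "high"
    reviewer: str

entry = RiskLogEntry(
    model_version="cardio-seg 3.2.0",
    release_date=str(date(2025, 6, 1)),
    change_summary="Retrained on 12k additional echo studies",
    affected_endpoints=["ejection fraction"],
    hazard="Systematic EF underestimation on pediatric scans",
    mitigation="Added pediatric holdout set; gated release on EF error < 3%",
    residual_risk="low",
    reviewer="clinical-safety-board",
)

# Append to an auditable JSON-lines register kept next to the model code.
with open("risk_register.jsonl", "a") as f:
    f.write(json.dumps(asdict(entry)) + "\n")
```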
-
The EU draft AI Code of Practice could affect open models and small developers in unintended ways. The draft Code (which outlines how model developers can comply with the AI Act):

1. Defines systemic risk too broadly. An open model with systemic risk is not exempt from the AI Act, and a lot depends on how the EU defines systemic risk.* However, the Code endorses an impossibly nebulous list of risks, including: persuasion, "loss of trust in media", "large-scale discrimination", and "oversimplification of knowledge". Yet these are not model-layer risks, and they aren't amenable to precise evaluation. Any 2B model can be easily tuned into a disinformation parrot or spambot. We need to be careful about lifting regulatory language from the ethics literature.

2. Envisions pre-deployment audits. Developers of these models must submit to "independent testing" and file a safety report before deployment. But the Act did not mandate third-party testing or reporting before deployment. The Code would prevent an open release until the AI Office and "appropriate third party evaluators" have finished their work.

3. Requires developers to test the unforeseeable. Developers must test not just "reasonably foreseeable" applications but also applications that expose the model's "maximum potential" for systemic risk. It's a costly and indeterminate obligation that means testing for possible risks—not just foreseeable or probable risks. And it becomes more difficult and expensive in an open-source context, where developers can modify or integrate the model in ways that aren't possible in a paywalled API environment.

4. Doesn't clarify the urgent obligations. All developers need to understand how to comply with, for example, copyright opt-outs. The Code defers the question, requiring developers to "make best efforts" with "widely used standards". But there are still no widely used standards (especially for text data), and developers are already training models that will be subject to the Act. If it's unclear how to comply, that exposes all developers to potential litigation—especially those with open datasets or auditable models.

5. Requires developers to draw a line in the sand. Developers must identify conditions under which they would pause, withdraw, or delete models. This isn't the first attempt to crystallize risk thresholds (see e.g. the White House commitments or SB 1047), and the Code doesn't mandate a specific threshold. But if regulators disagree, or if thresholds vary widely—as they certainly will—that could trigger future intervention that adversely impacts open models.

To be clear, no one expects the EU to enforce the Code in a plainly ridiculous or adverse way. I know from experience the AI Office is led by good people who value open innovation in Europe. But the unintended effects of well-meaning rules can be significant. We should use this opportunity to get it right from day one.

* Still TBD whether the threshold will be 1e25 FLOP alone, or will include other criteria, e.g. audiovisual risks like NCII.
-
DeepSeek, AI Governance, and the Next Compliance Reckoning

The recent notification to the Italian Data Protection Authority about DeepSeek's data practices is more than a regulatory footnote—it's a stress test for how the EU will enforce GDPR against global AI companies. Earlier today, I explored why DeepSeek matters—not just because of what it did, but because of what it represents. This notice highlights a growing tension between AI deployment at scale and compliance in an increasingly fractured regulatory landscape. Here's the compliance picture that's emerging:

🔹 Data Transfers Without Safeguards – DeepSeek stores EU user data in China without Standard Contractual Clauses (SCCs) or Binding Corporate Rules (BCRs). Given China's data access laws and GDPR's strict requirements, this creates a high-risk regulatory gap.

🔹 Opaque Legal Basis for Processing – GDPR requires a clear, specific legal basis for data processing. DeepSeek's policy lacks transparency, making it difficult to determine if consent, contract necessity, or legitimate interest applies.

🔹 AI Profiling & Automated Decision-Making Risks – There's no clarity on whether DeepSeek uses personal data for AI model training or algorithmic decision-making—a compliance red flag under GDPR Article 22.

🔹 Failure to Appoint an EU Representative – GDPR Article 27 mandates a local representative for companies targeting the EU market. DeepSeek hasn't done so, further complicating enforcement.

🔹 Children's Privacy Gaps – DeepSeek claims its service isn't for minors but has no clear age verification measures—an issue regulators have aggressively pursued in recent enforcement actions.

The key takeaways:

✅ Regulatory Blind Spots Can Derail Market Access – Without proactive governance, AI products risk being blocked from entire jurisdictions.
✅ Transparency and Accountability Are No Longer Optional – AI companies must clearly disclose profiling, data sharing, and user rights.
✅ AI Regulation Is Accelerating – Between GDPR enforcement trends and the upcoming EU AI Act, the compliance stakes are rising fast.

DeepSeek may be the current example, but it won't be the last. AI companies that build compliance and trust into their foundation will be the ones that thrive in this next era of AI governance.

#AI #Privacy #GDPR #AICompliance #DataGovernance
-
In <16 months most of the EU AI Act comes into force. This is despite:
-> huge gray areas in the law
-> delays in publication of "Harmonised Standards"
-> onerous requirements for "High-Risk AI Systems"

Regulators don't care about your pain. But StackAware does, so we put together an actionable procedure addressing the law's requirements. This applies only to private sector organizations operating as Deployers (and not doing so on behalf of public authorities, EU institutions, bodies, and offices).

⬛ BEGIN EU AI Act Deployer Compliance Procedure ⬛

1. The CISO must:
-> Ensure AI literacy of all personnel using AI Systems.
-> For High-Risk AI Systems:
-- Conduct an AI Model, System, Impact, and Risk Assessment per the StackAware SOP.
-- Provide the Market Surveillance Authority(ies) the results.

2. Data owners must:
-> For High-Risk AI Systems:
-- Use and monitor systems per Provider instructions.
-- Only use output of the system in the EU if the Provider has certified it for use there.
-- Inform, prior to using the system, all persons subject to the system.
-- If the system produces legal (or similar) effects that a person considers adverse, provide the person a concise explanation of the:
--- role of the AI system.
--- main element(s) of the decision taken.
-- Assign human oversight of the system.
-- Ensure Input Data is relevant and sufficiently representative.
-- Retain system logs for at least 6 months.
-- Provide information via the Provider's Post-Market Monitoring System.
-- Upon identification of an AI System Presenting a Risk, cease use within 3 days.
-- Upon identification of a Serious Incident, do not allow the AI system to be altered before the investigation is complete.
-> For Emotion Recognition and Biometric Categorisation Systems, inform people whose Personal Data is processed by the system.
-> For systems that generate or manipulate Deep Fakes, disclose, in plain language accessible to people with disabilities, that the content has been so generated or manipulated.
-> For systems that generate or manipulate text published to inform the public, where the AI-generated content has not undergone human review, disclose, in plain language accessible to people with disabilities, that the text has been so generated or manipulated.

3. The General Counsel must:
-> Upon identification of an AI System Presenting a Risk, inform the Provider and Market Surveillance Authority within 30 days.
-> Upon identification of a Serious Incident caused by the system:
-- Inform the Provider within 3 days.
-- If the Provider does not confirm receipt within 3 subsequent days, inform the Market Surveillance Authorities of all European Union Member States where the incident occurred within 2 subsequent days.
-- Inform the Importer or Distributor (if applicable) within 30 days.
-- Investigate the Serious Incident and the AI system concerned, by:
--- Conducting a revised Risk Assessment of the system and incident.
--- Documenting a corrective action plan.
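[Editor's illustration] The Serious Incident timelines in the procedure above are easy to get wrong under pressure, so here is a minimal Python sketch that turns them into concrete dates. It encodes only the day counts stated in this procedure; the function and field names are invented for illustration, and none of this is legal advice.

```python
from datetime import date, timedelta

def serious_incident_deadlines(identified: date,
                               provider_ack: date | None = None) -> dict[str, date]:
    """Compute notification deadlines after a Serious Incident is identified.

    Encodes the procedure above: 3 days to inform the Provider; if the
    Provider does not confirm receipt within 3 further days, 2 more days to
    notify the Market Surveillance Authorities; 30 days to the
    Importer/Distributor.
    """
    deadlines = {
        "inform_provider": identified + timedelta(days=3),
        "inform_importer_or_distributor": identified + timedelta(days=30),
    }
    ack_cutoff = deadlines["inform_provider"] + timedelta(days=3)
    # MSA notification is only triggered if the Provider never confirms receipt.
    if provider_ack is None or provider_ack > ack_cutoff:
        deadlines["inform_market_surveillance_authorities"] = (
            ack_cutoff + timedelta(days=2)
        )
    return deadlines

# Example: incident identified on March 1, Provider never confirms receipt.
for step, due in serious_incident_deadlines(date(2025, 3, 1)).items():
    print(f"{step}: due {due}")
```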
-
HERE WE GO! It's now February 2, 2025, which means that the first requirements under the EU AI Act are officially in force.

1. The following AI systems are now prohibited (I'm oversimplifying, of course, so for a deeper dive see Art. 5 AI Act ➡️ https://lnkd.in/en_im5UU):
- Predictive Policing Based on Profiling
- Social Scoring
- Exploitation of Vulnerabilities (age, disability, social/economic situations)
- Manipulative/Deceptive (Subliminal) Techniques
- Untargeted Facial Recognition Databases (think Clearview)
- Emotion Recognition (Workplace and Educational Institutions)
- Biometric Categorisation
- Real-Time Remote Biometric Identification for Law Enforcement

Non-compliance will trigger significant fines, plus AI systems can potentially be taken off the EU market. This also applies to businesses operating outside the EU as long as the model output is used in the EU or affects EU users.

2. AI literacy requirements kick in (see Art. 4 of the AI Act). Providers and deployers of AI systems shall take measures to ensure a "sufficient level of AI literacy" among their staff and others using AI systems on their behalf. There is no single list of AI literacy requirements to follow, so each organization should develop and tailor its AI literacy program to the level of technical knowledge, experience, and education of staff, the context in which AI systems are used, and the people using them.

AI literacy, like AI governance, isn't just a box you check once. It is an ongoing commitment that must evolve along with changes in technology and regulation.
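[Editor's illustration] The prohibition categories above can double as a first-pass screening checklist for new AI use cases. Here is a minimal Python sketch of that idea; the tag strings are illustrative shorthand that only gestures at the Article 5 definitions, so any real determination needs legal review.

```python
# Shorthand tags paraphrasing the Article 5 prohibition categories listed
# above. Illustrative only: these are not legal definitions.
PROHIBITED_CATEGORIES = {
    "predictive_policing_profiling",
    "social_scoring",
    "vulnerability_exploitation",
    "subliminal_manipulation",
    "untargeted_face_scraping",
    "emotion_recognition_work_or_school",
    "biometric_categorisation",
    "realtime_remote_biometric_id_law_enforcement",
}

def screen_use_case(name: str, tags: set[str]) -> str:
    """Flag a proposed use case if any of its tags match a prohibited category."""
    hits = tags & PROHIBITED_CATEGORIES
    if hits:
        return f"{name}: BLOCKED - prohibited practice(s): {', '.join(sorted(hits))}"
    return f"{name}: proceed to risk-tier classification"

print(screen_use_case("classroom attention monitor",
                      {"emotion_recognition_work_or_school"}))
print(screen_use_case("warehouse demand forecaster", {"forecasting"}))
```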
-
On December 8, 2024, the EU's new Product Liability Directive (PLD) came into force, with its provisions set to apply fully to products placed on the market after December 9, 2026. The revised PLD has significant implications for AI. The Directive explicitly brings AI systems within its scope, holding manufacturers liable for defects in AI applications, operating systems, or machine-learning-enabled systems. It also extends liability to cover defects arising from updates, upgrades, or learning-based modifications made after release, addressing the evolving nature of AI technologies.

Links:
- European Commission Overview: https://lnkd.in/gn7yC6Cb
- Text: https://lnkd.in/gh495jww

* * *

Who Is in Scope?

All economic operators involved in the design, manufacture, production, import, distribution, or substantial modification of products, including software and components, in the course of a commercial activity. This includes manufacturers, authorised representatives, importers, fulfilment service providers, and distributors.

The Directive explicitly includes:
- Products: Tangible goods, digital manufacturing files, software (e.g., AI systems), raw materials, and related services integrated into products.
- Substantial Modifiers: Those who make significant modifications to products after their initial placement on the market.

When Does It Apply to American Organizations?

Any non-EU manufacturer or economic operator whose products or components are imported or made available in the EU market falls under this Directive. This includes:
- American companies exporting to the EU.
- Entities providing software, digital manufacturing files, or integrated services for products sold or distributed in the EU.

* * *

Key Points on the Product Liability Directive (EU) 2024/2853

Liability is strict (no-fault) and applies to all products, including software and AI systems integrated into or controlling tangible goods.

Specific Inclusions:
- Software is treated as a product if supplied in the course of commercial activity, regardless of how it is delivered (e.g., SaaS, cloud, or installed on devices).
- AI providers are treated as manufacturers under the Directive.
- Digital manufacturing files and integrated services (e.g., AI services enabling product functionality) are also in scope.

Exemptions:
- Free and open-source software is exempt unless distributed in the course of a commercial activity.
- Personal-use property and purely informational content are excluded.

Manufacturer's Responsibilities:
- Include liability for cybersecurity vulnerabilities.
- Require maintenance of software updates for safety, but not necessarily functional updates.
-
The EU AI Act just made some AI systems ILLEGAL, and tech giants are already pivoting. As of February 2025, the Act's first prohibitions have officially kicked in - and we're seeing the impact ripple through the tech world.

→ In September last year, Meta suspended future AI model releases in Europe due to regulatory concerns.
→ DeepSeek AI — which kicked off the $593B Nvidia selloff last Monday — just got COMPLETELY BLOCKED in Italy over data protection issues.
→ Giants like Google and SAP are expressing fears that this will slow down innovation.

Here's what's now banned under the world's first major AI law:

❌ Cognitive manipulation – AI designed to exploit vulnerabilities (e.g., AI toys & apps influencing children's behavior). AMEN!
❌ Real-time biometric surveillance – No more live facial recognition in public spaces (with narrow law-enforcement exceptions)
❌ Biometric categorization – AI can't classify people based on race, gender, or personal traits
❌ Social scoring – No AI-driven ranking of individuals based on behavior or socioeconomic status

And these rules have teeth! Companies violating them could face fines of up to €35 million or 7% of global revenue — whichever is higher.

But this also raises tough questions:
1. Will this stifle AI innovation? Could strict regulations slow down progress?
2. Is the definition of "unacceptable risk" too broad or too narrow? Could transformative, beneficial AI get caught in the crossfire?
3. How will enforcement play out? Who decides when AI crosses the line?

The AI Wild West isn't over yet… but we're heading there. Businesses must adapt or risk being locked out of the EU market. Is this the right move, or is the EU going too far? What's your take?

#EU #AI #innovation