
EU AI Act Guide
Foundations and Practical Insights
Date: 22.04.2024

Modulos AG
Technoparkstrasse 1
8005 Zürich, Switzerland
+41 76 566 05 48
[email protected]
modulos.ai


Index

1. Foundations: Understanding the EU AI Act
   1.1 Introduction to the EU AI Act ... 3
   1.2 Artificial Intelligence Definition ... 3
   1.3 Risk-based Classification ... 4
   1.4 AI Systems vs AI Models ... 5
   1.5 Compliance Requirements ... 7
   1.6 Conformity Assessment ... 7
   1.7 Penalties ... 8
   1.8 Entry Into Application ... 8

2. Practical Insights: Operationalizing the EU AI Act
   2.1 Ensuring Compliance with the EU AI Act ... 9
   2.2 Risk Management in AI ... 11
   2.3 Global Perspectives on AI Regulation ... 13

3. Are you ready to comply with the EU AI Act? ... 15

© Modulos AG, 2024

Disclaimer: The information contained in this publication is for general information purposes only and does not constitute legal advice.
1. Foundations: Understanding the EU AI Act

1.1 Introduction to the EU AI Act

April 2021: EU AI Act proposal
December 2022: Adoption of general approach
June 2023: First AI Act adopted by EU Parliament
December 2023: Political agreement
March 2024: EU AI Act passed

The EU AI Act aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact.

On 13 March 2024, the European Parliament passed the Artificial Intelligence Act with a strong majority of 523 votes. The Act, which needed final endorsement after approval at the political and technical level, will now most likely enter into force this May.

1.2 Artificial Intelligence Definition

"Artificial Intelligence system" (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.

The EU wants its definition of "artificial intelligence" to be future-proof, which means it has to cover an incredibly wide range of data analysis techniques. The EU will therefore consider as AI not only deep learning and complex applications such as self-driving cars. The proposed definition is so broad that many of the technologies used by your business today will fall under its regulations.

1.3 Risk-based Classification
The EU AI Act introduces a risk-based classification scheme for AI applications. The main
criterion is the level of risk posed by the AI application to individuals or society as a whole.

The classification ranges from minimal risk to applications which are banned entirely.

Unacceptable Risk
Some AI applications, such as social scoring systems or manipulative systems potentially leading to harm, are outlawed completely.

High Risk
High-risk applications include services directly affecting citizens' lives (e.g., evaluating creditworthiness or educational opportunities, or applications applied to critical infrastructure). They will have to be put through strict assessment regimes before they can be put on the market. Businesses need to consider whether their existing or planned AI application might be considered "high risk". The EU will update and expand this list on a regular basis.

Limited Risk
Other AI applications still carry obligations with them, such as disclosing that a user interacted with an AI system. Best practices related to data quality and fairness are essential even in this risk regime. Some examples are image and video processing, recommender systems, and chatbots.

Minimal Risk
Applications such as spam filtering or video games are deemed to carry a minimal risk and as such are not subject to further regulatory requirements.
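The four tiers above can be thought of as a simple mapping from application to obligation. The sketch below is illustrative only and not taken from the Act; the example applications and the `inventory` mapping are assumptions based on the examples named in this section.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers described in Section 1.3 (illustrative labels)."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict assessment regime before market entry"
    LIMITED = "transparency obligations, e.g. disclosing AI interaction"
    MINIMAL = "no further regulatory requirements"

# Hypothetical inventory entries mapped to tiers, using the examples
# given in this section. A real classification requires legal analysis.
inventory = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "creditworthiness evaluation": RiskTier.HIGH,
    "customer-support chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for app, tier in inventory.items():
    print(f"{app}: {tier.name} ({tier.value})")
```

Tagging each system in an internal inventory with a tier like this is one way to make the "is anything we run high-risk?" question concrete.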

1.4 AI Systems vs AI Models

Following the intense negotiations around the AI Act, the final law incorporates several distinctions and categories beyond the risk tiering.

The Act distinguishes between AI Systems and AI Models. General Purpose AI Models, such as powerful Large Language Models, fall into a separate category and have their own set of requirements depending on whether or not they present a Systemic Risk.

The requirements for AI Systems, outlined in the various Titles of the Act, can overlap. For example, a High Risk AI System may, or may not, have transparency obligations depending on its use case. The Act furthermore carves out exemptions for free and open source (FOSS) AI Systems, as well as for AI Systems used in national defense, scientific research, or law enforcement.

A detailed analysis of where any AI System or Model falls within the AI Act's taxonomy is essential to avoid miscategorization and potential penalties.

EU AI Act: AI Systems and Models Taxonomy (as per the draft leaked on 22 Jan 2024)

In scope:
- AI Systems
  - Specific Purpose AI Systems: Prohibited (T.II)
  - Specific Purpose AI Systems: High-Risk (T.III, plus T.IV where applicable)
  - AI Systems with Transparency Obligations (T.IV)
  - General Purpose AI Systems (GPAIS; T.IV where applicable)
- AI Models
  - General Purpose AI Models with Systemic Risk (GPAIM-SR)

Out of scope:
- Specific Purpose AI Systems and Models used solely for Scientific Research and Development
- GPAIS released under FOSS* licenses
- AI Systems and Models used by 3rd Country Public Authorities or Int'l Orgs for Int'l Cooperation or under Law Enforcement or Judicial Cooperation Agreements, with Safeguards
- Specific Purpose AI Systems: Military, Defence, or National Security
- Specific Purpose AI Systems and Models: Minimal Risk**

*Free and open source
**Staff and agents AI Literacy obligations still apply

Taxonomy v. 1.0, 2024-01-26, Aleksandr Tiulkanov 2024

1.5 Compliance Requirements

The Act lays out a range of requirements for high-risk AI systems spanning the design, implementation, and post-market entry phases. These include:

- Article 9: Risk Management System
- Article 10: Data and Data Governance
- Article 11 and Annex IV: Technical Documentation
- Article 12: Record Keeping
- Article 13: Transparency and Provision of Information to Users
- Article 14: Human Oversight
- Article 15: Accuracy, Robustness and Cybersecurity
- Article 17: Quality Management System
- Fundamental Rights Impact Assessment

While limited-risk systems will not face the same compliance scrutiny, including conformity assessments and product safety reviews, they will still be evaluated against these categories.

1.6 Conformity Assessment

High Risk AI Systems will have to undergo a Conformity Assessment (Article 19) to demonstrate adherence to the AI Act before being placed on the market in the EU. You are required to generate and collect the documentation and evidence for such an Assessment.

1.7 Penalties
The fines for violations of the AI Act are set as a percentage of the offending company's global annual turnover in the previous financial year or a predetermined amount, whichever is higher. However, the provisional agreement provides for more proportionate caps on administrative fines for SMEs and start-ups in case of infringements of the provisions of the AI Act.

- Non-compliance with prohibitions: up to €35M or 7% of turnover
- Non-compliance with other obligations: up to €15M or 3% of turnover
- Supplying incorrect, incomplete, or misleading information: up to €7.5M or 1.5% of turnover
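The "whichever is higher" rule above is simple arithmetic. The following sketch illustrates it; the function name and the example turnover figure are our own, not from the Act.

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Maximum administrative fine: the fixed amount or the given
    percentage of global annual turnover, whichever is higher."""
    return max(fixed_cap_eur, pct * turnover_eur)

# A company with EUR 1bn global annual turnover violating a prohibition:
# 7% of turnover (EUR 70M) exceeds the EUR 35M fixed cap.
print(fine_cap(1_000_000_000, 35_000_000, 0.07))  # 70000000.0

# For a company with EUR 100M turnover, 7% is EUR 7M, so the EUR 35M
# fixed cap applies instead.
print(fine_cap(100_000_000, 35_000_000, 0.07))  # 35000000
```

Note that for smaller companies the fixed amount dominates, which is why the Act's more proportionate caps for SMEs and start-ups matter.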

1.8 Entry into Application


The agreement on the compromise sets out varying timelines for different sections of the
Regulation.

It specifies a 24-month period for the majority of the Regulation's aspects. However, it outlines
shorter timelines for specific elements: 6 months for prohibitions, and 12 months for matters
related to notifying authorities and notified bodies, governance, general-purpose AI models,
confidentiality, and penalties. For high-risk AI systems listed in Annex II, a longer timeline of 36
months is allocated.

- 6 months: Prohibitions
- 12 months: Notifying authorities and notified bodies, governance, general-purpose AI models, confidentiality & penalties
- 24 months: Most parts of the Regulation
- 36 months: High-Risk AI Systems listed in Annex II
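The staggered deadlines are offsets from the Regulation's entry into force. As a sketch, the application dates can be computed as below; the entry-into-force date used here is an assumption for illustration (it was not yet fixed when this guide was written), and `add_months` is a hypothetical helper.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # Shift a date forward by whole months; day-of-month is preserved,
    # which is safe here since we start on the 1st.
    total = d.month - 1 + months
    return d.replace(year=d.year + total // 12, month=total % 12 + 1)

entry_into_force = date(2024, 8, 1)  # assumed date, for illustration only

milestones = {
    "Prohibitions": 6,
    "Governance, GPAI models, confidentiality & penalties": 12,
    "Most parts of the Regulation": 24,
    "High-Risk AI Systems listed in Annex II": 36,
}

for name, months in milestones.items():
    print(f"{name}: applies from {add_months(entry_into_force, months)}")
```

A compliance team can use the same computation against the actual entry-into-force date once published to plan its internal deadlines.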

2. Practical Insights:
Operationalizing the EU AI Act

2.1 Ensuring Compliance with the EU AI Act

What steps should companies take to ensure compliance with the EU AI Act?

1. Initiate Change Management
   - Start by adjusting your organizational structure to clearly define roles and responsibilities related to AI governance.
   - Develop a collaborative culture that views AI systems as integral stakeholders.

2. Reevaluate Your AI Perspective
   - Reassess how AI is perceived and utilized within your organization to ensure a comprehensive understanding and integration.

3. Inventory of AI Applications
   - Conduct a detailed inventory of all AI tools and applications in use, with a focus on identifying those that might be considered high-risk under the EU AI Act.

4. Technical and Regulatory Review
   - Assign competent personnel to study the EU AI Act's technical requirements and determine how they apply to your AI applications.
   - Invest in talent acquisition to gain a competitive edge and ensure you have the required skills across risk management, legal and compliance, and data science.
   - Invest in training for your team to enhance their ability to manage AI compliance and ethical considerations internally.

5. Establish Technical Standards and Expertise
   - Define specific technical standards for assessing the risk level of your AI systems.
   - Empower specialized staff within your organization to take on this assessment, moving beyond mere legal consultation.

6. Building and Maintaining Trust
   - Recognize the critical role of trust with customers and stakeholders, emphasizing transparency and ethical AI usage.

7. Adopting a Digital Compliance Platform
   - Consider adopting a digital platform designed to simplify the AI compliance process.

How should companies uncertain about the risk level of their AI applications approach compliance with the EU AI Act?

For companies uncertain about the risk level of their AI applications in relation to the EU AI Act,
focusing specifically on high-risk categories might seem narrow. However, understanding and
addressing high-risk classifications is crucial because the Act imposes the strictest regulations
on these applications, necessitating rigorous compliance measures.

If a company is uncertain whether its AI falls under a high-risk category, it should:

- Conduct a Risk Assessment: Perform an in-depth analysis of all AI applications to determine their potential impact on rights and safety, identifying any that might be considered high-risk under the EU AI Act guidelines.

- Engage in Comprehensive Review: Review the EU AI Act's criteria for high-risk applications, comparing these standards against your AI applications to clarify which, if any, could be subject to these stricter requirements.

- Prioritize Transparency and Documentation: For applications with ambiguous risk levels, prioritize transparency in processing and decision-making, alongside thorough documentation, to prepare for any potential reclassification as high-risk in the future.

- Adopt a Precautionary Approach: When in doubt, treat AI applications as if they are high-risk, applying the highest standards of accountability and ethical AI practices. This ensures preparedness for any future regulatory scrutiny or changes.

- Seek Expert Guidance: Consult with legal and AI ethics experts to gain clarity on the Act's implications for your specific AI applications, ensuring informed decisions about compliance strategies.

- Monitor Regulatory Updates: Stay informed about any updates or clarifications to the EU AI Act that might affect the risk classification of your AI applications, adjusting your compliance strategies accordingly.

By focusing specifically on these steps, companies can more effectively address the compliance requirements for high-risk AI systems under the EU AI Act, ensuring they meet regulatory obligations and mitigate potential risks.

2.2 Risk Management in AI


How can it be determined when a sufficient level of risk mitigation has
been achieved?

Risk management is an ongoing effort that never really ends, given that the landscape of threats evolves constantly. As long as an application remains operational, the potential for risk persists. Risk management, therefore, is not a static target but a continuous journey of vigilance, adaptation, and improvement.

If you want customers to trust and use your service, adherence to security guidelines and best practices is not optional; it's imperative. Failing to uphold these standards can lead to diminished trust among consumers, potentially placing a service provider in a disadvantageous position.

How does AI risk management differ from traditional risk management?

AI risk management differs significantly from traditional risk management in that it is a multidisciplinary, dynamic approach requiring new expertise that is currently scarce in traditional frameworks. This requirement stems from the complex and often unpredictable nature of AI technologies, which demand an understanding not only of technical aspects but also of ethical, legal, and social implications.

- Multidisciplinary Nature: Effective AI risk management involves collaboration across various fields such as data science, ethics, law, and domain-specific knowledge. This multidisciplinary approach ensures an understanding of potential risks from all angles.

- Dynamic and Evolving: AI systems are capable of learning and evolving over time, which means the risks are constantly changing. Risk management strategies must therefore be agile and adaptable, capable of evolving alongside the AI systems they aim to govern.

- New Skill Sets: Traditional risk management often relies on established principles and methodologies that may not be directly applicable to AI's unique challenges. AI risk management requires a blend of data literacy, ethical reasoning, regulatory knowledge, and technical skills that goes beyond traditional skill sets.

- Ethical and Societal Considerations: AI introduces complex ethical and societal risks, such as bias, privacy concerns, and accountability issues. Managing these risks requires a deep understanding of both the technology and its broader impact on society.

- Data and Privacy: The role of data in AI systems introduces risks related to data quality, privacy, and security. Effective risk management must address the integrity and protection of data throughout the AI lifecycle.

- Regulatory Compliance: With AI regulations emerging globally, organizations must navigate a variety of legal requirements, making compliance a moving target that requires constant vigilance and adaptability.

Incorporating these elements into AI risk management practices is essential for organizations not only to mitigate risks but also to harness AI's potential responsibly and ethically. This necessitates a proactive approach to developing the required skills and adapting traditional risk management frameworks to meet the unique demands of AI technology.

2.3 Global Perspectives on AI Regulation

How does the US approach to AI regulation differ from that of the European Union?

The United States has a longstanding tradition of law enforcement. Existing laws have often been adapted to manage the evolving challenges of new technologies, including artificial intelligence. In contrast, the European Union has recognized a specific need for dedicated AI regulation, leading to the development of comprehensive frameworks like the EU AI Act.

While it may seem that the US regulatory environment is stricter and more expansive in some areas, the EU's focused efforts on AI regulation highlight a deliberate move towards ensuring AI technologies are developed and deployed in a manner that is safe, ethical, and respects fundamental rights. The EU's approach is characterized by its specificity to AI, aiming to set a global standard for responsible AI innovation and use.

How can the EU AI Act affect companies in countries outside the EU?

The EU AI Act's extraterritorial provisions mean it extends to companies outside the EU, impacting those whose AI services or products are accessible or have effects within the EU.

These companies must ensure their AI applications comply with the Act, focusing on aspects
like risk assessment, adherence to compliance standards, and undergoing the required
conformity assessments.

This requirement underscores the importance of understanding and implementing the Act’s
guidelines for international businesses, ensuring they can operate within the EU market without
legal hurdles, while also championing ethical AI practices on a global scale.

Should Swiss-based companies adopt the EU AI Act standards for their AI implementations, and what benefits could this bring?

Swiss enterprises, regardless of their size or international footprint, should consider adhering to the EU AI Act. Switzerland, despite its geographical and economic uniqueness, is not insulated from global regulatory trends and pressures.

Being compliant with the EU AI Act can provide several strategic advantages:

- Future-Proofing Operations: Aligning with the EU AI Act prepares Swiss companies for imminent local regulations, reducing future compliance costs and disruptions.

- Market Access: Compliance with the EU AI Act is crucial for businesses targeting the European market, signaling trust and reliability to consumers and partners.

- AI Governance: The EU AI Act provides a robust framework for transparent, accountable, and ethical AI, enhancing governance and promoting responsible AI use.

- Competitive Edge: Early adoption of EU AI Act standards differentiates companies, highlighting their commitment to ethical AI and attracting like-minded customers and investors.

In summary, even though Switzerland might not currently be bound by the EU AI Act, the strategic benefits of proactively aligning with its standards are clear. Such alignment not only prepares Swiss companies for upcoming local regulations but also enhances their competitiveness and operational resilience in the global market.

3. Are you ready to comply with the EU AI Act?

At Modulos, we support organizations to responsibly govern AI products and services under the new requirements of the EU AI Act. The Modulos Responsible AI Platform simplifies the compliance process, integrating AI governance, risk management, and data science to ensure your organization not only meets regulations but also harnesses the full potential of AI innovation safely and ethically.

Benefits

- Risk-centric AI governance aligned to ISO/IEEE and NIST AI RMF
- Trustworthy Process for Responsible AI
- Centralized AI Management System, ISO 42001 compliant
- Faster and simplified adherence to frameworks
- Consistent and Standardized AI practices across the organization
- Multi-stakeholder interplay (RM, DS, LC, BUs)

Ready to start your compliance journey with Modulos?

Sign up for a free demo

[email protected]
+41 76 566 05 48
Technoparkstrasse 1, 8005 Zürich, Switzerland
modulos.ai
