
IT and Society

Lecture 2: Artificial Intelligence ‒ Introduction

Prof. Jens Grossklags, Ph.D.


Chiara Ullstein, M.Sc.
Professorship of Cyber Trust
Department of Computer Science
School of Computation, Information, and Technology
Technical University of Munich
May 5, 2025
TUM: Digital Leaders Ranking
• Strong focus on digital expertise and teaching of
transferable digital skills across their curriculum
• Rank 5 in 2024

https://www.emerging.fr/digital-leaders/rankings (April 17, 2024) 2


Recap – IT Productivity
• Job prospects in the U.S., European, German IT field
• Studied literature on IT productivity
– IT Productivity growth paradox
– Labor market polarization
• Trends that shape future of work
– E.g., new business models that provide new employment opportunities but
are also a source of potential societal tensions
– Probability of computerization of professions
– Avoiding “unemployability“; Fostering relearning. How?
– Impact of generative AI: Different style of work? Reducing inequality?

3
Perspective of the Individual
To which degree does IT make us
more productive?
Example: Email
• Numerous benefits in the workplace!
• Study: tracked email usage with computer
logging, biosensors and daily surveys for 40
information workers over 12 days
• Finding: The more daily time spent on email,
the lower was perceived productivity and the
higher the measured stress
• Takeaway: Managing IT productivity at the individual level is also challenging

Mark et al. (2016): Email Duration, Batching and Self-interruption: Patterns of Email Use on Productivity and Stress, CHI 2016
4
What is Artificial Intelligence?
• No clear consensus on the definition of AI
- John McCarthy coined the phrase AI in 1956
- Developed a Q&A on this subject

Q. What is artificial intelligence?


A. It is the science and engineering of making intelligent machines, especially
intelligent computer programs. It is related to the similar task of using
computers to understand human or other intelligence, but AI does not have to
confine itself to methods that are biologically observable.
Q. Yes, but what is intelligence?
A. Intelligence is the computational part of the ability to achieve goals in the
world. Varying kinds and degrees of intelligence occur in people, many
animals and some machines.

http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html 5
Different Views
• Thinking humanly. Haugeland (1985): “The exciting new effort to make computers think … machines with minds, in the full and literal sense.”
• Thinking rationally. Winston (1992): “The study of the computations that make it possible to perceive, reason and act.”
• Acting humanly. Kurzweil (1990): “The art of creating machines that perform functions that require intelligence when performed by people.”
• Acting rationally. Luger and Stubblefield (1993): “The branch of computer science that is concerned with the automation of intelligent behavior.”

Preferred technical view: Acting rationally

Rational: maximize goal achievement; no mistakes 6
Another “Working Definition” of AI
Artificial intelligence is the study of how to make computers
do things that people are better at, or would be better at if:
• They could extend what they do to a World Wide
Web-sized amount of data
• They did not make mistakes

Clearly motivated by the “acting rationally“ view.

7
Is AI Rational? Are Humans?

Comic (1990?)
Humans are amazing in their abilities, but also boundedly rational.
8
More Definitions?
I am glad you asked…

“AI can have two purposes. One is to use the power of computers to
augment human thinking, just as we use motors to augment human
or horse power. Robotics and expert systems are major branches of
that. The other is to use a computer's artificial intelligence to
understand how humans think. In a humanoid way. If you test your
programs not merely by what they can accomplish, but how they
accomplish it, then you're really doing cognitive science; you're
using AI to understand the human mind.”
Herbert Simon
9
How to Measure “Success“ in an AI World?
Turing Test
• Add vision and robotics to get the total Turing test.
• One critique of the Turing test: Focusing on the Turing test leads to non-serious, gimmick-like work.
10
Eliza: Psychotherapist Program
Conceived by Joseph Weizenbaum in 1966
• Simple strategies used:
– Keywords and pre-canned responses
• “Perhaps, I could get along with my mother.“ or more generally: <x> + {mother | father | brother | sister …} → “Can you tell me more about your family?“
– Parroting:
• “My boyfriend made me come here.“ → “Your boyfriend made you come here?“
– Highly general questions:
• In what way?
• Can you give a specific example?

11
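The strategies above are simple enough to sketch in a few lines. The following is a minimal, hypothetical illustration of keyword rules, parroting, and generic fallback questions; the patterns and responses are invented for this sketch and are not Weizenbaum's original script.

```python
import random
import re

# Illustrative keyword rules in the spirit of ELIZA (invented for this sketch):
# a regex trigger plus canned reply templates.
RULES = [
    (re.compile(r"\b(mother|father|brother|sister)\b", re.IGNORECASE),
     ["Can you tell me more about your family?"]),
    (re.compile(r"\bmy (\w+) made me ([^.?!]+)", re.IGNORECASE),
     ["Your {0} made you {1}?"]),  # parroting: echo the user's words back
]
FALLBACKS = ["In what way?", "Can you give a specific example?"]

def eliza_reply(utterance: str) -> str:
    """Return a keyword-based or parroted response, else a highly general question."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)

print(eliza_reply("Perhaps, I could get along with my mother."))
# -> Can you tell me more about your family?
print(eliza_reply("My boyfriend made me come here."))
# -> Your boyfriend made you come here?
print(eliza_reply("I feel tired today."))
# -> one of the highly general fallback questions
```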
Loebner Prize (1991 ‒ 2019)
• Forum for competitive Turing tests
– Rules evolved, e.g., 5 minutes conversation until 2003; later more than 20
minutes
– Bots cannot be connected to the Internet
• Categories:
– Gold prize (audio and visual) – Never won
– Silver prize (text only) – Never won
(No successful Turing test according to the rules of the prize)
– Bronze medal: Computer system with the “most human” conversational behavior in a given year
• Competitions are a central factor in AI progress!
12
Steve Worswick: 5-time winner of “most human-like chatbot” with his system Mitsuku

Interview with Worswick:


"What keeps me going is when I get emails or comments
in the chat-logs from people telling me how Mitsuku has
helped them with a situation whether it was dating advice,
being bullied at school, coping with illness or even advice
about job interviews. I also get many elderly people who
talk to her for companionship."
Commentary:
Any advertiser who doesn't sit bolt upright after reading
that doesn't understand the dark art of manipulation on
which their craft depends.

Wall Street Journal "Advertising's New Frontier: Talk to the


Bot“ (2015)

13
Ashley Madison Data Breach
• Site for extra-marital affairs
• July 2015: attackers stole data including emails, names, home
addresses, sexual fantasies and credit card information,
and threatened to post the data online
• … and code for bots

Data fields tell us that 20 million


men out of 31 million received bot
mail, and about 11 million of them
were chatted up by an automated
“engager.”

14
https://gizmodo.com/ashley-madison-code-shows-more-women-and-more-bots-1727613924
Another Early Example
Presentation by Sundar Pichai (Google CEO):
• Google Duplex (2018): A.I. Assistant
– Natural language processing
– Deep learning
– Text-to-speech conversion
• What is Deep Learning?
– Typically based on artificial neural networks
– Each level of the network learns to
transform its input data into a slightly more
abstract and composite representation
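
As a minimal illustration of that idea, here is a toy feed-forward network (random, untrained weights; layer sizes invented for this sketch) where each layer maps its input into a new, more compact representation. In a real deep network the weights are learned from data, which is what makes the higher-level representations meaningfully more abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

def layer(x, out_dim):
    """One fully connected layer with a ReLU nonlinearity (random toy weights)."""
    W = rng.normal(scale=0.1, size=(x.shape[-1], out_dim))
    return np.maximum(0.0, x @ W)

x  = rng.normal(size=(1, 64))   # raw input features (e.g., pixels or audio frames)
h1 = layer(x, 32)               # level 1: simple patterns
h2 = layer(h1, 16)              # level 2: combinations of patterns
h3 = layer(h2, 8)               # level 3: compact, more abstract representation

print(x.shape, h1.shape, h2.shape, h3.shape)   # (1, 64) (1, 32) (1, 16) (1, 8)
```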

Much more sophisticated 2024 example: GPT-4o

“With GPT-4o, we trained a single new model
end-to-end across text, vision, and audio,
meaning that all inputs and outputs are
processed by the same neural network.”
https://openai.com/index/hello-gpt-4o/ 15
LLMs Pass the Turing Test
2025 Study
• Study Objectives: Evaluation of 4 systems
(ELIZA, GPT-4o, LLaMa-3.1-405B, and
GPT-4.5)
• Study Design: Participants had 5-minute
conversations simultaneously with another
human participant and one of these
systems before judging which
conversational partner they thought was
human.
• Results: The results constitute the first
empirical evidence that any artificial system
passes a standard three-party Turing test. Source:
https://arxiv.org/abs/2503.23674 16
Strong versus Weak AI
• Strong AI is artificial intelligence that matches or exceeds human
intelligence — the intelligence of a machine that can successfully
perform any intellectual task that a human being can.
– Primary goal of artificial intelligence research
– Continues to be an important topic for science fiction writers and futurists
– Strong AI is also referred to as “artificial general intelligence (AGI)” or as the
ability to perform “general intelligent action”
– Science fiction associates strong AI with such human traits as consciousness,
sentience, sapience and self-awareness
• Weak AI is an artificial intelligence system which is not intended to
match or exceed the capabilities of human beings, as opposed to
strong AI, which is. Also known as applied AI or narrow AI.
– The weak AI hypothesis: Philosophical position that machines can demonstrate
intelligence, but do not necessarily have a mind, mental states or consciousness

17
How Intelligent or Conscious Can Machines Get? A critique.
The Chinese Room Experiment
John Searle (1980)

18
[Telefonica: Surviving the AI Hype – Fundamental concepts to understand Artificial Intelligence]
AI Winter(s) in the 1970s – 1990s

• After rapid expansion, there is often a period of


disillusionment and contraction
– Doubts in the feasibility of the approach & too many promises
• Dartmouth Conference (1956): “Every aspect of learning or any other
feature of intelligence can be so precisely described that a machine can be
made to simulate it.”
– Problems:
• Limited computing power
• Combinatorial explosion
• End of large-scale funding
19
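
To see why combinatorial explosion overwhelmed the limited computing power of the time, consider a toy search problem: with branching factor b, a search tree has b**d states at depth d. The numbers below are purely illustrative.

```python
# Toy illustration of combinatorial explosion: a search tree with branching
# factor b has b**d states at depth d, quickly exceeding any compute budget.
branching_factor = 10
for depth in (5, 10, 20, 40):
    states = branching_factor ** depth
    print(f"depth {depth:>2}: about {states:.1e} states to examine")
```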
20
Source: State-of-the-Art Mobile Intelligence: Enabling Robots to Move Like Humans by Estimating Mobility with Artificial Intelligence
Rise of LLMs

2025 …

Significant advancements over the last few years ‒ a compact release timeline measurable in months!

Source: https://doi.org/10.48550/arXiv.2303.18223 21
LLMs: Diverse Tasks Possible
LLMs can be enablers and boosters to a large and diverse range of
tasks, including but not limited to:

⇾ Text Generation
⇾ Translation
⇾ Summarization
⇾ Information Extraction
⇾ Automated Planning
⇾ Program Synthesis
⇾ Mathematical Modeling
⇾ …
22
Transformer: Backbone of LLMs
● Capture the contextual relationships between words (attention mechanism).
● Compute probabilities for what the next token should be.

Source: Momentum Lab 23
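
A minimal numerical sketch of the two points above: scaled dot-product attention relating every token to every other token, followed by a softmax over vocabulary logits to get next-token probabilities. Dimensions, weights and the vocabulary are toy values invented for this illustration, not those of any real LLM.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

seq_len, d_model, vocab_size = 4, 8, 10      # toy sizes
x = rng.normal(size=(seq_len, d_model))      # 4 token vectors (embeddings)

# Attention: each token computes relevance scores against all tokens
# (the contextual relationships between words).
Wq = rng.normal(size=(d_model, d_model))
Wk = rng.normal(size=(d_model, d_model))
Wv = rng.normal(size=(d_model, d_model))
Q, K, V = x @ Wq, x @ Wk, x @ Wv
attn = softmax(Q @ K.T / np.sqrt(d_model))   # (4, 4) attention weights
context = attn @ V                           # context-enriched token vectors

# Next-token prediction: project the last token's representation onto the
# vocabulary and turn the logits into probabilities.
W_vocab = rng.normal(size=(d_model, vocab_size))
probs = softmax(context[-1] @ W_vocab)
print(probs.round(3), probs.sum())           # a distribution summing to 1
```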


LLM Lifecycle

24
Gartner Hype Cycle: 2024 Version

What has changed?

four key areas: autonomous


AI, developer productivity,
total experience, and
human-centric security and
privacy programs

https://www.gartner.com/en/newsr
oom/press-releases/2024-08-21-
gartner-2024-hype-cycle-for-
emerging-technologies-highlights-
developer-productivity-total-
experience-ai-and-security
(Source: August 2024)
25
Fears related to AI

• Impact on the job market (see first lecture):


– Is AI primarily job-replacing?
– Is AI primarily job-enabling?

• Opposing views about a glorious AI-centric future versus


a dystopian AI-dominated future
– Who is correct?

26
AI for Good

27
AI for Good - Topics

• Sustainable AI: Food, energy and water
• Environment and AI: Healthy oceans, protect wildlife
• Health and AI: Health, sleep, nutrition
• Transparency and AI: Fighting corruption
• Education and AI: Personalized education
(Look for examples)
Also: Harvesting human intelligence (for good) as a byproduct of fighting


artificial intelligence advances
• Prime example: Captcha → reCaptcha
(completely automated public Turing
test to tell computers and humans apart)
28
AI regulation as a mode to foster
trustworthy AI
(European perspective)

EU Artificial Intelligence Act (AI Act)


We will discuss:
1. Procedure
2. Legislation (scope, risk-level, compliance)
29
Overview on the EU AI Act

EU AI Act: first-ever legal framework on AI


• addresses risks of AI
• fosters trustworthy AI in Europe and beyond
• positions Europe to play a leading role globally
• aims to provide AI developers, deployers, and users
with clear requirements and obligations
• seeks to reduce administrative and financial burdens
for business, in particular SMEs

Proposed by the European Commission in 2021 → right of legislative initiative


Legislators: The Council of the European Union and the European Parliament
Source: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai, https://digital-strategy.ec.europa.eu/en/policies/plan-ai; https://ec.europa.eu/commission/presscorner/detail/en/ip_24_383 30
Putting the EU AI Act into Context

AI Act is part of wider package of policy measures


→ guarantee safety and fundamental rights of

people and businesses

I. Regulatory Framework (AI Act)

II. Coordinated Plan on AI
key aims: accelerate investment, act on strategies and align policy to avoid fragmentation

III. AI innovation package
support for AI start-ups and SMEs

Source: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai, https://digital-strategy.ec.europa.eu/en/policies/plan-ai; https://ec.europa.eu/commission/presscorner/detail/en/ip_24_383 31


3 main institutions involved in EU decision-making:
• European Commission (EC)
• European Parliament (EP)
• Council of the European Union (1) & European Council (2)
(1) informally known as the Council, is composed of national ministers from each EU country
(2) is the body of leaders (heads of state or government) of the 27 EU member states
https://www.consilium.europa.eu/en/european-council-and-council-of-the-eu/
32
EU decision-making: Ordinary legislative procedure
The ordinary legislative procedure
consists in the joint adoption by the
European Parliament and the Council
of the European Union of a regulation,
directive or decision in general on a
proposal from the Commission.

Source: https://www.consilium.europa.eu/de/council-eu/decision-making/ordinary-legislative-procedure/ pa.eu/en/european-council/members/; https://www.consilium.europa.eu/en/council-eu/decision-


making/ordinary-legislative-procedure/; https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=LEGISSUM:ordinary_legislative_procedure 33
Overview on the process
• 2019: Elect member state governments; elect Members of the European Parliament
• Ordinary legislative procedure: Proposal (April 2021) → Amendments → Trilogues (1) (start: June 14, 2023; end: Dec. 9, 2023) → final lawyer-linguist check → final adoption → Act entered into force: Aug 1, 2024
• Public debate accompanying the procedure: public consultations, advocacy, national/EU-wide campaigning, contacting individual MEPs
• Amendments published: EP: June 14, 2023; Council: Nov. 25, 2023
(1) Trilogues: optional, informal interinstitutional meetings
Source: https://www.europarl.europa.eu/infographic/legislative-procedure/index_en.html; https://www.consilium.europa.eu/en/council-eu/decision-making/ordinary-legislative-procedure/;
https://www.consilium.europa.eu/en/council-eu/decision-making/ordinary-legislative-procedure/; https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai 34
Example: Changes suggested to the AI Act Proposal
• Different committees of the EP suggested amendments. List of officially published
documents with amendments: https://artificialintelligenceact.eu/documents/
• The suggested changes by the EP are
summarized into a document called a
“negotiating mandate.“ (Figure: example amendment)
• This mandate allowed the responsible
Members of the European Parliament to
enter discussions with the Council to find
a political agreement.
• Additionally, a multitude of comments
and opinions exist from academics or
non-governmental organizations, etc.,
which have influenced the amendments.

Amendments = suggested changes to the AI Act Proposal


Source: https://artificialintelligenceact.eu/wp-content/uploads/2022/06/AIA-IMCO-LIBE-Report-All-Amendments-14-June.pdf 35
Example: Differing positions on Facial Analysis AI
Facial Analysis AI (May 2023)
_ Both classification and identification/ verification are regulated by the AI Act

European Commission (Source: AI Act proposal – April 2021):


_ high-risk: ‘real-time’ and ‘post’ remote biometric identification of natural persons
_ harmonized transparency rules for emotion recognition systems and biometric categorization
systems
European Parliament (Source: negotiating mandate – 14. June 2023):
_ prohibition: “[b]iometric categorisation systems using sensitive characteristics” and “[e]motion
recognition systems in law enforcement, border management, workplace, and educational
institutions”
Council of the European Union (Source: general approach - 25. Nov. 2022):
_ additional transparency obligation to inform when exposed to emotion recog. systems

→ Task of the EP and the Council to find a political agreement, which involves compromises.
36
Conclusion of the trilogues – A political agreement was reached
(end of the trilogues)

09. December 2023

Source: European Parliament (2024). https://www.europarl.europa.eu/news/en/press-room/20231206IPR15699/artificial-intelligence-act-deal-on-comprehensive-rules-for-trustworthy-ai;


https://www.euractiv.com/section/digital/news/ai-trilogues-round-one-the-eucs-opposition-front/ ; https://www.euractiv.com/section/artificial-intelligence/news/european-union-squares-the-circle-on-the-worlds-first-ai-rulebook/
37
European Parliament approves the AI Act → 1st Reading
(after the trilogues)

13. March 2024, Plenary Vote:
AI Act endorsed by MEPs with
_ 523 votes in favour,
_ 46 against and
_ 49 abstentions.
Adopted text available as HTML and PDF; PDF made available to the Council.

Source: European Parliament (2024). https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law; Votes:


https://www.europarl.europa.eu/doceo/document/PV-9-2024-03-13-RCV_EN.html; Press Conference: https://multimedia.europarl.europa.eu/en/webstreaming/press-conference-by-brando-benifei-and-dragos-tudorache-co-rapporteurs-on-ai-act-plenary-vote_20240313-1100-SPECIAL-PRESSER
38
The AI Act has been approved by the Council | 21 May 2024
(after the trilogues)

Law that the Council approved:


https://data.consilium.europa.eu/doc/document/
PE-24-2024-INIT/en/pdf

Signature of text has yet to happen


(17 June ’24)
Source: Council of the European Union (2024). https://www.consilium.europa.eu/en/press/press-releases/2024/05/21/artificial-intelligence-ai-act-council-gives-final-green-light-to-the-first-worldwide-
rules-on-ai/ 39
Next Steps for the AI Act
• Entering into force: August 1st, 2024 (20 days after publication in the Official Journal on July 12th, 2024)
• After 6 months (Feb 2nd, 2025): Bans on prohibited AI systems; AI literacy and governance rules
• After 9 months: Codes of practice are ready
• After 12 months: General-purpose AI systems
• After 24 months: Fully applicable: limited risk, minimal risk, high-risk incl. Annex III
• After 36 months: Obligations for high-risk systems (high-risk AI systems that are already regulated by other EU legislation – Annex I)
Source: European Parliament (2024). https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law; https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX:32024R1689; Luca Bertuzzi (2024). EU‘s AI rulebook to be published in July, enter into force in August. https://mlex.shorthandstories.com/eus-ai-rulebook-to-be-published-in-july-enter-into-force-in-august/index.html 40
When does the AI Act apply?  Scope (Art. 2; excerpt)

This Regulation applies to:


(a) providers placing on the market or putting into service AI systems or placing on the
market general-purpose AI models in the Union, irrespective of whether those providers
are established or located within the Union or in a third country; (Who / What / Where)

Important: AI Act applies if output produced by the AI system is used in the Union

The regulation also applies to deployers , importers and distributors, product


manufacturers (e.g., if product marketed under own brand includes AI system by other
provider), authorized representatives of providers (e.g., if provider not in the European
Union), affected persons that are located in the Union.

Exceptions exist: military, defense, or national security purpose; public authorities in a third
country, international organizations; sole purpose of scientific research and development;
research, testing, or development activity prior to their being placed on the market or put
into service; purely personal non-professional activity; free and open source licenses.
Source: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138-FNL-COR01_EN.pdf 41
Structure AI Act: What does the regulation cover? (Excerpt)
Prohibited Artificial Intelligence Practices (Art. 5)
High-risk AI Systems (Art. 6-49)
Including: Classification, Requirements, Obligations, Notifying Authorities and Notified Bodies,
Standards, Conformity Assessment, Certificates, Registration
Transparency Obligations for Providers & Deployers of Certain AI Systems (Art. 50)
General-purpose AI Models (Art. 51 - 56)
Including: Classification, Obligations for providers of general-purpose AI with/without systemic risk
Measures in Support of Innovation (Art. 57-63)
Governance (Art. 64 – 70)
EU Database for High-risk AI Systems (Art. 71)
Chapter IX: Post-market Monitoring, Information Sharing, Market Surveillance (Art. 72-94)
Codes of Conduct and Guidelines (Art. 95-96)
Delegation of Power and Committee Procedure (Art. 97-98)
Penalties (Art. 99 - 101)
Specifications can be found in the Annexes
Source: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138-FNL-COR01_EN.pdf 42
The EU AI Act’s Risk-Based Approach

Source: https://ec.europa.eu/newsroom/dae/redirection/document/105898 43
Risk-Based Approach to AI Regulation: Definition of Risk

Definition of ‘risk’ in Article 3(2) of the EU AI Act

‘risk’ means the combination of the probability of an occurrence of harm and the severity of that harm

Reference to the EU Charter of Fundamental Rights in Recital 1 of the EU AI Act

The purpose of this Regulation is […] to promote the


uptake of human centric and trustworthy artificial
intelligence (AI) while ensuring a high level of
protection of health, safety, fundamental rights as
enshrined in the Charter of Fundamental Rights of
the European Union (the ‘Charter’), including
democracy, the rule of law and environmental
protection, to protect against the harmful effects of AI
systems in the Union, and to support innovation.

Source: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138-FNL-COR01_EN.pdf; https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng; https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:12016P/TXT 44
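
The Act defines risk qualitatively; it prescribes no numeric scale, formula or thresholds. Purely as an illustration of “combination of probability and severity”, one could operationalize the definition as follows (all values and cut-offs below are invented for this sketch):

```python
# Hypothetical operationalization of "probability of harm x severity of harm";
# the EU AI Act itself does not define any such scale or thresholds.
def risk_score(probability_of_harm: float, severity_of_harm: float) -> float:
    """Combine probability (0..1) and severity (0..1) into a single score."""
    return probability_of_harm * severity_of_harm

def risk_band(score: float) -> str:
    """Map a score to a coarse band using invented cut-offs."""
    if score >= 0.5:
        return "high"
    if score >= 0.1:
        return "moderate"
    return "low"

score = risk_score(probability_of_harm=0.3, severity_of_harm=0.8)
print(round(score, 2), risk_band(score))  # 0.24 moderate (illustrative only)
```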
Risk-based approach to AI Regulation: Different levels of risk
(risk levels as reflected in the articles of the EU AI Act, previous slide)

• Unacceptable risk → anything considered a clear threat to EU citizens
  (Articles: Prohibited Artificial Intelligence Practices)
  Examples: social scoring by governments, toys using voice assistance that encourages dangerous behavior of children.
  Regulatory measure: banned

• High risk (next slide)
  (Articles: High-risk AI Systems)

• Limited risk → intention: enable informed decisions
  (Articles: Transparency Obligations for Providers & Deployers of Certain AI Systems)
  Examples: chatbots, deepfakes, emotion recognition systems
  Regulatory measure: inform natural person about exposure / disclose that the content has been artificially generated or manipulated by labeling content as such; ensure outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated

• Minimal risk → minimal or no risk for citizens’ rights or safety
  (Articles: Codes of Conduct and Guidelines)
  Examples: AI-enabled video games or spam filters (vast majority of AI systems currently used in the EU fall into this category)
  Regulatory measure: none, optional codes of conduct and guidelines
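
The Act requires that AI-generated or manipulated content be disclosed and marked in a machine-readable way, but it does not prescribe a particular format. As a purely illustrative sketch (a toy JSON structure invented here, not an established provenance standard), such marking could look like this:

```python
import json
from datetime import datetime, timezone

def wrap_ai_output(text: str, generator: str) -> str:
    """Attach a simple machine-readable provenance record to generated content.
    The field names are invented for this sketch; real systems would follow an
    agreed standard for content provenance and watermarking."""
    record = {
        "content": text,
        "ai_generated": True,              # explicit, machine-readable flag
        "generator": generator,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

print(wrap_ai_output("Hello, I am a chatbot.", generator="demo-bot"))
```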
Source: https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/excellence-trust-artificial-intelligence_en (last accessed: June 15, 2024) 45
Classification of High-Risk AI Systems: Art. 6 & Annex I
AIA

1. Irrespective of whether an AI system is placed on the market or put into service


independently from the products referred to in points (a) and (b), that AI system shall be
considered to be high-risk where both of the following conditions are fulfilled:
(a) the AI system is intended to be used as a safety component of a product, or the AI
system is itself a product, covered by the Union harmonization legislation listed in

Annex I;

List of Regulations and


Directives specific to
certain products.

Source: EU AI Act: 46
Classification of High-Risk AI Systems: Art. 6 & Annex III
2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to
in Annex III shall be considered to be high-risk.

• Biometrics (e.g., remote biometric identification systems, biometric categorisation, emotion


recognition; application of some of these systems are prohibited)
• Critical infrastructures (e.g., safety components in critical digital infrastructure or road traffic)
• Educational or vocational training (e.g., determine access or admission to education, evaluate
learning outcomes, monitoring of prohibited behaviour of students)
• Employment, management of workers and access to self-employment (e.g., recruitment or
selection of natural persons, decisions affecting terms of work-related relationships)
• Access to and enjoyment of essential private services and essential public services and
benefits (e.g., evaluate the eligibility for essential public assistance benefits and services)
• Law enforcement (e.g., assess risk of a natural person becoming the victim of criminal offences)
• Migration, asylum and border control management (e.g., polygraphs, assess a risk posed by
a natural person who intends to enter, examination of applications for asylum or visa)
• Administration of justice and democratic processes (e.g., researching and interpreting facts and
the law)

Source: EU AI Act: 47
Compliance Procedure for Providers of High-Risk AI Systems

listed on
next slide

Source: https://ec.europa.eu/info/strategy/priorities-2019-2024/europe-fit-digital-age/excellence-trust-artificial-intelligence_en (last accessed: June 15, 2024) 48


Obligations of High-Risk AI Providers (Ch. III, Sect. 3)
High-risk AI systems will be subject to strict obligations before they can be put on the market: AIA

Providers of high-risk AI systems shall:

(a) ensure that their high-risk AI systems are compliant with the requirements set out in Section 2; [Essential Requirements (Sec. 2)]
(b) indicate on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable their name, registered trade name or registered trademark, the address at which they can be contacted;
(c) have a quality management system in place which complies with Article 17; [Quality Management System (Sec. 3, Art. 17)]
(d) keep the documentation referred to in Article 18; [Technical Documentation (Sec. 2, Art. 11)]
(e) when under their control, keep the logs automatically generated by their high-risk AI systems as referred to in Article 19; [Record-keeping (Sec. 2, Art. 12)]
(f) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure as referred to in Article 43, prior to its being placed on the market or put into service; [Conformity Assessment (Ch. 5, Art. 43)]
(g) draw up an EU declaration of conformity in accordance with Article 47;
(h) affix the CE marking to the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, to indicate conformity with this Regulation, in accordance with Article 48; [CE Marking (Ch. 5, Art. 48)]
(i) comply with the registration obligations referred to in Article 49(1); [Registration (Ch. 5, Art. 49)]
(j) take the necessary corrective actions and provide information as required in Article 20;
(k) upon a reasoned request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Section 2;
(l) ensure that the high-risk AI system complies with accessibility requirements in accordance with Directives (EU) 2016/2102 and (EU) 2019/882.
Source: EU AI Act: 49
Requirements for High-Risk AI (Ch. III, Sect. 2)
AIA
Essential Requirements:

Article 8: Compliance with requirements


Article 9: Risk management system
Article 10: Data and data governance
Article 11: Technical documentation (purpose: demonstrate
compliance with this Section, including Annex IV)
Article 12: Record-keeping (logging for traceability)
Article 13: Transparency and provision of information to
deployers
Article 14: Human oversight (purpose: effectively oversee
the AI system)
Article 15: Accuracy, robustness, cybersecurity
(and articles referenced in the listed articles) …

Source: EU AI Act: 50
Many requirements are rather vague

Many vague requirements, examples:

… Responsible standardization body:

51
From high-level requirements to technical standards

• The EU AI Act does not mandate specific technical solutions/ approaches but rather high-
level requirements;
• Technical solutions for the fulfillment of the requirements in practice will be specified primarily
in the form of technical standards;
• Standards capture best practices and state-of-the-art techniques and methods (in
trustworthy AI)
• Standards will be based on existing standards (which have been published independent of
the EU AI Act) as well as voluntary (and successful) approaches/ best-practices (that have
been suggested by industry and academia), e.g.:
• ISO/IEC 42001:2023 Artificial Intelligence Management System
• Datasheets for Datasets, The Dataset Nutrition Label, Model Cards and AI Factsheets

→ Transition from voluntary practices to hard legal requirements


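To make the kinds of artifacts listed above more concrete, here is a loose, hypothetical sketch of the metadata a model-card- or datasheet-style record might hold; the fields are invented for this example and do not follow the schema of any cited framework or of ISO/IEC 42001.

```python
# Hypothetical, minimal model-card-style record (illustrative fields only).
model_card = {
    "model_name": "example-hiring-screening-model",       # invented example system
    "intended_use": "illustration only, not for deployment",
    "training_data": {"source": "synthetic sample", "collection_period": "n/a"},
    "evaluation": {"metric": "accuracy", "value": None},   # to be filled from testing
    "known_limitations": ["not validated on real-world data"],
    "human_oversight": "describe how operators can intervene",
    "risk_category_under_ai_act": "to be assessed",        # requires legal analysis
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```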
Source: Hupont et al. (2023).https://doi.org/10.1109/MC.2023.3235712, p. 19; T. Gebru et al., “Datasheets for datasets,” Commun. ACM, vol. 64, no. 12, pp. 86–92, Nov. 2021, doi: 10.1145/3458723; S.
Holland, A. Hosny, S. Newman, J. Joseph, and K. Chmielinski, “The dataset nutrition label: A framework to drive higher data quality standards,” 2018, arXiv:1805.03677; K. S. Chmielinski et al., “The dataset
nutrition label (2nd gen): Leveraging context to mitigate harms in artificial intelligence,” 2022, arXiv:2201.03954; M. Mitchell et al., “Model cards for model reporting,” in Proc. Conf. Fairness, Accountability,
Transparency (FAT), Jan. 2019, pp. 220–229, doi: 10.1145/3287560.3287596; M. Arnold et al., “FactSheets: Increasing trust in AI services through supplier’s declarations of conformity,” IBM J. Res.
Develop., vol. 63, no. 4/5, pp. 6:1–6:13, Jul./Sep. 2019, doi: 10.1147/JRD.2019.2942288; “OECD Framework for the Classification of AI Systems: A tool for effective AI policies,” OECD, Paris, France, 2022.
[Online]. Available: https:// oecd.ai/en/classification. ISO/IEC 42001:2023 https://www.iso.org/standard/81230.html 52
Studies on the distribution of the risk categories

Hauer et al. (2023): Classified 514 cases

Applied AI Study (2023):


18% of the AI systems are in the high-risk class, 42% are low-risk, and for 40% it is unclear
whether they fall into the high-risk class or not. Thus, the percentage of high-risk systems in
this sample ranges from 18% to 58%. One of the AI systems may be prohibited.
Most high-risk systems are expected to be in human resources, customer service, accounting
and finance, and legal.
Note: Studies referred to the AI Act Proposal from
2021. Share may differ for adopted AI Act today.
Source: Hauer et al. (2023). Quantitative study about the estimated impact of the AI Act. Available at: https://arxiv.org/abs/2304.06503; AppliedAI (March, 2023). AI Act: Risk Classification of AI
Systems from a Practical Perspective. Available at: https://www.appliedai.de/en/hub-en/ai-act-risk-classification-of-ai-systems-from-a-practical-perspective 53
Takeaways
• Artificial Intelligence research, practice and deployment has a long and rich
history: Lots of interesting facets (booms and busts)

• Very different directions: E.g., modeling human intelligence versus creating


actionable systems delivering output that is difficult to accomplish for
humans

• EU AI Act, the first legally binding regulation on AI, entered into force in
August 2024 (aim: promote the uptake of human-centric and trustworthy AI)

54
Tackle Hard(er) Questions

• Arguments by Joseph Weizenbaum


– It is not just wrong, but dangerous and, in some cases, immoral
to assume that computers would be able to do everything given
enough processing power and clever programming.
– “No other organism, and certainly no computer, can be made to
confront genuine human problems in human terms.”
(Computer Power and Human Reason: From Judgment to
Calculation, 1976)

And yet we use computers and AI for exactly that more and more. 55
No more for today. The End.
