2018 ACM/IEEE International Workshop on Software Fairness

IEEE P7003™ Standard for Algorithmic Bias Considerations


Work in progress paper

Ansgar Koene
Chair of IEEE P7003 working group
Horizon Digital Economy Research institute, University of Nottingham
NG7 2TU, United Kingdom
[email protected]

Liz Dowthwaite
IEEE P7003 working group secretary
Horizon Digital Economy Research institute, University of Nottingham
NG7 2TU, United Kingdom
[email protected]

Suchana Seth
IEEE P7003 working group member
Berkman Klein Center for Internet & Society, Harvard University
MA 02138, USA
[email protected]

ABSTRACT*

The IEEE P7003 Standard for Algorithmic Bias Considerations is one of eleven IEEE ethics related standards currently under development as part of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. The purpose of the IEEE P7003 standard is to provide individuals or organizations creating algorithmic systems with a development framework to avoid unintended, unjustified and inappropriately differential outcomes for users. In this paper, we present the scope and structure of the IEEE P7003 draft standard, and the methodology of the development process.

CCS CONCEPTS
• General and reference → Document types → Computing standards, RFCs and guidelines

KEYWORDS
Algorithmic Bias, Standards, work-in-progress, methods

ACM Reference format:
A. Koene, L. Dowthwaite, and S. Seth. 2018. IEEE P7003 Standard for Algorithmic Bias Considerations. FairWare'18, May 29, 2018, Gothenburg, Sweden. 4 pages. https://doi.org/10.1145/3194770.3194773

1 INTRODUCTION

In recognition of the increasingly pervasive role of algorithmic decision making systems in corporate and government service, and growing public concerns regarding the 'black box' nature of many of these systems, the IEEE Standards Association (IEEE-SA) launched the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems [1] in April 2016. The 'Global Initiative' aims to provide "an incubation space for new standards and solutions, certifications and codes of conduct, and consensus building for ethical implementation of intelligent technologies". As of early 2018 the main pillars of the Global Initiative are:

• a public discussion document, "Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems" [2], on establishing ethical and social implementations for intelligent and autonomous systems and technology aligned with values and ethical principles that prioritize human well-being in a given cultural context;
• a set of eleven working groups to create the IEEE P70xx series ethics standards, and associated certification programs, for intelligent and autonomous systems.

The IEEE P70xx series of ethics standards aims to translate the principles that are discussed in the Ethically Aligned Design document into actionable guidelines or frameworks that can be used as practical industry standards. The eleven IEEE P70xx standards that are currently under development are:

• IEEE P7000: Model Process for Addressing Ethical Concerns During System Design
• IEEE P7001: Transparency of Autonomous Systems
• IEEE P7002: Data Privacy Process
• IEEE P7003: Algorithmic Bias Considerations
• IEEE P7004: Standard on Child and Student Data Governance
• IEEE P7005: Standard on Employer Data Governance
• IEEE P7006: Standard on Personal Data AI Agent Working Group
• IEEE P7007: Ontological Standard for Ethically Driven Robotics and Automation Systems
• IEEE P7008: Standard for Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
• IEEE P7009: Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems
• IEEE P7010: Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems

* Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
FairWare'18, May 29, 2018, Gothenburg, Sweden
© 2018 Copyright is held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-5746-3/18/05…$15.00
https://doi.org/10.1145/3194770.3194773

FairWare’18, May 2018, Gothenburg, Sweden A. Koene et al.

A brief paper outlining the aims of IEEE P7003 and its relationship to the other IEEE P700x series standards working groups was published in [3], and a tech-industry oriented summary of the eleven IEEE P70xx series standards appeared on the technology-industry blog TechEmergence [4].

In this paper we present a more detailed overview of the scope, structure and development process of the IEEE P7003 Standard for Algorithmic Bias Considerations [5].

IEEE P7003 is aimed to be used by people/organizations who are developing and/or deploying automated decision (support) systems (which may or may not involve AI/machine learning) that are part of products/services that affect people. Typical examples would include anything related to personalization or individual assessment, including any system that performs a filtering function by selecting to prioritize the ease with which people will find some items over others (e.g. search engines or recommendation systems). Any system that will produce different results for some people than for others is open to challenges of being biased. Examples could include:

• security camera applications that detect theft or suspicious behaviour;
• marketing automation applications that calibrate offers, prices, or content to an individual's preferences and behaviour;
• etc.

The requirements specification provided by the IEEE P7003 standard will allow creators to communicate to users, and regulatory authorities, that up-to-date best practices were used in the design, testing and evaluation of the algorithm to attempt to avoid unintended, unjustified and inappropriate differential impact on users.

Since the standard aims to allow for the legitimate ends of different users, such as businesses, it should assist them in assuring citizens that steps have been taken to ensure fairness, as appropriate to the stated aims and practices of the sector where the algorithmic system is applied. For example, it may help customers of insurance companies to feel more assured that they are not getting a worse deal because of the hidden operation of an algorithm.

As a practical example, an online retailer developing a new product recommendation system might use the IEEE P7003 standard as follows. Early in the development cycle, after outlining the intended functions of the new system, IEEE P7003 guides the developer through a process of considering the likely customer groups, in order to identify if there are subgroups that will need special consideration (e.g. people with visual impairments). In the next phase of the development, when the developer is establishing a testing dataset to validate that the system is performing as desired, referencing P7003 reminds the developer of certain methods for checking if all customer groups are sufficiently represented in the testing data, to avoid reduced quality of service for certain customer groups.

Throughout the development process IEEE P7003 challenges the developer to think explicitly about the criteria that are being used for the recommendation process and the rationale, i.e. justification, for why these criteria are relevant and why they are appropriate (legally and socially). Documenting these will help the business respond to possible future challenges from customers, competitors or regulators regarding the recommendations produced by this system. At the same time, this process of analysis will help the business to be aware of the context for which this recommendation system can confidently be used, and which uses would require additional testing (e.g. age ranges of customers, types of products).

2 SCOPE

The IEEE P7003 standard will provide a framework that helps developers of algorithmic systems, and those responsible for their deployment, to identify and mitigate unintended, unjustified and/or inappropriate biases in the outcomes of the algorithmic system. Algorithmic systems in this context refers to the combination of algorithms, data and the output deployment process that together determine the outcomes that affect end users. Unjustified bias refers to differential treatment of individuals based on criteria for which no operational justification is given. Inappropriate bias refers to bias that is legally or morally unacceptable within the social context where the system is used, e.g. algorithmic systems that produce outcomes with differential impact strongly correlated with protected characteristics (such as race, gender, sexuality, etc.).

The standard will describe specific methodologies that allow users of the standard to assert how they worked to address and eliminate issues of unintended, unjustified and inappropriate bias in the creation of their algorithmic system. This will help to design systems that are more easily auditable by external parties (such as regulatory bodies).

Elements include:

• a set of guidelines for what to do when designing or using such algorithmic systems: following a principled methodology (process), engaging with stakeholders (people), determining and justifying the objectives of using the algorithm (purpose), and validating the principles that are actually embedded in the algorithmic system (product);
• a practical guideline for developers to identify when they should step back to evaluate possible bias issues in their systems, pointing to methods they can use to do this;
• benchmarking procedures and criteria for the selection of validation data sets for bias quality control;
• methods for establishing and communicating the application boundaries for which the system has been designed and validated, to guard against unintended consequences arising from out-of-bound application of algorithms;
• methods for user expectation management to mitigate bias due to incorrect interpretation of system outputs by users (e.g. correlation vs. causation), such as specific action points/guidelines on what to do if in doubt about how to interpret the algorithm outputs;
• a taxonomy of algorithmic bias;
• … others yet to be determined.
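The data-representativeness and outcome-evaluation elements above lend themselves to simple automated checks. As a purely illustrative sketch (the group labels, the 10% minimum-share threshold, the four-fifths outcome-rate ratio and all function names are our assumptions for this example, not requirements drawn from the IEEE P7003 draft), a developer validating a testing dataset might run something like:

```python
from collections import Counter

# Illustrative thresholds; these values are assumptions for this
# sketch, not figures taken from the IEEE P7003 draft.
MIN_GROUP_SHARE = 0.10  # flag groups below 10% of the test set
MIN_RATE_RATIO = 0.80   # flag outcome-rate ratios below 4/5

def representation_report(groups):
    """Return each group's share of the test set, plus the groups
    whose share falls below MIN_GROUP_SHARE."""
    counts = Counter(groups)
    total = len(groups)
    shares = {g: n / total for g, n in counts.items()}
    flagged = [g for g, s in shares.items() if s < MIN_GROUP_SHARE]
    return shares, flagged

def outcome_rate_ratios(groups, outcomes):
    """Compare each group's rate of favourable outcomes against the
    best-served group; ratios below MIN_RATE_RATIO indicate a
    differential impact that calls for an operational justification."""
    totals, favourable = Counter(), Counter()
    for g, y in zip(groups, outcomes):
        totals[g] += 1
        favourable[g] += int(y)
    rates = {g: favourable[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Toy data: one group label per test case, and whether the system
# produced a favourable outcome (e.g. a relevant recommendation).
groups = ["a"] * 60 + ["b"] * 35 + ["c"] * 5
outcomes = [1] * 48 + [0] * 12 + [1] * 21 + [0] * 14 + [1] * 2 + [0] * 3

shares, under_represented = representation_report(groups)
ratios = outcome_rate_ratios(groups, outcomes)
print(under_represented)   # group "c" is under-represented
print({g: round(r, 2) for g, r in sorted(ratios.items())})
```

A failed check of this kind does not by itself establish unjustified or inappropriate bias; it marks the point at which the documented rationale the standard asks for becomes necessary, or at which the testing data needs rebalancing.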


3 STRUCTURE

Discounting procedural sections dealing with matters of Normative References, Definitions, Conformance, etc., the standard document will consist of three main section categories: 1. foundational sections covering issues related to the fundamentals of understanding algorithmic bias; 2. algorithmic system design and implementation orientated sections addressing actionable recommendations for identifying and mitigating algorithmic bias; 3. use cases providing examples of systems where the use of the P7003 standard could provide clear benefits.

3.1 Foundational sections

Foundational sections are currently envisioned to include sections on 'Taxonomy of Bias', 'Legal frameworks related to Bias', 'Psychology of Bias' and 'Cultural context of Bias'. Each of these sections will outline the associated socio-technical aspect of algorithmic bias, providing a background understanding of the reasons for, and importance of, the design/implementation recommendations that are provided in the subsequent sections. Even though the presence of these foundational sections may appear unusual for an industry standard, we believe that they play an important part in an 'ethics' standard such as IEEE P7003. The foundational sections provide a framework of understanding that should allow the designers of algorithmic systems to go beyond a mechanistic 'tick-box' compliance exercise towards a deeper engagement with the underlying ethical issues of algorithmic bias.

3.2 System Design and Implementation sections

The 'algorithmic system design and implementation' orientated sections are currently envisaged to include sections on 'Algorithmic system design stages', 'Person categorizations and identifying of affected groups', 'Representativeness and balance of testing/training/validation data', 'System outcomes evaluation', 'Evaluation of algorithmic processing', 'Assessment of resilience against external biasing manipulation', 'Assessment of scope limits for safe system usage' and 'Transparent documentation', though it is anticipated that further sections will be added as work progresses.

The intent of these sections is to provide a clear framework of guidance, including challenge questions, to help designers identify unintended bias issues that would go unnoticed unless specifically looked for. A possible comparison would be the way in which explicit questioning of everyday behavior is required in order to identify and mitigate unconscious bias in management practices. Proposed solutions to identified causes of algorithmic bias will likely primarily take the form of listing classes of solution methods, with links to relevant work being published at venues such as FairWare, FAT*, KDD and similar publications, in order to reflect the context dependent nature of optimal solutions and the dynamic development in the research on improved methods.

3.3 Use Cases

The Use Cases form an annex to the IEEE P7003 standard document listing a number of illustrative examples of algorithmic systems that resulted in unintended bias, or that highlight specific types of concerns about bias that could be addressed by following the framework provided by IEEE P7003. The inclusion of the Use Cases, and their standardized presentation format, were proposed by a working group participant with experience of industry engagement with standards. They form an important element for 'making the case' for using ethics standards within a corporate context.

Some examples of the use cases that have been gathered so far include:

- "Tay the Nazi chatbot", an example of deliberate system behavior corruption through biased manipulation of inputs by an external 'adversary';
- "The use of facial expression recognition to support diagnostic assessment for patient prioritization", an example of a sensitive application context where differences in operational capability of the system for different population groups can easily result in reputation damaging claims of unjustified bias;
- "Beauty contest judging algorithm that appeared biased to favor lighter skin tones", an example of bias in the training data resulting in biased outcomes that undermined the credibility of the stated purpose of the algorithm (to produce objective beauty contest judgements);
- …

4 METHODOLOGY

Methodologically, the content of the P70xx standards is developed by the working group members through an open deliberation process in which each participant is encouraged to suggest content or amendments for the standard document. In order to reflect the broad socio-technical nature of the AI ethics issues addressed by the P70xx standards, the working group members are drawn from a broad range of stakeholders including civil-society organizations, industry and a wide range of academic disciplines. Participation in the working groups is on an individual basis. Even though the participants are affiliated with particular stakeholder organizations, all voices in the standard development process are treated as equals. With the exception of the working group chair and vice-chair, IEEE membership is not required and does not change the status of the participant within the working group.

For the P7003 Standard for Algorithmic Bias Considerations the working group currently consists of 78 participants identifying as having expertise in: Computer Science (18), Engineering (8), Law (6), Business/Entrepreneurship (6), Policy (6), Humanities


(4), Social Sciences (3), Arts (2) and Natural Sciences (1)¹. In light of the nature of the topic of the P7003 standard, dealing with bias/discrimination, the working group also expressed special concerns about establishing sufficient cultural diversity in its participants. As of early 2018 the participants who chose to indicate their geographic location were from: USA (11), UK (6), Canada (3), Germany (3), Brazil (2), India (2), Japan (2), the Netherlands (2), Australia (1), Belgium (1), Israel (1), Pakistan (1), Peru (1), Philippines (1), S. Korea (1) and Uganda (1); clearly indicating a strong N. America / W. Europe bias that has not yet been resolved. With respect to types of employers, the participants are roughly separated into 1/3 academics, 1/3 industry and 1/3 civil-society affiliations.

¹ Numbers in brackets indicate the number of participants who identified as having this expertise as part of an informal internal survey. Many participants chose not to respond, while some chose to indicate multiple areas of expertise.

During the first eight months, the work of developing the standard focused on growing the participant membership and on exploratory discussions during the monthly conference calls to identify possible factors and sections that could be of relevance for including in the standard. Much of this centered on the foundational sections, which were mostly proposed by working group members as a result of these discussions. In the time between the monthly meetings, working group members are encouraged to develop the document content. During this initial exploratory phase detailed document development was initiated primarily for two of the foundational sections, 'Taxonomy of Bias' and 'Legal frameworks related to Bias'.

As of January 2018, the standard development process has transitioned into the next phase, moving from the initial exploration of the problem space towards consolidation and specification of the standard document content. All P7003 working group members are asked to identify document sections that they will take primary responsibility for, with the aim of having teams of at least two participants for each section. The monthly conference calls will focus on providing updates from each of the teams to the complete working group regarding their progress during the intervening month, and any issues that might require input from other teams. This will also be the primary opportunity for all other working group members to raise questions, make suggestions and/or volunteer to (temporarily) contribute to the work of another team.

Once the IEEE P7003 draft document is completed and approved by the IEEE P7003 working group, it will be submitted for balloting approval to the IEEE-SA. The IEEE-SA will send out an invitation-to-ballot to all IEEE-SA members who have expressed an interest in the subject, i.e. Algorithmic Bias. If the draft receives at least 75% approval, the draft is submitted to the IEEE-SA Standards Board Review Committee, which checks that the proposed standard is compliant with the IEEE-SA Standards Board Bylaws and Operations Manual. The Standards Board then votes to approve the standard, which requires a simple majority. At that point, about 2.5 to 3 years after the proposal for developing the standard was first submitted, the standard is published for use.

5 CONCLUSION

As part of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, a series of eleven ethics standards is under development, designated IEEE P7000 through IEEE P7010. As outlined in this paper, the IEEE P7003 Standard for Algorithmic Bias Considerations aims to provide an actionable framework for improving the fairness of the algorithmic decision-making systems that are increasingly being developed and deployed by industry, government and other organizations. The IEEE P7003 standard is currently transitioning from an initial exploratory phase into a consolidation and specification phase. Participation in the IEEE P7003 working group is open to all who are interested in contributing towards reducing and mitigating unintended, unjustified and societally unacceptable bias in algorithmic decisions.

Minutes of recent IEEE P7003 working group meetings are available at [3].

ACKNOWLEDGMENTS

The participation of Ansgar Koene (working group chair) and Liz Dowthwaite (secretary) in the IEEE P7003 Standards development process forms part of the UnBias project supported by EPSRC grant EP/N02785X/1. UnBias is an interdisciplinary research project led by the University of Nottingham in collaboration with the Universities of Oxford and Edinburgh. For more information about the UnBias project, see http://unbias.wp.horizon.ac.uk/

REFERENCES

[1] The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. https://ethicsinaction.ieee.org/
[2] Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2. IEEE, 2017. http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html
[3] Ansgar Koene. Algorithmic Bias: Addressing Growing Concerns. IEEE Technology and Society Magazine, 26, 2 (June 2017), 31-32. DOI: http://dx.doi.org/10.1109/MTS.2017.2697080
[4] Daniel Fagella. The Ethics of Artificial Intelligence for Business Leaders – Should Anyone Care? TechEmergence, December 9, 2017. https://www.techemergence.com/ethics-artificial-intelligence-business-leaders/
[5] IEEE P7003 Working Group. http://sites.ieee.org/sagroups-7003/
