Thomas Lord Department of Computer Science, USC Viterbi School of Engineering, University of Southern
California, Los Angeles, CA 90007, USA; [email protected]
Abstract: The significant advancements in applying artificial intelligence (AI) to healthcare decision-
making, medical diagnosis, and other domains have simultaneously raised concerns about the
fairness and bias of AI systems. This is particularly critical in areas like healthcare, employment,
criminal justice, credit scoring, and increasingly, in generative AI models (GenAI) that produce
synthetic media. Such systems can lead to unfair outcomes and perpetuate existing inequalities,
including generative biases that affect the representation of individuals in synthetic data. This survey
study offers a succinct, comprehensive overview of fairness and bias in AI, addressing their sources,
impacts, and mitigation strategies. We review sources of bias, such as data, algorithm, and human
decision biases—highlighting the emergent issue of generative AI bias, where models may reproduce
and amplify societal stereotypes. We assess the societal impact of biased AI systems, focusing on
perpetuating inequalities and reinforcing harmful stereotypes, especially as generative AI becomes
more prevalent in creating content that influences public perception. We explore various proposed
mitigation strategies, discuss the ethical considerations of their implementation, and emphasize
the need for interdisciplinary collaboration to ensure effectiveness. Through a systematic literature
review spanning multiple academic disciplines, we present definitions of AI bias and its different
types, including a detailed look at generative AI bias. We discuss the negative impacts of AI bias on
individuals and society and provide an overview of current approaches to mitigate AI bias, including
data pre-processing, model selection, and post-processing. We emphasize the unique challenges
presented by generative AI models and the importance of strategies specifically tailored to address
these. Addressing bias in AI requires a holistic approach involving diverse and representative
datasets, enhanced transparency and accountability in AI systems, and the exploration of alternative
AI paradigms that prioritize fairness and ethical considerations. This survey contributes to the ongoing discussion on developing fair and unbiased AI systems by providing an overview of the sources, impacts, and mitigation strategies related to AI bias, with a particular focus on the emerging field of generative AI.

Citation: Ferrara, E. Fairness and Bias in Artificial Intelligence: A Brief Survey of Sources, Impacts, and Mitigation Strategies. Sci 2024, 6, 3. https://doi.org/10.3390/sci6010003
This study provides a comprehensive overview of the sources and impacts of bias in
AI, examining data, algorithmic, and user biases, along with their ethical implications. It
surveys current research on mitigation strategies, discussing their challenges, limitations,
and the significance of interdisciplinary collaboration.
The importance of fairness and bias in AI is widely recognized by researchers, policy-
makers, and the academic community [1,12–16]. This survey study delves into the complex
and multifaceted issues surrounding fairness and bias in AI, covering the sources of bias,
their impacts, and proposed mitigation strategies. Overall, the study aims to contribute to
ongoing efforts to develop more responsible and ethical AI systems by shedding light on
the sources and impacts of bias in AI and the strategies proposed to mitigate it.
2. Sources of Bias in AI
Artificial intelligence (AI) has the potential to revolutionize many industries and
improve people’s lives in countless ways. However, one of the major challenges facing
the development and deployment of AI systems is the presence of bias. Bias refers to the
systematic errors that occur in decision-making processes, leading to unfair outcomes. In
the context of AI, bias can arise from various sources, including data collection, algorithm
design, and human interpretation. Machine learning models, which are a type of AI system,
can learn and replicate patterns of bias present in the data used to train them, resulting in
unfair or discriminatory outcomes. In this section, we will explore the different sources
of bias in AI, including data bias, algorithmic bias, and user bias, and examine real-world
examples of their impact.
2.2. Sources of Bias in AI, Including Data Bias, Algorithmic Bias, and User Bias
Sources of bias in AI can arise from different stages of the machine learning pipeline,
including data collection, algorithm design, and user interactions. This survey discusses
the different sources of bias in AI and provides examples of each type, including data bias,
algorithmic bias, and user bias [17,18].
Data bias occurs when the data used to train machine learning models are unrepresen-
tative or incomplete, leading to biased outputs. This can happen when the data are collected
from biased sources or when the data are incomplete, missing important information, or
contain errors. Algorithmic bias, on the other hand, occurs when the algorithms used in
machine learning models have inherent biases that are reflected in their outputs. This can
happen when algorithms are based on biased assumptions or when they use biased criteria
to make decisions. User bias occurs when the people using AI systems introduce their
own biases or prejudices into the system, consciously or unconsciously. This can happen
when users provide biased training data or when they interact with the system in ways
that reflect their own biases.
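To make the effect of data bias concrete, the following minimal sketch (not drawn from any cited study; the synthetic data and model choices are purely illustrative) trains a classifier on data in which one group is heavily underrepresented and then compares error rates across groups on balanced test sets.

```python
# Illustrative sketch: an underrepresented group with a slightly different
# data distribution receives noticeably lower accuracy from the trained model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features centered at `shift`; the labeling rule also shifts with the group.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return X, y

# Group A dominates the training data; group B is underrepresented.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on balanced held-out sets: the model fits group A's decision
# boundary and systematically misclassifies part of group B.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=1.5)
print("accuracy, group A:", model.score(Xa_test, ya_test))
print("accuracy, group B:", model.score(Xb_test, yb_test))
```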
To mitigate these sources of bias, various approaches have been proposed, including
dataset augmentation, bias-aware algorithms, and user feedback mechanisms. Dataset
augmentation involves adding more diverse data to training datasets to increase repre-
sentativeness and reduce bias. Bias-aware algorithms involve designing algorithms that
consider different types of bias and aim to minimize their impact on the system’s outputs.
User feedback mechanisms involve soliciting feedback from users to help identify and
correct biases in the system.
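As a rough illustration of dataset augmentation, the sketch below oversamples underrepresented groups until each matches the size of the largest group. The DataFrame, the "group" column, and the helper function are hypothetical placeholders for a real pipeline; synthetic data generation or targeted data collection would follow the same pattern.

```python
# Minimal sketch of oversampling-based augmentation (assumes a pandas DataFrame
# with a column identifying the group each record belongs to).
import pandas as pd
from sklearn.utils import resample

def oversample_groups(df: pd.DataFrame, group_col: str = "group",
                      random_state: int = 0) -> pd.DataFrame:
    """Resample each group (with replacement) up to the size of the largest group."""
    target = df[group_col].value_counts().max()
    balanced = [
        resample(g, replace=True, n_samples=target, random_state=random_state)
        for _, g in df.groupby(group_col)
    ]
    # Concatenate and shuffle so training batches are not ordered by group.
    return pd.concat(balanced).sample(frac=1.0, random_state=random_state)

# Toy usage: group "B" is underrepresented before augmentation.
df = pd.DataFrame({"feature": range(6), "group": ["A", "A", "A", "A", "B", "B"]})
print(oversample_groups(df)["group"].value_counts())
```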
Research in this area is ongoing, with new approaches and techniques being developed
to address bias in AI systems. It is important to continue to investigate and develop these
approaches to create AI systems that are fairer and more equitable for all users.
3. Impacts of Bias in AI
The rapid advancement of artificial intelligence (AI) has brought numerous benefits,
but it also comes with potential risks and challenges. One of the key concerns is the negative
impacts of bias in AI on individuals and society. Bias in AI can perpetuate and even amplify
existing inequalities, leading to discrimination against marginalized groups and limiting
their access to essential services. In addition to perpetuating gender stereotypes and
discrimination, it can also lead to new forms of discrimination based on skin color, ethnicity,
or physical appearance. To ensure that AI systems are fair and equitable and serve the
needs of all users, it is crucial to identify and mitigate bias in AI. Moreover, the use of
biased AI has numerous ethical implications, including the potential for discrimination,
the responsibility of developers and policymakers, undermining public trust in technology,
and limiting human agency and autonomy. Addressing these ethical implications will
require a concerted effort from all stakeholders involved, and it is important to develop
ethical guidelines and regulatory frameworks that promote fairness, transparency, and
accountability in the development and use of AI systems.
3.1. Negative Impacts of Bias in AI on Individuals and Society, Including Discrimination and
Perpetuation of Existing Inequalities
The negative impacts of bias in AI can be significant, affecting individuals and society.
Discrimination is a key concern when it comes to biased AI systems, as they can perpetuate
and even amplify existing inequalities [24]. For example, biased algorithms used in the
criminal justice system can lead to unfair treatment of certain groups, particularly people
of color, who are more likely to be wrongly convicted or receive harsher sentences [1].
Bias in AI can also have a negative impact on an individual’s access to essential services,
such as healthcare and finance. Biased algorithms can lead to the underrepresentation of
certain groups, such as people of color or those from lower socioeconomic backgrounds, in
credit scoring systems, making it harder for them to access loans or mortgages [25].
Furthermore, bias in AI can also perpetuate gender stereotypes and discrimination.
For instance, facial recognition algorithms trained on data primarily consisting of men
can struggle to recognize female faces accurately, perpetuating gender bias in security
systems [1]. When generative AI (GenAI) models are prompted to create images of CEOs,
they tend to reinforce stereotypes by depicting CEOs predominantly as men [23].
In addition to perpetuating existing inequalities, bias in AI can also lead to new forms
of discrimination, such as those based on skin color, ethnicity, or even physical appearance.
The same GenAI models that exhibit gender bias, perhaps unsurprisingly, also portray
criminals or terrorists as people of color.
The public deployment of these systems can lead to serious consequences, such as
denial of services, job opportunities, or even wrongful arrests or convictions. The risk is
twofold: on an individual level, it affects people’s perception of themselves and others,
potentially influencing their opportunities and interactions.
On a societal level, the widespread use of such biased AI systems can entrench discrim-
inatory narratives and hinder efforts toward equality and inclusivity. As AI becomes more
integrated into our daily lives, the potential for such technology to shape cultural norms
and social structures becomes more significant, making it imperative to address these biases
in the developmental stages of AI systems to mitigate their harmful impacts [14,21,22].
4.1. Overview of Current Approaches to Mitigate Bias in AI, Including Pre-Processing Data,
Model Selection, and Post-Processing Decisions
Mitigating bias in AI is a complex and multifaceted challenge. However, several
approaches have been proposed to address this issue. One common approach is to pre-
process the data used to train AI models to ensure that they are representative of the entire
population, including historically marginalized groups. This can involve techniques such
as oversampling, undersampling, or synthetic data generation [14]. For example, a study of commercial facial analysis systems found substantially higher error rates for darker-skinned women, a disparity attributed largely to their underrepresentation in the training data [1]. The main mitigation approaches, with representative examples, limitations and challenges, and ethical considerations, are summarized below.
Pre-processing Data
- Description: Involves identifying and addressing biases in the data before training the model. Techniques such as oversampling, undersampling, or synthetic data generation are used to ensure the data are representative of the entire population, including historically marginalized groups.
- Examples: (1) Oversampling darker-skinned individuals in a facial recognition dataset [1]; (2) data augmentation to increase representation of underrepresented groups; (3) adversarial debiasing to train the model to be resilient to specific types of bias [33].
- Limitations and challenges: (1) Time-consuming process; (2) may not always be effective, especially if the data used to train models are already biased.
- Ethical considerations: (1) Potential for over- or underrepresentation of certain groups in the data, which can perpetuate existing biases or create new ones; (2) privacy concerns related to data collection and usage, particularly for historically marginalized groups.

Model Selection
- Description: Focuses on using model selection methods that prioritize fairness. Researchers have proposed methods based on group fairness or individual fairness. Techniques include regularization, which penalizes models for making discriminatory predictions, and ensemble methods, which combine multiple models to reduce bias.
- Examples: (1) Selecting classifiers that achieve demographic parity [31]; (2) using model selection methods based on group fairness [11] or individual fairness [30]; (3) regularization to penalize discriminatory predictions; (4) ensemble methods to combine multiple models and reduce bias [34].
- Limitations and challenges: Limited by the possible lack of consensus on what constitutes fairness.
- Ethical considerations: (1) Balancing fairness with other performance metrics, such as accuracy or efficiency; (2) potential for models to reinforce existing stereotypes or biases if fairness criteria are not carefully considered.

Post-processing Decisions
- Description: Involves adjusting the output of AI models to remove bias and ensure fairness. Researchers have proposed methods that adjust the decisions made by a model to achieve equalized odds, ensuring that false positives and false negatives are equally distributed across different demographic groups.
- Examples: Post-processing methods that achieve equalized odds [11].
- Limitations and challenges: Can be complex and require large amounts of additional data [32].
- Ethical considerations: (1) Trade-offs between different forms of bias when adjusting predictions for fairness; (2) unintended consequences on the distribution of outcomes for different groups.
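As an intuition for post-processing, the following simplified sketch picks a separate decision threshold per group so that true positive rates are roughly equalized on held-out data. This is only a proxy for the full equalized-odds procedure; the variable names, the toy data, and the target rate are illustrative assumptions.

```python
# Simplified post-processing sketch: per-group thresholds chosen so each group's
# true positive rate (TPR) reaches a common target on held-out data.
import numpy as np

def tpr(scores, labels, threshold):
    preds = scores >= threshold
    positives = labels == 1
    return preds[positives].mean() if positives.any() else 0.0

def per_group_thresholds(scores, labels, groups, target_tpr=0.8):
    """For each group, pick the highest threshold whose TPR reaches target_tpr."""
    thresholds = {}
    for g in np.unique(groups):
        mask = groups == g
        candidates = np.sort(np.unique(scores[mask]))[::-1]
        chosen = candidates[-1]  # fallback: lowest score (accept nearly everyone)
        for t in candidates:
            if tpr(scores[mask], labels[mask], t) >= target_tpr:
                chosen = t
                break
        thresholds[g] = chosen
    return thresholds

# Toy usage with random scores; in practice these come from a trained model.
rng = np.random.default_rng(0)
scores = rng.uniform(size=200)
labels = rng.integers(0, 2, size=200)
groups = rng.choice(["A", "B"], size=200)
print(per_group_thresholds(scores, labels, groups))
```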
One of the main challenges is the lack of diverse and representative training data.
As mentioned earlier, data bias can lead to biased outputs from AI systems. However,
collecting diverse and representative data can be challenging, especially when dealing
with sensitive or rare events. Additionally, there may be privacy concerns when collecting
certain types of data, such as medical records or financial information. These challenges
can limit the effectiveness of dataset augmentation as a mitigation approach.
Another challenge is the difficulty of identifying and measuring different types of bias
in AI systems. Algorithmic bias can be difficult to detect and quantify, especially when the
algorithms are complex or opaque. Additionally, the sources of bias may be difficult to
isolate, as bias can arise from multiple sources, such as the data, the algorithm, and the user.
This can limit the effectiveness of bias-aware algorithms and user feedback mechanisms as
mitigation approaches.
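Even so, some coarse group-level measurements are straightforward to compute once predictions, labels, and group membership are available. The hypothetical sketch below reports a statistical parity difference and a false positive rate gap, two of the simpler quantities used in bias audits; a real audit would add more metrics and uncertainty estimates.

```python
# Sketch of simple group-level bias measurements from model predictions.
import numpy as np

def selection_rate(y_pred, mask):
    return y_pred[mask].mean()

def false_positive_rate(y_true, y_pred, mask):
    negatives = mask & (y_true == 0)
    return y_pred[negatives].mean() if negatives.any() else 0.0

def bias_report(y_true, y_pred, groups, group_a, group_b):
    a, b = groups == group_a, groups == group_b
    return {
        "statistical_parity_difference":
            selection_rate(y_pred, a) - selection_rate(y_pred, b),
        "false_positive_rate_gap":
            false_positive_rate(y_true, y_pred, a)
            - false_positive_rate(y_true, y_pred, b),
    }

# Toy usage with hand-made predictions and group labels.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(bias_report(y_true, y_pred, groups, "A", "B"))
```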
Moreover, mitigation approaches may introduce trade-offs between fairness and
accuracy. For example, one approach to reducing algorithmic bias is to modify the algorithm
to ensure that it treats all groups equally. However, this may result in reduced accuracy
for certain groups or in certain contexts. Achieving both fairness and accuracy can be
challenging and requires careful consideration of the trade-offs involved.
Finally, there may be ethical considerations around how to prioritize different types of
bias and which groups to prioritize in the mitigation of bias. For example, should more
attention be paid to bias that affects historically marginalized groups, or should all types
of bias be given equal weight? These ethical considerations can add complexity to the
development and implementation of bias mitigation approaches.
Despite these challenges, addressing bias in AI is crucial for creating fair and equitable
systems. Ongoing research and development of mitigation approaches are necessary to
overcome these challenges and to ensure that AI systems are used for the benefit of all
individuals and society.
5. Fairness in AI
Fairness in AI is a critical topic that has received a lot of attention in both academic
and industry circles. At its core, fairness in AI refers to the absence of bias or discrimination
in AI systems, which can be challenging to achieve due to the different types of bias that
can arise in these systems. There are several types of fairness proposed in the literature,
including group fairness, individual fairness, and counterfactual fairness. While fairness
and bias are closely related concepts, they differ in important ways: fairness is an intentional
design goal, whereas bias can arise unintentionally. Achieving
fairness in AI requires careful consideration of the context and stakeholders involved.
Real-world examples of fairness in AI demonstrate the potential benefits of incorporating
fairness into AI systems.
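For reference, common formalizations of these notions from the fairness literature can be written as follows; the notation is ours and slightly simplified, with Y the true label, \hat{Y} the model's prediction, A the sensitive attribute, and X the remaining features.

```latex
% Group fairness (demographic parity): equal positive prediction rates across groups.
P(\hat{Y} = 1 \mid A = a) = P(\hat{Y} = 1 \mid A = b)

% Equalized odds: equal true and false positive rates across groups.
P(\hat{Y} = 1 \mid Y = y, A = a) = P(\hat{Y} = 1 \mid Y = y, A = b), \quad y \in \{0, 1\}

% Individual fairness (Dwork et al. [25]): similar individuals receive similar outputs,
% for a task-specific similarity metric d and output distance D.
D\big(M(x), M(x')\big) \le L \, d(x, x')

% Counterfactual fairness: the prediction is unchanged in the counterfactual world
% where the sensitive attribute takes a different value.
P(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a)
  = P(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a)
```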
6.1. Overview of Current Approaches to Ensure Fairness in AI, Including Group Fairness and
Individual Fairness
Ensuring fairness in AI is a complex and evolving field, with various approaches being
developed to address different aspects of fairness. Two key approaches that have emerged
are group fairness and individual fairness.
Group fairness is concerned with ensuring that AI systems are fair to different groups
of people, such as people of different genders, races, or ethnicities. Group fairness aims
to prevent the AI system from systematically discriminating against any group. This
can be achieved through various techniques such as re-sampling, pre-processing, or post-
processing of the data used to train the AI model. For example, if an AI model is trained
on data that are biased toward a particular group, re-sampling techniques can be used to
create a balanced dataset where each group is represented equally. Other techniques, such
as pre-processing or post-processing, can be used to adjust the output of the AI model to
ensure that it does not unfairly disadvantage any group. Corbett-Davies and collaborators
introduced risk-minimization approaches aimed at minimizing disparities [27,28].
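One way to operationalize fairness-oriented model selection is sketched below under illustrative assumptions (toy data, two candidate classifiers, a fixed accuracy floor): among candidates that meet a minimum accuracy on a validation split, pick the model with the smallest gap in positive prediction rates across groups.

```python
# Sketch of fairness-aware model selection: smallest demographic parity gap
# among candidate models that clear a minimum validation accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

def parity_gap(y_pred, groups):
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def select_fair_model(candidates, X_tr, y_tr, X_val, y_val, g_val, min_acc=0.7):
    best, best_gap = None, float("inf")
    for model in candidates:
        model.fit(X_tr, y_tr)
        y_pred = model.predict(X_val)
        acc = (y_pred == y_val).mean()
        gap = parity_gap(y_pred, g_val)
        if acc >= min_acc and gap < best_gap:
            best, best_gap = model, gap
    return best, best_gap

# Toy data: two features, a binary label, and a binary group attribute.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 2))
y = (X[:, 0] + rng.normal(scale=0.5, size=600) > 0).astype(int)
groups = rng.choice(["A", "B"], size=600)
X_tr, X_val, y_tr, y_val, g_tr, g_val = train_test_split(
    X, y, groups, test_size=0.3, random_state=0)

candidates = [LogisticRegression(), DecisionTreeClassifier(max_depth=3)]
print(select_fair_model(candidates, X_tr, y_tr, X_val, y_val, g_val))
```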
Individual fairness, on the other hand, is concerned with ensuring that AI systems
are fair to individuals, regardless of their group membership. Individual fairness aims to
prevent the AI system from making decisions that are systematically biased against certain
individuals. Individual fairness can be achieved through techniques such as counterfactual
fairness or causal fairness. For example, counterfactual fairness aims to ensure that the AI
model would have made the same decision for an individual, regardless of race or gender.
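A very rough way to probe this property is an "attribute flip" check, sketched below: flip the sensitive attribute in each input and count how often the prediction changes. This naive flip test is only a crude proxy, since genuine counterfactual fairness requires a causal model of how the sensitive attribute influences the other features; the data and model here are purely illustrative.

```python
# Crude flip-test sketch (an intuition pump, not a counterfactual fairness test).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy data where the sensitive attribute is (problematically) available to the model.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "score": rng.normal(size=n),
    "sensitive": rng.integers(0, 2, size=n),
})
y = ((df["score"] + 0.8 * df["sensitive"]) > 0.5).astype(int)

model = LogisticRegression().fit(df, y)

# Flip the sensitive attribute and count how often the prediction changes;
# a nonzero rate indicates the decision depends directly on that attribute.
flipped = df.copy()
flipped["sensitive"] = 1 - flipped["sensitive"]
change_rate = (model.predict(df) != model.predict(flipped)).mean()
print(f"predictions changed for {change_rate:.1%} of individuals")
```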
While group fairness and individual fairness are important approaches to ensuring
fairness in AI, they are not the only ones. Other approaches include transparency, account-
ability, and explainability. Transparency involves making the AI system’s decision-making
process visible to users, while accountability involves holding the system’s developers
responsible for any harm caused by the system. Explainability involves making the AI
system’s decisions understandable to users [26,38].
Overall, ensuring fairness in AI is a complex and ongoing challenge that requires a
multi-disciplinary approach involving experts from fields such as computer science, law,
ethics, and social science. By developing and implementing a range of approaches to
ensure fairness, we can work towards creating AI systems that are unbiased, transparent,
and accountable.
7. Conclusions
In conclusion, this paper has illuminated the various sources of biases in AI and
ML systems and their profound societal impact, with an extended discussion on the
emergent concerns surrounding generative AI bias [41]. It is clear that these powerful
computational tools, if not diligently designed and audited, have the potential to perpetuate
and even amplify existing biases, particularly those related to race, gender, and other
societal constructs [40–43]. We have considered numerous examples of biased AI systems,
with a particular focus on the intricacies of generative AI, which illustrates the critical need
for comprehensive strategies to identify and mitigate biases across the entire spectrum of
the AI development pipeline [44–48].
To combat bias, this paper has highlighted strategies such as robust data augmentation,
the application of counterfactual fairness, and the imperative for diverse, representative
datasets alongside unbiased data collection methods [49–51]. We also considered the ethical
implications of AI in preserving privacy and the necessity for transparency, oversight, and
continuous evaluation of AI systems [52–54].
As we look to the future, research in fairness and bias in AI and ML should prioritize
the diversification of training data and address the nuanced challenges of bias in generative
models, especially those used for synthetic data creation and content generation. It is im-
perative to develop comprehensive frameworks and guidelines for responsible AI and ML,
which include transparent documentation of training data, model choices, and generative
processes. Diversifying the teams involved in AI development and evaluation is equally
crucial, as it brings a multiplicity of perspectives that can better identify and correct for
biases [55–57].
Lastly, the establishment of robust ethical and legal frameworks governing AI and
ML systems is paramount, ensuring that privacy, transparency, and accountability are not
afterthoughts but foundational elements of the AI development lifecycle [35]. Research
must also explore the implications of generative AI, ensuring that as we advance in creating
ever more sophisticated synthetic realities, we remain vigilant and proactive in safeguard-
ing against the subtle encroachment of biases that could shape society in unintended and
potentially harmful ways.
References
1. Buolamwini, J.; Gebru, T. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Proceedings
of the 1st Conference on Fairness, Accountability and Transparency, New York, NY, USA, 23–24 February 2018; pp. 77–91.
2. Dastin, J. Amazon scraps secret AI recruiting tool that showed bias against women. In Ethics of Data and Analytics; Auerbach
Publications: Boca Raton, FL, USA, 2018; pp. 296–299.
3. Eubanks, V. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor; St. Martin’s Press: New York, NY,
USA, 2018.
4. Kleinberg, J.; Lakkaraju, H.; Leskovec, J.; Ludwig, J.; Mullainathan, S. Human decisions and machine predictions. Q. J. Econ. 2018,
133, 237–293. [PubMed]
5. Kleinberg, J.; Ludwig, J.; Mullainathan, S.; Sunstein, C.R. Discrimination in the Age of Algorithms. J. Leg. Anal. 2018, 10, 113–174.
[CrossRef]
6. Kleinberg, J.; Ludwig, J.; Mullainathan, S.; Rambachan, A. Algorithmic fairness. AEA Pap. Proc. 2018, 108, 22–27. [CrossRef]
7. O’Neil, C. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy; Broadway Books: New York,
NY, USA, 2016.
8. Asan, O.; Bayrak, A.E.; Choudhury, A. Artificial intelligence and human trust in healthcare: Focus on clinicians. J. Med. Internet
Res. 2020, 22, e15154. [CrossRef] [PubMed]
9. Berk, R.; Heidari, H.; Jabbari, S.; Kearns, M.; Roth, A. Fairness in Criminal Justice Risk Assessments: The State of the Art. Sociol.
Methods Res. 2018, 47, 175–210. [CrossRef]
10. Friedler, S.A.; Scheidegger, C.; Venkatasubramanian, S.; Choudhary, S.; Hamilton, E.P.; Roth, D. A comparative study of fairness-
enhancing interventions in machine learning. In Proceedings of the Conference on Fairness, Accountability, and Transparency,
Atlanta, GA, USA, 29–31 January 2019; pp. 329–338.
11. Yan, S.; Kao, H.T.; Ferrara, E. Fair class balancing: Enhancing model fairness without observing sensitive attributes. In
Proceedings of the 29th ACM International Conference on Information & Knowledge Management, Online, 26 June–31 July 2020;
pp. 1715–1724.
12. Caliskan, A.; Bryson, J.J.; Narayanan, A. Semantics derived automatically from language corpora contain human-like biases.
Science 2017, 356, 183–186. [CrossRef] [PubMed]
13. European Commission. Ethics Guidelines for Trustworthy AI. Commission Communication. 2019. Available online: https://op.europa.eu/en/publication-detail/-/publication/d3988569-0434-11ea-8c1f-01aa75ed71a1 (accessed on 15 December 2023).
14. Ferrara, E. Should ChatGPT be Biased? Challenges and Risks of Bias in Large Language Models. First Monday 2023, 28.
15. Kleinberg, J.; Mullainathan, S.; Raghavan, M. Inherent trade-offs in the fair determination of risk scores. In Proceedings of the
Innovations in Theoretical Computer Science (ITCS), Berkeley, CA, USA, 9–11 January 2017.
16. Schwartz, R.; Vassilev, A.; Greene, K.; Perine, L.; Burt, A.; Hall, P. Towards a Standard for Identifying and Managing Bias in Artificial
Intelligence; NIST Special Publication: Gaithersburg, MD, USA, 2022; Volume 1270, pp. 1–77.
17. Crawford, K.; Calo, R. There is a blind spot in AI research. Nature 2016, 538, 311–313. [CrossRef]
18. Selbst, A.D.; Boyd, D.; Friedler, S.A.; Venkatasubramanian, S.; Vertesi, J. Fairness and abstraction in sociotechnical systems. In
Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA, 29–31 January 2019; pp. 59–68.
19. Angwin, J.; Larson, J.; Mattu, S.; Kirchner, L. Machine bias. In Ethics of Data and Analytics; Auerbach Publications: Boca Raton, FL,
USA, 2016; pp. 254–264.
20. Obermeyer, Z.; Powers, B.; Vogeli, C.; Mullainathan, S. Dissecting racial bias in an algorithm used to manage the health of
populations. Science 2019, 366, 447–453. [CrossRef]
21. Ferrara, E. GenAI against humanity: Nefarious applications of generative artificial intelligence and large language models. arXiv
2023, arXiv:2310.00737. [CrossRef]
22. Ferrara, E. The butterfly effect in artificial intelligence systems: Implications for AI bias and fairness. arXiv 2023, arXiv:2307.05842.
[CrossRef]
23. Mittelstadt, B.D.; Allo, P.; Taddeo, M.; Wachter, S.; Floridi, L. The ethics of algorithms: Mapping the debate. Big Data Soc. 2016,
3, 2053951716679679. [CrossRef]
24. Sweeney, L. Discrimination in online ad delivery. Commun. ACM 2013, 56, 44–54. [CrossRef]
25. Dwork, C.; Hardt, M.; Pitassi, T.; Reingold, O.; Zemel, R. Fairness through awareness. In Proceedings of the 3rd Innovations in
Theoretical Computer Science Conference, Cambridge, MA, USA, 8–10 January 2012; pp. 214–226.
26. Ananny, M.; Crawford, K. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic
accountability. New Media Soc. 2018, 20, 973–989. [CrossRef]
27. Corbett-Davies, S.; Pierson, E.; Feller, A.; Goel, S.; Huq, A. Algorithmic decision making and the cost of fairness. In Proceedings
of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Halifax, NS, Canada, 13–17
August 2017; pp. 797–806.
28. Corbett-Davies, S.; Goel, S. The measure and mismeasure of fairness: A critical review of fair machine learning. arXiv 2018,
arXiv:1808.00023.
29. Gebru, T.; Morgenstern, J.; Vecchione, B.; Vaughan, J.W.; Wallach, H.; Iii, H.D.; Crawford, K. Datasheets for datasets. Commun.
ACM 2021, 64, 86–92. [CrossRef]
30. Zafar, M.B.; Valera, I.; Gomez Rodriguez, M.; Gummadi, K.P. Fairness beyond disparate treatment & disparate impact: Learning
classification without disparate mistreatment. In Proceedings of the 26th International Conference on World Wide Web, Perth,
Australia, 3–7 May 2017; pp. 1171–1180.
31. Kamiran, F.; Calders, T. Data preprocessing techniques for classification without discrimination. Knowl. Inf. Syst. 2012, 33, 1–33.
[CrossRef]
32. Barocas, S.; Selbst, A.D. Big data’s disparate impact. Calif. Law Rev. 2016, 104, 671–732. [CrossRef]
33. Bolukbasi, T.; Chang, K.W.; Zou, J.Y.; Saligrama, V.; Kalai, A.T. Man is to computer programmer as woman is to homemaker?
Debiasing word embeddings. Adv. Neural Inf. Process. Syst. 2016, 29, 4349–4357.
34. Ferguson, A.G. Predictive policing and reasonable suspicion. Emory LJ 2012, 62, 259. [CrossRef]
35. Wachter, S.; Mittelstadt, B.; Russell, C. Counterfactual explanations without opening the black box: Automated decisions and the
GDPR. Harv. J. Law Technol. 2018, 31, 841–887. [CrossRef]
36. Žliobaitė, I. Measuring discrimination in algorithmic decision making. Data Min. Knowl. Discov. 2017, 31, 1060–1089. [CrossRef]
37. Crawford, K.; Paglen, T. Excavating AI: The politics of images in machine learning training sets. AI Soc. 2021, 36, 1105–1116.
[CrossRef]
38. Donovan, J.; Caplan, R.; Matthews, J.; Hanson, L. Algorithmic Accountability: A Primer; Data & Society: New York, NY, USA, 2018.
39. Ezzeldin, Y.H.; Yan, S.; He, C.; Ferrara, E.; Avestimehr, S. Fairfed: Enabling group fairness in federated learning. In Proceedings of
the AAAI 2023—37th AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023.
40. Crenshaw, K. Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist
theory and antiracist politics. In Feminist Legal Theories; Routledge: London, UK, 1989; pp. 23–51.
41. Nicoletti, L.; Bass, D. Humans Are Biased: Generative AI Is Even Worse. Bloomberg Technology + Equality, 23 June 2023.
42. Cirillo, D.; Catuara-Solarz, S.; Morey, C.; Guney, E.; Subirats, L.; Mellino, S.; Gigante, A.; Valencia, A.; Rementeria, M.J.;
Chadha, A.S.; et al. Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare. NPJ Digit. Med.
2020, 3, 81. [CrossRef] [PubMed]
43. Noble, S.U. Algorithms of Oppression: How Search Engines Reinforce Racism; NYU Press: New York, NY, USA, 2018.
44. Chouldechova, A. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data 2017,
5, 153–163. [CrossRef] [PubMed]
45. Huang, J.; Galal, G.; Etemadi, M.; Vaidyanathan, M. Evaluation and mitigation of racial bias in clinical machine learning models:
Scoping review. JMIR Med. Inform. 2022, 10, e36388. [CrossRef] [PubMed]
46. Park, J.; Arunachalam, R.; Silenzio, V.; Singh, V.K. Fairness in Mobile Phone-Based Mental Health Assessment Algorithms:
Exploratory Study. JMIR Form. Res. 2022, 6, e34366. [CrossRef] [PubMed]
47. Ricci Lara, M.A.; Echeveste, R.; Ferrante, E. Addressing fairness in artificial intelligence for medical imaging. Nat. Commun. 2022,
13, 4581. [CrossRef]
48. Yan, S.; Huang, D.; Soleymani, M. Mitigating biases in multimodal personality assessment. In Proceedings of the 2020 International
Conference on Multimodal Interaction, Utrecht, The Netherlands, 25–29 October 2020; pp. 361–369.
49. Chouldechova, A.; Roth, A. The frontiers of fairness in machine learning. arXiv 2018, arXiv:1810.08810.
50. Mehrabi, N.; Morstatter, F.; Saxena, N.; Lerman, K.; Galstyan, A. A survey on bias and fairness in machine learning. ACM Comput.
Surv. (CSUR) 2021, 54, 1–35. [CrossRef]
51. Verma, S.; Rubin, J. Fairness definitions explained. In Proceedings of the International Workshop on Software Fairness, Gothen-
burg, Sweden, 29 May 2018; pp. 1–7.
52. Lipton, Z.C. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and
slippery. Queue 2018, 16, 31–57. [CrossRef]
53. Mitchell, M.; Wu, S.; Zaldivar, A.; Barnes, P.; Vasserman, L.; Hutchinson, B.; Spitzer, E.; Raji, I.D.; Gebru, T. Model cards for model
reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA, 29–31 January
2019; pp. 220–229.
54. Raji, I.D.; Buolamwini, J. Actionable auditing: Investigating the impact of publicly naming biased performance results of
commercial AI products. In Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA,
29–31 January 2019; pp. 77–86.
55. Chauhan, P.S.; Kshetri, N. The Role of Data and Artificial Intelligence in Driving Diversity, Equity, and Inclusion. Computer 2022,
55, 88–93. [CrossRef]
56. Holstein, K.; Wortman Vaughan, J.; Daumé, H., III; Dudik, M.; Wallach, H. Improving fairness in machine learning systems: What
do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, Glasgow,
UK, 4–9 May 2019; pp. 1–16.
57. Stathoulopoulos, K.; Mateos-Garcia, J.C.; Owen, H. Gender Diversity in AI Research. 2019. Available online: https://www.nesta.org.uk/report/gender-diversity-ai/ (accessed on 15 December 2023).