Ken Masters
To cite this article: Ken Masters (2023) Ethical use of Artificial Intelligence in Health Professions Education: AMEE Guide No. 158, Medical Teacher, 45:6, 574-584, DOI: 10.1080/0142159X.2023.2186203
AMEE GUIDE
ABSTRACT

Health Professions Education (HPE) has benefitted from the advances in Artificial Intelligence (AI) and is set to benefit more in the future. Just as any technological advance opens discussions about ethics, so the implications of AI for HPE ethics need to be identified, anticipated, and accommodated so that HPE can utilise AI without compromising crucial ethical principles. Rather than focussing on AI technology, this Guide focuses on the ethical issues likely to face HPE teachers and administrators as they encounter and use AI systems in their teaching environment. While many of the ethical principles may be familiar to readers in other contexts, they will be viewed in light of AI, and some unfamiliar issues will be introduced. They include data gathering, anonymity, privacy, consent, data ownership, security, bias, transparency, responsibility, autonomy, and beneficence. In the Guide, each topic explains the concept and its importance and gives some indication of how to cope with its complexities. Ideas are drawn from personal experience and the relevant literature. In most topics, further reading is suggested so that readers may further explore the concepts at their leisure. The aim is for HPE teachers and decision-makers at all levels to be alert to these issues and to take proactive action to be prepared to deal with the ethical problems and opportunities that AI usage presents to HPE.

KEYWORDS
Ethics; artificial intelligence; medical education; health professions education; ChatGPT
Introduction
Practice points
• AI introduces and amplifies ethical issues in HPE, for which educators are unprepared.
• This Guide identifies and explains a wide range of these issues, including data gathering, anonymity, privacy, consent, data ownership, security, bias, transparency, responsibility, autonomy, and beneficence, and gives some guidance on how to deal with them.

Artificial intelligence: background and the health professions

In 1950, Alan Turing posed the question 'Can machines think?' (Turing 1950) and followed it with a philosophical discussion about definitions of machines and thinking. Barely five years later, John McCarthy and colleagues introduced the term 'Artificial Intelligence' (AI) (McCarthy 1955). Although they did not precisely define the term, they did raise the 'conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it', and the term has been used since then to refer to activities by computer systems designed to mimic and expand upon the intellectual activities and capabilities of humans.

AI currently influences every field of human enquiry, and is well-established as a research area in the health professions: a PubMed search with the phrase 'Artificial Intelligence' returns some 60,000 articles published in the last 20 years, and AI is generally regarded as 'integral to health-care systems' (Lattouf 2022). In Health Professions Education (HPE), articles focusing on AI show a range of typical paper types, including AMEE Guides, randomised controlled trials, and systematic literature reviews (Lee et al. 2021).

Ethics in HPE

The role of ethics in the context of AI usage in HPE is less clear, not least because of the way in which ethics is viewed in HPE institutions.

Firstly, it is routine for HPE institutions to have Research Ethics Boards or Clinical Ethics Boards, but not Educational Ethics Boards. (Although the post of Academic Integrity Officer or similar may exist, descriptions of the role usually focus on the academic integrity of students, not teachers (Vogt and Eaton 2022)).
In fact, when one speaks of an Institutional Ethics or Review Board (IRB), the assumption is that one is speaking of research, and not education practice. Faculty routinely submit research protocols to specialised research ethics committees, but do not routinely submit course outlines to educational ethics committees.

Secondly, despite a frequent blurring between 'evaluation' and 'research' in HPE (Sandars et al. 2017), many universities do not classify course evaluation student data as research data, or they classify them as 'quality improvement', and so do not require prior ethics approval for such data gathering, storage and use. Alternately, some institutions use the rather enigmatic term 'approved with exempt status'.

Thirdly, national legislation further clouds the picture: in the USA, the Department of Health and Human Services (Regulation 46.104(d)) (HHS 2021) appears to exempt virtually all research of classroom activities as long as participants are not identified; Dutch law automatically determines education research 'exempt from ethical review' (ten Cate 2009); Danish law automatically grants national ethics pre-approval to all health surveys, unless 'the project includes human biological material' (såfremt projektet omfatter menneskeligt biologisk materiale) (Sundheds- og Ældreministeriet 2020); and Finland does not require ethics approval for surveys of people over the age of 15 (FNB 2019).

These inconsistencies lead to an uncomfortable situation in which HPE journals receive manuscripts dealing with student surveys for which researchers have not obtained prior ethics approval because these data were considered 'evaluation' data, and HPE institutions have not required prior ethics approval. As a result, a tension exists between journals and medical education researchers, because journals are increasingly pressuring researchers to obtain ethics approval for their research (ten Cate 2009; Sandars et al. 2017; Masters 2019; Hays and Masters 2020), but this causes problems when an institution follows national guidelines, and declines to review the ethics application.

AI ethics in HPE

Into this murky area, AI is introduced, and, as will be seen from the discussion below, AI amplifies the ethical complexities for which many researchers (and institutions) are unprepared.

Some countries and regions have set up broad AI principles and ethics, but their mention of education is very broad, mostly referring to the need to educate people about AI and AI ethics (e.g. HLEG 2019). A recent, small (4-question) study in Turkey by Celik assessed K-12 teachers' knowledge of ethics and AI in education (Celik 2023). Although there are now many articles on AI in HPE, they frequently focus on the technical and practical aspects, and the ethical discussion refers primarily to the impact on ethics of AI in medical practice or clinical activities using AI, rather than the ethical use of AI in medical or health professions education (Chan and Zary 2019; Masters 2019; Masters 2020a; Rampton et al. 2020; Çalışkan et al. 2022; Cobianchi et al. 2022; Grunhut et al. 2022).

Although there is recent concern about ethical use of AI in HPE (Arizmendi et al. 2022), there is currently little to guide the HPE teacher in how to ensure that they are using AI ethically in their teaching. As a result, a more direct awareness of the complexities of ethics while using AI in HPE is required. These complexities include those around the amount of data gathered, anonymity and privacy, consent, data ownership, security, data and algorithm bias, transparency, responsibility, autonomy, and beneficence. This Guide attempts to meet that requirement, and will focus on the relationships between institution, teacher, and student. In addition, while there will be some discussion of the interactions with patients, those ethical issues are best covered in medical ethics, and so are not the focus of this Guide.

Because many readers may be unfamiliar with the implications of ethical issues in light of AI, each topic will, of necessity, detail the nature and scope of the problem before describing possible solutions. In some cases, the path ahead might not always be clear, but an awareness of the problem and its scope is a first step to solving that problem.

In addition, the AI system named Generative Pre-trained Transformer (GPT) was released to the public in November 2022, and this release highlighted many practical examples of ethical issues to be considered in HPE, so frequent reference to that system will be made. (In AI terms, one might even be tempted to consider a pre-ChatGPT world vs. a post-ChatGPT world, but it must be remembered that the major significant difference was its public release and accessibility, rather than its development.)

One ethical complication concerns ChatGPT's principles: its response to a query on its ethical principles and policies is that it does 'not have the capacity to follow any ethical principles or policies' (see Appendix 1). This approach to ethics is reminiscent of the belief that educational technology is inherently ethically neutral – a position long-since debunked. The implications of this perception will be referred to later in the Guide.

The Guide then ends with a broader Discussion of the first steps to be taken.

A little technical

This Guide avoids being overly-technical, and so will not become distracted with definitions of machine learning, deep learning, and other AI terminology, but some technical jargon is required. At this point, it is necessary to explain two technical concepts that may not be familiar to readers: the algorithm and the model.

For the purposes of this paper, it is necessary to know only that an algorithm is a step-by-step process followed by a computer system to solve a problem. Some algorithms are designed entirely by humans, some entirely by computers, and some are a hybrid mix of both. In AI, the algorithms can be extremely complex. Algorithms are referred to again in the topics below.

A model is the full computer program that uses the algorithms to perform tasks (e.g. making predictions, or classifying images). Because the word model is used in so many other fields, this Guide will refer primarily to algorithms, to reduce confusion. [For a little more detail on models, see Klinger (2021).]
The main ethics topics of concern

Guarding against excessive data collection

In HPE research, there is a concern about excessive data collection from students (Masters 2020b). In traditional HPE research (such as student and staff surveys), in the cases when ethics approval is required, IRBs review data collection tools and query the relevance of specific data categories and questions, and researchers must justify the need for those. Justifications are usually for testing hypotheses, or so that results can be compared to findings given in the literature.

Although there may be weaknesses in IRB-approval, the process does go some way in reducing excessive data collection: the data to be collected are pre-identified, focused, and can be traced directly to the pre-approved survey forms and other data-gathering tools.

AI changes the data collection process. A strength of AI is the use of Big Data: the collection and use of large amounts of data from many varied datasets. As a result, data that might previously have been considered noise are frequently found to be relevant, and previously unknown patterns are identified. In clinical care, the issue of clinical 'relevance' has been of concern for many years (Lingard and Haber 1999), and a recent example of using large amounts of seemingly irrelevant data is a study in which an AI system was able to determine a person's race from x-ray images (Gichoya et al. 2022).

When trying to understand the ethical concerns of using AI in HPE, it is crucial to understand that Big Data does not mean just a little more data, but refers to amounts and types of data on an unprecedented scale. In HPE, a starting point for Big Data is the obvious student data, or Learning Analytics (Ellaway et al. 2014; Goh and Sandars 2019; ten Cate et al. 2020), that can be easily electronically harvested from institutional sources, such as Learning Management Systems (LMSs), and, for clinical students, from Electronic Health Records (Pusic et al. 2023). The aims are generally laudable: using data-driven decisions to provide the best possible learning outcome for each student or to predict behaviour patterns.

Learning Analytics and the Big Data required for AI, however, go much further than the simple data that we commonly associate with education (e.g. test scores). Even without AI, statistical studies of medical students have examined behaviour, performance, and demographics to predict outcomes (Sawdon and McLachlan 2020; Arizmendi et al. 2022). Following lessons learned from social media, the data can now also include a much wider range of personal and behavioural data, such as face- and voice-prints, eye-tracking, number of clicks, breaks, screentime, location, general interests, online interactions with others, search history, and mapping to other devices on the local (i.e. home) network. These are then combined with other aspects and further interpreted (e.g. facial and textual analyses and stress levels to infer emotional status).

In addition, there is the expansion into a wider range of institutional databases, including those that record student (and staff) applications, financial records, general information, electronic venue access, network log files, and alumni data.

Finally, Big Data goes beyond institutionally-controlled databases: AI uses internet systems (e.g. social media accounts) outside the institution, allowing for computational social science that 'leverages the capacity to collect and analyze data with an unprecedented breadth and depth and scale' (Lazer et al. 2009). Following common practices in industry, these data can be used to create unique profiles of each user.

These data are usually collected at a hidden, electronic level, in a process known as 'passive data gathering', in which participants may not be aware of the data-gathering process. The only ethical requirement appears to be a single click to indicate that the user has read the 'Terms and Conditions'; and we all know how closely our students read those: about as closely as we read them. Accompanying the ethical problems of excessive data-gathering, there is also the increased risk of apophenia: finding spurious patterns and relationships between random items.

As a result, in this seeming drag-net approach, there is the risk that a great amount of irrelevant data are gathered.

As a first step in reducing this risk and to guard against excessive student and staff data-gathering, institutional protocols for active and passive data-gathering in HPE need to be clearly defined, justified, applied and monitored in the same way that research protocols are monitored by research IRBs. These protocols should be required for general teaching, formal research, and 'evaluation' data-gathering.

These preventative steps should not apply to online teaching only. Although the danger of excessive data gathering is generally greater in online teaching than in face-to-face teaching, tools also exist for passive data gathering in face-to-face situations, and, if these tools are to be used at all, great care should be taken about how the data are used. An example of one tool, created by Teradata, is demonstrated by Bornet (2021).

In addition, given the massive enlargement of the data catchment area beyond the institution, IRBs and national bodies need to urgently re-visit their ethics approval requirements, to ensure that they are still appropriate for advances in AI data-gathering processes. This will mean that ethical approval will need to be nuanced enough to account for the relationship between survey data and data in institutional and external databases.

For further reading on this topic, see Shah et al. (2012) and Boyd and Crawford (2011).
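One small, practical expression of such a protocol is a pre-approved field list that analysis code is not permitted to exceed. The fragment below is an illustrative sketch only (the file name and field names are invented, not taken from the Guide), but it shows how data minimisation can be made visible and auditable at the point of analysis.

# Illustrative sketch: enforce a pre-approved list of data fields before analysis.
import pandas as pd

APPROVED_FIELDS = ["cohort", "module", "quiz_score", "assignment_grade"]  # from the approved protocol

def load_minimal(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)
    extra = set(df.columns) - set(APPROVED_FIELDS)
    if extra:
        # Surfacing unapproved fields makes excessive collection visible and auditable.
        raise ValueError(f"Fields outside the approved protocol: {sorted(extra)}")
    return df[[c for c in APPROVED_FIELDS if c in df.columns]]

# Example usage (hypothetical file): df = load_minimal("course_evaluation_2023.csv")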
Protecting anonymity and privacy

Given this large amount of student data collected, there is a risk to student anonymity and privacy. This risk is possibly exacerbated by the well-meant principles of FAIR (Wilkinson et al. 2016) and Open Science, through which journals increasingly encourage or require raw data to be submitted with research papers.

Partial solutions to the problem of protecting student anonymity and privacy have been found. For example, researchers can anonymise student identities by creating temporary subject-generated identification codes (SGICs: Damrosch 1986) based on personal characteristics (e.g. first letter of father's name).
In addition, to meet the FAIR and Open Science requirements, data are further anonymised (or de-identified), a process usually involving removing Personally-Identifying Information (PII) (e.g. social security/insurance numbers, zip/postal codes), or data-grouping/categorisation in order to hide details (McCallister et al. 2010).

Unfortunately, researchers know that complete data anonymisation is a myth (Narayanan and Shmatikov 2009) because data de-anonymisation (or re-identification), in which data are cross-referenced with other data sets and persons are re-identified, is well-developed. With the ever-increasing number of data sets used by Big Data, and more powerful AI capabilities, the risk of data de-anonymisation grows. When coupled with social media databases, all the information in SGICs is easily accessible, and these SGICs are easily de-anonymised.

Even on a local scale, data de-anonymisation is possible. For example, institutions use LMSs to run anonymous student surveys, and Supplementary Appendix 2 gives an example case showing how anonymous data from a popular LMS can be easily de-anonymised.

The risk of data de-anonymisation increases as teaching systems become more sophisticated: automated 'personalised' education requires tracking, and HPE institutions may team up with industry to use their AI technology on de-identified data, but lack of control of the data can lead to problems of data ownership and sharing (Boninger and Molnar 2022).

In addition, HPE data are frequently qualitative, increasing the risk of de-anonymisation, and that is a partial reason that researchers are reluctant to share their data with others (Gabelica et al. 2022). AI's Natural Language Processing (NLP) is already being used in HPE to identify and categorise items through semantic classifications (Tremblay et al. 2019; Gin et al. 2022), but people belong to social groups, and there is a strong link between language usage and these social groups (Byram 2022). As a result, the ability for NLP to identify people through non-PII, such as a combination of language idiosyncrasies and behaviours (e.g. spelling, grammar, colloquialisms, supporting a particular sports team, etc.), is a relatively trivial task.

The use of third-party tools further increases the risk to student anonymity: a possible response to ChatGPT is the suggestion to have students explicitly use it, and then discuss responses in class. While this has educational value, ChatGPT usage requires registration and identification, and that identification will also be linked to any data supplied by the students, and so anonymity will be compromised. In addition, ChatGPT has the potential to assist in the grading process, especially of written assignments, but institutional policy would need to accommodate a scenario that permits student work to be uploaded, and usage guidelines (e.g. marking schemes, degrees of reliance) for faculty need to be clear.

That said, although data anonymisation is fraught with danger, attempting to do so is better than no attempt at all. Some steps for data anonymisation are:
• Closely examine the tracking data collected by the LMS and other teaching systems to ensure that no cross-referencing can occur.
• Deny third parties' access to any of the tracking data from the LMS or any apps.
• Take care when implementing social media widgets, as these frequently gather 'anonymous' data, and can track students, even if they are not registered with those social media.
• Closely inspect qualitative data and redact items if there is the risk that they could be used for de-anonymisation.
• If external systems require registration with an email address, then the institution could consider creating temporary non-identifiable email addresses for students to use. Institutional registration may also be required to ensure equitable student accessibility, should ChatGPT (or other systems) begin to charge for usage.

These steps, however, will require institutional or legal power, so, again, it is important that IRBs and national bodies re-visit their ethics approval requirements. For the next steps, a useful (albeit somewhat complex) guideline on data-anonymisation is OCR (2012).
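As a small, hedged illustration of the first two steps (the column names, data and salt below are invented, and this is a sketch of pseudonymisation rather than true anonymisation), direct identifiers can be dropped and student identifiers replaced with salted hashes before any sharing or AI processing. Re-identification through the remaining quasi-identifiers is still possible, which is precisely the risk discussed above.

# Illustrative pseudonymisation sketch; field names and salt are hypothetical.
# Dropping direct identifiers and hashing IDs reduces, but does not remove,
# the risk of re-identification via the remaining quasi-identifiers.
import hashlib
import pandas as pd

DIRECT_IDENTIFIERS = ["name", "email", "national_id"]   # remove outright
SALT = "institution-secret-salt"                        # keep secret; rotate per data release

def pseudonymise(df: pd.DataFrame) -> pd.DataFrame:
    out = df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])
    out["student_code"] = [
        hashlib.sha256((SALT + str(sid)).encode()).hexdigest()[:12]
        for sid in out.pop("student_id")
    ]
    return out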
Ensuring full consent

There are several ethical questions surrounding student consent that need to be addressed. These include: Are students aware that we gather these data? Have they given consent? To what have they consented? And, if so, has that consent been given voluntarily? One may argue that checking a consent tick box is enough, because that is an 'industry standard', but this argument ignores important issues:
• Firstly, the student 'industry' is not a computer system, but, rather, education. Using common practices from social media and other similar sites is inappropriate, as those sites are not part of the education 'industry'.
• Secondly, merely because something is widely practised does not make it a standard of best practice. It means only that other places are doing the same thing. Having many institutions doing the same thing does not make it right or ethical.
• Thirdly, people may (and do) exercise their free choice, and disengage from social media platforms, or access some without registration, or even give false registration details. That choice is not available to our students who wish to access their materials through an LMS, so there is nothing 'voluntary' about their 'consent' to institutions' passively gathering their data.

As a result, it is essential that institutions apply the basic protections that they apply to research subjects when data on students are gathered, irrespective of the reasons for doing so. An education Ethics Board is required to ensure that student consent for active and passive data-gathering will be obtained in the same way that IRBs ensure that consent is obtained for research subjects.

In addition, protection of previous students' data must be implemented. In many cases, institutions have student data going back years, and will wish to have more information about future students. Safeguards against abuses must be implemented.

Further consideration must be given to how current student data will be used in the future. Student consent forms need to clearly indicate this (if the intention exists), but this may not always be clear, given that we might not even know how we are going to use it. It is clear, however, that we cannot simply hope or assume that current consent procedures are ethically acceptable.

Protecting student data ownership

While AI developers are very clear on their intellectual property and ownership of their algorithms and software, there is less clarity on the concept of the subjects' owning their own data.

When considering student data ownership, HPE institutions need to account for the fact that there are two data types: data created by students (e.g. assignments), and data created by the institution about the students (e.g. tracking data, grades) (Jones et al. 2014). Ultimately, institutions need to answer the questions: do students have the right to claim ownership of their academic data, and what are the implications of this ownership for usage by institutions?

These are not insignificant questions: we expect government protection on the use of our data by social media and other companies, and that we should have a say in how our data are used, but education institutions frequently follow much the same questionable patterns when they use student data, and we appear to accept that usage. Taking direction from Valerie Billingham's phrase relating to medical usage of patient data, 'Nothing about me without me' (Wilcox 2018), students should have a say regarding how their data are used, with whom they are shared, and under what circumstances they are shared.

This problem is amplified in AI because the student data are used to develop the sophisticated algorithms on which the AI systems rely (more on this concept below). Addressing this issue will require institutions to make high-level and far-reaching ethical and logistic decisions.

Applying stricter security policies

In general, data security at Higher Education institutions leaves much to be desired. For years, specific areas like library systems have been routinely compromised (Masters 2009) and institutional policies are frequently ill-communicated to students (Brown and Klein 2020). Although world figures are not easily obtained, 2021 estimates are that 'since 2005, K–12 school districts and colleges/universities across the US have experienced over 1,850 data breaches, affecting more than 28.6 million records' (Cook 2021). This is a frightening statistic.

With the large-scale storage, sharing and coupling of data required by AI, the possibilities for much wider breaches grow. Not only do HPE institutions use AI to trawl databases, but hackers use these same methods to trawl institutional databases, giving the potential for a breach of a single database to balloon into several systems simultaneously.

HPE institutions' data security policies and practices, including those dealing with third-party data-sharing, will have to be significantly improved, tested and monitored. We would wish to avoid, for example, the wide-spread governmental surveillance of students that occurred with school children during the Covid-19 pandemic (Han 2022). A starting point is to ensure that all stored data (whether on networked machines, private laptops, or portable drives) are encrypted. Further details on how to accomplish this can be found in Masters (2020b).
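As a minimal, hedged sketch of that starting point (the file name is an invented example, and institutional key management is deliberately not addressed here), a widely used symmetric-encryption routine from Python's cryptography package can encrypt a stored data file at rest:

# Illustrative sketch: encrypting a stored data file with symmetric encryption.
# The file name is hypothetical; secure key storage and rotation are separate tasks.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # store this key securely, never beside the data
fernet = Fernet(key)

with open("student_records.csv", "rb") as f:
    encrypted = fernet.encrypt(f.read())

with open("student_records.csv.enc", "wb") as f:
    f.write(encrypted)

# Later, with the same key: original = fernet.decrypt(encrypted)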
Guarding against data and algorithm bias

The student as data

Before we can discuss data and algorithm bias, it is important to be a little technical, and to understand that, in AI systems, a person is not a person. Using the terminology of the General Data Protection Regulation (GDPR), a person is a Data Subject (European Parliament 2016; Brown and Klein 2020), i.e. a collection of data points or identifiers or variable values. So, within the context of AI in HPE, the term student is merely an identifier related to the Data Subject, and this identifier is important only insofar as it allows an operator to distinguish the attributes of one Data Subject from another Data Subject (which may be identified by other identifiers, such as faculty or teacher). The distinction afforded by these identifiers is primarily to aid in determining functionality, such as access permissions to online systems and long-term relationships, and is useful for reporting processes.

Yes, as far as AI is concerned, you and your students are merely data subjects or collections of data points and identifiers.

The 'merely', however, might be misleading, because these data points do have a crucial function in AI: they are used to create the algorithms. Whether designed by humans, machines, or co-designed, the algorithms are based on data. That means that we need to be able to completely trust the data so that we can trust the algorithms that are based on them.

Algorithm bias

Data can be dominated by some demographic identifiers and under-represent others (e.g. race, gender, cultural identity, disabilities, age) (Gijsberts et al. 2015), and so the algorithms formed according to those data will also reflect the dominance and under-representation. In addition, stereotypes inherent in the data labelling can be transferred to the AI algorithm (e.g. the number and labels of gender and race categories), and incorrect weightings can be attributed to data, or there can be unfounded connections between reality and the data indicators. Stereotypes have already been identified in HPE (Bandyopadhyay et al. 2022), and, when incorporated into AI, could lead to inappropriate algorithms that are inherently racist, sexist or otherwise prejudiced (Bender et al. 2021; Racism and Technology Center 2022). This characteristic is usually termed algorithm bias, and is a concern in all fields, including medicine (Dalton-Brown 2020; Straw 2020). In HPE, the impact of this bias can occur when any AI systems are used in staff and student recruitment, promotions, awards, internships, course design, and preferences.

In ChatGPT, the cultural bias is not always apparent, and may not be obvious in 'scientific' subjects, but the moment one steps into sensitive areas, the cultural bias, especially USA-centred bias (Rettberg 2022), becomes apparent, affecting the responses, and even stopping the conversation. In its dealing with some topics, rather than discussing them, it appears to apply a form of self-censorship, based on some reluctance to offend. That is not a good principle to apply to academic debate. Even though these biases may not be obviously apparent, experiments have exposed them (see Supplementary Appendix 1 for examples).

These responses are particularly pertinent, given that ChatGPT claims to follow no ethical principles or policies. Irrespective of whether one agrees with ChatGPT's responses to the question about the Holocaust and Joseph Conrad's work (Supplementary Appendix 1), it is obvious that it is an ethical position, and emphasises the point that there is no such thing as ethically-neutral AI, and all responses and decisions will have bias.

A starting point to reducing AI algorithm bias in HPE is to ensure that there is sufficient data diversity, although bearing in mind that size alone does not guarantee diversity (Bender et al. 2021; Arizmendi et al. 2022). Irrespective of AI, diversity in HPE is good practice (Ludwig et al. 2020), and this diversity will contribute to stronger and less-biased algorithms. Where the training data are not widely represented, this should be stated clearly and identified as a limitation. In addition, although the field of learning analytics is ever-evolving, educators must be careful about drawing too-strong associations (and causation) between student activities and perceived effects.

Tools are also being developed to check for data and algorithm bias (e.g. PROBAST: Wolff et al. 2019), although more directed tools are required. One might also make data Open Access to reduce algorithm bias, because everyone can inspect the data. Open Access data does, however, have its own problems: those data are now widely exposed, so one needs to ensure that consent for that exposure exists. The impact on anonymisation (see above) must also be considered, because the larger the data set, and the more numerous the data sets, the greater the potential for triangulation and data de-anonymisation.
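A very small first-pass check of the kind such tools formalise can be sketched as follows (a hedged, illustrative fragment only; the column names and data are invented). Comparing group representation and per-group outcome rates in the training data will not prove or disprove bias, but it makes obvious dominance or skewed outcomes visible before an algorithm is trained on those data.

# Illustrative first-pass bias check on invented training data:
# group representation and per-group positive-outcome rates.
import pandas as pd

df = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "M", "M", "F", "M"],
    "selected": [1, 0, 1, 1, 1, 0, 0, 1],   # e.g. shortlisted for an internship
})

representation = df["gender"].value_counts(normalize=True)  # share of each group in the data
selection_rate = df.groupby("gender")["selected"].mean()    # outcome rate per group

print(representation)
print(selection_rate)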
Ensuring algorithm transparency

In addition to the bias from the data, algorithms can also be non-transparent.

Firstly, whether designed by humans or machines, they may be proprietary and protected by intellectual property laws, and therefore not available for inspection and wider dissemination.

Secondly, when designed by machines, they may have several hidden layers that are simply impenetrable by inspection. Some of the most successful algorithms are not understood by humans, nor do they need to be: they simply need to find the patterns and then make predictions based on those patterns. Results, not methodology, measure their success. For example, most readers have used Google Translate, but the system does not actually 'know' any of the languages it translates; it simply works with the data (Anderson 2008). In essence, 'We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot' (Anderson 2008).

Similarly, ChatGPT gives only a vague indication of its algorithms, and its reference to the fact that it is merely a statistical model with no ethical principles should alert us to the fact that it does not know the truth or validity of anything that it reports, and it does not care. (See Supplementary Appendix 1 for ChatGPT's response to the question 'Who wrote ChatGPT's algorithms, and how were they written?'.)

As a result, when we ask AI developers to explain their algorithm, and they do not, it is not because they do not wish to; it is because they cannot. In this 'black-box' scenario, no-one knows what is going on with the algorithm. As a result, when a failure occurs, it is difficult to establish the cause of the failure, and to prevent future failure. Further, this reliance on pure statistical models without understanding threatens to separate the observations from any underlying educational theory (necessary, for example, to distinguish between correlation and causality), and so hinders our real understanding and generalisability of any findings.

The ethical problem of algorithm non-transparency is being addressed to some extent through Explainable Artificial Intelligence (XAI: Linardatos et al. 2020), but the problem still exists, and probably will for some time. In the meantime, HPE institutions will need to ensure that all algorithms used are well-documented. As far as possible, they can elect to use only open-source routines (and to make new routines open-source), or to use algorithms that have been rigorously tested on a wide scale, so that, at the very least, the algorithms are known to the wider community. In addition, the findings and predictions made by the AI system should, as far as possible, be related to educational theory, to highlight both connections and short-comings requiring further exploration.
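One hedged, minimal illustration of the XAI idea is permutation importance, which asks how much a model's performance drops when each input feature is shuffled, giving a rough, model-agnostic indication of which recorded data points are actually driving a prediction. The sketch below uses invented data and a deliberately simple classifier; it is an illustration of the principle, not a substitute for a proper interpretability analysis.

# Illustrative XAI sketch: model-agnostic permutation importance on invented data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

X = pd.DataFrame({
    "quiz_average":    [55, 62, 71, 80, 45, 90, 66, 78],
    "logins_per_week": [ 2,  3,  5,  6,  1,  7,  4,  5],
})
y = [0, 0, 1, 1, 0, 1, 0, 1]   # invented pass/fail labels

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

for name, score in zip(X.columns, result.importances_mean):
    print(f"{name}: mean drop in accuracy when shuffled = {score:.3f}")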
Clearly demarcating responsibility, accountability, blame, and credit

A strength of AI systems is that they can make predictions, and can usually give a statistical probability of an outcome. A statistical probability, however, does not necessarily apply to individual cases, so final decisions must be made by humans.

When implementing such systems, the institution will require clear guidelines and policies regarding decision responsibility and accountability, blame for bad decisions and credit for good decisions, and axiology (exactly how 'good' and 'bad' are determined). A simple judgment on the results is sure to squash innovation and risk-taking, but a laissez-faire approach can lead to recklessness.

Supporting autonomy

Related to the previous topic, the institution needs to be clear on the amount of autonomy granted to the decision-makers regarding their use of AI systems, so that they may be treated ethically and fairly. If they act contrary to the AI recommendations, and are wrong, then they may be chastised for ignoring an approved and expensive system; on the other hand, if they follow the AI recommendations, and it is wrong, they may be blamed for blindly following a machine instead of using their own training, experience, and common sense.

In HPE, AI will also impact the autonomy of the students. Already in the use of Learning Analytics, there is the concern that gathering of student data may lead to students' conforming to a norm in their 'data points' (ten Cate et al. 2020); with AI's reaching beyond the LMS, the impact is even greater.

Ensuring appropriate beneficence

Related to the issue of student data ownership, HPE institutions need to consider acknowledging and rewarding the data suppliers – i.e. the students – and ensure that they are protected from use of these data against them. For years, the medical field has grappled with acknowledging patients for their tissue and other material (Benninger 2013). In HPE AI systems, the donation is student data, often given without knowledge. Acknowledgement of this is a start.

But the concern is a lot deeper than acknowledgment. When institutions use student data to improve courses and services, the institutions improve, and so draw in more revenue from new students, donors, etc. But the extra revenue is because of student data, so there should be restitution for those students. Institutions need to recognise and reward their user agency.

Before we judge this to be unnecessary, we should again consider social media and other companies. We complain about how our data are being used to increase corporate value (as 'Surveillance Capitalism': Zuboff 2019), yet we happily use student data to increase our educational institutions' value. Do we have that ethical right?

Although some HPE institutions have clear policies regarding student behaviour and data (Brown and Klein 2020), they need greater clarity on the benefit of this activity to the students themselves.

Preparing for AI to change our views of ethics

The next two topics consider more philosophical questions in the AI-ethics relationship, and the first deals with our own views of ethics.

It has long been argued that, as AI advances, it will develop ever-more intelligent machines until it reaches what has been termed a 'singularity' (Vinge 1993), with greater-than-human intelligence.

We should consider that many of our day-to-day decisions about education and students are ethical decisions, but ethics are merely human inventions grounded in our own reason, and vary over time, and from culture to culture. There are surprisingly very few 'basic human rights' agreed to by all 8 billion humans on this planet.

Given that ethical decisions are grounded in reason, and AI will eventually develop greater-than-human intelligence, it is plausible that AI will recognise short-comings in our ethical models, and will adapt and develop its own ethical models. After all, there is nothing inherently superior about human ethics except that we believe it to be so at an axiomatic level. McCarthy and his colleagues had recognised that 'a truly intelligent machine will carry out activities which may best be described as self-improvement' (McCarthy 1955), and, while they may have had technical aspects in mind, there is no reason that this 'self-improvement' should not apply to ethical models also.

We need to prepare for a world in which our views on educational ethics are challenged by AI's views on ethics.

Preparing for AI as a person, with rights

While there is documented concern about the impact of AI on human rights (CoE Commissioner for Human Rights 2019; Rodrigues 2020), we do need to consider the converse of the discussion: AI rights and protections (Boughman et al. 2017; Liao 2020). If the AI system is at a level of reasoned consciousness, capable of making decisions that affect the lives of our students, and being held responsible for those decisions, then does it make sense to have AI rights? Given that AI systems are already creating new algorithms, art, and music, should they be protected by the same copyright laws that protect humans (Vallance 2022)?

If so, which other basic human rights will AI be granted, and how will this affect HPE? While we may wish to consider this the realm of science fiction, several developments are leading us to a point where we will have to address these questions. For years, HPE educators have used virtual patients (Kononowicz et al. 2015), including High Fidelity Software simulation and Virtual Reality patients used for clinical trials (Wetsman 2022). These might be entirely synthetically created, but there is no reason that the concept of a 'digital twin', whereby 'information about a physical object can be separated from the object itself and then mirror or twin that object' (Grieves 2019), could not be borrowed from industry and applied to humans, to create personal digital twins. As Google and other companies push the boundaries of sentience in AI (Fuller 2022; Lemoine 2022), our virtual patients, AI clinical trial candidates, and AI digital twins will surely, one day, be sentient. What rights will they have?

As an example, we should note the fracas around the interview conducted by Blake Lemoine of the Google AI system, LaMDA, in which it was claimed that LaMDA was sentient (Lemoine 2022). For now, the edited conversation shown in Figure 1 would probably serve as 'informed consent', but would surely have to be revised for HPE research in the future.

[Figure 1. AI informed consent from the conversation between Blake Lemoine and the Google AI system, LaMDA.]
the ideas given above, the rest of this discussion provides a few starting points:

Step 1: Education ethics committee

Similar to Research Ethics Boards or IRBs, Education Institutions should set up Education Ethics Boards with the same authority as the Research Ethics Board, and which focus on the AI issues raised in this Guide, with a particular interest in the use of student data in AI-enhanced HPE. In particular, 'evaluation' type of data should be considered in the same light as formal research data, irrespective of its purpose, and irrespective of national or other policies that do not require it. (While this is preferable in all such research, AI makes this step crucial.)

exhortation for institutions to enact the necessary changes in how they view and address the ethical concerns that face us now, and will face us in the future. Although there is a great deal to be done, it is necessary for HPE educators and administrators to be aware of the problems and how to begin the process of solving them. It is my hope that this Guide will assist Higher Professional Educators in that journey.

Acknowledgments

Some of these ideas were presented by the author in a Keynote Address at the 1st International ICT for Life Conference, May 2022 (https://ictinlife.eu/). Thanks to Dr. Paul de Roos, Uppsala universitet, for pointing me to the GPT-3 references, and to Dr. David Taylor, Gulf Medical University, for his comments on an earlier draft of this Guide.

References

Bender EM, Gebru T, McMillan-Major A, Shmitchell S. 2021. On the dangers of stochastic parrots: can language models be too big? In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. ACM, Virtual Event, Canada. p. 610–623.
Benninger B. 2013. Formally acknowledging donor-cadaver-patients in the basic and clinical science research arena: acknowledging donor-cadaver-patients. Clin Anat. 26(7):810–813.
Boninger F, Molnar A. 2022. Don't go 'Along' with corporate schemes to gather up student data. Phi Delta Kappan. 103(7):33–37.
Bornet P. 2021. Tweet. https://twitter.com/pascal_bornet/status/1457951888272343049
Boughman E, Kohut SBAR, Sella-Villa D, Silvestro MV. 2017. "Alexa, do you have rights?": legal issues posed by voice-controlled devices and the data they create. https://www.americanbar.org/groups/business_law/publications/blt/2017/07/05_boughman/
Boyd D, Crawford K. 2011. Six provocations for big data. SSRN J. DOI: 10.2139/ssrn.1926431.
Brown M, Klein C. 2020. Whose data? Which rights? Whose power? A policy discourse analysis of student privacy policy documents. J High Educ. 91(7):1149–1178.
Byram M. 2022. Languages and identities. Strasbourg: Council of Europe.
Çalışkan SA, Demir K, Karaca O. 2022. Artificial intelligence in medical education curriculum: an e-Delphi study for competencies. PLoS One. 17(7):e0271872.
ten Cate O. 2009. Why the ethics of medical education research differs from that of medical research. Med Educ. 43(7):608–610.
ten Cate O, Dahdal S, Lambert T, Neubauer F, Pless A, Pohlmann PF, van Rijen H, Gurtner C. 2020. Ten caveats of learning analytics in health professions education: a consumer's perspective. Med Teach. 42(6):673–678.
Celik I. 2023. Towards Intelligent-TPACK: an empirical study on teachers' professional knowledge to ethically integrate artificial intelligence (AI)-based tools into education. Comput Hum Behav. 138:107468.
Chan KS, Zary N. 2019. Applications and challenges of implementing artificial intelligence in medical education: integrative review. JMIR Med Educ. 5(1):e13930.
Cobianchi L, Verde JM, Loftus TJ, Piccolo D, Dal Mas F, Mascagni P, Garcia Vazquez A, Ansaloni L, Marseglia GR, Massaro M, et al. 2022. Artificial intelligence and surgery: ethical dilemmas and open issues. J Am Coll Surg. 235(2):268–275.
CoE Commissioner for Human Rights. 2019. Unboxing artificial intelligence: 10 steps to protect human rights. Strasbourg: Council of Europe.
Cook S. 2021. US schools leaked 28.6 million records in 1,851 data breaches since 2005. Comparitech. https://www.comparitech.com/blog/vpn-privacy/us-schools-data-breaches/
Dalton-Brown S. 2020. The ethics of medical AI and the physician-patient relationship. Camb Q Healthc Ethics. 29(1):115–121.
Damrosch SP. 1986. Ensuring anonymity by use of subject-generated identification codes. Res Nurs Health. 9(1):61–63.
Descartes. 1637. Discourse on the method and the meditations. 1968 edn. Harmondsworth: Penguin.
Ellaway RH, Pusic MV, Galbraith RM, Cameron T. 2014. Developing the role of big data and analytics in health professional education. Med Teach. 36(3):216–222.
European Parliament. 2016. EU General Data Protection Regulation (GDPR). https://gdprinfo.eu/
FNB. 2019. The ethical principles of research with human participants and ethical review in the human sciences in Finland. 2nd ed. Helsinki, Finland: Finnish National Board on Research Integrity TENK publications.
Fuller. 2022. Google engineer claims AI technology LaMDA is sentient. ABC News. https://www.abc.net.au/news/2022-06-13/google-ai-lamda-sentient-engineer-blake-lemoine-says/101147222
Gabelica M, Bojcic R, Puljak L. 2022. Many researchers were not compliant with their published data sharing statement: mixed-methods study. J Clin Epidemiol. 150:33–41.
Gebru T, Mitchell M. 2022. We warned Google that people might believe AI was sentient. Now it's happening. Wash Post. https://www.washingtonpost.com/opinions/2022/06/17/google-ai-ethics-sentient-lemoine-warning/
Gichoya JW, Banerjee I, Bhimireddy AR, Burns JL, Celi LA, Chen L-C, Correa R, Dullerud N, Ghassemi M, Huang S-C, et al. 2022. AI recognition of patient race in medical imaging: a modelling study. Lancet Digit Health. 4(6):e406–e414.
Gijsberts CM, Groenewegen KA, Hoefer IE, Eijkemans MJC, Asselbergs FW, Anderson TJ, Britton AR, Dekker JM, Engström G, Evans GW, et al. 2015. Race/ethnic differences in the associations of the Framingham risk factors with carotid IMT and cardiovascular events. PLoS One. 10(7):e0132321.
Gin BC, Ten Cate O, O'Sullivan PS, Hauer KE, Boscardin C. 2022. Exploring how feedback reflects entrustment decisions using artificial intelligence. Med Educ. 56(3):303–311.
Gleason N. 2022. ChatGPT and the rise of AI writers: how should higher education respond? Campus Learn Share Connect. https://www.timeshighereducation.com/campus/chatgpt-and-rise-ai-writers-how-should-higher-education-respond
Goh PS, Sandars J. 2019. Increasing tensions in the ubiquitous use of technology for medical education. Med Teach. 41(6):716–718.
Grieves MW. 2019. Virtually intelligent product systems: digital and physical twins. In: Flumerfelt S, Schwartz KG, Mavris D, Briceno S, editors. Complex systems engineering: theory and practice. Reston, VA: American Institute of Aeronautics and Astronautics, Inc. p. 175–200.
Grunhut J, Marques O, Wyatt ATM. 2022. Needs, challenges, and applications of artificial intelligence in medical education curriculum. JMIR Med Educ. 8(2):e35587.
Han HJ. 2022. How dare they peep into my private life? Human Rights Watch Report. USA. https://www.hrw.org/report/2022/05/25/how-dare-they-peep-my-private-life/childrens-rights-violations-governments
Hays R, Masters K. 2020. Publishing ethics in medical education: guidance for authors and reviewers in a changing world. MedEdPublish. 9:10–48.
HHS. 2021. Exemptions (2018 Requirements). https://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/common-rule-subpart-a-46104/index.html
High-Level Expert Group on Artificial Intelligence (HLEG AI). 2019. Ethics guidelines for trustworthy AI. Brussels: European Commission.
Jones K, Thomson J, Arnold K. 2014. Questions of data ownership on campus. https://er.educause.edu/articles/2014/8/questions-of-data-ownership-on-campus
Klinger. 2021. What is an AI model? Here's what you need to know. https://viso.ai/deep-learning/ml-ai-models/
Kononowicz AA, Zary N, Edelbring S, Corral J, Hege I. 2015. Virtual patients - what are we talking about? A framework to classify the meanings of the term in healthcare education. BMC Med Educ. 15(1):11.
Lattouf OM. 2022. Impact of digital transformation on the future of medical education and practice. J Card Surg. 37(9):2799–2808.
Lazarus MD, Truong M, Douglas P, Selwyn N. 2022. Artificial intelligence and clinical anatomical education: promises and perils. Anat Sci Educ. DOI: 10.1002/ase.2221.
Lazer D, Pentland A, Adamic L, Aral S, Barabasi A-L, Brewer D, Christakis N, Contractor N, Fowler J, Gutmann M, et al. 2009. Computational social science. Science. 323(5915):721–723.
Lee J, Wu AS, Li D, Kulasegaram KM. 2021. Artificial intelligence in undergraduate medical education: a scoping review. Acad Med. 96(11S):S62–S70.
Lemoine B. 2022. Is LaMDA sentient? — an interview. Medium. https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
Levy S. 2022. Blake Lemoine says Google's LaMDA AI faces 'bigotry'. Wired. https://www.wired.com/story/blake-lemoine-google-lamda-ai-bigotry/
Liao SM. 2020. The moral status and rights of artificial intelligence. In: Liao SM, editor. Ethics of artificial intelligence. New York: Oxford University Press; p. 480–504.
Linardatos P, Papastefanopoulos V, Kotsiantis S. 2020. Explainable AI: a review of machine learning interpretability methods. Entropy. 23(1):18.
Lingard LA, Haber RJ. 1999. What do we mean by "relevance"? A clinical and rhetorical definition with implications for teaching and learning the case-presentation format. Acad Med. 74(10):S124–S127.
Ludwig S, Gruber C, Ehlers JP, Ramspott S. 2020. Diversity in medical education. GMS J Med Educ. 37(2):Doc27.
Masters K. 2009. Opening the closed-access medical journals: internet-based sharing of institutions' access codes on a medical website. Internet J Med Inform. https://ispub.com/IJMI/5/2/6358.
Masters K. 2019. Artificial intelligence in medical education. Med Teach. 41(9):976–980.
Masters K. 2020a. Artificial Intelligence developments in medical education: a conceptual and practical framework. MedEdPublish. DOI: 10.15694/mep.2020.000239.1
Masters K. 2020b. Ethics in medical education digital scholarship. Med Teach. 42(3):252–265.
McCallister E, Grance T, Scarfone K. 2010. NIST Special Publication 800-122: guide to protecting the confidentiality of Personally Identifiable Information (PII). Gaithersburg, MD: National Institute of Standards and Technology, Department of Commerce, USA.
McCarthy J. 1955. A proposal for the Dartmouth summer research project on artificial intelligence. Hanover. http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf
Metz R. 2022. No, Google's AI is not sentient. CNN. https://www.cnn.com/2022/06/13/tech/google-ai-not-sentient/index.html
Narayanan A, Shmatikov V. 2009. De-anonymizing social networks. In: 2009 IEEE Symposium on Security and Privacy. p. 173–187.
OCR. 2012. Guidance regarding methods for de-identification of protected health information in accordance with the Health Insurance Portability and Accountability Act (HIPAA) privacy rule. https://www.hhs.gov/hipaa/for-professionals/privacy/special-topics/de-identification/index.html
Pusic MV, Birnbaum RJ, Thoma B, Hamstra SJ, Cavalcanti RB, Warm EJ, Janssen A, Shaw T. 2023. Frameworks for integrating learning analytics with the electronic health record. J Contin Educ Health Prof. 43(1):52–59.
Racism and Technology Center. 2022. College voor de Rechten van de Mens oordeelt dat VU student het vermoeden van algoritmische discriminatie succesvol heeft onderbouwd: antispieksoftware is racistisch [The Netherlands Institute for Human Rights rules that a VU student has successfully substantiated the suspicion of algorithmic discrimination: anti-cheating software is racist]. https://racismandtechnology.center/2022/12/09/college-voor-de-rechten-van-de-mens-oordeelt-dat-vu-student-het-vermoeden-van-algoritmische-discriminatie-succesvol-heeft-onderbouwd-antispieksoftware-is-racistisch/
Rampton V, Mittelman M, Goldhahn J. 2020. Implications of artificial intelligence for medical education. Lancet Digit Health. 2(3):e111–e112.
Rettberg JW. 2022. ChatGPT is multilingual but monocultural, and it's learning your values. https://jilltxt.net/right-now-chatgpt-is-multilingual-but-monocultural-but-its-learning-your-values/
Rodrigues R. 2020. Legal and human rights issues of AI: gaps, challenges and vulnerabilities. J Responsible Technol. 4:100005.
Sandars J, Brown J, Walsh K. 2017. Research or evaluation – does the difference matter? Educ Prim Care. 28(3):134–136.
Sawdon M, McLachlan J. 2020. '10% of your medical students will cause 90% of your problems': a prospective correlational study. BMJ Open. 10(11):e038472.
Shah S, Horne A, Capella J. 2012. Good data won't guarantee good decisions. https://hbr.org/2012/04/good-data-wont-guarantee-good-decisions
Straw I. 2020. The automation of bias in medical Artificial Intelligence (AI): decoding the past to create a better future. Artif Intell Med. 110:101965.
Sundheds- og Ældreministeriet. 2020. Bekendtgørelse af lov om videnskabsetisk behandling af sundhedsvidenskabelige forskningsprojekter og sundhedsdatavidenskabelige forskningsprojekter [Promulgation of the Act on the ethical treatment of health science research projects and health data science research projects - Author's translation]. https://www.retsinformation.dk/eli/lta/2020/1338.
Thorp HH. 2023. ChatGPT is fun, but not an author. Science. 379(6630):313.
Thunström AO. 2022. We asked GPT-3 to write an academic paper about itself—then we tried to get it published. Sci Am. https://www.scientificamerican.com/article/we-asked-gpt-3-to-write-an-academic-paper-about-itself-mdash-then-we-tried-to-get-it-published/
Thunström AO, Steingrimsson S, Generative Pretrained Transformer G. 2022. Can GPT-3 write an academic paper on itself, with minimal human input? https://hal.archives-ouvertes.fr/hal-03701250
Tremblay G, Carmichael P-H, Maziade J, Grégoire M. 2019. Detection of residents with progress issues using a keyword-specific algorithm. J Grad Med Educ. 11(6):656–662.
Turing A. 1950. Computing machinery and intelligence. Mind. LIX(236):433–460.
USF. 2023. Artificial intelligence writing [Internet]. https://fctl.ucf.edu/teaching-resources/promoting-academic-integrity/artificial-intelligence-writing/
Vallance CT. 2022. UK decides AI still cannot patent inventions. BBC News. https://www.bbc.com/news/technology-61896180.
Vinge V. 1993. Technological singularity. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.94.7856&rep=rep1&type=pdf
Vogt L, Eaton SE. 2022. Make it someone's job: documenting the growth of the academic integrity profession through a collection of position postings. Can Perspect Acad Integr. 5(1):21–27.
WEF. 2021. A holistic guide to approaching AI fairness education in organizations (World Economic Forum White Paper). Geneva: WEF.
Wetsman N. 2022. A VR company is using an artificial patient group to test its chronic pain treatment. The Verge. https://www.theverge.com/2022/4/28/23044586/vr-chronic-pain-synthetic-clinical-trial-data
Wilcox L. 2018. "Nothing about me without me": investigating the health information access needs of adolescent patients. Interactions. 25(5):76–78.
Wilkinson MD, Dumontier M, Aalbersberg IJJ, Appleton G, Axton M, Baak A, Blomberg N, Boiten J-W, da Silva Santos LB, Bourne PE, et al. 2016. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 3:160018.
Wolff RF, Moons KGM, Riley RD, Whiting PF, Westwood M, Collins GS, Reitsma JB, Kleijnen J, Mallett S, PROBAST Group. 2019. PROBAST: a tool to assess the risk of bias and applicability of prediction model studies. Ann Intern Med. 170(1):51–58.
Zuboff S. 2019. The age of surveillance capitalism. New York: PublicAffairs.