
Government Information Quarterly 37 (2020) 101489

Contents lists available at ScienceDirect

Government Information Quarterly


journal homepage: www.elsevier.com/locate/govinf

Opportunity for renewal or disruptive force? How artificial intelligence alters democratic politics
Pascal D. König⁎, Georg Wenzelburger
TU Kaiserslautern, Germany

ARTICLE INFO

Keywords:
Artificial intelligence
Automated decision-making
Democracy
Politics
Cybernetics
Responsiveness
Accountability

ABSTRACT

The increasing adoption of AI profoundly changes the informational foundations of societies. What does this mean for the functioning of liberal democracy, specifically in terms of responsiveness and accountability? The present paper addresses this question by discussing how capabilities of AI affect the informational requirements of the democratic process. Based on a systems theory perspective, the consequences of AI are shown to be wide-ranging. AI can reduce or increase information deficits of both citizens and decision-makers on the input, throughput, and output level of the political system. While the challenges that AI creates for democratic responsiveness and accountability have a novel technological dimension, they are nonetheless in continuity with existing transparency and accountability problems. Avoiding a negative impact will require institutionalizing suitable governance mechanisms – a task that is challenging already at the throughput and output level, but particularly difficult, and important, at the input level of politics.

1. Introduction

While the jury is still out on the question whether there is intelligent life in nearby galaxies, humans are already hard at work to create intelligent machines on earth – and will have to figure out how to live with them. While there are long-standing debates at what point machines should be seen as intelligent, there is, as of yet, agreement that existing forms of artificial intelligence (AI) and trainable algorithmic systems are at best intelligent in a weak sense. They certainly do not show awareness, their "intelligence" is hardly comparable to that of humans, and they are generally designed to deal with specific, narrowly defined tasks. Nonetheless, already today, automated systems populate societies as artificial agents and intervene into social relations and individuals' lives. Most people in industrialized societies – consciously or not – use filtering algorithms in online searches and social networks or rely on increasingly potent personal assistants. And to some degree, they are subject to algorithmic decision-making applied by the state, for instance in the areas of policing, welfare and education (Brauneis & Goodman, 2017). These algorithmic systems can also alter informational environments on a large scale, structuring individuals' decision situations and possibly affecting their welfare and life opportunities (for a critique see Pasquale, 2015). In light of their wide adoption and their growing capacities it is important to ask what human cohabitation with forms of AI may look like and what this means for democratically governed societies.

In the following, we understand AI as the use of machine learning techniques for the purpose of making computers process data to acquire or improve the capability to deal with certain tasks in ways that are not explicitly programmed (as opposed to expert systems with predefined decision rules). This understanding comprises algorithmic decision-making systems that are designed for performing a specific task but, due to reliance on machine learning, involve a certain degree of unpredictability. They range from applications that use easily comprehensible statistical models to those which employ deep learning that lends them a more agent-like character and makes them difficult if not impossible to comprehend and explain (e.g. AlphaGo).¹

In this article, we will approach the role of AI from the perspective of democratic politics. By doing so, we set out to contribute a political science view to a debate on how AI challenges the very foundation on which liberal democracies are built. While such a perspective has been largely missing from the debate on the governance of algorithms, we can nevertheless draw on several strands of adjacent disciplines. A first


We would like to thank the anonymous reviewers and the editors of Government Information Quarterly for their valuable comments and suggestions. The
manuscript has also gained from discussions with and feedback by Anja Achtziger, Julia Felfeli, Adam Harkens, Tobias Krafft, Johannes Schmees, Wolfgang Schulz,
Karen Yeung, and Katharina Zweig.

⁎ Corresponding author at: Department of Social Sciences, TU Kaiserslautern, Building 57, PO-Box 3049, 67653 Kaiserslautern, Germany.
E-mail address: [email protected] (P.D. König).
¹ As current forms of AI largely involve some sort of algorithmic decision-making, we use the terms interchangeably.

https://doi.org/10.1016/j.giq.2020.101489
Received 28 May 2019; Received in revised form 21 September 2019; Accepted 6 May 2020
0740-624X/ © 2020 Elsevier Inc. All rights reserved.
body is centered on the functioning of AI systems themselves and discusses what ethical standards these systems should respect (Binns, 2017; Mittelstadt, Allo, Taddeo, Wachter, & Floridi, 2016; Winfield & Jirotka, 2018). Various organizations have also issued ethical guidelines for automated systems and robotics (for an overview see Winfield & Jirotka, 2018, p. 4; more recently CEPEJ, 2018). However, this literature is strongly focused on the AI systems themselves and how they can be made to operate in ways that benefit the public. It largely does not place AI in the larger context of the workings of liberal democracy.

A second body of studies is more helpful in this respect: Critical accounts in legal studies and the philosophy of law point to problems that AI creates for the rule of law and fundamental rights, such as due process (de Vries, 2013; Hildebrandt, 2016; Nemitz, 2018) and the functioning of the public sphere (Just & Latzer, 2017). Moreover, some scholars have particularly stressed the power asymmetries associated with the development and application of algorithmic systems (e.g. Tene & Polonetsky, 2013). Yet, while this work addresses how certain aspects of democratic regimes are affected by AI, a broader and systemic perspective on what living with artificial agents in a democratically governed society can look like is still missing from the literature. It is such an angle that the political science perspective can contribute – based on the comprehensive body of work on the prerequisites and mechanisms that make democracy work.

The present paper thus, drawing on democratic theory, discusses the role of AI in liberal-democratic politics, specifically in terms of its possible impact on responsiveness and accountability.² As will be elaborated below, algorithmic decision-making (ADM) systems and other forms of AI are geared towards information processing and thus solving cognitive and coordination tasks; and liberal democracy as a political system is marked by special informational requirements. We argue that such requirements exist not only with regard to producing adequate outputs. Rather, due to standards of democratic legitimacy, informational needs arise also for safeguarding a certain form of decision-making (throughput level) as well as the continuous integration of citizen preferences (input level). This is important because liberal democracy is mainly about how outputs are produced.

The article is structured as follows. The second section discusses the capacities of algorithmic decision-making systems to deal with cognitive tasks, whereas section three elaborates on the information needed to make the democratic political process work. These two sections form the basis for formulating the contrasted scenarios of AI adoption in section four, which is structured along the input, the throughput and the output level of a democratic political system. The paper concludes with a discussion and outlook in section five.

2. The capacities of systems based on artificial intelligence

Some caution is warranted when thinking about AI. Real-world applications still only qualify as intelligent in a weak sense and they are designed for dealing with specific problems and operating in narrowly defined domains. Yet they nevertheless show remarkable performance in their various fields of application (McAfee & Brynjolfsson, 2017). Machines already outperform medical professionals in some areas of diagnostics; they beat human players at complex games (most notably chess and Go) and quizzes; some experts expect them to become better, safer and more efficient drivers than humans; and they can coordinate and steer the activities of distributed entities in ways that humans are simply not capable of.

While algorithmic systems are designed for handling specific tasks, the scope of their societal impact can be vast. Whether deployed in the private or the public sector, they can serve to score and classify individuals and to calculate the probabilities of future behaviors. This occurs, for instance, in the targeted provision of online advertisements, the scoring of credit default risks or the assessment of terrorism risks. Algorithmic systems can also proactively intervene into social relations, thus amounting to a sort of governance that does not need to be backed by authoritative force in order to be highly effective (Coglianese & Lehr, 2018; Just & Latzer, 2017; Yeung, 2017a). This kind of complex steering is based on the processing of information about manifold distributed and networked entities. The information system can "learn" about dispositions and behavioral patterns and then act upon these insights generated from information processing. In this way it can furnish personalized information to individuals, thereby shaping their informational environments and their decision situations.

While this kind of algorithmic governance is not representative of the various ways in which algorithmic systems and AI can be employed, it illustrates how profoundly they can intervene into social relations. They may unobtrusively structure behavior by changing the informational environments in which human actors decide and they may support or replace human decision-making in various settings. If, for instance, a person relies mainly on an algorithmically personalized newsfeed to make up her mind about political issues, the adoption of AI has important ramifications for the workings of the political system, because it affects public opinion formation. This points to the need for assessing more systematically how democratic politics change in a world of AI. The following section lays the groundwork for this assessment.

3. The informational bases of liberal democracy

3.1. Foundations of liberal democracy

The core principle of democratic rule demands that those affected by political decisions can understand themselves as their authors and that political decisions and outputs are tied back to citizens' views and preferences (Scharpf, 1999). This requires the political process to be based on procedures that allow for translating inputs (e.g. citizens' preferences) into outputs (policies) (Klingemann, Hofferbert, & Budge, 1994, p. 8). In liberal democracy, these rules comprise certain limits based on (1) the guarantees of fundamental civil rights and rule of law (Merkel, 2004, pp. 38–40) as well as (2) the rules that structure the institutional framework of democratic politics and the translation of input into output. Two main features are essential to this translation process: electoral institutions, ensuring free and fair elections, and political parties, competing for government office (Dahl, 1998). However, citizens' preferences will always be imperfectly represented, and their views and preferences also change over time (Urbinati, 2000). Hence, also in the time between elections, there is an ongoing pluralistic political struggle over which views and interests get articulated and taken up in the political process. Political parties, civil society, interest groups and other organizations all advocate positions and preferences and try to make the government act in ways that are responsive to their demands.

At the same time, these various actors keep watchful eyes on the government and hold it accountable in case it abuses its power, oversteps its competences or deviates from its mandate. The ultimate way to ensure accountability works via elections, in which citizens can sanction governments retrospectively. However, also beyond elections, the competition between political interests and particularly between political parties means that there are always some political actors, especially those in opposition, that have an incentive to control the government and hold it accountable (Bartolini, 1999). Besides calling public attention to issues, they have guaranteed control rights at their disposal, such as parliamentary oversight.

² Responsiveness refers to an actor being receptive to someone else's demands, requests, or preferences (Bartolini, 1999, p. 448). Accountability has been defined in various ways. However, a common understanding of this concept is that it entails a relationship in which one party is obliged to justify its conduct to another party and may face consequences for its actions (Warren, 2014).


Under these conditions, the government has to expect to lose support and be punished if it fails to be responsive and accountable to citizens. The political process therefore includes a feedback process that generates dynamic responsiveness to a changing public opinion in line with what has been called a "thermostatic model" (Wlezien, 1995). As such, a dynamic process of contestation lies at the heart of a liberal-democratic regime and promises to link political decisions back to citizen preferences. The value of this democratic process consists of the continued equal possibility of all citizens to take part in the process of opinion and will formation that leads to political decisions (Urbinati, 2014).

In this institutionalized process, the status quo can be contested, and previous decisions can be revised while this possibility also remains guaranteed in the future. This altogether prevents power from becoming solidified and instead keeps politics open to future changes; and it serves to ensure that decisions are not produced against the will of the people, i.e. are not biased in the sense that they consistently follow the preferences of a minority. Following Lindblom (1965), who has emphasized democracy's ability to adapt based on its capacity for learning and self-correction, one can regard liberal democracy as a cybernetic system that produces outputs based on inputs and that institutionalizes feedback loops. This way, it makes sure that inputs are processed and transformed into outputs in a way that is oriented towards producing responsiveness and accountability.

3.2. Information and democracy

The openness and adaptiveness of democratic politics described in the previous section come at a price, though. The political system has to build up internal complexity that can sustain a self-regulating process, and the specific form in which outputs are produced in a liberal democracy implies various informational requirements. To produce satisfactory outputs while being responsive and accountable, information is needed not just at the level of producing outputs, but also at the level of throughputs (the decision-making) and the level of inputs that are fed into the decision-making process. The cybernetic process is further characterized by feedback – i.e. the inputs are, in turn, affected by the outputs of the decision-making process (see Fig. 1).

This subsection discusses information demands on these three levels from the perspective of decision-makers and citizens. Drawing on the input-output model in political science (seminally Easton, 1965) for this purpose has important advantages, as it is tightly linked to cybernetics as the science of steering systems based on the processing of information. Moreover, the model has been the workhorse of political scientists interested in assessing the responsiveness of politics to the preferences of the citizens (e.g. Klingemann et al., 1994; Soroka & Wlezien, 2009). The dynamic nature of politics represented by the model as well as its high level of abstraction – which allows generalizing over different concrete institutional settings – therefore make it a fitting conceptual foundation for examining how AI changes liberal democracy's information bases.

On the input level, citizens' views and preferences are formed and articulated. This requires an adequate and unbiased informational basis; citizens have to be able to determine how existing political offers relate to their preferences. Consequently, political actors as well as media organizations need to convey information about the state of political affairs and establish a political public sphere in which citizens can make up their minds about politics. In order for citizens to assess the responsiveness of political actors and to hold them accountable, they also need to receive information about what happens on the throughput and on the output level – e.g. what decisions have been taken in the past and what effects they have had. For this feedback to be able to flow to the input level, additional information requirements have to be met on the throughput and output level. More directly with regard to the public sphere and the input level, establishing such a feedback link has traditionally been the role of the mass media, which uncover and publicly report on the performance of political actors and policies. Without an assessment of decision outcomes and their reporting, citizens will not be able to hold political actors accountable for their actions.

These information needs on the input side lie at the basis of political judgment and electoral choice already in a minimal vision of democracy (Downs, 1957). However, citizens hardly have entirely stable preferences but may update their political views through new information and when taking part in processes of opinion and will formation. Hence, their preferences are formed in the democratic process itself and cannot be considered as completely exogenous. Political actors, in turn, require information about what citizens want if their preferences are to be injected into the democratic process. Consequently, channels through which information can travel from the citizens to the political actors are crucial if decision-makers want to respond to citizens' demands.

On the throughput level, political decision-makers have an acute need for information, as already mentioned further above, before taking decisions. They have a general incentive to get relevant information concerning the expected consequences of their decisions – with regard to the realization of policy goals but also the consequences for their support among citizens. Political decision-makers' information sources may also comprise feedback from the implementation and effects of previous decisions (output level), which allows them to adapt to changing circumstances. Overall, as their capacity to process information is limited (Jones & Baumgartner, 2005), they will use various resources and instruments, such as special committees, experts and advisors, for that purpose. They have also been shown to use simple heuristics in order to take decisions in contexts of information overload (Vis, 2019). This can present a chance for policy entrepreneurs to influence the process and push their favorite policy solution onto the agenda (Kingdon, 2003).

Second, accountable decision-making requires that the way in which inputs are translated into decisions is subject to monitoring and control. There are established formal procedures for safeguarding accountability, such as electoral laws, voting rules in the legislature, and different instruments of control (e.g. committees of inquiry). Yet tracing how political decisions have been made, how specific content has been translated into legislation, and how different preferences articulated in the public have been taken up and transformed into concrete policies is extremely demanding. This is complicated even further by additional informal procedures that shape processes of negotiation and bargaining. How do citizens know who has been influenced by whom or whether certain interests have been privileged? Obtaining the relevant information is practically impossible for citizens. They must rely on other sources, such as civil society, interest groups, the media, and effective competition between political actors, to establish transparency and hold political decision-makers accountable. But, again, if these intermediate actors want to fill that role they need to be able to obtain adequate information.

Finally, on the level of outputs, when decisions have been made, there are still heavy informational demands. A decision also has to be implemented after being adopted, which usually implies a need for detailed information about the populace. The state's administrative system has the important executive role of putting adopted decisions into practice in order to achieve intended outcomes; and to manage its programs, services, and decisions, the production of accurate and timely information is indispensable. Citizens, too, have a need for information on the output level, as the application of rules by the public administration can result in biased and unjust outcomes (Frederickson & Ghere, 2013). They therefore need to know how the administration performs and how they are treated. Otherwise, they can hardly take legal action against the state – and fundamental rights of citizens might be harmed.

Altogether, the three levels of inputs, throughputs, and outputs differ in terms of the kind of information needed to make the


[Figure: flow diagram of the democratic political process. Inputs (public opinion and will formation; demands, disapproval, support) feed into the throughput stage (political decision-making: translating inputs into outputs, involving parties, political powers, interest groups etc.), which yields outputs (decisions, policies) implemented by the administrative system, producing outcomes (effects on citizens) that feed back into the inputs. A second band lists the information requirements at each level (dark grey: decision makers, light grey: citizens): preference distribution; knowledge, expertise; information about decision domain; political information; transparency of political decision-making; transparency of administrative decisions.]

Fig. 1. The cybernetic model of democratic decision-making and information needs.

democratic political system work. How does AI with its capacities to deal with cognitive tasks interfere in this informational environment and affect the working of democracy on all three levels? As the subsequent section will show, the consequences of an increased use of AI can be either detrimental or favorable for the working of liberal democracy. We will illustrate this by describing a positive and a negative scenario regarding the consequences of AI for democratic politics.

The formulation of scenarios usually serves as a tool to forecast possible developments in the face of conditions marked by uncertainty, complexity and rapid changes (Stimson, 2014, p. 228). They can help to anticipate possible challenges, e.g. for policy actors, and highlight how future developments could be shaped. In order to obtain a clear-cut picture, we formulate two contrary scenarios. They differ with regard to whether the use of AI exacerbates or mitigates the challenge of obtaining an adequate informational basis for guaranteeing an overall responsive and accountable democratic process. In order to determine what such a positive versus negative impact may look like, we synthesize arguments and insights from relevant contributions and link them to the framework developed above.

4. Different views on AI and democracy: two scenarios

4.1. Input dimension

4.1.1. Positive scenario: algorithmically enhanced navigation of political information
Taking part in the public process of opinion formation has a strong cognitive component. Citizens need to develop and use their political judgment. They have to decide which preferences and demands to articulate and determine how political options and choices relate to their views and preferences. This involves orienting themselves in an environment that is essentially one of information overload – which makes finding and compiling the relevant information a difficult task. As machine learning and AI applications are designed to deal with these sorts of cognitive problems, this is where they can – in principle – play an important supporting role.

The superior capacities of such tools to deal with specialized cognitive tasks can help to overcome what are essentially cognitive barriers to the working of a public sphere. In the same way that search engines have made the web searchable and are increasingly capable of providing direct answers to user questions, AI in the hands of citizens may help them navigate the public sphere. They might use these tools similar to the way in which AI already serves to deal with information overload in the area of law, where these systems process large amounts of text to synthesize relevant information (e.g. Boella et al., 2016). There are comparable developments that aim to empower civil society through equipping citizens with tools that markedly lower the burden of obtaining relevant information about political issues. They are a specific form of what is called civic tech (Savaget, Chiarini, & Evans, 2018; Ünver, 2018, p. 15), which refers to uses of information technology that enable or foster citizen engagement and collaboration, and enhance the relationship between governing and governed. An example is the project "Active Citizen", which is developed by the NGO Citizen Foundation in Iceland.³ It is a platform and an interface that uses AI in order to connect citizens and to provide them with notifications and information based on their needs and dispositions. Applications like these have the potential to engage citizens and reduce their information barriers regarding political affairs.

Moreover, it is conceivable that AI-based personal assistants partly take over the role of journalists and curators of political information – similar to but more sophisticated than e.g. vote advice applications, public platforms or initiatives by some quality newspapers that monitor

³ https://www.citizens.is/empower-citizens-with-ai/


the behavior of members of parliament and the government. Such AI-based solutions assisting citizens on the level of opinion and will formation are in line with Hilbert's (2012, p. 10) plea "to fight fire with fire: use our own technological devices (i.e. artificially intelligent computers) to sift through the vast amounts of information delivered to us". AI has the potential to compensate for the mismatch of humans' cognitive capacities and the existing amount of information concerning political affairs.

Furthermore, the ability to synthesize information on the input level may even be enhanced to the degree that applications of AI serve to compile relevant information also on the throughput and output level. Such information would then feed back into public will formation on the input level and could assist citizens in the daunting task of keeping track of political actors' performance (e.g. kept policy promises). Hence, if used in such a way, AI may actually enhance the possibilities to hold governments accountable for their decisions and thereby work in favor of the basic mechanisms that make democratic politics work.

4.1.2. Negative scenario: algorithmically damaged public opinion formation
In the negative scenario, algorithmic systems that gather, select or interpret information exacerbate informational barriers or even create new ones. In this constellation, AI performs an algorithmic sorting and filtering that is designed primarily to capture the attention of an audience and to engage users, and not primarily to convey an accurate and informative picture of politics (Carlson, 2018, pp. 1765–1766). This is based on the capacities of machine learning to process information about individuals' preferences and behaviors in order to achieve personalized messages and information offers that optimally resonate with individual predispositions (Yeung, 2017a). Citizens may then increasingly be kept within the confines of what has notoriously been termed filter bubbles (Pariser, 2011).

Thus, AI in the public sphere may influence individuals through shaping their information environments. In the aggregate, such systems operate as institutions that substantially intervene into citizens' opinion formation as they structure social constructions of reality for the larger public (Just & Latzer, 2017; Napoli, 2014). For instance, how millions of individuals obtain political information is filtered and sorted by search engines as well as social media and other platforms that can match presented information to user characteristics. Similarly, journalistic tasks are partly becoming automated and can to some degree be tailored to those at the receiving end (Carlson, 2015; Napoli, 2014). This personalization and accommodation of newsworthiness to personal preferences undermines the integrating and public character of news, which has traditionally represented the collectively important accounts of a society (Carlson, 2018, p. 1765). And even if journalists are still operating as gatekeepers, their role may also change if they, themselves, rely on information that has been tailor-made for them.

Algorithmic sorting furthermore makes it possible for third parties to purposefully supply users, e.g. of social network sites, with tailored misinformation or manipulative content deliberately designed to arouse affective reactions, e.g. with the aim to demobilize certain voter groups (e.g. Bastos & Mercea, 2019; Howard, Woolley, & Calo, 2018; Persily, 2017).

In sum, the public process of opinion and will formation in the negative scenario is characterized by citizens more effectively being lured by information offers that are personalized via machine learning tools and designed specifically to draw their attention – but that do not aim to provide a balanced picture of political matters. Hence, the adoption of AI in this constellation rather impoverishes than enriches the epistemic quality of the public discourse.

4.2. Throughput dimension

4.2.1. Positive scenario: more informed and accountable political decision-making
Democratic decision-making is a potentially complex and messy process, which has sometimes been described using the famous "garbage can" model (Cohen, March, & Olsen, 1972). There are massive information needs both on the side of political decision-makers as well as on the side of those monitoring and controlling their actions – which may be citizens themselves, various kinds of organizations advocating their interests or actors within the political system (e.g. opposition parties).

The information demand for decision-makers concerns the question what societal and political consequences their actions are likely to have. They will generally want to adopt political decisions that contribute to achieving established policy goals, such as economic growth and low crime, and to avoid policy failure, i.e. policies not bringing about the intended change. Moreover, decision-makers also pursue the political goal of making decisions that resonate with citizens' preferences, at least with those of their clientele, to garner electoral support.

A number of contributions suggest that the massively enhanced informational basis in the age of "Big Data" leads to better-informed decisions by policy makers (e.g. Janssen & Kuk, 2016; van der Voort, Klievink, Arnaboldi, & Meijer, 2019), because algorithmic systems are able to systematize and structure huge amounts of data. For instance, applications of machine learning can be used for simulations of policy action and for continuously sourcing information from citizens that can then be used as the basis of policy-making (Chen & Hsieh, 2014; Williamson, 2014) and help to produce more evidence-based decisions (Lepri, Oliver, Letouzé, Pentland, & Vinck, 2018).

In a business context, this advantage has already led to AI applications being adopted in the upper echelons of management (Mayer-Schoenberger & Ramge, 2019; McAfee & Brynjolfsson, 2017). They assist the management through specialized forms of information processing, e.g. for business analysis, and produce recommendations and decisions that are based on an immense information input while remaining unaffected by human psychological biases.

However, high-level political decision-makers in the cabinet and the
messages. The combination of publicly available voter data with con- administration operate under different conditions than their private
sumer and social media data and enhanced analytical capacities allow sector counterparts because their actions also follow a political logic
for generating detailed voter profiles, such that political actors develop that is not merely about solving problems but also about realizing
unprecedented possibilities of targeting citizens with messages (Franz, ideological goals and preserving power. This dimension of politics,
2013; Hersh, 2015). Being able to produce a more fine-grained seg- which lies at the heart of a democratic system, is thus arguably the least
mentation of citizens, they can craft and disseminate messages that amenable to AI. Nonetheless, political actors are in need of information
better match individual predispositions and are more likely to resonate. to make decisions, and to the extent that forms of AI prove to be su-
On the one hand, existing evidence suggests that the accuracy and perior tools for producing relevant knowledge, it is likely that they are
utility of these techniques is still rather limited (Endres & Kelly, 2018; increasingly taken up. Following the assessment by Höchtl et al. (2016,
Hersh, 2015). On the other hand, as they may become more effective, p. 162), a general and significant value of AI for political decision-
the unobtrusive and highly individualized targeting with political makers on the throughput level arguably lies in their capacity to
messages via AI could lead to a fragmentation of the electorate, less strengthen the feedback link from the output level. To the extent that
responsive and more exclusive politics, and an even less dialogical re- these systems allow for constant monitoring and analysis of policy
lationship between citizens and political actors (e.g. Franz, 2013; choices, political actors can more quickly and flexibly reevaluate pre-
Gorton, 2016; Jamieson, 2013). As more recent developments in poli- vious decisions in order to link policy decisions to public preferences.
tics have demonstrated, such algorithmically enhanced targeting of AI may, however, not only aid decision-makers but also foster ac-
specific groups has increasingly been taken up, in part for spreading countability to the extent that they furnish political actors and citizens

5
P.D. König and G. Wenzelburger Government Information Quarterly 37 (2020) 101489

with relevant information about the decision-making process. Similar to their role regarding the input dimension, civic tech tools may serve to gather, compile and synthesize accessible information in ways that increase transparency. As Savaget et al. (2018) argue with a view to civic tech projects in Brazil, AI can enable civil society to establish closer links to public officials and politicians and to increase citizens' influence on political processes. The tools that the authors describe are mainly designed to facilitate the task of holding decision-makers accountable, specifically in the area of finances. But it is easily conceivable how similar applications employed by civil society actors may serve to better monitor politicians, coordinate activities of civil society actors and support citizen participation in the political process (see also Chen & Hsieh, 2014, p. 5).

4.2.2. Negative scenario: paving the way towards technocratic decision-making
In the negative scenario, AI applications exacerbate existing challenges of monitoring, controlling and guaranteeing the accountability of political actors. This is the case if the use of such applications adds an additional layer of complexity to decision-making, so that citizens are less able to determine how political actors arrive at decisions. Also, decision-makers themselves might lack a deeper understanding of the tools on which they rely and thus be unaware of biases in their operations. In any case, increasing reliance on machine intelligence could contribute to a more technocratic mode of governing that remains more detached from citizens and is justified in the name of greater efficiency and effectiveness (Khanna, 2017). Given that individuals seem to suffer from an automation bias (Parasuraman & Manzey, 2010), i.e. tend to show overreliance on automated decision-making systems, such a tendency could undercut an important part of the democratic policy process – the political debate about decisions. Hence, whereas throughput is today characterized by the fact that diverse political actors are involved and have at least the right to question decisions (if not to veto them), increased reliance on AI may lead to an overconfidence in the “correctness” of a decision, which is then no longer questioned. It is well known from psychology that informational “anchors” structure subsequent information processing and decision-making. Drawing on algorithmically produced evidence may thus contract the space for debate over the “right” solution to a problem – and undermine the discursive nature of the political process in which different segments of society are represented.

Moreover, power asymmetries may arise if private actors use potent machine learning and AI applications that assist them in finding ways to better make their positions heard in political decision-making. While AI as a part of civic tech can be used for public purposes, as described above, this is by no means a necessity. If these technologies can be used to trace the behavior of political actors and to obtain informational advantages which matter for influencing political decision-making, there is no reason to believe that groups and organizations will not use them to further their particular interests.

The overall result in the negative scenario would thus be a heightened asymmetry between those governing and those governed, as the state – together with private actors – extends its resources for using information power in ways that clearly surpass the possibilities of citizens and that the political opposition may be unable to match and counter.

4.3. Output dimension

4.3.1. Positive scenario: more responsive, effective, and efficient public services
Expected positive consequences of algorithmic systems in a political context are most prominently discussed with regard to the output dimension of the political system. The promise of AI for democracy at this level can be subsumed under (A) a greater efficiency and effectiveness of the state, especially in public service provision, and (B) greater objectivity and fairness of decisions (e.g. Lepri et al., 2018; van der Voort et al., 2019).

In the positive scenario, exploiting the potential of algorithmic governance for output production forms part of a larger vision of e-government which aims to realize an algorithmically enhanced public service (Williamson, 2014, p. 292). It draws on the superior coordination and information processing capacities of digital technologies in order to anticipate user needs, to better target these services, and to achieve complex steering and coordination outcomes that humans could otherwise not attain (Chen & Hsieh, 2014; Dunleavy, 2016; Janssen & Kuk, 2016; Meijer & Bolívar, 2016).

The most sophisticated realizations of algorithmic systems and AI in the public sector amount to a sort of algorithmic governance and operate through continuously processing information and feedback about networked distributed entities to coordinate individuals' behaviors in a way that optimizes predefined objectives (Yeung, 2017a). This way, they can lead to a more efficient resource allocation, e.g. in logistics, traffic or energy use. Similarly, algorithmic governance can be used for regulatory purposes in the areas of policing, criminal justice, health care, and education, which may altogether combine into an entire ecosystem, e.g. in the form of smart cities that are governed in part by AI (Brauneis & Goodman, 2017).

As AI leads to more personalized solutions and services, this alone may already contribute to fairer outcomes in the sense that decisions are more attuned to individuals' dispositions, preferences and needs and reduce the probability of erroneous treatment (Coglianese & Lehr, 2018, pp. 32, 46). Moreover, AI shaping decision-making in the public sector may also contribute to fairness, because such systems are objective at least in the sense that they operate uninfluenced by affective states and the various biases which mark human decision-making (Lepri et al., 2018, p. 612). And although AI systems can acquire undesirable biases from training data, it is possible to assess and to specify in quantifying terms, and thus also to make transparent, which standard of fairness a system realizes (Kroll et al., 2017).

In sum, the promise of AI for an improved functioning of the state on the output level is based on the unprecedented possibilities of anticipating behavior, providing targeted, personalized services and coordinating social interactions that AI entails. And these increased administrative capacities are expected to yield not only higher efficiency, effectiveness, and fairness but also a greater responsiveness, as decisions and services are more individually tailored to citizens.

4.3.2. Negative scenario: heightened accountability problems in the public administration
The negative vision of a widespread adoption of AI on the output level is characterized by a loss of citizen autonomy and equality that results from a heightened opaqueness and lack of accountability. A number of contributions have noted that algorithmic decision-making involves decisions that remain hidden within a black box, making it hard to establish accountability (e.g. Ananny & Crawford, 2018; Pasquale, 2015; Zweig, Wenzelburger, & Krafft, 2018). Even if information about the system is provided, this does not mean that it is understandable to citizens – and partly not even to experts, mainly due to the dynamic and complex nature of some forms of machine learning. This can make the monitoring and scrutiny of their operations very difficult.

These problems of transparency and accountability are serious when considering the possible impacts of AI in the public sector. First, as these applications use massive amounts of fine-grained behavioral data about individuals to provide various forms of personalized treatment, they exacerbate information asymmetries (e.g. Tene & Polonetsky, 2013). Specifically, this data together with the analytical and predictive abilities of automated decision-making systems can produce a kind of knowledge that allows for anticipating and pre-empting individuals' future behaviors. Applications of AI may thus subtly intervene into citizens' decision situations to steer their perceptions and behaviors on a
large scale – which may amount to an interference with their decisional autonomy, as citizens are not necessarily aware of this influence (Lanzing, 2018; Mittelstadt et al., 2016; Yeung, 2017b).

Second, exerting this kind of power may go against the interests or even protected rights of those affected. It has been pointed out repeatedly that algorithmic decision-making systems and AI are never neutral but by necessity incorporate certain assumptions and values (e.g. Hildebrandt, 2016; Just & Latzer, 2017; Mittelstadt et al., 2016; Yeung, 2017a). It is therefore possible that they embody assumptions that were not explicitly considered as part of the design. Unintended functioning of the system may also result from biases that are acquired from patterns in the data on which the system has been trained (Barocas & Selbst, 2016; Lepri et al., 2018). In any case, the bias that manifests in automated decision-making can lead to discrimination against protected groups (Barocas & Selbst, 2016, pp. 674–675; Mittelstadt et al., 2016, pp. 8–9). This risk of discrimination, together with a bad quality of decision-making due to wrong classifications, can have a profound impact on individuals' welfare, e.g. with regard to sentencing in the area of criminal justice (Berk, Heidari, Jabbari, Kearns, & Roth, 2018) or the selection of unemployed citizens for training or job offers (Niklas, Sztandar-Sztanderska, & Szymielewicz, 2015).

In sum, in the negative scenario, AI in the public sector may systematically work against the interests and even fundamental rights of the individuals who are affected by its operations and decisions. Moreover, the affected individuals have little information and means to make sure that this does not occur. Citizens may not be aware e.g. of an overall systematic discriminating impact, and they can hardly scrutinize these systems. In this negative scenario, AI could even deliver satisfactory results and increase effectiveness and efficiency, but still adversely impact citizens' decisional autonomy. Indeed, it is precisely when citizens are individually satisfied with these outputs that they are less likely to question or contest that kind of governance even though it is unaccountable. Especially given the aura of objectivity that is often conferred on such information systems (Floridi, 2012; Morozov, 2014), algorithmic governance may become reified and seen as natural. The key problem for democracy in this scenario is thus that algorithmic governance leads to a state of heteronomy, in which obedient citizens are more or less satisfied and get what they want, but according to terms not defined and authorized by them.

Table 1
Overview contrasting the positive and the negative scenario.

Input
  Negative scenario:
  • Fragmented public, in which existing preferences are largely reinforced
  • Fake news, manipulation and subtle influence through microtargeting practices
  • AI influence on the selection process within the media system
  Positive scenario:
  • Compilation of relevant information about citizen demands, political offers and outputs
  • Better ability to deal with information overload

Throughput
  Negative scenario:
  • Power asymmetries due to information advantages of decision-makers and particular interests
  • Heightened opaqueness of political decision-making
  • Erosion of political debate due to automation bias and technocratic decision-making
  Positive scenario:
  • Better informed and more evidence-based decision-making
  • Better ability to monitor and scrutinize political decision-making

Output
  Negative scenario:
  • Discriminating impact of administrative decision-making
  • Intransparent and unaccountable operations of the administration
  Positive scenario:
  • More efficient, effective and fair administrative decision-making and provision of public services

4.4. Contextualizing the two scenarios in real-world democratic politics
The possibilities to process and act on information that come with the increased use of AI affect all stages of the democratic process, and they concern both citizens and decision-makers. The adoption of AI applications, as the summary in Table 1 illustrates, can either improve the informational basis of the political process and strengthen responsiveness and accountability; or it can severely weaken them. When thinking about what can make the difference between these two scenarios, it is important to note that despite unprecedented technological possibilities due to AI, there is also a discernible continuity in the tendencies summed up in Table 1. The negative scenario in particular points to problems and challenges that are not new and that have existed well before the digital era, but which are exacerbated by the adoption of AI in a political context.

This is why an evaluation of the chances and risks of AI for liberal democracy has to be careful in choosing the point of reference. It is sensible to start from an honest assessment of existing deficiencies of democracy while remaining open to the possibility that AI may overcome some of these problems but could also intensify them or add new ones. A suitable point of reference is thus the status quo of democratic decision-making practice and not some ideal-typical notion of the democratic process that has never been realized. In the remainder, we therefore discuss the risks and chances of AI while taking into account long-standing problems in existing liberal democracies when it comes to achieving responsiveness and accountability.

Starting with the input level, the democratic ideal of an enlightened understanding of citizens (Dahl, 2000) deliberating and freely forming their opinions in a public sphere has never been achieved. Manipulation and deception have been a constant threat, and a provision of political information that is geared towards capturing attention and satisfying emotional needs has become a core component of the larger public sphere. The rise of private media corporations and a trend of commercialization mean that recipients are primarily addressed as consumers of content and less as citizens with an interest in developing political judgment (Hjarvard, 2013). The adoption of AI in the public sphere has so far continued this trend. As profit-oriented businesses, media organizations face the imperative to furnish users with information and messages that are primarily captivating. This also holds true for online platforms and services whose success depends on the degree to which they are designed as a product that creates intense user engagement and builds a strong habit of use (Eyal & Hoover, 2014).

Hence, on the one hand, certain applications of AI may offer technical solutions that help citizens as individual users of such solutions to navigate through the information overload and to get an idea about what is relevant information about political matters. On the other hand, the risks of AI on the input side of democracy loom large, because they structurally alter the very foundations of public opinion formation. First, applications of AI may result in fragmented publics, as they make it easier and more convenient for citizens to obtain information that reinforces worldviews based on personalizing algorithmic systems. While it is true that such selection biases have always existed (Smith, Fabrigar, & Norris, 2008), it still makes a difference whether a news consumer has the choice to pick only information that supports her worldview or whether this process is externalized to AI; and this is particularly problematic if the information environment can be strategically influenced by commercial or political interests. Second, this influence of AI on public opinion formation also concerns alternative
ways of information. Traditional mass media systems, which are often cited as being one of the main strongholds against filter bubbles (Zuiderveen Borgesius et al., 2016), are equally affected by AI when journalists, who are gatekeepers in selecting news, are themselves subject to algorithmically transformed information environments and rely on tools such as automatic fact-checking systems.

In sum, there are important mechanisms at work on the input level that are not of a technological sort and that go against information flows which would foster an informed public. Even better access to information on politicians' behavior in the decision-making process and on decision outputs will hardly feed back into public will formation if the above-mentioned mechanisms and filters predominate. This altogether presents a strong barrier to realizing accountability, because citizens can hardly assess how political actors perform, e.g. in terms of their electoral promises. The potential positive effects that AI may have in terms of structuring information are thus far from outweighing the risks under current conditions.

On the throughput level, the process that ultimately leads to political decisions is generally a complex one involving many actors and various forms of influence and deal-making. While this process in principle assures the participation of different interests in the policy process, the real world of politics is less ideal, and the process is plagued by strong influences stemming from particular interests and inequality (see e.g. Bartels, 2016; Giger, Rosset, & Bernauer, 2012). Given this rather critical assessment of the real world of politics as being detached from the problems of ordinary citizens, AI systems may actually help to remedy such biases in political decision-making. If they make policy decisions more evidence-based and allow citizens to throw more light on decision processes, it will be harder for special interests to substantially influence the outcome. The potential for less biased policy making is therefore considerable.

However, whether the adoption of AI in decision-making processes actually has such effects remains an open question. To date, we know almost nothing about whether and how algorithms are implemented in high-level political decision-making processes, and there is a risk that those forces which have captured the policy process according to scholars like Bartels (2016) and Gilens (2012) will also secure that algorithmic systems produce outputs which they prefer.

Finally, on the output level, bad and unfair decision-making that goes unnoticed or can hardly be scrutinized is already a problem of bureaucracies staffed with humans (Jarvis, 2014). Indeed, administrative decisions in part must remain a black box to those who are subjected to them. Complete transparency would conflict with practical limitations and would also constitute, following Coglianese and Lehr (2018), a disproportionate standard, given that human decision makers must remain opaque in that one cannot peek into their heads. Thus, issues of opaqueness and possibly unfair decisions are not new and have generally marked the functioning of public administrations, but the adoption of AI has led to a heightened concern for questions of transparency and fairness.

So far, there are strong signs that these challenges on the output level can be managed. There are already strong provisions in the public sector that aim to ensure quality, fairness and accountability (Coglianese & Lehr, 2018). As these goals can be undermined in novel ways due to the use of AI, there have been efforts by law makers, practitioners and a quickly growing body of research to devise ways of identifying and dealing with resulting problems and tensions (see e.g. Lepri et al., 2018; Veale & Brass, 2019; Veale, Van Kleek, & Binns, 2018). Given a traditionally strong degree of regulation on the level of public administration and efforts to exert scrutiny, it seems probable that applications of AI will only be adopted and prevail on the output level if they lead to improvements while also adhering to existing basic standards and regulatory requirements.

5. Discussion: what makes the difference between the positive and negative scenario?

The preceding discussion suggests that the challenges to responsiveness and accountability introduced by AI should not be seen against the backdrop of some idealized vision of a pre-algorithmic democracy. Possible negative effects of AI on responsiveness and accountability largely stem from exacerbating already-existing deficiencies of democratic politics in a setting characterized by an incomplete realization of the ideal of a free society. They thus entail old problems with a new face, with a technological component that is indeed novel and for which existing political systems are hardly prepared. Yet while this threat to a free society clearly has technological roots, it is important to note that what distinguishes the negative from the positive scenario above is not only or not even primarily a question of technology – but a social and political one. Essentially the same set of technological capacities can sustain both those scenarios. Steering clear of the negative scenario is therefore not simply about properly designing AI systems. Rather, it is crucial to consider the social relations in which they are embedded.

This does not mean that questions of AI design do not matter. The ethical and design principles for applications of artificial intelligence that have been discussed in various contributions (see e.g. Ananny, 2016; Floridi & Cowls, 2019) are clearly important to guarantee that users remain in control over these devices.

However, our perspective rooted in the theory of democratic policy-making points to the importance of a larger relational dimension that is key to determining which purposes applications of AI are serving. This dimension goes beyond the question of which features these systems should have and emphasizes their relationship to the people they are ultimately supposed to serve, i.e. for whom they take decisions. As the literature on accountability (e.g. Warren, 2014) emphasizes, organizing such a relationship and the delegation of decision-making authority requires adequate governance mechanisms. Corresponding provisions generally stipulate that those who wield public power can be monitored, must give answers for their actions (answerability), and can be sanctioned – provisions that are all designed to orient actors towards public accountability. Such accountability mechanisms equally have to be brought to the use of AI in a public context if it is to further citizens' ability to assess responsiveness and hold politicians accountable while reducing the unauthorized or illegitimate influence of particular interests.

This implies establishing instruments for the effective control of these systems by the larger public. Various instruments have been proposed that can be used for that purpose (for an overview, see Koene et al., 2019). These differ in their intensity of regulatory intrusiveness and may thus be appropriate for different applications. Institutionalized auditing for scrutinizing AI systems forms a relatively strong provision (Lepri et al., 2018). Such audits may comprise an in-depth look into the systems, their construction, and operations, or involve less intrusive special cryptographic methods for checking whether AI systems truly follow a specific process and optimize certain criteria (Kroll et al., 2017). A lighter approach consists in black box analyses that assess merely the observable performance by examining which inputs lead to which outputs (Diakopoulos, 2014). Transparency could furthermore be achieved through building on advances in explainable AI (Holzinger et al., 2014) or with general reporting requirements about the logic and the impact of a system. In any case, adopting such provisions firmly as a part of democratic governance and accountability mechanisms allows citizens to evaluate how a system operates and makes decisions for them and therefore to control and alter the delegation relationship between AI and themselves.

Hence, when political actors and the state adopt AI, regulatory provisions need to ensure not only high quality of AI allowing for
better-informed decision-making, but also a use of these systems that is as publicly transparent and accountable as possible. This is required to limit the possibilities of special interests to work towards the implementation of AI in ways which favor their goals. Similarly, when citizens are the users of AI as tools to select political information, e.g. about the performance of a government, they must be able to find out what criteria these systems optimize and how they filter and process information for them. There thus need to be procedures that empower citizens to corroborate e.g. why they get the political information they get and to exert control over information selection mechanisms. More specifically with regard to the throughput and output level, AI that assists citizens would need to operate akin to the role of an ombudsman, an independent public advocate who monitors and investigates the activities of political authorities. Such applications would have to be designed and embedded accordingly and allow citizens to always make sure that they realize such a public value.

Overall, while AI can heavily intervene in the informational basis of democratic regimes, it is the principles of liberal democracy itself that offer important guidelines about how the relationship between citizens and these systems can be organized. Ultimately, what makes the difference between the two scenarios formulated above is whether the use of AI involves a form of public power which is not checked, scrutinized, and controlled. Thus, although liberal democracy is coming under pressure, it is not overcome or becoming outdated. Quite the opposite – as “one of the most successful cybernetic systems avant la lettre” (Hildebrandt, 2016, p. 23; emphasis in original) that institutionalizes contestability and revisability, liberal democracy offers fundamental solutions to the long-standing problem of organizing politics in a way that holds public power in check and operates in the service of those affected by it.

Embedding AI in corresponding governance mechanisms can contribute to an improved informational basis of democratic politics that, at the same time, strengthens overall responsiveness and accountability. An already existing lack of responsiveness and accountability in democratic regimes may work against this, but the discussion above suggests that it can be achieved on the throughput and output level by institutionalizing suitable governance mechanisms. On the input level, citizens could, for instance, be given a choice between different filter logics (Helberger, Pierson, & Poell, 2018). Citizens could then draw on different information offers while they would know what they get and why they get it. In any case, however, there is a need for algorithmic literacy, since all citizens – including their political representatives – need a basic understanding of how algorithmic systems process and arrange information and make selection decisions for them. This altogether means that the input level is fraught with large challenges for liberal democracy in the digital era. At the same time, to the degree that citizens are enabled to make competent use of adequately designed and embedded AI as their political information assistants, they might enter into a more dialogical relationship with their political information sources than ever before.

Funding

The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research has been conducted within the project “Deciding about, by, and together with algorithmic decision making systems”, funded by the Volkswagen Foundation, Germany.

References

Ananny, M. (2016). Toward an ethics of algorithms: Convening, observation, probability, and timeliness. Science, Technology & Human Values, 41(1), 93–117. https://doi.org/10.1177/0162243915606523.
Ananny, M., & Crawford, K. (2018). Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3), 973–989. https://doi.org/10.1177/1461444816676645.
Barocas, S., & Selbst, A. D. (2016). Big data's disparate impact. California Law Review, 104, 671–732.
Bartels, L. M. (2016). Unequal democracy: The political economy of the new gilded age (2nd ed.). Princeton: Princeton University Press.
Bartolini, S. (1999). Collusion, competition and democracy: Part I. Journal of Theoretical Politics, 11(4), 435–470. https://doi.org/10.1177/0951692899011004001.
Bastos, M. T., & Mercea, D. (2019). The Brexit botnet and user-generated hyperpartisan news. Social Science Computer Review, 37(1), 38–54. https://doi.org/10.1177/0894439317734157.
Berk, R., Heidari, H., Jabbari, S., Kearns, M., & Roth, A. (2018). Fairness in criminal justice risk assessments: The state of the art. Sociological Methods & Research, 1–42. https://doi.org/10.1177/0049124118782533.
building on existing regulations in liberal-democratic regimes in ex- Binns, R. (2017). Algorithmic accountability and public reason. Philosophy & Technology..
tending accountability mechanisms to comprise the use of AI. https://doi.org/10.1007/s13347-017-0263-5.
Boella, G., Caro, L. D., Humphreys, L., Robaldo, L., Rossi, P., & van der Torre, L. (2016).
The challenges on the input level appear to be much harder to re- Eunomos, a legal document and knowledge management system for the web to
solve. This level forms an integral part of the entire political system and provide relevant, reliable and up-to-date information on the law. Artificial Intelligence
and Law, 24(3), 245–283. https://doi.org/10.1007/s10506-016-9184-3.
is necessary for making the overall democratic process work. However, Brauneis, R., & Goodman, E. P. (2017). Algorithmic transparency for the smart city. SSRN
it comprises private actors whose behaviors are not similarly regulated Electronic Journal. https://doi.org/10.2139/ssrn.3012499.
and constrained as roles and institutions at the heart of the political Carlson, M. (2015). The robotic reporter: Automated journalism and the redefinition of
labor, compositional forms, and journalistic authority. Digital Journalism, 3(3),
system. First, private media organizations and online platforms, in their 416–431. https://doi.org/10.1080/21670811.2014.976412.
competition over capturing the attention of a large audience, have a Carlson, M. (2018). Automating judgment? Algorithmic judgment, news knowledge, and
journalistic professionalism. New Media & Society, 20(5), 1755–1772. https://doi.
strong incentive to cater to citizens as mere consumers of content.
org/10.1177/1461444817706684.
Second, while citizens may well be enabled to easily obtain relevant CEPEJ (2018). European ethical charter on the use of artificial intelligence in judicial systems
information about political affairs and politicians' performance, they and their environment. Brussels: European Commission for the Efficiency of Justice.
Chen, Y.-C., & Hsieh, T.-C. (2014). Big data for digital government: Opportunities,
will not necessarily make use of these possibilities. Instead, they may challenges, and strategies. International Journal of Public Administration in the Digital
simply prefer to seek political information that is in line with their Age, 1(1), 1–14. https://doi.org/10.4018/ijpada.2014010101.
views or that they find more spectacular and entertaining. Coglianese, C., & Lehr, D. (2018). Transparency and algorithmic governance. Pennsylvania:
University of Pennsylvania.
Overall, there are no strong rules that would structure and guide the Cohen, M. D., March, J. G., & Olsen, J. P. (1972). A garbage can model of organizational
process of public opinion formation towards strengthening public au- choice. Administrative Science Quarterly, 17(1), 1–25. https://doi.org/10.2307/
2392088.
tonomy, responsiveness and accountability. A strong regulation of Dahl, R. A. (1998). Polyarchy: participation and opposition (26. Print). New Haven: Yale
public opinion formation is also dangerous as it can quickly lead to Univ. Press.
censorship. Nonetheless, there already exist models for dealing with Dahl, R. A. (2000). On democracy. New Haven: Yale Univ. Press.
Diakopoulos, N. (2014). Algorithmic accountability reporting: On the investigation of black
these issues that can be adapted to the use of AI. Certain information boxes. Tow center for digital journalism publications. https://doi.org/10.7916/
offers could, for instance, be regulated in a way that they explicitly d8zk5tw2.
safeguard balance and pluralism in the way they compile information. Downs, A. (1957). An economic theory of democracy. New York: Harper.
Dunleavy, P. (2016). ‘Big data’ and policy learning. In G. Stoker, & M. Evans (Eds.).
Besides, it is also possible to devise inclusive ownership structures, for Evidence-based policy making in the social sciences: Methods that matter. Bristol Chicago,
instance through organizing online platforms based on cooperatives or IL: Policy Press.
Easton, D. (1965). A systems analysis of political life. New York: Wiley.
frameworks of shared responsibility that guarantee exposure to diverse Endres, K., & Kelly, K. J. (2018). Does microtargeting matter? Campaign contact
and balanced content while fostering awareness and choice of different

9
strategies and young voters. Journal of Elections, Public Opinion and Parties, 28(1), 1–18. https://doi.org/10.1080/17457289.2017.1378222.
Eyal, N., & Hoover, R. (2014). Hooked: How to build habit-forming products. London: Portfolio Penguin.
Floridi, L. (2012). Big data and their epistemological challenge. Philosophy & Technology, 25(4), 435–437. https://doi.org/10.1007/s13347-012-0093-4.
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review. https://doi.org/10.1162/99608f92.8cd550d1.
Franz, M. (2013). Targeting campaign messages: Good for campaigns but bad for America? In T. N. Ridout (Ed.), New directions in media and politics (pp. 113–131). New York: Routledge, Taylor & Francis Group.
Frederickson, H. G., & Ghere, R. K. (2013). Ethics in public management. https://doi.org/10.4324/9781315704517.
Giger, N., Rosset, J., & Bernauer, J. (2012). The poor political representation of the poor in comparative perspective. Representation, 48(1), 47–61. https://doi.org/10.1080/00344893.2012.653238.
Gilens, M. (2012). Affluence and influence: Economic inequality and political power in America. Princeton: Princeton University Press.
Gorton, W. A. (2016). Manipulating citizens: How political campaigns' use of behavioral social science harms democracy. New Political Science, 38(1), 61–80. https://doi.org/10.1080/07393148.2015.1125119.
Helberger, N., Pierson, J., & Poell, T. (2018). Governing online platforms: From contested to cooperative responsibility. The Information Society, 34(1), 1–14. https://doi.org/10.1080/01972243.2017.1391913.
Hersh, E. (2015). Hacking the electorate: How campaigns perceive voters. New York, NY: Cambridge University Press.
Hilbert, M. (2012). How much information is there in the "information society"? Significance, 9(4), 8–12. https://doi.org/10.1111/j.1740-9713.2012.00584.x.
Hildebrandt, M. (2016). Law as information in the era of data-driven agency. The Modern Law Review, 79(1), 1–30. https://doi.org/10.1111/1468-2230.12165.
Hjarvard, S. (2013). The mediatization of culture and society. New York: Routledge. https://doi.org/10.4324/9780203155363.
Höchtl, J., Parycek, P., & Schöllhammer, R. (2016). Big data in the policy cycle: Policy decision making in the digital era. Journal of Organizational Computing and Electronic Commerce, 26(1–2), 147–169. https://doi.org/10.1080/10919392.2015.1125187.
Holzinger, A., Kieseberg, P., Weippl, E., & Tjoa, A. M. (2018). Current advances, trends and challenges of machine learning and knowledge extraction: From machine learning to explainable AI. In A. Holzinger, P. Kieseberg, A. M. Tjoa, & E. Weippl (Eds.), Machine learning and knowledge extraction (pp. 1–8). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-99740-7_1.
Howard, P. N., Woolley, S., & Calo, R. (2018). Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration. Journal of Information Technology & Politics, 15(2), 81–93. https://doi.org/10.1080/19331681.2018.1448735.
Jamieson, K. H. (2013). Messages, micro-targeting, and new media technologies. The Forum, 11(3). https://doi.org/10.1515/for-2013-0052.
Janssen, M., & Kuk, G. (2016). The challenges and limits of big data algorithms in technocratic governance. Government Information Quarterly, 33(3), 371–377. https://doi.org/10.1016/j.giq.2016.08.011.
Jarvis, M. D. (2014). The black box of bureaucracy: Interrogating accountability in the public service. Australian Journal of Public Administration, 73(4), 450–466. https://doi.org/10.1111/1467-8500.12109.
Jones, B. D., & Baumgartner, F. R. (2005). The politics of attention: How government prioritizes problems. Chicago: University of Chicago Press.
Just, N., & Latzer, M. (2017). Governance by algorithms: Reality construction by algorithmic selection on the internet. Media, Culture & Society, 39(2), 238–258. https://doi.org/10.1177/0163443716643157.
Khanna, P. (2017). Technocracy in America: Rise of the info-state.
Kingdon, J. W. (2003). Agendas, alternatives, and public policies (2nd ed.). New York: Longman.
Klingemann, H.-D., Hofferbert, R. I., & Budge, I. (1994). Parties, policies, and democracy. Boulder: Westview Press.
Koene, A., Clifton, C. W., Hatada, Y., Webb, H., Patel, M., Machado, C., LaViolette, J., Richardson, R., & Reisman, D. (2019). A governance framework for algorithmic accountability and transparency. Brussels: European Parliamentary Research Service.
Kroll, J. A., Huey, J., Barocas, S., Felten, E. W., Reidenberg, J. R., Robinson, D. G., & Yu, H. (2017). Accountable algorithms. University of Pennsylvania Law Review, 165, 633–705.
Lanzing, M. (2018). "Strongly recommended": Revisiting decisional privacy to judge hypernudging in self-tracking technologies. Philosophy & Technology. https://doi.org/10.1007/s13347-018-0316-4.
Lepri, B., Oliver, N., Letouzé, E., Pentland, A., & Vinck, P. (2018). Fair, transparent, and accountable algorithmic decision-making processes: The premise, the proposed solutions, and the open challenges. Philosophy & Technology, 31(4), 611–627. https://doi.org/10.1007/s13347-017-0279-x.
Lindblom, C. E. (1965). The intelligence of democracy: Decision making through mutual adjustment. New York: Free Press.
Mayer-Schoenberger, V., & Ramge, T. (2019). Reinventing capitalism in the age of big data. London: John Murray Publishers.
McAfee, A., & Brynjolfsson, E. (2017). Machine, platform, crowd: Harnessing our digital future (1st ed.). New York: W.W. Norton & Company.
Meijer, A., & Bolívar, M. P. R. (2016). Governing the smart city: A review of the literature on smart urban governance. International Review of Administrative Sciences, 82(2), 392–408. https://doi.org/10.1177/0020852314564308.
Merkel, W. (2004). Embedded and defective democracies. Democratization, 11(5), 33–58. https://doi.org/10.1080/13510340412331304598.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 1–21. https://doi.org/10.1177/2053951716679679.
Morozov, E. (2014). To save everything, click here: Technology, solutionism and the urge to fix problems that don't exist. London: Penguin Books.
Napoli, P. M. (2014). Automated media: An institutional theory perspective on algorithmic media production and consumption. Communication Theory, 24(3), 340–360. https://doi.org/10.1111/comt.12039.
Nemitz, P. (2018). Constitutional democracy and technology in the age of artificial intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 1–14. https://doi.org/10.1098/rsta.2018.0089.
Niklas, J., Sztandar-Sztanderska, K., & Szymielewicz, K. (2015). Profiling the unemployed in Poland: Social and political implications of algorithmic decision making. Warsaw: Fundacja Panoptykon.
Parasuraman, R., & Manzey, D. H. (2010). Complacency and bias in human use of automation: An attentional integration. Human Factors: The Journal of the Human Factors and Ergonomics Society, 52(3), 381–410. https://doi.org/10.1177/0018720810376055.
Pariser, E. (2011). The filter bubble: What the internet is hiding from you. New York, NY: Penguin Press.
Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Cambridge, MA; London: Harvard University Press.
Persily, N. (2017). Can democracy survive the internet? Journal of Democracy, 28(2), 63–76. https://doi.org/10.1353/jod.2017.0025.
Savaget, P., Chiarini, T., & Evans, S. (2018). Empowering political participation through artificial intelligence. Science and Public Policy. https://doi.org/10.1093/scipol/scy064.
Scharpf, F. W. (1999). Governing in Europe: Effective and democratic? Oxford: Oxford University Press.
Smith, S. M., Fabrigar, L. R., & Norris, M. E. (2008). Reflecting on six decades of selective exposure research: Progress, challenges, and opportunities. Social and Personality Psychology Compass, 2(1), 464–493. https://doi.org/10.1111/j.1751-9004.2007.00060.x.
Soroka, S. N., & Wlezien, C. (2009). Degrees of democracy: Politics, public opinion, and policy. https://doi.org/10.1017/CBO9780511804908.
Stimson, R. J. (Ed.). (2014). Handbook of research methods and applications in spatially integrated social science. Cheltenham, UK; Northampton, MA: Edward Elgar.
Tene, O., & Polonetsky, J. (2013). Big data for all: Privacy and user control in the age of analytics. Northwestern Journal of Technology and Intellectual Property, 11(5), 240–273.
Ünver, H. A. (2018). Artificial intelligence, authoritarianism and the future of political systems. Cyber Governance and Digital Democracy, 9, 1–20.
Urbinati, N. (2000). Representation as advocacy: A study of democratic deliberation. Political Theory, 28(6), 758–786. https://doi.org/10.1177/0090591700028006003.
Urbinati, N. (2014). Democracy disfigured: Opinion, truth, and the people. Cambridge, MA: Harvard University Press.
Veale, M., & Brass, I. (2019). Administration by algorithm? Public management meets public sector machine learning. In K. Yeung, & M. Lodge (Eds.), Algorithmic regulation. Oxford: Oxford University Press.
Veale, M., Van Kleek, M., & Binns, R. (2018). Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI '18) (pp. 1–14). https://doi.org/10.1145/3173574.3174014.
Vis, B. (2019). Heuristics and political elites' judgment and decision-making. Political Studies Review, 17(1), 41–52. https://doi.org/10.1177/1478929917750311.
van der Voort, H. G., Klievink, A. J., Arnaboldi, M., & Meijer, A. J. (2019). Rationality and politics of algorithms. Will the promise of big data survive the dynamics of public decision making? Government Information Quarterly, 36(1), 27–38. https://doi.org/10.1016/j.giq.2018.10.011.
de Vries, K. (2013). Privacy, due process and the computational turn: A parable and a first analysis. In M. Hildebrandt, & K. de Vries (Eds.), Privacy, due process and the computational turn: The philosophy of law meets the philosophy of technology (pp. 9–38). Milton Park: Routledge.
Warren, M. E. (2014). Accountability and democracy. In M. Bovens, R. E. Goodin, & T. Schillemans (Eds.), The Oxford handbook of public accountability (pp. 39–54). Oxford: Oxford University Press.
Williamson, B. (2014). Knowing public services: Cross-sector intermediaries and algorithmic governance in public sector reform. Public Policy and Administration, 29(4), 292–312. https://doi.org/10.1177/0952076714529139.
Winfield, A. F. T., & Jirotka, M. (2018). Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 1–13. https://doi.org/10.1098/rsta.2018.0085.
Wlezien, C. (1995). The public as thermostat: Dynamics of preferences for spending. American Journal of Political Science, 39(4), 981. https://doi.org/10.2307/2111666.
Yeung, K. (2017a). Algorithmic regulation: A critical interrogation. Regulation & Governance, (online first), 1–19. https://doi.org/10.1111/rego.12158.
Yeung, K. (2017b). 'Hypernudge': Big data as a mode of regulation by design. Information, Communication & Society, 20(1), 118–136. https://doi.org/10.1080/1369118X.2016.1186713.
Zuiderveen Borgesius, F. J., Trilling, D., Möller, J., Bodó, B., De Vreese, C. H., & Helberger,
N. (2016). Should we worry about filter bubbles? Internet Policy Review, 5(1), 1–16. https://doi.org/10.14763/2016.1.401.
Zweig, K. A., Wenzelburger, G., & Krafft, T. D. (2018). On chances and risks of security related algorithmic decision making systems. European Journal for Security Research, 3(2), 181–203. https://doi.org/10.1007/s41125-018-0031-2.

Pascal D. König is a researcher at the Chair of Policy Analysis and Political Economy at the TU Kaiserslautern (Germany). He works in the project "Deciding about, by, and together with algorithmic decision-making systems", funded by the Volkswagen Foundation. His research mainly deals with political communication, party competition, and policies regarding digital technologies. Recent work has appeared in Journal of European Public Policy, Party Politics, Review of Policy Research, and Big Data & Society.

Georg Wenzelburger is Professor of Political Science and holds the Chair of Policy Analysis and Political Economy at the TU Kaiserslautern (Germany). His research interests cover a wide range of policy areas such as the welfare state, fiscal and economic policies, law and order, and digital policies. Recent work has been published in the British Journal of Political Science (on welfare state reform), the European Journal of Political Research (on law and order), and the European Journal for Security Research (on digital politics).