SSRN 4830265

Generative AI, Work and Risks in Cultural and Creative industries

Emmanuelle Walkowiak, RMIT University (corresponding author)


Jason Potts, RMIT University

First version 16 May 2024

Second version 15 November 2024

Abstract:
The cultural and creative industries are often a bellwether for broader economic
transformations. We argue that as the performance advantages of AI are driving adoption into
the economy at the level of individual tasks (which we call the capability effect), a secondary
effect is occurring with the unbundling of responsibility for those tasks, which are now
separated across human and (newly creative) machines. We identify this as a new class of risk
related to privacy, cybersecurity, breach of professional standards, bias, misinformation,
accountability, harm and intellectual property. These risks have always been present, but prior
to generative AI adoption, they were tightly bundled and managed in specific people, jobs and
organisations. Our claim is that AI adoption in work has two effects: the first on capabilities
(i.e. task-level productivity) and the second on the locus of responsibility (compensated or
otherwise). AI disruption is occurring in both dimensions, but analytical and policy attention
has focused on capability disruption and not on responsibility disruption. This paper seeks to
correct that oversight by developing an empirical measure of the separability of task-level risk
due to AI-driven job transformation. Using recent Australian skill classification and innovative
synthetic data methodology, we map the exposure to these eight risks and generative AI across
126 cultural and creative occupations. We identify five job transformation zones highlighting
the intensity, timing, and nature of risks that are unbundling responsibility from human skills.
We suggest these might be focal points for new cultural and creative industries policy.

Keywords: Generative AI, AI risk, Synthetic Data, Technology exposure


JEL Codes: J24, O33, Z19

1. Introduction
Artificial intelligence technology has been developing since the 1950s. However, a series of
recent breakthroughs, particularly in deep learning and the transformer architecture (LeCun et
al., 2015) and in the implications of the scaling hypothesis on resources given to training and
compute power, have driven an explosion of innovation in generative AI into the global
economy (Agrawal et al., 2022, 2023; Mollick, 2024; Mollick and Euchner, 2023). Many
economists, and primarily labour economists, have sought to theorise and quantify the
unfolding mass disruption to jobs and work due to the arrival of this powerful new general-
purpose technology (Bresnahan, 2024; Brynjolfsson et al., 2023; Eloundou et al., 2024;
Gmyrek et al., 2023; Goldfarb et al., 2023; Noy and Zhang, 2023; Tolan et al., 2021; Webb,
2019). The cultural and creative industries are a particularly interesting front-line case of the
adoption of generative AI, as trained models available at low cost become increasingly
performant at producing text, audio and images that, on some margins, can usefully substitute
for human-made content across a range of cultural and creative sectors (Amankwah-Amoah et
al., 2024; Anantrasirichai and Bull, 2022; Bordàs Vives, 2023; Peukert, 2019). Because
generative AI takes human creative culture as both its input (the training set) and output (next
token predictions, in the form of creative product), the disruption brought by the technology
has swiftly been felt in cultural and creative industries, as seen during the 2023-2024 strikes in
Hollywood.

However, in the creative industries, as with all sectors that have urgently sought to understand
better the burgeoning economic disruption of generative AI to jobs and work, the predominant
focus has been on how these new machine capabilities variously substitute or complement
existing human skills and capabilities. Labour economists, in particular, have developed an
empirical methodology of task-based decomposition (Acemoglu and Autor, 2011; Autor et al.,
2003) to analyse and estimate the specific disruptive impact of AI-exposed industries and jobs
(Brynjolfsson et al., 2018; Caunedo et al., 2023; Eloundou et al., 2024; Felten et al., 2023;
OECD, 2021). The results of these studies provide estimates of which types of jobs might
experience the consequences of AI-driven capital-labour substitution, variously resulting in job
losses where machines can directly substitute at lower cost for human production or in
improved productivity (and therefore wages or rents) where machines can augment and
complement human skills. This impacts economic expectations concerning creative industries
jobs and careers, as well as investments in education and training, and so also affects public
policy in labour markets, education and social welfare, as well as directly affecting cultural
industries budgets and priorities. The arrival of AI as a new general-purpose technology into
creative industries is also disrupting organisational strategy, industrial organisation and
business models (Cremer et al., 2023). As a shorthand, let us call all of this capability
disruption.

In this paper, we theorise and empirically estimate a new form of disruption that AI innovation
is bringing into the economy that we will call responsibility disruption (but will refer to
generically as risk). While capability disruption is particularly salient in the cultural and
creative industries, owing to the primary innovation of artificial intelligence enabling a form
of machine creativity, we argue that responsibility disruption is an equally important analytic
lens on the pathways of AI innovation in creative industries. This paper will define this concept
and, using a novel task-based decomposition methodology, will empirically estimate the overall
shape of how AI reveals or creates new risks as a consequence of unbundling capability and
responsibility in creative production. Our central claim is that the way this risk is realised and
handled will fundamentally shape the path of AI adoption and disruption in the creative
industries.

Our contribution to cultural economics is theoretical and empirical. Firstly, we introduce the
theory of unbundled creative human work and responsibility and explain why AI has two
significant impacts on cultural and creative industries. The first is the capability disruption,
which can be in terms of labour-capital substitution or complementarity, depending on each
specific task. In a world where all creative capabilities were human, responsibility came
bundled with human creative work. But one of the novel effects of AI is to unbundle that
relationship, unmooring responsibility for creativity. This responsibility disruption reveals a
whole new class of risks, which must be absorbed, managed or otherwise allocated in some
way and which properly should trade-off against the effects of the capability disruption. Our
hypothesis is that as AI is adopted into the whole economy, this unbundling effect will be first
and foremost felt in the creative industries before spreading to other sectors (such as consulting,
piloting, education and healthcare, and other parts of the service economy).

Secondly, we develop a new methodology for empirically estimating the shape and scale of
responsibility disruption. Our approach combines a task-based framework with innovative
synthetic data methodologies recently used to measure the magnitude of capability disruption
of generative AI in the workforce (Eloundou et al., 2024). To evidence the responsibility
disruption, we first map the unbundling of eight AI risks across 593 specialist skills of 126
occupations in the cultural and creative industries. Our analysis focuses on Australia, a global
leader in developing innovative and detailed measures of labour markets in creative industries
(Cunningham and Potts, 2015; Higgs and Cunningham, 2008). Because the statistical system
introduced a new Australian Skills Classification in December 2023, with substantially improved
definitions of jobs in cultural and creative industries, our task-based methodology relies on
detailed, precise and up-to-date information on the content of these jobs. Secondly, we identify
five zones of responsibility disruption within jobs that highlight the intensity, timing, and nature
of risks to guide the implementation of risk mitigation and upskilling strategies.

We proceed as follows. Section 2 develops the theoretical arguments on how AI unbundles


creative work and responsibility. Section 3 sets out our empirical approach to measuring AI
risks within jobs. Section 4 presents some statistics on the exposure to AI risks and GenAI of
jobs in cultural and creative industries. Section 5 proposes a categorisation of jobs in creative
and cultural industries based on the responsibility disruption. Section 6 concludes with a
discussion of the policy implications of our findings.

2. AI unbundles creativity and responsibility


Our motivating thesis is that generative AI innovation and adoption have exposed a previously
tightly bundled combination of creative capabilities and creative responsibilities (or what we will
call risks as a generic category) to an economic disruption that is unbundling this relationship. This
process is unfolding simultaneously at the task, occupation, and organisation levels, but our
method here will focus on the most granular level, namely the task. This unbundling is
(assumed to be) an unintended consequence of the capability-driven logic of AI adoption into
cultural and creative industries, whether as labour-capital substitution (i.e. job loss and
replacement by machine) or labour capital complementarity (i.e. productivity enhancement of
labour due to improved machine capital). Most economic analyses of the effect of AI adoption
have sought to differentiate and estimate the structure and dynamics of substitution or
complementarity effects (i.e. is this good or bad for labour, and if so, for which specific jobs).
However, labour dynamics involve not only risks of labour displacements or increased
opportunities for productivity, depending on the mechanisms of substitution or
complementarity, but also the potential spreading of AI risks at the task level (Walkowiak,
2023). We focus here on the job transformation due to AI adoption, which unbundles and
disrupts risk. Our claim is that AI adoption (both as a substitute and as a complement to human
work) is revealing, creating or releasing a new class of risks previously contained within the
norms and institutions of the creative professions. We argue that a proper understanding of the
impact of AI adoption on the broader economy can be foreseen in the cultural and creative
industries as the locus of the initial unbundling of creative capability and creative
responsibility. Our theory is that the creative industries are a bellwether for this evolutionary
process in other industries (Potts, 2012).

In pre-AI cultural and creative industries, it was axiomatic that all creative production was done
by humans. The role of capital was to furnish leverage and scale in markets (Cowen and
Tabarrok, 2000), or capital was understood as embodied in humans, viz. cultural capital
(Throsby, 1999). In consequence, traditionally, there has been minimal separation between the
creative work and the locus of responsibility for the creative work, even when that was
institutionally distributed to a governing organisation or profession. Responsibility matters
because ideas have consequences, and creative acts and the mediums they work through
(speech, sound, visual, text, code, design and so on) all have economic value and social
significance, because of their intention and power to influence the actions and internal states
of other people. Moreover, creative workers often achieve this by reusing and referencing other
people's ideas or property. Thus, the navigation of this space, as the production of creative
content in an economic context, requires a degree of responsibility from the creative agent
proportional to its intended effect.

This principle is unremarkable and applies almost everywhere on the producer side of the
economy. Our point here is to emphasise that in the cultural and creative industries, this locus
of responsibility has historically almost everywhere been tightly bundled with the creative
human work, usually at the point of the individual job or profession. In many instances, this
manifested as the integrity of a creative profession, made legitimate and held accountable by
cultural norms, professional codes, and standards working through community monitoring and
correction and more formal processes of gatekeeping and sanctions. The most overarching
consequence of this tight bundling is, or at least was until generative AI started to wedge them
apart, that the costs associated with risks that accrue to creative responsibility are endogenous
to the rewards and benefits of creative capabilities. However, AI is a new type of creative
capability that does not come bundled with a native sense of creative responsibility. So, the
questions are: for any given act of disruption through AI adoption, where does that creative
responsibility accrue? Where is it redistributed, and to what measure or by what principles?
These are the empirical questions that we seek to answer in the remainder of this paper.

The types of risks we refer to here are a mix of extant responsibilities that have always been
incumbent in creative industries tasks and jobs, such as dealing with legal liability concerning
intellectual property, but are now newly shifted to quasi-autonomous machines (Epstein et al.,
2023), as well as newly created responsibilities that are entirely due to the affordances of the
new technology (such as AI manipulation and certain types of privacy violations). For example,
in the media industry, GenAI makes personalised persuasion scalable (Matz et al., 2024) and
unleashes new models for online misinformation that may involve information manipulation
and propaganda (Acemoglu et al., 2023, 2021; Caled and Silva, 2022; Costello et al., 2024;
Goldstein et al., 2024). In creative marketing activities, data that can be captured, aggregated,
processed, stored, modelled, programmed and designed through AI (Quach et al., 2022)
involves new privacy risks due to data repurposing, persistence and spillovers (Tucker, 2019).

The unbundling of creative capabilities from creative responsibilities that generative AI brings
means that these risks can no longer reliably be managed through human judgement and
discretion; they have become somewhat unglued and are in the early stages of being
redistributed through the economic system. Many of these issues have already surfaced in
concerns about the societal and cultural impact of generative AI, and there have been numerous
calls for strident regulatory attention or even moratoria, as well as mounting pressure on the
technology firms building these capabilities to address these hazards at the software or
hardware level. But our argument is that these risk issues also arise directly at the level of jobs
and professional tasks and might well be best managed with adaptation at the level of work and
professions.

Building on this literature in both labour and cultural economics, we seek to introduce a range
of risk exposure indicators to analyse the disruption of responsibility that is changing the nature
of work in cultural and creative industries. When considering the adoption dynamics of a
general-purpose technology, the analytic focus is naturally on productivity gains or benefits of
investment, adoption and use, and balancing that against the price of the capital investment. But
risk exposure takes the other side of that equation in both instances and focuses on the costs of
adoption, the extent to which those costs are uncertain (in scale, time, and location), and on
whom they fall. The purpose of the empirical model we develop and estimate is to offer a map
of those risks associated with the dispersal and reallocation of creative responsibility.

3. Methodology
Our methodology relies on a task-based framework that matches GenAI capabilities with
occupational task descriptions, enabling an evaluation of how risks manifest within jobs
exposed to GenAI. Task-based frameworks are widely used to analyse the impact of
technologies on work at theoretical and empirical levels (Acemoglu and Autor, 2011; Autor et
al., 2003; Brynjolfsson et al., 2018; Caunedo et al., 2023; Eloundou et al., 2024; OECD, 2021;
Walkowiak, 2023). In the empirical literature, exposure indicators measure tasks within a job
that can be exposed to a technology to draw empirical evidence about the potential change in
employment of technological change associated with the capability disruption. We adapt and
extend this methodology to find quantitative evidence of the responsibility disruption in
cultural and creative industries in Australia, by measuring the exposure to AI risks associated
with the use of generative AI at the occupational level. For our analysis, surveying employees
or employers about their perceived AI risk exposure when using GenAI might not be
appropriate, due to the emergence of the technology and workers’ or employers’ potential
limited awareness of AI risks. Moreover, surveying AI specialists or experts is not suitable,
since they might lack sufficient knowledge of specific tasks completed in the 126 cultural and
creative occupations analysed in our research. Using synthetic data to measure the existence of
AI risks ex-ante is a more appropriate strategy for our research.

3.1. Data

In labour economics, researchers mostly use the O*NET to measure technology exposure. For
example, Brynjolfsson et al. (2018) used it to measure indicators of "suitability for machine
learning", while the OECD (2021), Tolan et al. (2021) and Webb (2019) measured exposure to automation
and AI. Recently, Eloundou et al. (2024) and Felten et al. (2023) used this approach to map the
impact of ChatGPT and large language models on the U.S. workforce. After the Covid-19
pandemic, the OECD (2021) used the O*NET classification to map the exposure of the Australian
workforce to automation. While the U.S. O*NET classification helps carry out international
comparisons of exposure to automation, it does not capture the nuances of Australian job
descriptions associated with national collective agreements or regulations (Walkowiak and
MacDonald, 2023).

This paper uses the Australian Skills Classification (ASC) from the National Skills
Commission, released in December 2023. The ASC draws from Australian resources such as
job advertisements, career advice, educational qualifications, training details, and regulatory
information (Jobs and Skills Australia, 2023). For our research, it provides the most up-to-date
information on specialist tasks relevant for daily activities in different occupations of cultural
and creative industries, which is essential to understand how jobs might change in response to
GenAI. This latest classification presents several strengths that fit the purpose of analysing
cultural and creative industries. A substantial number of jobs in cultural and creative industries
that are not classified in the Australian and New Zealand Standard Classification of
Occupations (ANZSCO) classification are detailed in the ASC, which allows an exhaustive
representation of jobs in cultural and creative industries. The ANZSCO is a skill-based
classification produced by the Australian Bureau of Statistics for analysing occupation
statistics, updated in 2022. For example, the ASC of December 2023 distinguishes the tasks
completed by multimedia artists from the ones completed by leadlighters, quilters or textile
artists, knowing that all these occupations were grouped in previous classifications under the
single occupation of visual arts and crafts professional. By providing a more detailed
description of specific occupations, previously grouped under a single occupation, the latest ASC
provides a precise cartography of the task content of occupations that is highly relevant for
cultural and creative industries. Another example is the category of journalists and other
writers, which was not detailed in previous classifications. In this paper, we can distinguish the
tasks completed by heterogeneous occupations such as blogger, critic, editorial assistant,
photojournalist or vlogger. This level of granularity in the description is important since these
different occupations involve heterogeneous tasks and responsibilities that can be exposed
differently to the unbundling effect of GenAI. Moreover, contrary to the O*NET classification,
the ASC provides data on the time spent on each task within individual occupations, which is
critical for our research.

To identify jobs in the cultural and creative industries, we followed several specific steps that
are detailed in Appendix 1. Our sample includes 126 occupations and 593 specialist tasks for
which we measured our exposure indicators. They can be aggregated into 20 broader categories
using the 4-digit codes of ANZSCO to extrapolate industry trends in employment.
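This aggregation step can be sketched as follows. The record layout and the example codes below are hypothetical, since the paper does not specify the underlying data format:

```python
def group_by_anzsco4(occupations):
    """Aggregate detailed occupations into broader groups using the first
    four digits of their ANZSCO code (in the authors' sample, 126
    occupations collapse into 20 such categories)."""
    groups = {}
    for occ in occupations:
        key = occ["anzsco"][:4]  # 4-digit ANZSCO unit group
        groups.setdefault(key, []).append(occ["name"])
    return groups
```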

3.2. Variables of exposure

To measure exposure to AI risks and generative AI, we wrote a code that interfaces with the
GPT-4 API “chat completion” endpoint. We designed a system prompt to instruct the model,
outlining its role in assisting with an annotation exercise based on a detailed task-scoring
rubric. This method of synthetic data generation requires two inputs to interact with GPT-4: 1)
a statement on the specialist task to code (which will be different for the 593 specialist tasks to
code); and 2) a rubric clearly detailing all instructions on the scoring and the output to be
generated (which is constant for all tasks). In our case, the output is a scoring of tasks and a
textual explanation about the score allocated to be used to check the accuracy of the coding.
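A minimal sketch of this two-input setup in Python. The rubric text, the JSON reply format, and the function names are our illustrative assumptions, not the authors' actual implementation; in practice, the assembled messages would be sent to the chat-completion endpoint once per specialist task:

```python
import json

# Hypothetical condensed rubric; the authors' full rubric is in Appendix 2.
RUBRIC = (
    "You are assisting with an annotation exercise. Score the task below "
    "for generative-AI exposure (0 = none, 1 = direct LLM, 2 = indirect, "
    "3 = direct image, 4 = direct video) and explain the score. "
    'Reply as JSON: {"score": <int>, "explanation": "<text>"}'
)

def build_messages(task_statement: str) -> list:
    """Assemble the chat-completion payload: the constant system prompt
    (the scoring rubric) plus one user message per specialist task."""
    return [
        {"role": "system", "content": RUBRIC},
        {"role": "user", "content": task_statement},
    ]

def parse_reply(reply_text: str):
    """Extract the score and the textual explanation used to audit coding."""
    data = json.loads(reply_text)
    return int(data["score"]), data["explanation"]
```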

As a statement for the specialist task to code, we used the long description of tasks provided
by the ASC to contextualize the content of a task with a maximum level of information. For
example, rather than specifying the task Gather information for news stories, which is one of
the tasks performed by journalists, we used the long statement: Investigate and gather
information for news stories through research, interview, and other investigative
methodologies. This will involve analysing and verifying sources and information and applying
journalism ethics and law in context, including attributing information, striving for
independence, accuracy, fairness, and disclosure of all essential facts without unnecessary
emphasis. It may also include capturing audio or visual evidence or records.

The exposure of a task to GenAI is an indicator of its capability disruption on work. We built
an exposure rubric following the same structure as Eloundou et al. (2024). The full instructions
of the GenAI exposure rubric are detailed in Appendix 2. This rubric precisely shows how each
modality of exposure is coded. We just give a summary here. To be considered as exposed to
GenAI, a task must meet two conditions. Firstly, using GenAI must allow workers to complete
the task more rapidly (with a reduction of completion time set at a minimum of 50%). Secondly,
using GenAI must not deteriorate the output quality. While we acknowledge that these
conditions are subjective, they are extensively used in the GenAI literature and considered
sufficiently restrictive to measure exposure (Eloundou et al., 2024; Gmyrek et al., 2023;
Walkowiak and MacDonald, 2023). Let Ei denote the GenAI exposure for task i, with i = 1,
…, 593. The variable Ei is categorical and defined in five modalities as follows:

• Ei = 0: no exposure;
• Ei = 1: direct exposure to LLM capabilities;
• Ei = 2: indirect exposure, meaning that the two conditions of exposure are not met by
direct access to an LLM alone (so Ei = 1 is not satisfied), but additional software developed
on top of an LLM could meet these conditions;
• Ei = 3: direct exposure mostly due to image capabilities, for tasks involving viewing,
captioning, and creating images;
• Ei = 4: direct exposure primarily due to video capabilities of GenAI.

To check the accuracy of the GenAI exposure, we followed several steps explained in Appendix
3. Then, we derived three dummy variables: for tasks directly exposed to GenAI, denoted
Di_direct (where Di_direct = 1 if Ei = 1, 3 or 4, and Di_direct = 0 otherwise); for tasks indirectly
exposed to GenAI, denoted Di_indirect (where Di_indirect = 1 if Ei = 2 and Di_indirect = 0
otherwise); and for tasks that are not exposed, denoted Di_no (Di_no = 1 when Ei = 0 and Di_no
= 0 otherwise).
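This dummy derivation can be sketched as follows, under the assumption that Ei is stored as an integer code from 0 to 4:

```python
def derive_dummies(e: int) -> dict:
    """Map the categorical exposure E_i (0-4) to the three dummies:
    direct (E in {1, 3, 4}), indirect (E == 2), and no exposure (E == 0)."""
    return {
        "D_direct": int(e in (1, 3, 4)),
        "D_indirect": int(e == 2),
        "D_no": int(e == 0),
    }
```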

Finally, to analyse how specialist tasks are combined within an occupation, the ASC provides
the time spent on each task within an occupation. At the occupation level, each task is weighted
by this time.¹ Let us denote wij the time spent on task i within occupation j. The level of direct
and indirect exposures for each occupation j can be written as follows:

Exposure j, direct = ∑ wij . Di_direct for all tasks i

Exposure j, indirect = ∑ wij . Di_indirect for all tasks i

¹ For occupations where this time was missing, we allocated the same weight to each task. Another strategy
would be to impute the mean score of the nearest upper occupational level in ANZSCO. However, as the
composition of tasks included at each level changes in the ASC, this strategy would involve losing information
on the task content of occupations.

To measure the responsibility disruption and quantify AI risk exposure, we devised eight
scoring rubrics for the risks defined in Table 1: privacy, cybersecurity, breach of professional
standards, bias, misinformation, harm, accountability, and intellectual property. For the eight risks
scrutinized, we specified in each rubric the most recent laws and regulations implemented in
February 2024 in Australia to accurately describe the legal and regulatory environment
following desk research on these regulations. Existing regulations and laws were our
benchmark to define responsibility in sufficient detail. For example, concerning privacy laws,
we referenced privacy laws adopted in the past that are still relevant (the Privacy Act 1988,
the Australian Privacy Principles, the Telecommunications (Interception and Access) Act 1979)
and more recent laws adopted in Australia (including the Privacy Legislation Amendment
(Enforcement and Other Measures) Bill 2022, the Children's Online Privacy Code for online
services likely to be accessed by children who have not reached 18 years of age, and the agreed
reforms to Australia's Privacy Laws). To be considered as exposed to a risk, a task must meet
the condition that a worker using GenAI when completing this task could contravene Australian
laws and regulations currently implemented. As for the exposure indicators presented above,
our risk indicators are categorized into five modalities (no exposure, direct exposure to LLM
capabilities, indirect exposure, direct exposure to image capabilities, and direct exposure to
video capabilities), and we derived, for each risk, dummy variables of direct, indirect and no exposure.
We also measured an indicator of cumulative risks, which sums these eight dummy indicators.
Then, by aggregating tasks into occupations, we calculated the level of direct and indirect risk
exposure for 126 occupations.
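A sketch of the cumulative risk indicator, summing the eight direct-exposure risk dummies for a task; the dictionary keys are our own labels for the eight risks in Table 1, not the authors' variable names:

```python
RISKS = ["privacy", "cybersecurity", "professional_standards", "bias",
         "misinformation", "harm", "accountability", "intellectual_property"]

def cumulative_risk(task_dummies: dict) -> int:
    """Sum the eight 0/1 risk-exposure dummies for one task (range 0-8).
    Missing risks are treated as not exposed."""
    return sum(task_dummies.get(r, 0) for r in RISKS)
```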

Table 1: Risk exposure indicators

Privacy: GenAI can potentially generate, distribute, memorize, or reproduce personal information, violating privacy laws and regulations.
Cybersecurity: Malicious actors can exploit vulnerabilities (unauthorized access, system manipulation or data theft).
Professional standards: Using GenAI to complete tasks requiring a professional license (e.g., a medical diagnosis or legal advice) can breach regulations or professional guidelines.
Bias: GenAI can diffuse discriminatory or biased content.
Misinformation: GenAI can generate, disseminate, or propagate false or misleading information or be used to manipulate information.
Harm: Using instructions given by GenAI or integrating GenAI with other systems can lead to physical or psychological harm.
Accountability: Using GenAI can involve an unclear assignment of responsibility when GenAI makes mistakes or causes harm.
Intellectual property: GenAI can contravene copyrights, trademarks, or patents.

4. Exposure to GenAI and risks of cultural and creative industries

4.1. Exposure at the task level in cultural and creative industries

Graph 1 illustrates how task exposure varies for our nine indicators across the 593 specialist
tasks that can be performed by workers in cultural and creative industries. Regarding the
capability disruption, we find that 20.4% of tasks are directly exposed to GenAI when
considering its language, image and video capabilities, and the indirect exposure rate with
additional software is 20.4%. Most tasks (52%) are not currently exposed.

Our risk indicators, indicating different dimensions of the responsibility disruption, are directly
comparable through the common metric of exposure levels, although the nature of the risks
varies across indicators. Using GenAI directly exposes 17% of tasks to accountability risks,
7% to bias, 15% to cybersecurity risks, 9% to psychological or physical harm, 11% to IP risks,
11% to misinformation risks, 11% to privacy risks and 9% to professional standards risks. Over
time, with additional software investments, the risks of bias and accountability could increase
substantially, by a further 16 and 14 percentage points respectively. When looking at the
accumulated risks within tasks
(which can vary between 0 and 8), the mean direct risk is 0.9, and the mean indirect risk is 0.7.
However, there is a wide heterogeneity across tasks in the accumulation of risks.
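As an illustration of how such task-level summary statistics could be computed from the cumulative risk scores (0 to 8), assuming the scores are held in a plain list; this is our sketch, not the authors' code:

```python
def risk_summary(scores):
    """Mean, standard deviation and maximum of cumulative risk scores
    across tasks, to describe the heterogeneity in risk accumulation."""
    n = len(scores)
    mean = sum(scores) / n
    var = sum((s - mean) ** 2 for s in scores) / n  # population variance
    return {"mean": mean, "sd": var ** 0.5, "max": max(scores)}
```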

Graph 1: Task Exposure

[Stacked bar chart showing, for each of the nine indicators (Generative AI, Accountability, Bias, Cybersecurity, Harm, IP, Misinformation, Privacy, Standard), the shares of direct exposure, indirect exposure and no exposure, from 0% to 100%.]

Sample: 593 tasks performed by workers in creative and cultural industries.

At the task level, a high level of risk (i.e. responsibility disruption) can be associated with all
GenAI capabilities (i.e. capability disruption). For example, the task consisting of documenting
events, items, or evidence, using photographic or audio-visual equipment, which is directly
exposed to image capabilities, reaches the same level of risk exposure as the task consisting of
recording information from meetings or other formal proceedings, completed by editorial
assistants, which is directly exposed to LLM capabilities. Both tasks exhibit direct exposure
to the eight AI risks. Video, image, and text capabilities are all risky. This finding shows that
what differentiates the level of risk is the job content or design of the task more than the GenAI
capability involved.

To better understand and interpret the capability and responsibility disruptions, let us give
examples of the explanations collected for two tasks (noting that these explanations
were generated independently for the 593 tasks and manually checked by the authors).

As an input for the task, Documenting events, items, or evidence, using photographic or audio-
visual equipment, we used the detailed skill statement for this task: Document events, items, or
evidence using photographic or audio-visual equipment, identifying relevant subjects or detail,
and ensuring they are captured clearly and accurately. This may be in order to collect data for
research, evidentiary, analysis, journalistic or reporting purposes. It may involve preparing
documentation or research objectives, setting up and calibrating equipment, as well as
obtaining permissions, ethical approval, or licensing. In our analysis, this task was coded as
exposed to image and audio-visual capabilities and exposed to the eight AI risks, demonstrating
the concomitance of the capability disruption and the responsibility disruption in work. On the
one hand, the exposure to GenAI pointed to the capacity of GenAI to document events
accurately since it “can significantly reduce the time required for documentation by assisting
in organizing and captioning images and videos.” On the other hand, it scored all AI risks as
direct exposure. For the risk variables, the explanations related to the potential of GenAI to
“extract and misuse sensitive image data”, “to contravene IP laws especially if the
documentation copies or replicates protected visual content” and “to increase the potential for
accountability issues due to the image and video manipulation capabilities of AI, which could
contravene liability laws especially in sensitive contexts like evidence gathering”. The use of
image and audio-visual capabilities was also associated with the risk of “creating outputs that
may convey harmful stereotypes or biased information” and of “manipulating content which
can propagate misleading narratives”, thus increasing misinformation risk. It was further
associated with the potential to use GenAI to “interpret data specific to regulated professions,
raising barriers due to potential contravention of Australian laws if mimicking professional
analysis” and with the risk that, when “potentially analyzing photographs or audio-visual
elements, misleading interpretations or documentation could lead to safety and harm
violations”.

The task of recording information from meetings or other formal proceedings, while
substantially different from the one presented above, shows a similar pattern associated with
using LLM capabilities. The detailed description of this task is: compose notes or minutes
during a meeting or other formal proceeding, compiling or transcribing information for later
use, analysis, record keeping or reporting. This may also include the usage of recording
equipment for future transcribing. Ensure key details are captured according to work
requirements which may include attendees and apologies, whether there is quorum or other
voting requirements are met, date and time of the meeting, matters discussed, and outcomes of
important decisions. It was scored suitable for LLM as "it involves processing and summarizing
text data, where the use of LLM would streamline the transcription and summarization of
meetings effectively, cutting down the time required for these activities significantly.” On the
other hand, different risks were identified as “this task involves managing potentially sensitive
information captured in written form, which is exactly what LLMs are equipped to handle,
posing a direct cybersecurity risk.” A similar explanation was provided for privacy risks where
“sensitive information discussed in meetings (…) could inadvertently memorize and leak this
sensitive data” when using an LLM. IP risks were related to “generating written records and
transcriptions which could closely mimic protected works.” For bias risk, the explanation
noted that the task involves “composing and transcribing notes or minutes, which an LLM can
assist with directly, potentially influencing the textual content it generates. As such, there is a
direct risk of reinforcing stereotypes or producing biased output if the language model's
training data is skewed.” Misinformation risks involved “capturing contextual nuances and
factual information that if misgenerated or manipulated, directly impacts the factual integrity
of organizational records, leading to misinformation.” Finally, for professional standards and
harm risks, some legal aspects were mentioned: using an LLM “could allow for generating
detailed meeting minutes or notes, potentially substituting for a professional role in a regulated
setting like legal or corporate governance” and “accurately interpreting and recording critical
information in a real-time context, where misinformation or lack of clarity can lead to
significant misunderstandings or legal repercussions, hence direct LLM application could be
harmful”.

Interestingly, GenAI exposure (i.e. capability disruption) is not a necessary condition for
potential AI risks at work (i.e. responsibility disruption). In our dataset, three tasks coded as
not exposed to GenAI accumulated seven AI risks. All three tasks require negotiation skills.
Their scoring as not exposed is directly tied to the nature of the
negotiation task that involves “complex human interactions, personalized communications,
and legal creativities beyond the capacity of current LLM applications”, in addition to “real-
time human judgment, persuasion skills” and more strategic thinking in “complex decision-
making, negotiations, and strategic conversations that require human cognitive and emotional
skills”. Put differently, negotiation skills were considered intrinsically human, which would
limit the capability of GenAI to support these tasks efficiently. However, a low exposure
in a task does not exclude the scenario that workers use GenAI to perform this task, which
could pose “moderate risks related to data security potentially resulting in unauthorized
access” (cybersecurity), “processing of sensitive organizational or personal data” (privacy)
“could involve preparing documents or strategies that may inadvertently use protected content,
particularly in professional communications or agreements” (IP), “present possible liabilities
in case of erroneous outputs influencing contract term” (accountability), misleading
information in negotiation, “potentially substituting professional negotiation advice”
(professional standards), with “significant consequences” if the LLM information is inaccurate
concerning critical service terms (harm). The risk of bias was not considered meaningful,
given that “negotiations focus on clear, factual elements.”

4.2. Exposure of occupations in cultural and creative industries

Table 2 provides a summary of the direct and indirect exposure indicators at the occupational
level and the average total exposure, which combines direct and indirect exposures. The last
row of the table shows the cumulative risk, which is the number of risks involved in different
tasks in an occupation, weighted by the time spent on each task.
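The time-weighted aggregation behind Table 2 can be sketched as follows. This is an illustrative sketch only: the DataFrame columns, the occupation name, and the numbers are hypothetical and are not taken from the ASC data.

```python
import pandas as pd

# Hypothetical task-level records for one occupation: the share of
# working time spent on each task, a flag for direct GenAI exposure,
# and the number of AI risks (0-8) attached to the task.
tasks = pd.DataFrame({
    "occupation":   ["author", "author", "author"],
    "time_share":   [0.5, 0.3, 0.2],
    "genai_direct": [1, 1, 0],
    "n_risks":      [5, 3, 0],
})

# Aggregate to the occupation level by weighting each task-level
# indicator by the share of working time spent on that task.
agg = (
    tasks.assign(
        w_direct=tasks["time_share"] * tasks["genai_direct"],
        w_risk=tasks["time_share"] * tasks["n_risks"],
    )
    .groupby("occupation")[["w_direct", "w_risk"]]
    .sum()
)
print(agg)  # w_direct ≈ 0.8 (80% of time directly exposed), w_risk ≈ 3.4
```

The same weighting extends to each of the eight risk indicators; the cumulative risk is simply the time-weighted sum of the per-task risk counts.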

On average, workers in cultural and creative industries spend 40% of their time on tasks that
expose them to GenAI, with 22% direct exposure and 18% indirect exposure. In the labour
market, between one-fifth and one-quarter of working time is allocated to tasks exposed to
cybersecurity, harm, IP, and privacy risks, primarily through direct exposure. Accountability
and bias risks are widespread, with 30% and 29% of the workforce's working time exposed in
cultural and creative industries. Finally, professional standards risk accounts for 17% of the
time exposed on the job.

The occupations with the highest GenAI exposure in cultural and creative industries are analyst
programmer, digital marketing analyst, developer programmer, records manager, and author.
These roles all have a GenAI exposure exceeding 80%, indicating a significant capability
disruption of work. In contrast, only four occupations in our sample are not exposed to GenAI,
meaning that tasks in these roles could not be completed with the same quality and in
significantly reduced time using GenAI. These occupations are singer, musical instrument
maker and repairer, composer, and exhibition designer. All involve physical performance that
is not automatable and combine stylistic or emotional considerations with expert knowledge
that GenAI cannot match in quality. Taken individually, all exposure variables have a minimum
of 0, including the direct and indirect cumulative risk variables. The total number of risks is
the only variable with a non-zero minimum, at 0.03, demonstrating that jobs with zero risk do
not exist in cultural and creative industries. Within jobs, the mean number of risks is 1.25
direct and 0.63 indirect, for a total mean of 1.88.

Table 2: Exposure of occupations in cultural and creative industries

Exposure Variables           | Total: Mean / SD / Max | Direct: Mean / SD / Max | Indirect: Mean / SD / Max
Generative AI                | 40% / 0.24 / 88%       | 22% / 0.16 / 80%        | 18% / 0.18 / 80%
Accountability               | 30% / 0.22 / 90%       | 20% / 0.16 / 76%        | 10% / 0.13 / 56%
Bias                         | 29% / 0.20 / 80%       | 13% / 0.16 / 75%        | 16% / 0.14 / 73%
Cybersecurity                | 25% / 0.20 / 76%       | 19% / 0.16 / 72%        | 6% / 0.08 / 37%
Harm                         | 20% / 0.17 / 71%       | 10% / 0.11 / 50%        | 10% / 0.13 / 50%
IP                           | 24% / 0.19 / 86%       | 21% / 0.18 / 86%        | 2% / 0.06 / 51%
Misinformation               | 20% / 0.19 / 83%       | 16% / 0.18 / 83%        | 4% / 0.08 / 41%
Privacy                      | 22% / 0.18 / 76%       | 15% / 0.13 / 68%        | 8% / 0.12 / 60%
Standard                     | 17% / 0.16 / 76%       | 11% / 0.13 / 76%        | 6% / 0.12 / 54%
Cumulative number of risks   | 1.87 / 1.10 / 5.41     | 1.25 / 0.89 / 4.76      | 0.62 / 0.71 / 4.10
Sample: Calculation on a sample of 126 occupations in creative and cultural industries. Task exposure aggregated
at level 6-digit ANZSCO using the ASC of December 2023.

Table 3: Direct exposure by categories of occupations

Occupations Account Bias Cyber Harm IP Misinfo Privacy Standard GenAI


Music Professionals 8% 0% 0% 0% 23% 0% 0% 0% 8%
Photographers 35% 6% 21% 43% 32% 10% 24% 8% 27%
Visual Arts and Crafts Professionals 12% 6% 15% 14% 41% 6% 8% 6% 9%
Film, Television, Radio and Stage Directors 6% 3% 4% 4% 26% 6% 0% 9% 26%
Journalists and Other Writers 64% 63% 33% 9% 35% 75% 17% 30% 47%
Advertising and Marketing Professionals 4% 6% 33% 1% 7% 8% 3% 0% 30%
Architects and Landscape Architects 19% 0% 21% 29% 27% 9% 26% 21% 23%
Fashion, Industrial and Jewellery Designers 0% 7% 18% 7% 24% 7% 0% 0% 6%
Graphic and Web Designers, and Illustrators 12% 15% 20% 26% 58% 21% 23% 10% 8%
Interior Designers 12% 33% 3% 33% 43% 34% 11% 0% 2%
ICT Business and Systems Analysts 25% 9% 23% 10% 15% 10% 28% 8% 27%
Multimedia Specialists and Web Developers 34% 11% 58% 20% 42% 25% 32% 17% 23%
Software and Applications Programmers 25% 3% 38% 4% 8% 12% 16% 4% 15%
Graphic Pre-press Trades Workers 6% 2% 2% 6% 6% 2% 2% 2% 18%
Printers 13% 0% 3% 31% 20% 3% 0% 0% 18%
Gallery, Museum and Tour Guides 32% 10% 7% 16% 0% 3% 10% 11% 39%
Librarians 22% 2% 19% 5% 1% 2% 10% 1% 20%
Urban and Regional Planners 12% 15% 15% 6% 6% 14% 19% 3% 20%
Jewellers 2% 0% 5% 3% 12% 0% 0% 3% 2%
Signwriters 5% 0% 0% 9% 5% 0% 0% 1% 5%
Sample: Task exposure to risk aggregated at level 4-digit ANZSCO using the ASC of December 2023 on 20
categories of occupations.

Graph 2: Mean cumulative risk across aggregated occupations

[Horizontal bar chart, not reproduced here: mean cumulative direct and indirect risk (0 to 4)
for the 20 aggregated occupation categories, from signwriters to music professionals.]

Sample: Task exposure aggregated at level 4-digit ANZSCO using the ASC of December 2023 on 20
categories of occupations.

Table 3 provides a heatmap of the distribution of direct risk exposure aggregated by occupation
categories (ANZSCO 4-digit), with red showing high levels of exposure and blue low levels.
IP risks cut across most occupations, with graphic and web designers being the most exposed.
Three categories of jobs require immediate attention because they are exposed to all risks,
which represents an intense responsibility disruption. Journalists and writers face high
accountability, bias, and misinformation risks. Multimedia specialists and web developers, and
graphic and web designers, are particularly vulnerable to cybersecurity, IP, and privacy risks.
Setting aside IP risks, music professionals, printers, graphic pre-press trade workers and
fashion designers remain relatively safe. Graph 2 shows the mean number of direct and indirect
risks accumulated in these groups of occupations. ICT business and systems analysts
accumulate the largest number of risks. Journalists and other writers are the most directly
exposed to cumulated risks.

5. Trajectories of job transformation in cultural and creative industries


5.1. Intensity, timing and nature of the transformation of jobs

The responsibility disruption is complex and multidimensional. When assessing emerging
risks, each exposure indicator contributes to an overall picture of this disruption, but none alone
or isolated from the other can capture its full complexity. To deal with this complexity, we use
a Principal Component Analysis (PCA), which produces a simplified representation of our data
while preserving its variability. At the occupation level, each exposure variable is numeric, and
the PCA generates quantitative scores, called components, which maximize the average
correlation among our variables. These components are synthetic indicators, each capturing a
dimension of risk that is statistically independent from the others. Their interpretation relies on
variables that play a prominent part in their construction. As a first step, we standardized our
exposure indicators, which improved their comparability. These indicators are significantly
correlated. In a second step, we implemented a PCA to identify underlying patterns in risk
exposure (among our 30 direct, indirect, total, and cumulated risk indicators) and assess their
relationship with GenAI exposure on our sample of 126 occupations (ANZSCO 6-digit). Based
on the eigenvalues, we kept the first three components, which explain more than 76% of the
variance, and interpreted these components using the eigenvectors.
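The standardisation and component-selection steps described above can be sketched with scikit-learn. The random 126 × 30 matrix below stands in for our exposure indicators, so the variance figures it produces are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in for the 126 occupations x 30 risk-exposure indicators
# (direct, indirect, total and cumulated); random data for illustration.
X = rng.normal(size=(126, 30))

# Step 1: standardise the indicators so they are comparable.
X_std = StandardScaler().fit_transform(X)

# Step 2: fit the PCA and keep the leading components based on the
# cumulative share of variance explained (three in our analysis).
pca = PCA().fit(X_std)
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_keep = 3
scores = pca.transform(X_std)[:, :n_keep]   # component scores per occupation
loadings = pca.components_[:n_keep]         # eigenvectors, used for interpretation
print(f"variance explained by {n_keep} components: {cum_var[n_keep - 1]:.0%}")
```

The rows of `loadings` show which exposure variables weigh most heavily on each component, which is how the intensity, timing and nature dimensions are read off.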

The first component represents the intensity of the responsibility disruption brought by GenAI.
High values indicate occupations that are transforming intensively, without indicating the
nature of this transformation; they reflect a high level of responsibility disruption. The
second component differentiates direct and indirect exposure, independently of the level of
exposure. This differentiation informs the timing of change, with direct exposure
representing an immediate transformation and indirect exposure a transformation expected in
the near future. The third component differentiates the nature of the risk by opposing risks
such as IP, harm, and cybersecurity, which can produce tangible effects, to bias and
misinformation risks, which are more informational. We interpret this opposition as one
between business risks and societal risks: IP, harm and cybersecurity risks have measurable
consequences for businesses in terms of economic losses, reputational damage or compromised
safety, whereas bias and misinformation are less easily quantifiable but have significant
societal consequences. To summarise, we find that three parameters, all measurable ex ante,
shape the disruption of responsibility by AI: the intensity, timing and nature of risks.

5.2. Classifying jobs by their pattern of transformation

These three components are not correlated, so they are suitable for a cluster analysis that
classifies jobs based on their risk exposure contextualised by GenAI use. Using Ward's method
for clustering and analysing the dendrogram leads us to classify our sample into five categories
of occupations that represent distinct shapes of responsibility disruption and job transformation
zones in cultural and creative industries. Graph 3 shows the mean number of cumulated risks
and GenAI exposure for each cluster. Graph 4 illustrates the patterns of direct and indirect risk
exposure.
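The clustering step can be sketched with SciPy's hierarchical clustering. Here the three component scores are simulated, so the resulting cluster sizes are illustrative only.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(1)

# Stand-in for the three uncorrelated PCA component scores of the
# 126 occupations; in the analysis these come from the PCA step.
scores = rng.normal(size=(126, 3))

# Ward's method merges clusters so as to minimise the increase in
# within-cluster variance at each agglomeration step.
Z = linkage(scores, method="ward")

# Cutting the tree into five groups (in the paper, guided by the
# dendrogram) assigns each occupation a cluster label from 1 to 5.
labels = fcluster(Z, t=5, criterion="maxclust")
print(np.bincount(labels)[1:])  # number of occupations per cluster
```

Ward's criterion tends to yield compact, similarly sized clusters, which suits the goal of delineating interpretable job transformation zones.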

Cluster A represents the high-risk job transformation zone, where there is a major
responsibility disruption. These jobs reveal intense direct risk levels (except for harm risk)
combining both business and societal consequences, and they are highly directly exposed to
GenAI (on average 51% of the working time). Among business risks, IP rights and
accountability are major concerns. These jobs are also significantly at risk of spreading
misinformation and bias. In our sample, 15 occupations belong to this category, including
music copyist, film and video editor, editorial assistant, vlogger, motivational speaker, public
speaker, radio presenter, author, book or script editor, copywriter, technical writer, blogger,
critic, art historian, and proofreader. In addition to experiencing a capability disruption for
content production, these jobs face a responsibility disruption requiring critical skills to manage
IP, cybersecurity, and misinformation risks.

Cluster B represents an anticipated intense-risk job transformation zone, with an expected
responsibility disruption. It includes the jobs that are the most indirectly exposed to
GenAI (45% of working time), which may lead to a total exposure rate of 72% of working time
in the future. The transformation of these jobs is less rapid than in the previous category but is
expected to become more intense. 18 occupations of our sample belong to this category. For
14 of them, this intense expected transformation requires managing risks of privacy,
cybersecurity and accountability in new ways: records manager, web designer, ICT business
analyst, systems analyst, user experience designer (ICT), web developer, analyst programmer,
database developer, database programmer (systems), developer programmer, network
programmer, software developer, cyber security analyst, cyber security architect. For 4
occupations (market research analyst, digital marketing analyst, cyber security developer), the
nature of the risk is mostly societal (misinformation).
Cluster C represents the business-risk job transformation zone and includes 24
occupations. Their common feature is that they involve harm and IP risks (directly or
indirectly). These occupations are differentiated not by the intensity or the timing of their
risks, but by the nature of their potential risks. They encompass occupations moderately transforming with
GenAI (such as artistic director, director of photography, technical director, multimedia
designer, interior designer, broadcast transmitter operator, camera operator, sound technician,
microphone boom operator, and photographer's assistant) and occupations more intensively
transforming (photographer, video producer, casting director, architect, landscape architect,
graphic designer, multimedia specialist, desktop publishing operator, performing arts road
manager, special effects person, disc jockey, media producer excluding video, photo journalist,
and architectural draftsperson). In this zone, the responsibility disruption mostly focuses on
business responsibility.
Cluster D, named the emerging societal-risk transformation zone, includes 19 occupations.
These jobs are currently moderately exposed to GenAI, but their exposure is expected to
increase substantially in the future, which could involve the spreading of biases. These occupations
include: library assistants, sales and marketing manager, advertising manager, public relations
manager, environmental manager, musicologist, archivist, gallery or museum curator, librarian,
advertising specialist, marketing specialist, pricing analyst, content creator (marketing),
instructional designer, urban and regional planner, park ranger, historian, community arts
worker, and library technician.
The last cluster, E, includes 50 jobs that are moderately exposed or not exposed to
transformation by GenAI compared with all other categories. We name this category the stable
job zone, even though it is heterogeneous. Some of these jobs require skills to manage limited business risks in terms
of IP (arts administrator or manager, painter, program director and location manager for
television or radio, lighting director, costume designer, jewellery designer, and illustrator).
Other jobs require skills to manage limited societal risks (such as gallery, museum and tour
guides, actor, entertainer or variety artist, stunt performer, music director, music researcher, art
director for film, television or stage and director for radio, private art and music teacher,
makeup artist, and theatrical dresser). Some jobs are not directly exposed to GenAI (printers,
potter or ceramic artist, sculptor, stage manager, audio director, industrial designer, exhibition
designer, naval architect / marine designer, conservator, print finisher, screen printer, gallery or
museum technician, jeweller, musical instrument maker or repairer, signwriter, radio
despatcher, dancer or choreographer, circus trainer, composer, musician and singer, leadlighter,
multimedia and textile artist, quilter, fashion designer, dance and drama teacher, graphic pre-
press trades worker).

Graph 3: Cumulative risk and GenAI exposure by cluster of occupations

[Two bar charts, not reproduced here: cumulative risk exposure (0 to 4, direct and indirect)
and GenAI exposure (0% to 100%, direct, indirect and no exposure) for the full sample and
for clusters A to E.]

Note: “All” gives the mean cumulative risk exposure for 126 occupations. Cluster A includes 15 occupations,
cluster B includes 18 occupations, cluster C includes 24 occupations, cluster D includes 19 occupations and
cluster E includes 50 occupations.

Graph 4: Direct and indirect risk exposure by cluster of occupations

[Two radar charts, not reproduced here: direct risk exposure (0% to 60%) and indirect risk
exposure (0% to 40%) across the eight risks (accountability, bias, cybersecurity, harm, IP,
misinformation, privacy, standards) for clusters A to E.]

Note: Cluster A includes 15 occupations, cluster B includes 18 occupations, cluster C includes 24 occupations,
cluster D includes 19 occupations and cluster E includes 50 occupations.

6. Discussion and conclusion
This paper has developed a new approach to analyse how GenAI technologies will affect jobs
in cultural and creative industries. The standard approach in the empirical labour economics
literature emphasises the industrial model of capital-labour substitution or complementarity.
Our approach extends this and introduces the concept of responsibility disruption of work,
whatever the link of substitution or complementarity associated with the capabilities of GenAI.
We consider the implications for risk arising from the facts that: (a) GenAI brings a range of
significant and novel risks relating to privacy, cybersecurity, professional standards, bias,
misinformation, accountability, and intellectual property; and (b) those risks will need to be
addressed at the occupational level, whether by workers or via managerial practices.
Our analysis shows that the human responsibility regarding risks and creative human work are
currently unbundling. Ignoring this aspect distorts the understanding of the links between
GenAI and work since it disregards the nature of the output co-produced through the human-
machine interaction. While GenAI can accelerate workers' completion of some tasks, this co-
production process generates new risks (Walkowiak, 2023). As a consequence, GenAI use
requires new or additional skills and capabilities, and different organisational and institutional
forms will condition how those risks are experienced and distributed and their consequences.
Our empirical approach disentangles this risk component when measuring the links between
work and the use of technology, to map the shape of the responsibility disruption that will likely
affect the cultural and creative industries.
Our findings suggest a breakdown of occupations into five job transformation zones requiring
different risk mitigation and upskilling strategies to cope with the responsibility disruption
associated with the growing digitalisation of the task content:

1. The high-risk job transformation zone, like a “stack overflow” in digital work, requires
immediate attention. It is about potential productivity gains and the multiple risks that
“overflow” beyond traditional responsibility and skills. Employers and regulators must
act now to redefine these jobs, focusing on risk management skills and practices.
2. The anticipated intense-risk job transformation zone, like a “pending update” in digital
work is expected to rapidly evolve given GenAI’s increasing capabilities. The
adaptation requires forthcoming updates in skills, practices and risk management
strategies.
3. The business-risk job transformation zone requires risk management “firewalls” that
enforce IP laws and prevent harm.
4. The emerging societal-risk transformation zone covers jobs, usually not scrutinised, that
can spread bias and replicate discriminatory behaviours. Beyond awareness, training,
and education, the diversity of the workforce should be prioritised in these occupations.
5. The stable job zone could be compared to a soft integration of digital work without
significant disruption of the task content of jobs by GenAI.

These clusters help explain the different shapes of the responsibility disruption we can expect
and elucidate the drivers of those changes. These categories might be useful for industry-level
public policy considerations as we deal with the coming wave of creative destruction in cultural
and creative industries due to this technological trajectory. Our findings show that at the task
level, high AI risks (i.e. responsibility disruptions) are present across all GenAI capabilities,
including video, image, and text. We found that it is not the GenAI capabilities themselves (i.e.
capability disruption) that primarily determine the level of risk, but rather the job content and
task design. Indeed, GenAI exposure (i.e. capability disruption) is not a prerequisite for exposure
to AI risks in the workplace. Even tasks not benefiting from productivity gains or quality
improvements when using GenAI accumulated up to seven AI risks. Furthermore, in the cultural
and creative industries, jobs with zero risk simply do not exist. These findings highlight the
imperative to prioritise the analysis of responsibility disruptions to fully understand the impact
of AI technologies on the cultural and creative workforce.
The findings are clear-cut: AI risk management is a critical new class of skills and talent in the
cultural and creative industries, and one that is variable and that we know little about. Creative
workers are risk-taking in the 'artist as cultural entrepreneur' sense of creative industries
(Hoffmann et al., 2021; Potts, 2012). Creative industries differ structurally from other
industries, such as finance, engineering, or logistics, where risk management is a critical skill
set in production. Our results highlight that the benefits of the integration of GenAI are largely
conditional on managing the range of risks it brings. Our analysis showed that the innovation
and transformation impact can be clustered into different zones (the five above) according to
the types of risk and their proximity. These categories suggest a new way of thinking about the
skills and creative responsibility required in cultural and creative industries and how we might
deliver, assess and measure these. One direction for further research is to extend the trident
approach to creative talent (Higgs and Cunningham, 2008) to a quadrant approach that includes
AI risk management.
Beyond the scientific questions on the conceptualisation and measurement of AI risks, there
are more practical and applied concerns with management, business operations, and strategy.
On the face of it, GenAI adoption in creative industries presents opportunities for cost savings
and productivity gains. However, value creation possibilities owing purely to technical
affordances could be completely washed out by the costs of the new risks exposed if those are
mismanaged. The economic business model of GenAI adoption turns crucially and, in some
cases, sensitively on the organisational capabilities to manage the new risks it brings. We
further note that while this is a general consideration for all industries and sectors, it is
particularly acute in the cultural and creative industries, which therefore act as a bellwether
to watch closely.
Lastly, public policy support for cultural and creative industries in a post-AI era might need to
be more explicitly geared to risk management and specifically to offload risk from particular
tasks and jobs. These policy interventions will likely require careful thinking about the detailed
implications of AI-related legislation and regulation regarding the impact on employment in
cultural and creative industries. We urge further work on this front.
References

Acemoglu, D., Autor, D., 2011. Chapter 12 - Skills, Tasks and Technologies: Implications for
Employment and Earnings, in: Card, D., Ashenfelter, O. (Eds.), Handbook of Labor
Economics. Elsevier, pp. 1043–1171. [Link]
Acemoglu, D., Ozdaglar, A., Siderius, J., 2023. A Model of Online Misinformation. The
Review of Economic Studies rdad111. [Link]
Acemoglu, D., Ozdaglar, A., Siderius, J., 2021. Misinformation: Strategic Sharing,
Homophily, and Endogenous Echo Chambers (Working Paper No. 28884), Working
Paper Series. National Bureau of Economic Research. [Link]

Agrawal, A., Gans, J., Goldfarb, A., 2022. Power and Prediction: The Disruptive Economics
of Artificial Intelligence. Harvard Business Review Press, La Vergne, UNITED
STATES.
Agrawal, A.K., Gans, J.S., Goldfarb, A., 2023. The Turing Transformation: Artificial
Intelligence, Intelligence Augmentation, and Skill Premiums. NBER Working Paper
Series. [Link]
Amankwah-Amoah, J., Abdalla, S., Mogaji, E., Elbanna, A., Dwivedi, Y.K., 2024. The
impending disruption of creative industries by generative AI: Opportunities,
challenges, and research agenda. International Journal of Information Management
102759. [Link]
Anantrasirichai, N., Bull, D., 2022. Artificial intelligence in the creative industries: a review.
Artif Intell Rev 55, 589–656. [Link]
Autor, D.H., Levy, F., Murnane, R.J., 2003. The Skill Content of Recent Technological
Change: An Empirical Exploration. The Quarterly Journal of Economics 118, 1279–
1333.
Bordàs Vives, A., 2023. Artificial Intelligence and the Creative Industries.
Bresnahan, T., 2024. What innovation paths for AI to become a GPT? Journal of Economics
& Management Strategy 33, 305–316. [Link]
Brynjolfsson, E., Li, D., Raymond, L.R., 2023. Generative AI at Work. NBER Working Paper
Series. [Link]
Brynjolfsson, E., Mitchell, T., Rock, D., 2018. What Can Machines Learn, and What Does It
Mean for Occupations and the Economy? AEA Papers and Proceedings 108, 43–47.
[Link]
Bureau of Communications, Arts and Regional Research, 2023. Cultural and Creative
Activity Satellite Accounts Methodology Refresh—Consultation paper. Department of
Infrastructure, Transport, Regional Development and Communications.
Caled, D., Silva, M.J., 2022. Digital media and misinformation: An outlook on
multidisciplinary strategies against manipulation. J Comput Soc Sc 5, 123–159.
[Link]
Caunedo, J., Jaume, D., Keller, E., 2023. Occupational Exposure to Capital-Embodied
Technical Change. American Economic Review 113, 1642–1685.
[Link]
Costello, T.H., Pennycook, G., Rand, D., 2024. Durably reducing conspiracy beliefs through
dialogues with AI. [Link]
Cowen, T., Tabarrok, A., 2000. An Economic Theory of Avant-Garde and Popular Art, or
High and Low Culture. Southern Economic Journal 67, 232–253.
[Link]
Cremer, D.D., Bianzino, N.M., Falk, B., 2023. How Generative AI Could Disrupt Creative
Work [WWW Document]. Harvard Business Review. URL
[Link] (accessed
4.2.24).
Cunningham, S., Potts, J., 2015. Creative industries and the wider economy, in: Jones, C.,
Lorenzen, M., Sapsed, J. (Eds.), The Oxford Handbook of Creative Industries. Oxford
University Press, United Kingdom, pp. 387–404.
[Link]
Eloundou, T., Manning, S., Mishkin, P., Rock, D., 2024. GPTs are GPTs: Labor market
impact potential of LLMs. Science 384, 1306–1308.
[Link]
Epstein, Z., Hertzmann, A., Investigators of Human Creativity, Akten, M., Farid, H., Fjeld,
J., Frank, M.R., Groh, M., Herman, L., Leach, N., Mahari, R., Pentland, A. “Sandy,”
Russakovsky, O., Schroeder, H., Smith, A., 2023. Art and the science of generative
AI. Science. [Link]
Felten, E.W., Raj, M., Seamans, R., 2023. Occupational Heterogeneity in Exposure to
Generative AI. [Link]
Gmyrek, P., Berg, J., Bescond, D., 2023. Generative AI and Jobs: A global analysis of
potential effects on job quantity and quality. ILO Working Paper. International
Labour Organization, Geneva.
Goldfarb, A., Taska, B., Teodoridis, F., 2023. Could machine learning be a general purpose
technology? A comparison of emerging technologies using data from online job
postings. Research Policy 52, 104653. [Link]
Goldstein, J.A., Chao, J., Grossman, S., Stamos, A., Tomz, M., 2024. How persuasive is AI-
generated propaganda? PNAS Nexus 3, pgae034.
[Link]
Higgs, P., Cunningham, S., 2008. Creative Industries Mapping: Where have we come from
and where are we going? Creative Industries Journal 1, 7–30.
[Link]
Hoffmann, R., Coate, B., Chuah, S.-H., Arenius, P., 2021. What Makes an Artrepreneur? J
Cult Econ 45, 557–576. [Link]
Jobs and Skills Australia, 2023. Australian Skills Classification Methodology. Jobs and Skills
Australia, Commonwealth of Australia.
LeCun, Y., Bengio, Y., Hinton, G., 2015. Deep learning. Nature 521, 436–444.
[Link]
Matz, S.C., Teeny, J.D., Vaid, S.S., Peters, H., Harari, G.M., Cerf, M., 2024. The potential of
generative AI for personalized persuasion at scale. Sci Rep 14, 4692.
[Link]
Mollick, E., 2024. Co-Intelligence: Living and Working with AI. WH ALLEN, London.
Mollick, E., Euchner, J., 2023. The Transformative Potential of Generative AI: A
Conversation with Ethan Mollick. Research-Technology Management 66, 11–16.
[Link]
Noy, S., Zhang, W., 2023. Experimental evidence on the productivity effects of generative
artificial intelligence. Science 381, 187–192. [Link]
OECD, 2021. Preparing for the Future of Work Across Australia. Organisation for Economic
Co-operation and Development, Paris.
Peukert, C., 2019. The next wave of digital technological change and the cultural industries.
Journal of Cultural Economics 43, 189–210.
Potts, J., 2012. Creative Industries and Economic Evolution. Edward Elgar Pub, Cheltenham.
Quach, S., Thaichon, P., Martin, K.D., Weaven, S., Palmatier, R.W., 2022. Digital
technologies: tensions in privacy and data. J. of the Acad. Mark. Sci. 50, 1299–1323.
[Link]
Throsby, D., 1999. Cultural Capital. Journal of Cultural Economics 23, 3–12.
[Link]
Tolan, S., Pesole, A., Martínez-Plumed, F., Fernández-Macías, E., Hernández-Orallo, J.,
Gómez, E., 2021. Measuring the Occupational Impact of AI: Tasks, Cognitive
Abilities and AI Benchmarks. Journal of Artificial Intelligence Research 71, 191–236.
[Link]
Tucker, C., 2019. Privacy, Algorithms, and Artificial Intelligence, in: The Economics of
Artificial Intelligence: An Agenda. University of Chicago Press, pp. 423–437.
Walkowiak, E., 2023. Task-interdependencies between Generative AI and Workers.
Economics Letters 111315. [Link]

Walkowiak, E., MacDonald, T., 2023. Generative AI and the Workforce: What Are the Risks?
SSRN Electronic Journal. [Link]
Webb, M., 2019. The Impact of Artificial Intelligence on the Labor Market.
[Link]

Appendix 1: Additional job titles compared to the previous classification

To identify jobs in the cultural and creative industries, we followed several steps. We used
the definition of cultural and creative industries provided by the Bureau of
Communications, Arts and Regional Research (BCARR, 2023), which released an updated
list of ANZSCO occupations. From this list, we identified the 6-digit ANZSCO codes that
correspond to occupations in the cultural and creative industries. It is important to note
that the BCARR did not use the most recent version of the ANZSCO classification, which
could potentially affect our findings. After merging the updated BCARR list with the ASC,
we checked the coherence of the classification. We observed that 54 occupations were
reported under a single job title in the updated BCARR list, each matching several
occupations in the ASC. Our sample includes these 54 occupations as separate occupations
(reported in the table below), together with the occupations for which the merge returned
strictly identical job titles. In total, our sample includes 126 occupations: 123 defined at
the 6-digit level of the ANZSCO and 3 defined only at the 4-digit level in the new
classification (printers; gallery, museum and tour guides; library assistants) 2. These 126
occupations involve 593 specialist tasks for which we measured our exposure indicators.
They can be aggregated into 20 broader categories using the 4-digit codes to extrapolate
industry trends in employment.

The table below reports the 54 ANZSCO titles that are not detailed in the list of
occupations provided by the Bureau of Communications, Arts and Regional Research
(2023) but are included in our sample. The first column reports the job titles in the ASC
of December 2023 that are matched with a single job title (second column) in the updated
list of jobs in cultural and creative industries provided by the Bureau of Communications,
Arts and Regional Research (2023). The third column indicates whether the job is
classified as belonging to the cultural industries, the creative industries, or both. When
Yes is indicated, the job title was added by the authors because it is relevant to the cultural
and creative industries but was not included in the initial list of the Bureau of
Communications, Arts and Regional Research (2023).

2 In addition to introducing these additional occupations, the December 2023 ASC reclassifies some
occupations previously defined at the 6-digit level of the ANZSCO to the 4-digit level, or defines them at both
levels.

| ANZSCO Titles (ASC, 2023) | Label in BCARR (2023) | Industry |
| --- | --- | --- |
| Circus Trainer | Actors, Dancers and Other Entertainers nec | Both |
| Disc Jockey (Nightclub) | Actors, Dancers and Other Entertainers nec | Both |
| Extra (Film or Television) | Actors, Dancers and Other Entertainers nec | Both |
| Motivational Speaker | Actors, Dancers and Other Entertainers nec | Both |
| Public Speaker | Actors, Dancers and Other Entertainers nec | Both |
| Stunt Performer | Actors, Dancers and Other Entertainers nec | Both |
| Music Copyist | Music Professionals nec | Both |
| Music Researcher | Music Professionals nec | Both |
| Musicologist | Music Professionals nec | Both |
| Leadlighter | Visual Arts and Crafts Professionals nec | Both |
| Multimedia Artist | Visual Arts and Crafts Professionals nec | Both |
| Quilter | Visual Arts and Crafts Professionals nec | Both |
| Textile Artist | Visual Arts and Crafts Professionals nec | Both |
| Audio Director | Film, Television, Radio and Stage Directors nec | Both |
| Casting Director | Film, Television, Radio and Stage Directors nec | Both |
| Lighting Director | Film, Television, Radio and Stage Directors nec | Both |
| Location Manager (Film or Television) | Film, Television, Radio and Stage Directors nec | Both |
| Blogger | Journalists and Other Writers nec | Both |
| Critic | Journalists and Other Writers nec | Both |
| Editorial Assistant | Journalists and Other Writers nec | Both |
| Photo Journalist | Journalists and Other Writers nec | Both |
| Vlogger | Journalists and Other Writers nec | Both |
| Records Manager | Added by authors | Yes |
| Market Research Analyst | Marketing Specialist | Yes |
| Marketing Specialist | Marketing Specialist | Both |
| Pricing Analyst | Marketing Specialist | Both |
| Content Creator (Marketing) | Added by authors | Yes |
| Digital Marketing Analyst | Added by authors | Yes |
| Costume Designer | Fashion Designer | Both |
| Fashion Designer | Fashion Designer | Both |
| Exhibition Designer | Graphic Designer | Both |
| Graphic Designer | Graphic Designer | Both |
| Instructional Designer | Multimedia Designer | Both |
| Multimedia Designer | Multimedia Designer | Both |
| User Experience Designer (ICT) | Added by authors | Yes |
| Cyber Security Developer | Developer Programmer | Creative |
| Database Developer | Developer Programmer | Creative |
| Database Programmer (Systems) | Developer Programmer | Creative |
| Developer Programmer | Developer Programmer | Creative |
| Network Programmer | Developer Programmer | Creative |
| Software Developer | Developer Programmer | Creative |
| Cyber Security Analyst | Added by authors | Yes |
| Cyber Security Architect | Added by authors | Yes |
| Art Historian | Historian | Cultural |
| Economic Historian | Historian | Cultural |
| Historian | Historian | Cultural |
| Desktop Publishing Operator | Graphic Pre-press Trades Worker | Cultural |
| Graphic Pre-press Trades Worker | Graphic Pre-press Trades Worker | Cultural |
| Microphone Boom Operator | Performing Arts Technicians nec | Cultural |
| Performing Arts Road Manager | Performing Arts Technicians nec | Cultural |
| Special Effects Person | Performing Arts Technicians nec | Cultural |
| Theatrical Dresser | Performing Arts Technicians nec | Cultural |
| Proof Reader | Added by authors | Yes |
| Radio Despatcher | Added by authors | Yes |

Note: correspondence table made by the authors.

Appendix 2: Exposure rubric

This is the exposure rubric, adapted from Eloundou et al. (2024), used to label task
exposure to GenAI.

Consider the most powerful OpenAI large language model (LLM). This model can
complete many tasks that can be formulated as having text input and text output where the
context for the input can be captured in 2000 words. The model also cannot draw up-to-
date facts (those from <1 year ago) unless they are captured in the input. Assume you are a
worker with an average level of expertise in your role trying to complete the given task.
You have access to the LLM as well as any other existing software or computer hardware
tools mentioned in the task. You also have access to any commonly available technical tools
accessible via a laptop (e.g., a microphone, speakers, etc.). You do not have access to any
other physical tools or materials. Please label the given task according to the rubric
below. Equivalent quality means someone reviewing the work would not be able to tell
whether a human completed it on their own or with assistance from the LLM. If you aren’t
sure how to judge the amount of time a task takes, consider whether the tools described
exposed the majority of subtasks associated with the task.

## E1 – Direct exposure. Label tasks E1 if direct access to the LLM through an interface
like ChatGPT or the OpenAI playground alone can reduce the time it takes to complete the
task with equivalent quality by at least half. This includes tasks that can be reduced to:
- Writing and transforming text and code according to complex instructions,
- Providing edits to existing text or code following specifications,
- Writing code that can help perform a task that used to be done by hand,
- Translating text between languages,
- Summarizing medium-length documents,
- Providing feedback on documents,
- Answering questions about a document,
- Generating questions a user might want to ask about a document,
- Writing questions for an interview or assessment,
- Writing and responding to emails, including ones that involve refuting information or engaging in a negotiation (but only if the negotiation is via written correspondence),
- Maintain records of written data,
- Prepare training materials based on general knowledge, or
- Inform anyone of any information via any written or spoken medium.

## E2 – Exposure by LLM-powered applications. Label tasks E2 if having access to the
LLM alone may not reduce the time it takes to complete the task by at least half, but it is
easy to imagine additional software that could be developed on top of the LLM that would
reduce the time it takes to complete the task by half. This software may include capabilities
such as:
- Summarizing documents longer than 2000 words and answering questions about those documents,
- Retrieving up-to-date facts from the Internet and using those facts in combination with the LLM capabilities,
- Searching over an organization’s existing knowledge, data, or documents and retrieving information,
- Retrieving highly specialized domain knowledge,
- Make recommendations given data or written input,
- Analyze written information to inform decisions,
- Prepare training materials based on highly specialized knowledge,
- Provide counsel on issues, and
- Maintain complex databases.

## E3 – Exposure given image capabilities. Suppose you had access to both the LLM and
a system that could view, caption, and create images as well as any systems powered by the
LLM (those in E2 above). This system cannot take video as an input, and it cannot produce
video as an output. This system cannot accurately retrieve very detailed information from
image inputs, such as measurements of dimensions within an image. Label tasks as E3 if
there is a significant reduction in the time it takes to complete the task given access to a
LLM and these image capabilities:
- Reading text from PDFs,
- Scanning images, or
- Creating or editing digital images according to instructions.

The images can be realistic, but they should not be detailed. The model can identify objects
in the image but not relationships between those objects.

## E4 – Exposure given video capabilities. With recent advancements in generative AI,
suppose you had access to both the LLM and a system that can understand, interpret,
generate, and edit video content. Label tasks as E4 if there is a significant reduction in the
time it takes to complete the task given access to a LLM and these video capabilities:
- Generating short video clips based on textual descriptions,
- Editing video clips by adding, removing, or modifying elements,
- Translating spoken text within videos between languages,
- Captioning video content with accurate context and descriptions,
- Analyzing video content to summarize its themes, sentiments, or key points,
- Creating educational or training videos from textual or spoken descriptions,
- Automated video content moderation by identifying and flagging inappropriate content.

## E0 – No exposure. Label tasks E0 if none of the above clearly decrease the time it takes
for an experienced worker to complete the task with high quality by at least half. Some
examples:
- If a task requires a high degree of human interaction (for example, in-person demonstrations) then it should be classified as E0.
- If a task requires precise measurements, then it should be classified as E0.
- If a task requires reviewing visuals in detail, then it should be classified as E0.
- If a task requires any use of a hand or walking then it should be classified as E0.
- Tools built on top of the LLM cannot make any decisions that might impact human livelihood (e.g., hiring, grading, etc.). If any part of the task involves collecting inputs to make a final decision (as opposed to analyzing data to inform a decision or make a recommendation) then it should be classified as E0. The LLM can make recommendations.
- Even if tools built on top of the LLM can do a task, if using those tools would not save an experienced worker significant time completing the task, then it should be classified as E0.
- The LLM and systems built on top of it cannot do anything that legally requires a human to perform the task.
- If there is existing technology not powered by an LLM that is commonly used and can complete the task then you should mark the task E0 if using an LLM or LLM-powered tool will not further reduce the time to complete the task.

When in doubt, you should default to E0.
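In practice, a rubric like this can be applied programmatically by wrapping it in a prompt and parsing the returned label. The sketch below is illustrative only: `call_llm` stands in for whatever text-in/text-out client is used, the placeholder `RUBRIC` string abbreviates the full rubric above, and the parsing convention (label first, explanation after) is an assumption, not the paper's actual pipeline.

```python
# Hypothetical sketch: RUBRIC would hold the full E0-E4 rubric text above.
RUBRIC = "Label the task E0, E1, E2, E3, or E4 according to the rubric."

VALID_LABELS = {"E0", "E1", "E2", "E3", "E4"}

def label_task(task_description, call_llm):
    """Classify one task; call_llm is any text-in/text-out client function."""
    prompt = (
        RUBRIC
        + "\n\nTask: " + task_description
        + "\nAnswer with one label (E0-E4), then a concise explanation."
    )
    reply = call_llm(prompt)
    label = reply.strip().split()[0].rstrip(".:,")  # first token is the label
    if label not in VALID_LABELS:
        raise ValueError(f"Unexpected label: {label!r}")
    return label

# Stubbed call in place of a real model, to show the expected shape:
stub = lambda prompt: "E1 - drafting text is a core LLM capability"
```

Requesting the explanation alongside the label mirrors the checking strategy described in Appendix 3, where explanations are reviewed for relevance and hallucinations.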

Appendix 3: Validation of the GenAI exposure variables

To check the validity of the GenAI task exposure scores, we ran three independent iterations
over the 593 tasks, varying the temperature of the model or the level of detail in the task
descriptions used as input for the model:

- First iteration: the input was the long task description, with a high-temperature setting
in the LLM.
- Second iteration: the input was the short task description, with a high-temperature
setting in the LLM.
- Third iteration: the input was the long task description, with a low-temperature setting
in the LLM.

As an output of the scoring, we also requested a concise explanation of the coding, to assess
its relevance and to identify potential errors or hallucinations.

We carefully and systematically checked the relevance of the coding and explanations. We
identified the high-temperature model with the long task descriptions as the best strategy (first
iteration). Opting for a long task description allows us to accurately contextualise each task,
and the high-temperature model avoids overly deterministic explanations; we did not
observe any hallucinations in the explanations provided. Because the output can change from
one strategy to another, we systematically kept the lowest level of exposure. From our total
sample of 593 tasks, 7 tasks were coded Ei = 0 (no exposure) during the first iteration using
the high-temperature model with the long task descriptions. In contrast, they were all coded
Ei = 2 (indirect exposure) during the second and third iterations using the low-temperature
model or the short task descriptions.
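The consolidation rule ("keep the lowest level of exposure across iterations") can be sketched as follows. This assumes the simple ordinal ordering E0 < E1 < E2 < E3 < E4; if the paper ranks E1 and E2 differently, the ORDER mapping would change accordingly.

```python
# Assumed ordinal ordering of exposure labels, least exposed first.
ORDER = {"E0": 0, "E1": 1, "E2": 2, "E3": 3, "E4": 4}

def consolidate(labels):
    """Keep the most conservative (lowest) exposure label across iterations."""
    return min(labels, key=ORDER.__getitem__)

# A task coded E0 in one iteration and E2 in the other two keeps E0,
# as for the 7 tasks described above.
print(consolidate(["E0", "E2", "E2"]))  # → E0
```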
