
Literacy in the Time of Artificial Intelligence

Mary Kalantzis
Bill Cope

Abstract

The latest mutation of Artificial Intelligence, Generative AI, is more than anything a
technology of writing. Generative AI is a machine that can write. In a world-historical frame,
the significance of this cannot be overstated. It is a technology in which the unnatural
language of code tangles with the natural language of everyday life. Its form of writing,
moreover, is multimodal, able not only to write text as conventionally understood, but also to
“read” images by matching textual labels and to “write” images from textual prompts. Within
the scope of this peculiarly machinic writing are mathematics, actionable software procedure,
and algorithm. This paper explores the consequences of Generative AI for literacy teaching
and learning. In its first part, we speak theoretically and historically, suggesting that this
development is perhaps as momentous for society and education as Pi Sheng’s invention of
moveable type and Gutenberg’s printing press—and in its peculiar ways just as problematic.
In the paper’s second part, we go on to propose that literacy in the time of AI requires a new
way to speak about itself, a revised “grammar” of sorts. In a third part, we discuss an
application we have developed that puts Generative AI to work in support of literacy and
learning. We end with some broad-brushstroke implications for education.

[He] allowed himself to be swayed by his conviction that human beings are not born once and
for all on the day their mothers give birth to them, but that life obliges them over and over again
to give birth to themselves.
Gabriel García Márquez, Love in the Time of Cholera

When we came to the University of Illinois in 2006, the people at the National Center for
Supercomputing Applications (NCSA) were working on a grant application to build the world’s
biggest research computer. At $208m, it would be the largest single grant made by the National
Science Foundation. NCSA won the grant, and today “Blue Waters” (as it was subsequently
named) stands as large as an apartment block at the edge of the campus. Beside it, a massive
cooling tower reminds us that computing is still an industrial enterprise.
While Blue Waters was under construction, as literacy researchers with an interest in digital
media, we mused: just about every published word has already been ingested into the universal
library of the internet. What then, if the relationship of every word to every other word could be
calculated? At the time, this was even too big for Blue Waters.
Well, now it has happened, more or less. It’s called Generative AI. This, we venture to say, is
likely a milestone in human history as big as Pi Sheng’s moveable type of 1039 and Gutenberg’s
printing press of 1450. This is 1039 or 1450 again, depending on whether you are inclined to
look East or West for your historical waystations. And it’s just as problematic as these inventions.

This is because literacy and its means of production have been as menacing in social practice as
often as they have been liberatory.

I LITERACY, DISRUPTED

“Literacy,” say the authors of the entry bearing this unadorned title in the latest edition of the
Handbook of Educational Psychology, is “the ability to read and write” (Kendeou et al. 2023:
553). Then, for twenty-two pages they tell us the conventional and excruciatingly dull ways in
which schools squeeze literacy out of learners, using standardized assessments to show that
certain kinds of disciplined intervention produce “growth” measured in test scores.
In one perspective, the study of “learning to read and write” is based on a theoretical premise
so commonplace and outdated that it is hardly worth examining again. Schools do literacy
because for quite a while this has been what schools have done (and mathematics, of course, the
third of the three “Rs,” but as we will argue shortly, literacy and mathematics are converging as
textual forms). Within their narrowly instrumental frame, traditional literacy researchers work to
figure out how to extract better scores on the self-imposed measure of standardized tests of old-
school reading and writing.
But what if there is more to literacy since the rise of digital media, and even now with
Generative AI? And what if conventional schooling is moving into a phase of “disruption” at
least (Christensen, Horn and Johnson 2008), or even a crisis of fundamental institutional form
(Gee 2013)? If this is the case, narrow definitions of literacy from the legacy model of schooling
will not do.

Literacy, Technology, and Society: A Very Short History

Briefly, to recap the prehistory of modern education, literacy in its different social contexts has
forever played a tortured role in the progress and cruelties of relentlessly unequal societies. It has
set limits for some kinds of people as often as it has opened possibilities for others.
In the human beginning, “first languages” were profoundly multimodal. Their mnemonics of
human experience were written through the amalgam of image, song, dance, totem, sacral place,
and more. For at least one hundred thousand years, the synesthetic multimodality of human
meaning defined us as a peculiarly self-reflexive species (Cope 1998). Here we might consider
writing in the broadest terms, as writing our human selves and participating in meaning. The
modes of social participation for first peoples were broadly egalitarian, at least compared to the
inequalities of slavery, caste, and property that were to follow.
Then came writing in the narrower form we understand today: regularized systems of
repeatable, symbolic graphemes. This happened first in Mesopotamia about five thousand years
ago, followed by separate inventions in India, China, and Mesoamerica. Writing in this sense
emerged in parallel with, and in support of, the rise of radically unequal societies, supplanting
the relatively egalitarian lifeways of first peoples (Kalantzis and Cope 2006).
The first writing was a mechanism for the maintenance of inventories of ownership and
wealth. It was an instrument of state bureaucracy primarily used for the siphoning-off of
surpluses. Later, it became a font of religious power that maintained the social order as an
antidote to the deep social tensions generated by inequality (Goody 1986). In its founding
moments, says the towering anthropologist Claude Lévi-Strauss, “the primary function of written
communication is to facilitate slavery.” Writing “favored the exploitation of human beings rather
than their enlightenment. ...The only phenomenon with which writing has always been
concomitant is the creation of cities and empires, that is the integration of large numbers of
individuals into a political system, and their grading into castes or classes” (Lévi-Strauss 1955
[1976]: 392).
Though, of course, we have learned to love writing and reading. If its first motivation was to
support the institutionalization of radical inequality, it also became a medium for the creation of
great literature, profound philosophy, universal social memory, new knowledge, and a conduit
for virtual telepresence that defies time and space. Writing also became a medium for lyrical
appeals to our better natures and a source of inspiration for emancipation.
Then, in 1039 and 1450, came the mechanization of writing. While Pi Sheng’s
invention of moveable type in China in 1039 was not applied beyond his workshop (Tsien and
Needham 1985: 201), Gutenberg’s printing press of 1450 became the prototype for modern
technologies of modularization, repetition, and division of labor (Eisenstein 1979). The first
properly industrial technology was a writing machine.
In its first centuries, the printing press served two functions: as an instrument of power, and as
a means of drawing illiteracy as a line of exclusion. Print literacy consolidated the position of a
ruling class whose power depended on the elite literacy of printed laws, administrative memoranda,
and accounting ledgers. In Europe, the educated ruling class wrote for each other in the scientific,
literary, and religious language of Latin, inaccessible to the masses (Waquet 2001). Gutenberg’s
Bible was in Latin, requiring the chants of priests, the lordliness of their robes, the imagery of
icons, and the grandeur of church architecture to serve as interlocutors between mystically
inaccessible ruling text and popular belief. Literacy served to draw a line of social division
between the literate ruling class and the illiterate masses. Books
remained expensive. Literacy levels stayed low.
It was not until the nineteenth and twentieth centuries that universal literacy was adopted as a
social objective. This emerged in conjunction with the establishment of mass-institutionalized
education. In industrial modernity, parents worked away from home, so the state committed to a
new duty of care, the socialization of children. For most students, the learning objectives and
outcomes of schooling were minimal. The three “Rs” of reading, writing, and arithmetic were
taught to the basic level required for modern industrial work. Historian of literacy Harvey Graff
adds that, more than a minimally functional skill, “literacy's place was not always as a skill or
technology. It was the best medium for tutelage in values and morality,” shaping “properly
schooled workers” who “possessed a number of qualities: punctuality, respect, cleanliness,
discipline, subordination, and the like,” contributing to the formation of a “controllable, docile,
respectful workforce, willing and able to follow orders” (Graff 1987: 261). Behind the content of
the literacy curriculum there
was a moral economy of social practice.
Now that literacy had become mass, it had to regulate inequality in new ways. It did this
through the ideology of opportunity—basic reading and writing for everyone, but with an
insistence on the unequal distribution of educational outcomes according to school results. Only
the smartest and most disciplined, as measured in school tests, could go on to further education.
However, the opportunity offered by literacy was always fictive, because by virtue of their
privileged access, some social groups were always afforded better chances to do well at the tests
than others.

The redistribution of inequality through literacy reaches its scientific apotheosis with the
normal distribution curve, judiciously spreading children across a numbered spectrum in which
only a few can be labeled “genius,” “gifted,” or even “above average;” and where the majority
warrant classification as merely “average” or “below average,” descending from there to “moron,”
“imbecile,” or “idiot” (Goddard 1920). Those who applied the “normal” distribution curve to
intelligence failed adequately to distinguish native intelligence from social conditions. It was
useful to be able to rationalize social conditions as a function of intelligence, and inequality as a
natural state.
Schooling provides an ideology by means of which persons are sorted into mediocrity and
worse. With formal school results in hand, they are told they have only themselves to blame for
the inequality of their social outcomes. Of course, liberal democrats and working-class advocates
hoped for something more. Nevertheless, the stubborn gaps in performance have persisted.
Many also hoped that standardized literacy could help iron out the differences of language
and dialect—indigenous, immigrant, regional, class, ethno-racial—so creating the “imagined
community” of the culturally and linguistically homogenous state, and bolstering its ideology of
nationalism (Anderson 1991). However, more often than not, rather than ironing out differences,
this created new vectors of failure in the awkward disjunctions between the literacies of
schooling and working-class speech (Bernstein 1971), dialects such as “Black English
Vernacular” (Labov 1972), and the summary exclusion of indigenous, immigrant, and other
“minority” languages from school (Kalantzis, Slade and Cope 1984, Phillipson 1992).
As much as standardized literacy hoped to provide a pathway to social opportunity, it was
also designed not to work. It foreclosed opportunity more often than it realized its promise. If in
a sense it did work, it was not in the ways that its liberal proponents would have hoped. As often
as it tempted learners with the promise of opportunity, it reinforced and reproduced social
division. This millennia-long story shows that we need to define literacy not by what it is, but by
what it does.
And this is very much the case today because Generative AI is potentially big. Indeed we
would argue it is big on a scale that also marks the printing press and modern institutionalized
education as significant waystations. The consequences for schooling could be big too—big bad,
big good, or big both.

Multimodality Returns

Before we get to artificial intelligence, we want to highlight the changing shape of literacy in
terms of its technological affordances. Gutenberg invented a particular kind of printing machine,
the letterpress (Cope and Black 2001). Inked types were pressed into a page. Images, however,
required a different technology, lithography. The different technologies shaped a radical
separation of text from other forms of meaning, necessarily to be printed on different pages in the
same book. First peoples without writing in its modern textual form and illiterate people after the
arrival of writing made their life meanings through multimodal synesthesia. But the coming of
print literacy had the effect of separating out and privileging written text.
A series of inventions in the twentieth century offered the possibility of a return to
multimodality. Photolithography comfortably put text and image onto the same page with
“offset” printing, replacing letterpress almost entirely by the mid-twentieth century. Also, until
the turn of the twentieth century there were no technologies for telepresence across time and
space other than those of text and image. In the twentieth century there emerged
technologies of telepresence for sound, speech, and embodied gesture. Telephone and radio
transmitted sound and speech across space, and the gramophone across time. Cinema and
television brought image and sound together, allowing the simultaneous representation of
moving body, dynamic object, sound, speech, and titling with text.
However, in these analogue technologies for the production and reproduction of meaning,
there were still large resistances to the mass transition to multimodality. The first was that
although analogue technologies had converged for printed image and text, sound and speech
were still quite separate—the soundtrack of a movie, for instance, was literally sticky-taped
alongside the images.
These technologies also required large capital investment and professional training for their
operators—hence the printing factories, television studios, and radio stations. Control was in
the hands of the owners of the machines. As a consequence, the privileged interests of media
moguls tended to dominate the content they produced. These became “mass media” that
“manufactured consent” (Herman and Chomsky 1988 [2002]). They also disseminated
“propaganda” (Bernays 1928 [2005]) to maintain their class interests in the existing social order.
The owners of the means of production of meaning shaped the content of meaning, while the
masses were left few options but to consume and conform.
The arrival of digital means of production, reproduction and distribution of meaning
overcomes these resistances. Forms of meaning converge—everything can be represented in a
common elementary modular unit of manufacture: binary notation. On this foundation,
computing machines capture and process text and image in two-dimensional pixel arrays, space in
3D imaging and virtual worlds, objects in 3D capture and printing, the body in wearables and
video, and sound and speech in digital audio.
Meanwhile, economical devices and near-zero cost of reproduction and distribution mean
that, while most people were passive consumers in the era of analogue media, most people at
least had the capacity to be creators of meaning in the digital era. We call this a change in the
balance of agency from print literacy to digital literacy (Kalantzis et al. 2012 [2016]: 54-56). In
the previous era of print and mass media capitalism, it was powerful humans who owned the
means of production of meaning and dominated its contents to serve their interests. But with the
advent of digitization, billions of humans have available to them the tools to make multimodal
meanings for themselves and share them through the internet.
However, even with this dramatic return of multimodality, the practices of schooled literacy
still mostly separate out text as if none of these transformations had occurred. This is why we
developed the agenda of “multiliteracies” in the plural. This is at the very least a necessary
supplement to the legacy practices of literacy in the singular and the passive acquisition of its
standardized forms (Cope and Kalantzis 2023c, Kalantzis and Cope 2023, New London Group
1996).
Here we want to apply the notion of “affordance” (Gibson 1977). We have not changed our
meaning-making practices because technology has forced us. We have changed because we can,
and because when we can, we mostly do. In world-historical terms, this has prompted a return to
ancient synesthesia, long suppressed by print modernity. But even with the change in the balance
of meaning-making agency and notwithstanding the hopes of advocates of an open, digital
commons (Benkler 2006), there has been no return to ancient egalitarianism. Even with the
means of production of meaning in hand, it is hard to think beyond the commonsense and
seemingly inexorable powers of the present.

Now, Here Comes Computable Writing

Until now, humans have been in sole control of the production of their meanings. Machines could
reproduce meaning, but they could not make meaning. But with Generative AI we have machines
that can make meaning. This is a machine that can produce written texts and multimodal
derivatives that are entirely coherent and meaningful to humans. Generative AI will write an
impeccably well-formed text in seconds. It will create a perfectly executed image. Literacy, and
more than that, multimodal literacy, has become computable. This, surely, is as big as the
printing press, and as problematic—both with respect to multimodality and the balance of agency
and social relationships of power.
How do these new literacy machines work? Faster computing systems and advanced statistics
have allowed every word among billions scraped from the web to be analyzed in relation to its
surrounding words. This is the “P” of GPT—pre-trained. Generative AI is a next-word predictor.
When prompted by a user’s chosen subject and style, it writes (the “T” or transformer) by
placing after each word the statistically most probable next word. It also generates images
derivatively from text. From a library of labeled images, a textual command by a user will
prompt the transformer to produce a new image according to the particular requirements of the
user (Cope and Kalantzis 2023b, Munn, Magee and Arora 2023a).
This throws literacy teaching into an immediate crisis. Writing is a laborious process. It is
almost impossible even for the best writer to avoid at least some small flaws. Then why write,
when a machine can do it instantly and flawlessly? Why read, when a text can be spoken to you,
and can be told to speak with just the right level of difficulty for you?
More broadly, Generative AI also throws education into crisis. In the history of modernity,
the mechanization of agriculture drew people from farms into industrial cities; then automation
reduced the scale of manufacturing employment, increasing the size of a promised knowledge
economy (Cope and Kalantzis 2022a, Peters and Beasley 2006). Generative AI will now
automate knowledge work. We may need far fewer lawyers, accountants, advertising writers,
commercial artists, editors, architects, translators, and customer service agents, to name just a
few of the jobs that could be impacted by AI. And teachers—the AI will be able to be more
responsive to a particular learner’s needs in a one-to-one relationship than a human teacher
working in a one-to-many relation. Previous waves of technological transformation have mechanized
low-pay, low-skill labor, and for the better. AI now mechanizes well-paid cognitive work. The
economic consequences could be devastating (Eloundou et al. 2023, OECD 2023). For
professional educators specifically, it will likely fuel the “What’s the use of education?”
discourse.

The Literate Agent in the Time of AI

Meanwhile, what has happened to the literate or multiliterate agent? Digital media opened
avenues for meaning production and social sharing, but these were not without constraints and
limitations. Far from opening out a public square or “creative commons” (Lessig 2004), internet
barons have taken it upon themselves to micromanage our sociability. The reasonably paid
journalists and television producers of the mass media have been displaced by the unpaid
creative work of users in social media (Kalantzis-Cope 2016). On the backs of unpaid laborers,
the new media barons have made themselves fabulously rich. More agency may have been
granted to users than the older mass media allowed, but this has come at a price.
Generative AI changes the game again. Social media is driven more and more by interest
algorithms, and less and less by user navigation. Google was the first to build user profiles from
search and by scraping user-created documents and emails (Zuboff 2019). TikTok led the way in
the application of AI to social media, soon followed by content delivery in Facebook, Instagram,
and YouTube. Less and less is social media a matter of reading posts from chosen friends and
feed subscriptions; more and more, feeds are driven by AI interest algorithms. Linger a few
seconds longer on a video, and you’ll be served another video that the algorithm determines to be
related. Generative AI learns about you in the same way, from the kinds of prompts you serve it.
Meanwhile, the skill and effort bar for content creators has risen, with short form videos
requiring more time and expertise to produce than a phone photo or a short textual message.
With algorithms that valorize and magnify the popular ahead of other possible values,
influencers have come to dominate this new, low cost, largely junk media. Politics is downplayed
because it has become abusive, divisive, and prone to fakery. Social participation is reduced to
prudish flashes of sexual suggestion, disgusting domestic interactions, and half-funny accidents.
These are some of the new ways in which agency is corralled in the time of artificial intelligence.
Though perhaps, “artificial” is a misnomer. To create the Large Language Models (LLMs)
upon which Generative AI depends, its owners have copied just about everything published on
the web, including the scans of nearly every printed book and every labelled image. These have
been copied with no recompense to their creators. Now, in the case of Generative AI, this
happens without even non-monetary recognition because the sources have been mashed together
and lost in the convolutions of neural nets. Artificial Intelligence is a new colonialism: not of
material space, as with the old colonizers, but of social intelligence, seized by these insurgent
moguls of human meaning.
This means, literacy—or we would like to say, multiliteracies—has a big, new job to do.

II LITERACY, REDEFINED

Here are the traditional canons of literacy, roughly in the order of their teaching and learning: 1.
learn sound-letter correspondences (in alphabetic languages, at least); 2. understand the meaning
of combinations of letters in words; 3. understand and write sentences and more; 4. read and
understand extended texts; 5. appreciate literary greatness. It takes schools about a decade to get
students from 1. to 5. At every step, this version of literacy is anachronistic, and has been for a
long time. Generative AI makes the situation worse.
Let's start with the underlying premise: that there are consistent, stable, and always correct
things to be learned: phonemes; spelling; grammar; meaning that is comprehensible because it
can be the same from one person to another; genres of text that should be privileged; and high or
classical literary forms. This version of literacy is a project for imposing textual rules, principles,
and literary values in order to transmit them from one generation to the next. Not much room is
left for learner agency or differences in this version of literacy.

Designing

As a counterpoint to traditional literacy, we have proposed in the theory of multiliteracies a
process of meaning-making that we call “design” (Fig. 1). Our premise is that all meaning is
fluid and transformative. Meaning making draws on found designs, and to be sure this includes
the sounds of letters, the arrangement of letters into words, the ordering of words into sentences,
and the genres of larger texts. But there is also a lot more, because written text and its meaning
cannot be isolated from their surrounding images, spaces, objects, bodies, contexts, and lifeworld
experiences.
Using these found designs, design work occurs. Take writing or reading. This work uses both
material resources (pens, papers, computers, books) and the ideal resources of interested human
agency. In the endless variety of contexts and interests, no two meanings are ever exactly the
same. Rarely is the same sentence or sequence of sentences written twice. Never do two pieces
of text mean exactly the same thing from one writer to the next or one reader to the next (Cope
and Kalantzis 2020: 68-72).
The notion of design recognizes the agency of the meaning maker. The traces the designer
leaves are uniquely voiced. The designed artifacts they deposit in the world leave the world
transformed (Cope and Kalantzis 2020: 68-72, 301-303, Kress 2000). This is how Florentino
Ariza in Gabriel García Márquez’s Love in the Time of Cholera comes to his conclusion about
human beings, that “life obliges them over and over again to give birth to themselves” (Márquez
1988: 165).

Fig. 1: Meaning as a design process.

Literacy in this view is a social process of participation in meaning: from representation as
meaning for oneself; to communication as meaning made for others; to interpretation as the
meaning one makes of the material traces of communication left by others (Fig. 2). The three are
very different. Meaning for oneself is multimodal, a mixture of words, images, and embodied
feelings. It takes context for granted. Conceived like a sentence, it has predicates that don’t need
subjects. Communication, however, has to identify its subject explicitly, or the other person
won’t know what you mean. It has to turn the meaning into words (or images, or other forms of
meaning) that will minimally carry the intended meaning across time and space. Then
interpretation is the meaning somebody else makes of a meaningful artifact that they have
encountered—a text, a picture, an object, or whatever. Interpretation can be as varied as the
people of the world, the contexts of their living, and the interests they have. At every point the
meaning changes, and it is the person doing the meaning who changes it. This is why we call
meaning a process of transposition. It is a cognitive process and a material process, making
meaning in media (Kalantzis and Cope 2020: 47-63).

Fig. 2: Participation in textual meaning.

This, incidentally, makes a mockery of traditional comprehension tests of reading, forced into
select response assessments. “B” can’t straightforwardly be the correct answer. And what if a
reader is attracted to “D,” a trick or “distractor” item in the mind of the test maker but an answer
that nicely satisfies the test-taker’s interpretive frame of reference? To avoid the inevitable range
of interpretations, comprehension and understanding are frequently reduced to trivial factoids
that a reader may happen to remember when they have finished reading or have to look up again
because they are hardly relevant to their interpretation. Tragically, these kinds of tests have
become a proxy for literacy, a cheap and lazy way to put a number on literacy performance.
Writing, by comparison, has until now been expensive to mark and open to variations in human
judgment. In the era of Generative AI, we’re going to have much better ways to assess literacy,
including writing and the highly variable depths of interpretation in reading.
We have said this before, and for those who are hearing it again now, we beg your
forbearance. Traditional literacy pedagogy is by comparison static, stable, rule-bound, culturally
monolithic, and diminishes the agency of learners. “Comprehension” and “understanding” are
based on a conduit or transmission model of communication, while participation in meaning is
transpositional and fluid.
From one person to another, the differences are as important as the continuities in meaning.
Far from getting things right, every child redesigns the meaning of the world in their own way.
The differences in interpretation based on the varied life experiences and interests of learners are
of greater pedagogical significance than the capacity to repeat factual details from a text. The
design alternative focuses on change, agency, difference, and the world-transformative capacities
of every meaning-maker (Cope and Kalantzis 2023c, Kalantzis et al. 2012 [2016], New London
Group 1996). But we’re saying it again now because there’s a point we need to add about
Generative AI. Until now, only humans could do design in this definition. But now...

For the first time in human history, a machine can design and communicate text, image, and
sound that have never been created before but are nevertheless coherent and meaningful to
humans.

The consequences for literacy are enormous. We will start with the one that was first to
startle and scare educators—because every AI-generated text is unique, there is no reliable way
to detect whether text has been written by a human or a machine. Until now, the measure of
whether the work was by a student themself was none other than our design measure: is it
unique? Because if it is a copy, there will be a discoverable identical source somewhere
else. Without quoting and referencing that source, this is cheating. To avoid this, you could ask
the students to hand scribe, but unless you lock them in a room and disconnect them from the
internet, this could be a transcription from a Generative AI output. Or you could be suspicious
when the work handed over by the student is free of even the smallest error, but that’s easily
fixed by adding a few strategic typos.
Since the internet, the unique design measure has been mechanized in plagiarism checkers,
searching the universe of digitized texts for matches. But Generative AI has put the plagiarism
checkers out of business. Of course, there have always been ways to get around the mechanical
checkers. With more than a hint of irony, a leading AI in education researcher concluded that
Generative AI will “democratize cheating,” undercutting the expensive essay mills that have
until now been used by an estimated 15% of higher education students (Sharples 2022: 1120).
So, what is literacy in the time of Generative AI? What is this generative thing that until now only humans could do, and that now AI can do as well? We'll focus on written text for the moment and get to
multimodal multiliteracies a little later in the paper.

Scribing

What then does it mean to write and to read? This is a peculiar kind of design work, not simply
cognitive, but embodied in eyes and hands, and materialized in media. Written text is the work of
scribing graphemes in a two-dimensional spatial array. For much of the time writing is linear or
one-dimensional, but a second dimension comes into play in lists, tables, headings, page or
screen layouts, diagrams, infographics, and the like.
Traditional literacy educators focus just on written text, but even that has changed. Our
definition for the digital age:

Written text is that which can be scribed in or transcribed into the universal graphemic
symbology, Unicode, and organized into a two-dimensional array.

Nearly every digitized text can be read on nearly every digital device because all use a single,
universal symbology, Unicode. The latest version at the time of writing, Standard 15.1,
catalogues 149,813 encoded characters, or graphemes. A handful represent spoken sounds or
phonemes, for instance: a, ‫ض‬, ŋ. Most represent ideas or ideographs, for instance: 8, +, @, 知,
☻ (Fig. 3). Every regularized grapheme from the human experience can be found in Unicode,
ranging from lost and still unintelligible ancient languages to emojis and icons that have only
recently entered our digital cultures. These might look slightly different from device to device,
font to font. But across devices, Unicode standardizes their meaning as graphemes. When a
machine scans your handwriting, it turns what you have written into Unicode (Cope and
Kalantzis 2020: 23-25).

Fig. 3: Written text consists of graphemes, nowadays regularized, standardized and universalized
in Unicode.
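The standardization the figure describes can be inspected directly in code. The following minimal Python sketch, using only the standard library, shows that every grapheme, whether it stands for a phoneme ("a") or an ideograph ("8", "知"), has one universal code point and one standardized name:

```python
import unicodedata

# Each grapheme has a single standardized code point and name in Unicode,
# the same on every device, whatever the font used to display it.
for ch in ["a", "ŋ", "8", "+", "知"]:
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}  {ch}")
```

Because these identities hold across all devices and fonts, nearly every digitized text can be read on nearly every digital device.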

In a number of practical ways, this expands the scope of literacy. From the earliest of ages,
children are exposed to a graphemic symbology that includes navigational ideographs such as
play or pause, thousands of emojis speaking to sentiment, and many other such symbolic
representations. Sometimes, these are embedded in-line within natural language. At other times
they bring organizational order to multimodal screen and sign meanings. These new graphemes
could have been represented in phonemic text, but the tendency is increasingly to use
ideographic text. Literacy teaching and learning needs to recognize, and to some degree, go with
the flow of these changes. For instance, get your early writers to write messages that use emojis
with affect and effect. Get them to transpose emojis and their phonemic equivalents.
Not only does Unicode capture writing in natural language. It also supports writing in the
comparatively unnatural languages of mathematics, computer code and algorithmic procedure
using the same character set and on the same two-dimensional spatial array. Generative AI is
built on the foundation of Large Language Models (LLMs). Nearly everything of published and
digitized human experience has been recorded and analyzed for the statistical relations in the
sequence of Unicode characters without differentiation as to their kinds of language. The raw
material of LLMs is written text in this definition. This is how it is able to write code and
mathematics about as well as it writes natural language—because it treats them all in the same
way, as sequences of characters that can be rendered in Unicode.
As long ago as the so-called New Math (Beberman 1958, Phillips 2014), progressive teachers
have presented math integrated with text in the form of real world problems and required
students to make their mathematical reasoning explicit in natural language think-alouds.
Generative AI chatbots now establish this dialogue with students through written text.
Meanwhile, in computer programming, best practice has always required in-code written
documentation. However, Generative AI has rapidly precipitated a greater mix of natural
language and abstract code (Yang et al. 2024). The purely procedural and mechanical parts of the
code have been automated, and code can be generated with natural language prompts. In a
certain way, computer coding and mathematics have always been specialized—if unnatural—
writing practices by virtue of their extreme formality. But now natural language plays a closer
role in both. In fact, Generative AI’s prompt engineering is a form of natural language
programming. Literacy used to be two of the old three “Rs.” Now it’s deeply interwoven into the
third. Practically speaking, these developments blur these disciplinary boundaries.

Textual Meaning

Phonemes in Unicode are meaningless to the machine. For humans, the smallest meaningful unit
of written text is a morpheme. Like humans, Generative AI works at the level of morphemes. It
combines Unicode characters into objects it calls tokens.
Take the word “walk” in the sentence, “I walked to work.” “Walked” consists of two tokens,
“walk” indicating the kind of action, and “ed” because the events described in the sentence
happened in the past. The other words in this sentence are single tokens. Analyzing natural
language, LLMs find slightly more tokens than there are words. Across billions of words scraped
from the internet, Generative AI calculates the statistical probability of the next token based on
the words surrounding that token. It finds “walk” in many different relations to surrounding
words. There’s “walk to work,” and “walk the dog.” Now we have two different “walk” tokens
because the words near each “walk” are different. The “parameters” in an LLM are the number
of surrounding words that are examined and thus the number of potential variations of “walk.”
This is how LLMs come to have a vocabulary of sorts numbering in the billions of words.
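To make the token idea concrete, here is a toy Python sketch of morpheme-style splitting. It is a deliberate simplification: real LLM tokenizers (byte-pair encoding and the like) learn their splits statistically from data, and the suffix list here is a hypothetical stand-in.

```python
# A toy illustration of subword tokenization: "walked" splits into a stem
# token and a past-tense token, as described above. The "##" marker mimics
# the continuation marks used by some real tokenizers; the suffix list is
# invented for this example.
SUFFIXES = ["ed", "ing", "s"]

def toy_tokenize(word):
    for suffix in SUFFIXES:
        stem = word[: -len(suffix)]
        if word.endswith(suffix) and len(stem) >= 3:
            return [stem, "##" + suffix]
    return [word]

print(toy_tokenize("walked"))  # ['walk', '##ed']
print(toy_tokenize("work"))    # ['work']
```

A real tokenizer discovers such splits by finding frequently recurring character sequences across its corpus, which is why LLMs find slightly more tokens than words.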
Not that the LLM can ever know the meaning of “walk.” Computers can’t mean anything
other than zero or one. All they can do is calculate by textual transposition: recorded Unicode >
chunked into tokens > binary notation > calculation of the probability of the next token > token >
readable Unicode. The calculation happens in deep neural networks, the results processed
through a transformer. These calculations are so vast and excruciatingly dull in their zero-and-
one-ness that it is impossible to trace exactly how the next word has been calculated. It’s a black
box. There’s a ghost in the machine.
Now, to compare how humans make meaning, we are going to take three “walked”
sentences, illustrated here with images created by prompts served to the image generator,
Leonardo AI (Figs. 4-6).

Fig. 4: He walked to work.

Fig. 5: He walked the dog.

Fig. 6: He walked the prisoners to their cells.

In school, we have traditionally done some basic parsing of the meaning of sentences like
these. There’s a subject (he) and a predicate (his walking). There are nouns, pronouns that can
point to a noun, a verb, and we can see that it’s in the past tense. And there are some handy rules
like, a sentence should always have a verb, and check subject-verb agreement—it can’t be “he
walk” in some dialects of English, though it can be in others.
Nevertheless, there are subtleties that school grammars miss. These are three very different
kinds of “walk.” Walking to work is goal and direction-oriented, from A to B. Walking the dog is
from A and back to A. But when the dog is enthusiastically pulling at the leash, isn’t the dog
really walking its human carer? And walking the prisoners is walking that is meant for them but
against their will.
“Walk” can mean importantly different things. Professional linguists can tease out these
differences grammatically in the nuances of transitivity, mood, voice, case, and more. Yet even
when we don't have the technical words for it, we know the differences. People do grammar in
their brains, sometimes consciously but mostly unconsciously. Grammar is how we make sense
of the world. (A little later in the paper we’re going to suggest a way to make grammar more
manageable, bringing more meaning to explicit consciousness in teaching and learning.)
Generative AI understands these three different kinds of “walk” along with perhaps
thousands of other kinds of “walk” by the statistical relation of each “walk” with the words
around it. “Walk” is not just one token, but thousands or tens of thousands determined by textual
context. This is how the computer produces meanings for humans. The computer has no capacity
to mean. It is just a calculating machine with a vocabulary of billions of pseudo-words. The
semantics it chances upon by calculation are latent, but no more than that.
Compared to LLMs, human brains work in a completely different way (Cope and Kalantzis
2024b, Siemens et al. 2022). There is no way a brain could know the billions of words that have
been copied from the web for the LLM, and the billions of parameters that show the statistical
probability of connection of each unique token with its surroundings.
The human mind, by contrast, classifies the world grammatically. “He” is a kind of person;
“walk” is a kind of action; “...ed” means the action has been completed. Grammar is an
elementary theory of how the world works. This is how we make human sense of the world’s
otherwise endless and bewildering complexity.

The human mind works grammatically. Generative AI works statistically.

Multimodal Meaning

Generative AI is essentially a technology of written text. Only derivatively is it multimodal. Take
images, for instance. The resource for image generation is billions of digitized images. However,
the AI can't know what is in the images other than the array of pixels represented in zeros and
ones. It only "knows" the textual labels that have been applied, such as "man walking," "cream
colored standard poodle," and "rural footpath." There will be thousands of images labelled with
one or more of these attributes. Then the only way to generate an image is with a textual prompt:
“give me an image—man walking, cream standard poodle on a rural footpath.” For the era of
Generative AI, learners need to become proficient in multimodal transpositions such as these.
They need to become good “prompt engineers.” This is a multimodal, text-to-image art.

Even though written text is primary in Generative AI, the technology is nevertheless
powerfully multimodal. The transpositions are dazzlingly effective, sufficiently so to warrant
application of our notion of design. "Man walking cream colored standard poodle on a rural
footpath" (Fig. 5) is an entirely coherent, beautifully formed image, the likes of which has never
been made or seen before.

Speech—Text—Speech

When it handles speech, Generative AI is also only derivatively phonological. To be included in
the LLM as a source text, or when a prompt is oral, speech must first be transliterated into text.
Then if the Generative AI response is oral, transliteration needs to occur back from text to
speech. In this process, the characteristic meaningful features of speech are mostly lost, including
prosody, dialect, gesticulation, embodied context, redundancy, hesitation, circumlocution, and
more. In any event, LLMs are biased towards the grammar of written text if for no other reason
than most of their sources are published and digitized writing. The ordering of words is more
carefully crafted in text than speech and thus more amenable to statistical processing.
In comparison with the limited extent and inadequate ways in which Generative AI manages
text-to-speech transpositions, the human transpositions are deceptively difficult. They are
certainly much more challenging than the sound-letter transpositions at the center of beginning
literacies that are focused principally on phonemics. Speech is organized across time. On the
human sensorium speech can happen purely in sound, though it is frequently aligned with other
temporally ordered meanings such as embodied gesture. Text, on the other hand, is arranged in
two-dimensional space. It can be purely a matter of vision, and frequently aligned with image. As
material, embodied, and cognitive processes, text and speech could hardly be more different
from each other (Kalantzis and Cope 2022). This is why literacy is so important and so
challenging for learners, not to be trivialized by reduction to a simplistic handful of sound-letter
transliterations. It’s much, much harder than that.
The phonics advocates are right about this much: it’s a good thing to call out explicitly key
patterns in the meaning making process. In the limited time for learning in school, this is more
efficient than immersion models. It also has the benefit of exercising the relation between
cognition (the text) and metacognition (generalizations about its patterning).
There are forty-four basic sound-letter combinations in English, but these do not bear
belaboring for too long. Forty-four things are not too hard for young minds to learn. But literacy
teachers do also need to take into account the vastly different cognitive and material processes of
arranging meaning in time (speech, sound, body) compared to space (text, image, space, object),
and the necessary multimodal transpositions and complementarities. Literacy—even text-
oriented literacy—is of necessity always multimodal. Kindergarten teachers and children’s book
authors have known this forever.
On the scale of the challenge of multimodal literacy, the matter of sound-letter
correspondences is probably best left to the AI. It can likely drill these better than any human
teacher. Put phonics in some fun computer games, and the one-to-one AI will do a better job of
tracking and supporting individual learner progress than the one-to-n teacher. Besides, there are
other more important things for the teacher to do that a machine cannot, such as nurturing the
socio-emotional environment of learning.

In any event, after phonics, human reading and writing are semantically rather than
phonologically oriented. When reading, one’s eyes jump along the line from one meaning unit to
another in movements called saccades. Each unit of attention is much the same as the Generative
AI token in the machine’s only latent semantics. In beginning literacy it is helpful to work on the
transliteration of the sounds of speech into phonemic graphemes. But get this done quickly! And
just get it done in a rough and ready kind of way because, beyond the forty-four stand-out
contrasts in English, there are thousands of exceptions to rules and subtler sound combinations,
the nuances of which can only be learned at the whole-word level, along with the elision of words in
speech. It is no accident that text-to-speech technologies break speech into morphological units, not
smaller phonological units.
We’ve said that the transposition between temporally ordered speech and spatially ordered
written text is enormously challenging cognitively as well as performatively. Perhaps this is the
most challenging of all transpositions between forms of meaning—text is closer to image, and
speech is closer to sound and embodied presence. Helpfully perhaps, the digital world has brought
us hybrids, where we have the best and the worst of both worlds. Text messaging, for instance,
is temporal to the extent that there is the pressure of the other person waiting, and greater tolerance
for the spatially ill-formed arrangement of graphemes. Nevertheless, there are some, if limited,
opportunities for spatial design—looking back quickly over a message, correcting the most
egregious errors, removing redundancies, elaborating on things that may on second glance seem
less explicit than needed given the contextual distance between the interlocutors. Pedagogically,
having children text message each other can be a connecting pedagogy, bridging the enormous
differences between text and speech as forms of meaning. In the era of Generative AI, the
learner’s interlocutor could also be a helpful AI, working to ease the learner’s transpositions from
the temporal, linear design of speech to the spatial, multilinear design of text. And a few
encouraging emojis might help!

A Multimodal, Transpositional Grammar

Generative AI creates meaning by paying statistical attention to the connections of words
(tokens) to each other. This technology of attention is the basis of the "transformer" part of GPT
(Vaswani et al. 2017 [2023]).
Humans, by contrast, pay grammatical attention. The nouns and verbs people use embody a
theory of the world where there are things and actions. Even when we don’t call things out
grammatically, our subconscious minds do. Putting things and actions together is what we do to
make meaning. A key question for education is, to what extent do we call out explicitly these
meanings of meaning, this metameaning? How much of unconscious meaning do we want to
bring to consciousness in literacy pedagogy?
Immersion models of literacy tell us that we hardly need to do this at all. Just give the
learners easy then progressively harder texts, so these theories go, and they will make
increasingly sophisticated sense for themselves. They’ll absorb the complexities and subtleties in
use. Children in school can learn to read and write in the same way babies learn to speak.
(Though don’t underestimate the explicit callouts that parents provide babies!)
Our counterargument is that education is a limited opportunity in terms of time and
resources. Generalization is a more efficient way of learning. Explicit call-out is pedagogically
powerful, exercising the capacity to move between cognition and metacognition, or between
knowledge specifics and knowledge transfer.
Importantly too, immersion pedagogy favors insiders whose informal life experience means
they seem naturally to “get” the discourse of schooled literacy. Explicit pedagogy is particularly
beneficial for learners whose lifeworlds are more distanced from the culture of schooling and for
this reason have historically been failed by literacy (Cope and Kalantzis 1993, Delpit 1988).
These are some of the reasons why pedagogical discourses should be characteristically more
explicit than vernacular ones. We want to call this explicit version of literacy “grammar,” but in a
sense that extends well beyond nouns and verbs:

Grammar is an educational metadiscourse that describes and explains the patterning of meaning.

In this expanded definition, phonics may as well be our starting point. We have created a map
of forms of human meaning according to the basis of their design in time or space (Fig. 7). Here,
we have put text and speech together because this is such a big focus in literacy, but used
maximum color contrast to indicate how very different they are. As we have argued, the
transpositions are difficult—so difficult that we need to spend years working on them in school.
Phonics is just a start in the project of explicitly calling out one aspect of the transposition. Image
is more closely aligned to text in its two-dimensional, spatial array. Sound and body closely align
to speech in their presentation across time. So, it makes pedagogical sense to work on these
easier transpositions first, or at least in parallel to the very difficult speech-to-text transposition.
Besides, digital media make the other transpositions practicable, attractive, contemporary, or just
cool.

Fig. 7: Forms of human meaning

Then, across these forms of meaning, we can speak about a variety of meaning functions
(Fig. 8). Rather than calling “dog” a noun and seeing a dog in the picture, we can speak of
reference because the sentence and the image both reference “dog.” Rather than calling “walk” a
verb, we can speak of agency both in the sentence and the image. We question both text and
image, "Who or what is this about?" (reference) and "What is happening?" (agency). The answers
to these questions can be at least as subtle and nuanced as Generative AI's, as we interpret the
different kinds of walking that are possible. Except, rather than the brute force of statistics, we
humans use our grammatical brains to make the fine distinctions.
Then structure: How do we organize a sentence? How do we organize an image? And how
do we use text to get Generative AI to organize an image for us? Prompt engineering for image
generation is a text-to-image transposition, an art demanding careful design. Next, context: Who
is “he”? What do we need to know outside of the sentence and outside of the frame of the image
to make sense of it? Generative AI can only look to surrounding words in text or image labels for
clues, but we humans have broader capacities for understanding. Finally, interest: What makes
someone go to work, walk a dog, or imprison people? What drives the meaning? Only humans
can know that.

Fig. 8: A functional grammar

As that great linguist Michael Halliday said, “A grammar is a resource for meaning, the
critical functioning semiotic by means of which we pursue our everyday life. It therefore
embodies a theory of everyday life; otherwise it cannot function in this way… A grammar is a
theory of human experience” (Halliday 2000 [2002]: 369-70). If the discipline of science hangs
together around theories of the natural world, then the discipline of multimodal literacy is a
theory of human meaning.

The processes we have called multiliteracies center around two vectors of transposition. On
the one dimension, we have our necessarily wavering attention to the meaning functions of
reference, agency, structure, context, and interest. We make sense of each of these functions in
relation to the others. On the other dimension, we want a grammar that will work for all forms of
meaning, separately or in multimodal combination. Here, we have multimodal transpositions
where we can say things in one form of meaning or another or in all sorts of combinations,
though of course the meanings are never quite the same (Fig. 9) (Cope and Kalantzis 2020,
Kalantzis and Cope 2020).

Fig. 9: A transpositional grammar of multiliteracies

Overview and Summary: Metalanguage of AI Literacy

So far, we have been speaking at a level of broad generalization, exploring a metalanguage for
parsing multimodal meanings in the time of AI. Now, we’re going to get more specific in the
form of a little glossary of the key terms of AI, focusing particularly on Generative AI. We have
introduced many of these terms already, but now we will summarize, define, and string them
together in a roughly theoretical, narrative order.

• Binary Notation - Computers name things in zeros and ones and calculate their relations
in base two, nothing more.
• Unicode - The universal symbolic character set for digital meaning. Each character has a
unique name in binary notation.
• Token - The smallest meaningful sequence of Unicode characters, a word or part of a
word.
• Large Language Model (LLM) - A corpus of text scraped from the web, billions of
words of published text, pretrained with calculations as to the statistical probability of
one token following another.
• Vector - A number representing the proximity of one token to a nearby token, reflecting
its shades of meaning. Think of the three different kinds of “walk” in the example we
gave earlier.
• Parameters - The scale and functions of the LLM: the number of tokens, its "context
window" (see below), and the things it can do, for instance translation from one language
to another. Typically, an LLM has billions of parameters.
• Machine Learning - During its training phases, the LLM learns about the tokens stored
in its database. There are two kinds of machine learning: supervised and unsupervised.
• Supervised Machine Learning - The method by which humans teach the system how to
behave, for instance applying “filters” (see below).
• Unsupervised and Reinforcement Learning - Parsing sentence after sentence in the
corpus, the machine asks itself billions of times, “what is most likely the next word?”
Then it gives itself the answer, right or wrong, refining its probability calculations each
time. In education, we have long abandoned the most mechanical versions of behaviorist
psychology. However behaviorism has found a new home in the machine, of the most
mind-numbing kind and on an industrial scale.
• Chatbot - When a person talks to a computer in natural language, originally in a pre-
programmed dialogue (Weizenbaum 1966), but now in dialogue with an LLM.
• Prompt Engineering - Crafting a trigger for the LLM to respond to, just like a classroom essay
prompt. The results at times feel like a parody of school writing, which also means
“cheating” is one of the most immediate uses of the technology (Mollick and Mollick
2023).
• Context Window - The amount of text an LLM can consider in a prompt.
• Fine-Tuning - A generic or “foundation” LLM can be provided supplementary specialist
text such as validated scientific knowledge. In a process called Retrieval Augmented
Generation (RAG) (Lewis et al. 2020), trusted text is uploaded into a vector database
where the relations of tokens are calculated. In this way, the LLM becomes more reliably
knowledgeable for the chosen domain of knowledge.
• AI Agents - Prompting an LLM from multiple agent or actor perspectives and relating
these perspectives to each other, somewhat like a debate team (Li et al. 2024).
• AI Bias - The sexism, racism, violence, profanity, and all manner of social evil that are to
be found in the source texts used by LLMs. This is because the Generative AI has scraped
from the web anything and everything it can, good and bad. In a sense, they are true to
the legacy of the written word. They express bias to the extent that the world they
captured expresses bias (Magee et al. 2021).
• Filters - Removing AI bias. LLMs do this by covering it up, excluding responses that
express views offensive to liberal sensibility. Euphemistically, the LLM makers do this
with “supervised machine learning.” A not-funny joke says that “AI” stands for “Absent
Indian”—the cheap global labor-force that laboriously trains the AI not to say certain
things.
• Jailbreak - A clever prompt that gets past the filters (Shen et al. 2023).
• Hallucination - When Generative AI makes up facts: it only knows how to write good
sentences but has no way of checking whether the content is true (Klein 2023, Munn,
Magee and Arora 2023b).
• Black Box - There is no knowing exactly which source texts have been used in response
to a prompt and how the AI came to generate a particular sentence. The underlying
statistics of its neural nets are convoluted beyond recovery. The machine is an inscrutable
black box (Ashby 1956: 86).
• Multimodal AI - Generative AI can create images, video, and sound, but only on the
basis of textual labelling of sources and written-textual prompts to generate (Zhang et al.
2023). Software and mathematics are, in our definition, already written-textual.
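The Vector entry above can be given a minimal numerical illustration. The three-dimensional vectors below are invented for the example; real LLM embeddings have hundreds or thousands of learned dimensions. The point is only that tokens occurring in similar contexts end up with nearby vectors, with nearness measured here by cosine similarity.

```python
import math

# Hypothetical embedding vectors, invented for illustration only.
walk_to_work   = [0.9, 0.1, 0.3]
walk_the_dog   = [0.8, 0.2, 0.4]
walk_prisoners = [0.1, 0.9, 0.2]

def cosine(a, b):
    """Cosine similarity: near 1.0 for similar contexts, near 0.0 for unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

print(cosine(walk_to_work, walk_the_dog))    # high: similar contexts
print(cosine(walk_to_work, walk_prisoners))  # lower: different contexts
```

This is the numerical sense in which the thousands of different "walk" tokens discussed earlier carry their shades of meaning.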

In the time of AI, literacy teachers and learners need to know at least some of this. If Generative
AI can write, they need to know how it writes in its peculiarly non-human ways, and what
problems and limitations arise from this machinic writing.

III WHAT IS TO BE DONE? LITERACY PEDAGOGY FOR THE TIME OF AI

Changing the Frame of Reference for Literacy Learning—In Theory

On some measures, Generative AI is a better writer than most humans, producing texts that are
well formed, grammatically perfect, and typo-free. Now that we have a machine that can write, why
bother to teach writing in school?
Our reason for teaching reading and writing needs to change from a matter of utility to the
project of human growth. Learning to write is learning to think—to transpose inner speech
(Vygotsky 1934 [1986]: 119) into externalized, two-dimensional textual space. The grammar of
speech is fundamentally different from the grammar of text. The grammar of inner speech is
even more distant from text for many reasons, prominent among which is the need for
explicitness if the meaning is to carry across time and space (Kalantzis and Cope 2020: 50-52,
Kalantzis and Cope 2022). And to the extent that inner speech is an amalgam of images—
“mindsight” as Colin McGinn calls it (McGinn 2004)—“imagining” is very different from
making and viewing pictures. These are huge transpositions, at once cognitive, multimodal, and
materialized through work with media. So, even if there is a machine to do it for us now, writing
remains an important thing to learn.
Generative AI puts the narrow, utilitarian literacy pedagogies out of business, with their
standardized tests to match. Literacy can no longer afford to be narrowly instrumental and
functional. This moves literacy into a more serious, challenging, and much more interesting
place—cognitively as an embodied and material practice.
With Generative AI, the machine can also help learners develop the deeper cognitive
processes and embodied capacities that underlie writing—the transposition of our
representations-for-ourselves into communication-for-others and the empathetic interpretation of
the varied social meanings we encounter. For this, we can develop pedagogies in which students
learn with and through the machine. This is not the same as having the machine do it for them,
otherwise labelled “cheating.” In the broad definition of grammar we have developed in this
paper we might now ask, how can the machine help bring to consciousness the patterning of
meaning? How can the machine help you learn how to exercise your grammatical capacities to
mean for yourself? How can it suggest a range of alternative interpretations to support you in
forming your own interpretation of the text? Remarkably, when properly calibrated for
educational application, Generative AI can do all of these things. We want to call this human-
computer relation “cyber-social literacy learning.”
As an aside—and we expand on this argument elsewhere (Cope and Kalantzis 2024b)—we
think that "artificial intelligence" is an unhelpful idea. It's as if the machine can replicate and
even someday replace human intelligence. It also implies that the brain works in broadly similar
ways to a computer, as if the brain were just a calculating machine working in binary notation.
On the contrary, machines and humans are profoundly different. Computers are much better than
humans at some things. In their tedious and laborious calculating ways, they can relieve humans
of tedium and boredom. Cyber is a feedback relationship (Cope and Kalantzis 2022b), where
these two kinds of “intelligence” (which we place here in inverted commas because they don’t
even deserve the same word), come together in a complementary relationship. The value of their
pairing arises from their profound differences. But alas, everyone speaks of AI these days, so we
do too. Nevertheless, we want to propose that:

Cyber-social literacy learning is the complementary relationship between a machine that can
write and a human writer.

In cyber-social relation, here are some tedious things the Generative AI machine will be able
to do much better than a human teacher. We’ve already mentioned teaching phonics. After that, it
will be able to offer on-the-fly feedback as students write. Keystroke capture will be able to work
out the extent of the help provided by the machine. Generative AI will be able to track learner
progress as they become progressively more independent writers. It will calibrate learning
activities and assessments to different learners across many dimensions, including not only
literacy capacities narrowly conceived but also experiential lifeworld differences. It will grade work.
Much more effectively than old-fashioned summative assessments, it will provide continuous
formative assessment and provide summative progress assessments based on all the work
students have done (Hao et al. 2024). In reading, it will ask, “How do you interpret this text?”, not
because there can be a straightforwardly correct ABCD answer, but in a dialogue that probes the
depth of the student’s interpretation, distinctive as that might be given the peculiarities of their
life history and interests.
Of course, in life from now on, there are going to be times when the machine writes for us.
When it does, how do we engineer the best prompts? Then there is a new critical role for the
reader. Is it hallucinating? Does it express AI bias? Or have the AI bias filters themselves
distorted meaning? What sources may it have left unacknowledged? What intellectual property
may have been stolen, or treated with disrespect by the AI’s failure to acknowledge it? Have my
privacy and security been compromised by prompting this writing from the machine? These are
key questions for a critical AI literacy.

Changing the Frame of Reference for Literacy Learning—In Practice

Since 2000, we have been building experimental online writing and writing assessment spaces,
in a number of loosely linked applications under the overall platform name, Common Ground
Scholar or CGScholar.com (Cope and Kalantzis 2023a).1 Early in 2023, we added an AI review
component into the writing project workflow, shown in Fig. 10. After a first draft, an AI review
provides feedback to the writer. The writer then revises and submits their work for human peer
review. When they receive this peer feedback, they give the reviewer feedback on their feedback.
After that, they write a change note, discussing their knowledge gains from the AI and human
reviews, and comparing the AI review with the human review. After revision, they submit their
work for final instructor review and for publication to their personal portfolio and the community
knowledge bank.

Fig. 10: CGScholar Workflow

We implemented AI review on a trial basis from the beginning of 2023 in our master’s and
doctoral program at the University of Illinois. By the end of 2023, 353 students in 15 courses
across six cycles of intervention had used the environment. Each cycle of intervention produced
a new software release, with research and development proceeding according to a mix of agile
programming and educational design research methodologies that we have called “cyber-social
research” (Tzirides et al. 2023a).
Each of the 353 students produced major projects of 3,000-5,000 words with multimedia embeds.
Fig. 11 shows the first page of an example. Most students were practicing educators, many of
whom were specialized teachers of literacy. Others were educators working across a range of
disciplines who consider writing an important aspect of their teaching and their students’
learning.

1
Funding acknowledgements: Learning Analytics: US Department of Education, Institute of Education Sciences:
“The Assess-as-You-Go Writing Assistant” (R305A090394); “Assessing Complex Performance” (R305B110008);
“u-Learn.net: An Anywhere/Anytime Formative Assessment and Learning Feedback Environment” (ED-IES-10-C-
0018); “The Learning Element” (ED-IES-10-C-0021); and “InfoWriter: A Student Feedback and Formative
Assessment Environment” (ED-IES-13-C-0039). Bill and Melinda Gates Foundation: “Scholar Literacy
Courseware.” National Science Foundation: “Assessing ‘Complex Epistemic Performance’ in Online Learning
Environments” (Award 1629161). Cybersecurity: Utilizing an Academic Hub and Spoke Model to Create a National
Network of Cybersecurity Institutes, Department of Homeland Security, contract 70RCSA20FR0000103;
Infrastructure for Modern Educational Delivery Technologies: A Study for a Nationwide Law Enforcement Training
Infrastructure, Department of Homeland Security, contract 15STCIR00001-05-03; Development of a Robust,
Nationally Accessible Cybersecurity Risk Management Curriculum for Technical and Managerial Cybersecurity
Professionals, Department of Homeland Security, contract 70SAT21G00000012/70RCSA21FR0000115. Medical
Informatics: MedLang: A Semantic Awareness Tool in Support of Medical Case Documentation, Jump ARCHES
program, Health Care Engineering Systems Center, College of Engineering, University of Illinois, contracts P179,
P279, P288.

Fig. 11: Screenshot of student writing in CGMap/CGScholar with multimodal student writing on
the left and AI feedback on the right, color coded by rubric criterion.

The AI review and the human reviews (peer, self, and instructor) use the same rubric, based
on the multiliteracies or Learning by Design pedagogical schema (Fig. 12). We have discussed
this pedagogy elsewhere (Cope and Kalantzis 2015), and there is a wealth of research—our own
and others’—describing and critically analyzing its application at all levels of education from K-
12 schooling to college and university, reviewed elsewhere (Kalantzis and Cope 2023).

Fig. 12: A snapshot of the multiliteracies pedagogy in a version adapted for higher education.

For those interested in these rapidly evolving technologies, here is a quick technical
description of what happens in the AI review. The underlying LLM or “foundation model” we
have been using has been through successive versions of OpenAI’s GPTs as they have been
released. CGMap is connected to these via API (Application Programming Interface), but is
designed to connect to any LLM, including soon, we hope, more transparent and secure open
source LLMs.
As we have argued elsewhere, Generative AI is not suitable for unmediated use in education
contexts (Cope and Kalantzis 2023d, Cope and Kalantzis 2024a). Our starting point is that
CGMap must heavily recalibrate the GPT. This is accomplished in two ways. The first is via
prompt engineering. For education, this involves creating a different kind of rubric, one that by
normal standards might be considered verbose and prolix, spelling out each criterion and rating
level in terms that would typically seem excessive and redundant. However, this is what
works best to get quality outcomes from the GPT. We have the students use the same review criteria
when reviewing their own and their peers’ work, not only for the sake of full transparency, but also
so they gain experience in this new universe of prompt engineering. Then the software passes
over the student work multiple times, once for each criterion. In our project, we prompted the
GPT ten times, using the eight Learning by Design “knowledge processes” plus two more for
expression and referencing protocols.
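To make this concrete, here is a minimal sketch of this per-criterion prompting loop, in Python. The criterion labels follow the Learning by Design knowledge processes; the prompt wording and the `llm_call` hook are illustrative assumptions for the example, not CGMap’s actual code.

```python
# Sketch: one LLM pass per rubric criterion, as described above.
# Ten criteria: the eight "knowledge processes" plus expression and referencing.

KNOWLEDGE_PROCESSES = [
    "Experiencing the Known", "Experiencing the New",
    "Conceptualizing by Naming", "Conceptualizing with Theory",
    "Analyzing Functionally", "Analyzing Critically",
    "Applying Appropriately", "Applying Creatively",
]
EXTRA_CRITERIA = ["Expression", "Referencing Protocols"]

def build_prompt(criterion, rubric_text, student_work):
    """Compose a deliberately verbose, single-criterion review prompt."""
    return (
        f"You are reviewing a student project against ONE criterion only.\n"
        f"Criterion: {criterion}\n"
        f"Rubric (spelled out in full): {rubric_text}\n"
        f"Student work:\n{student_work}\n"
        f"Give formative feedback on this criterion alone."
    )

def review_all_criteria(student_work, rubrics, llm_call):
    """One pass over the work per criterion: ten calls in this configuration."""
    feedback = {}
    for criterion in KNOWLEDGE_PROCESSES + EXTRA_CRITERIA:
        prompt = build_prompt(criterion, rubrics.get(criterion, ""), student_work)
        feedback[criterion] = llm_call(prompt)
    return feedback
```

In use, `llm_call` would wrap an API call to the chosen foundation model; the loop structure is what matters here.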
The second way is a technology called retrieval augmented generation (RAG), which supplements
the foundation model when the calls from the prompts reach the LLM. Here, we have put all the
instructors’ published writings, plus every piece of work our students have written over the past
five years, into a vector database in which tokens have been processed according to their relations
to each other. This is the program’s knowledge source—more than 35 million words in this
implementation—deeply informed by the theoretical and empirical research literature on literacy
pedagogy and innovative applications of technology in learning. In a profound sense, it is not the
AI that is writing the review, but the specialized collective intelligence of the graduate students
and professors in our program.
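The retrieval step can be sketched as follows. This is a toy illustration only: a bag-of-words vector and cosine similarity stand in for the learned embeddings of a real vector database, and the function names are our own for the example.

```python
# Sketch of the retrieval step in retrieval augmented generation (RAG):
# passages most similar to the student's text are retrieved from a corpus
# (a toy stand-in for the program's knowledge source) and prepended to the
# prompt before it reaches the LLM.
from collections import Counter
from math import sqrt

def embed(text):
    """Toy embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Return the k corpus passages most similar to the query."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:k]

def augment_prompt(query, corpus):
    """Prepend retrieved passages as context for the LLM call."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nReview this student work:\n{query}"
```

A production system would replace `embed` with a learned embedding model and the sort with an indexed nearest-neighbor search, but the shape of the pipeline is the same.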
We have written up some of the early implementations (Tzirides et al. 2023b, Zapata et al.
2024), and we are now writing up the more recent ones. We are by nature cautious and skeptical
of techno-enthusiasts. But to be honest, we’re shocked to find ourselves confessing that the AI
feedback is more detailed and more helpful than we have ever been as professors. If this is the
case for these hardest of literacy texts—writing about writing—what does that mean for writing
at every other level of learning? We’re working now on K-12 applications.

Towards Cyber-Social Literacy Learning

What follows is our proposal to set a new agenda for literacy in the time of Generative AI.

1. Broaden the definition of written text.


Literacy teaching needs to embrace the full scope of Unicode in today’s textual practices,
including emojis, icons, and other ideographs increasingly interwoven into text. It also needs to
embrace the convergence of writing with mathematics and coding. The literacy teacher may not
have to become a teacher of mathematics or computer science, but the mathematics and
computer science teachers certainly need to become literacy teachers in ways now integral to their
discipline areas. Indeed, in the time of Generative AI, every teacher is a literacy teacher.

2. Recognize that literacy is of necessity multimodal.


For some decades, digitization and the internet have juxtaposed written text with other forms of
meaning, rendering anachronistic literacies that studiously separated out text as their object of
study. Even before that, literacy was multimodal in mostly unacknowledged ways—multimodal
transpositions between speech (an essentially audio and temporal medium) and text (an
essentially visual and spatial medium) have always been much harder than the mere
transliteration of speech sounds into phonemes. Generative AI reintroduces multimodality, but in
a new way, producing meanings in many forms but only by transposition from text. This opens
out exciting new pedagogical possibilities for multimodal literacies.

3. Literacy is dialogical, interactive, interpretive, and cyber-social.


Contrary to the idea that literacy is straightforwardly communication in the sense of decoding or
comprehending a text’s intrinsic or intended meaning, literacy involves the interaction of humans
whose lifeworld experiences and interests are inevitably varied. As much as anything, literacy is
a question about the depth and connectedness of the reader’s and the writer’s interpretation.
Generative AI has become a coherent interlocutor. Via AI agents, it can address learners
according to a range of perspectives and judge the level of sophistication in their responses. On
this basis, it can also create a profile or model of each learner that allows it to personalize or
customize responses sensitive to their diversity on a wide range of dimensions. This opens new
opportunities for what we have called cyber-social learning, as well as bringing writing and
reading closer as pedagogical practices—the student writes to elicit a readable response from the
AI. Reading AI, however, must be critical, always on the lookout for hallucinations, AI bias,
breaches of intellectual property, and Generative AI’s other known deficiencies.

4. Teach Grammar Again.


We have defined grammar broadly for the digital and AI era as the patterning of meaning. In this
sense, grammar is an educational metadiscourse that describes and explains that patterning.
In ordinary life, we mostly live grammar unconsciously. Starting with phonics, schooling brings
this discourse to consciousness—partly because there are not enough hours in the school year for
immersion models and partly because metacognition is one of the fundamental objectives of
school learning. It is the basis for knowledge transferable to a wide range of social contexts,
even those not immediately anticipated in school learning. For this, we need to
build a grammar suitable for multimodal meaning in the digital and AI age.

5. Create New Literacy Assessments.


With Generative AI, we can have entirely new and better literacy assessments. We know all too
well the flaws of old literacy assessments: small samples in time, with a narrow view of
comprehension as a proxy for reading and even literacy as a whole, and writing assessments that
are notoriously variable in their judgments, offering gross ratings with just a few levels and
limited feedback. Generative AI opens the possibility of always-helpful, on-the-fly, continuous
formative feedback and progress assessments that analyze everything a student has written
within a class or course.

6. Seize the Day, Take Control of the AI.

It will, of course, be impossible to control unfiltered use of GPTs. In their unfiltered form, they
are of more use for cheating than anything else. But layered over the foundation LLMs,
educational software applications can be made more attractive to learners and their teachers than
the unfiltered, publicly accessible chat sites. We have mentioned in this paper the techniques of
prompt engineering, Retrieval Augmented Generation, and multiple agents. Dedicated
educational applications can be much more helpful and learner-friendly than the public sites in
the wild, so to speak. More than merely helpful, dedicated educational applications can catch
cheating via keystroke and logfile analysis. They can carefully track learner progress—and of
course, all such tracking must be fully transparent to learners, teachers, and parents. This requires
AI literacy, where teachers and students learn the lingo and understand the basic mechanics of
text-centric AI. It also requires that there is no AI use without human moderation.
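As one illustration of what keystroke and logfile analysis can yield, the following sketch estimates how much of a draft arrived by pasting rather than typing. The event format is an assumption invented for this example, not CGScholar’s actual logging schema, and a real system would combine many such signals.

```python
# Illustrative only: estimate the share of a draft that was pasted in rather
# than typed, from a hypothetical editor event log. Each event records its
# type ("key" or "paste") and the number of characters it contributed.
def paste_ratio(events):
    """events: list of {"type": "key" | "paste", "chars": int}."""
    typed = sum(e["chars"] for e in events if e["type"] == "key")
    pasted = sum(e["chars"] for e in events if e["type"] == "paste")
    total = typed + pasted
    return pasted / total if total else 0.0
```

A high ratio is not proof of cheating, only a signal for human moderation—consistent with the requirement above that there be no AI use without it.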

7. Develop a Program of Education Justice for the Time of Artificial Intelligence.


During the first decades of this century, we have witnessed the widespread application of
computers in learning. But, let’s be honest, this has not had any discernible impact on the wicked
problem of educational and social equality. Literacy outcomes are a significant marker, if not
cause, of this stubborn and persistent reality. Our question now must be, can Generative AI help
change the game? Can it help calibrate learning to address the great differences between students
across many dimensions? Can inexpensive, one-to-one, AI-supported literacy teaching close the
gap? To do so will require new pedagogical approaches and changed classroom ecologies.

We face two starkly different scenarios: one in which AI fails to address, or even exacerbates,
unequal opportunity; and another in which it might be possible to ameliorate the
social divisions historically encountered by and often tragically reproduced through education. In
this context, we ask the overarching programmatic question: What might be the shape of an
agenda of education justice in a time of artificial intelligence?

References

Anderson, Benedict, Imagined Communities: Reflections on the Origin and Spread of


Nationalism, London: Verso, 1991.
Ashby, W. Ross, An Introduction to Cybernetics, London: Chapman & Hall, 1956.
Beberman, Max, An Emerging Program of School Mathematics: The Inglis Lecture, Harvard
University, Cambridge MA: Harvard University Press, 1958.
Benkler, Yochai, The Wealth of Networks: How Social Production Transforms Markets and
Freedom, New Haven CT: Yale University Press, 2006.
Bernays, Edward, Propaganda, Brooklyn NY: IG Publishing, 1928 [2005].
Bernstein, Basil, Class, Codes and Control: Theoretical Studies Towards a Sociology of
Language, London: Routledge & Kegan Paul, 1971.
Christensen, Clayton M., Michael B. Horn and Curtis W. Johnson, Disrupting Class: How
Disruptive Innovation Will Change the Way the World Learns, New York: McGraw Hill,
2008.

Cope, Bill and Mary Kalantzis, "The Power of Literacy and the Literacy of Power,” pp.63-89 in
The Powers of Literacy: A Genre Approach to Teaching Writing, edited by Bill Cope and
Mary Kalantzis, London: Falmer Press, 1993.
Cope, Bill, "The Language of Forgetting: A Short History of the Word,” pp.192-223 in Seams of
Light: Best Antipodean Essays, edited by Morag Fraser, Sydney: Allen & Unwin, 1998.
Cope, Bill and Robert Black, "Print Technology in Transition,” pp.151-71 in Creator to
Consumer in a Digital Age: Book Production in Transition, Vol. 1, edited by Bill Cope
and Dean Mason, Melbourne AU: Common Ground, 2001.
Cope, Bill and Mary Kalantzis, "The Things You Do to Know: An Introduction to the Pedagogy
of Multiliteracies,” pp.1-36 in A Pedagogy of Multiliteracies: Learning by Design, edited
by Bill Cope and Mary Kalantzis, London: Palgrave, 2015.
Cope, Bill and Mary Kalantzis, Making Sense: Reference, Agency and Structure in a Grammar of
Multimodal Meaning, Cambridge UK: Cambridge University Press, 2020, doi:
https://doi.org/10.1017/9781316459645.
Cope, Bill and Mary Kalantzis, "Artificial Intelligence in the Long View: From Mechanical
Intelligence to Cyber-social Systems,” Discover Artificial Intelligence, 2(13):1-18,
2022a, doi: https://doi.org/10.1007/s44163-022-00029-1.
Cope, Bill and Mary Kalantzis, "The Cybernetics of Learning,” Educational Philosophy and
Theory, 54(14):2352-88, 2022b, doi: https://doi.org/10.1080/00131857.2022.2033213.
Cope, Bill and Mary Kalantzis, "Creating a Different Kind of Learning Management System:
The CGScholar Experiment,” pp.1-18 in Promoting Next-Generation Learning
Environments Through CGScholar, edited by Matthew Montebello, Hershey PA: IGI
Global, 2023a, doi: https://doi.org/10.4018/978-1-6684-5124-3.ch001.
Cope, Bill and Mary Kalantzis, "A Multimodal Grammar of Artificial Intelligence: Measuring
the Gains and Losses in Generative AI,” Multimodality and Society, Online First, 2023b,
doi: https://doi.org/10.1177/26349795231221699.
Cope, Bill and Mary Kalantzis, "Towards Education Justice: Multiliteracies Revisited,” pp.1-33
in Multiliteracies in International Educational Contexts: Towards Education Justice,
edited by Gabriela C. Zapata, Mary Kalantzis and Bill Cope, London: Routledge, 2023c.
Cope, Bill and Mary Kalantzis, "Generative AI Comes to School (GPT and All That Fuss): What
Now?,” Educational Philosophy and Theory:13-17, 2023d, doi:
https://doi.org/10.1080/00131857.2023.2213437.
Cope, Bill and Mary Kalantzis, "Generative AI as a Writing Technology: Challenges and
Opportunities for Education,” in Encyclopedia of Educational Innovation, edited by
Michael A. Peters and Richard Heraud, Singapore: Springer, 2024a.
Cope, Bill and Mary Kalantzis, "On Cyber-Social Learning: A Critique of Artificial Intelligence
in Education,” in Trust and Inclusion in AI-Mediated Education: Where Human Learning
Meets Learning Machines, edited by Theodora Kourkoulou, Anastasia O. Tzirides, Bill
Cope and Mary Kalantzis, Cham CH: Springer, 2024b.
Delpit, Lisa D., "The Silenced Dialogue: Power and Pedagogy in Educating Other People's
Children,” Harvard Educational Review, 58(3):280-98, 1988, doi:
https://doi.org/10.17763/haer.58.3.c43481778r528qw4.
Eisenstein, Elizabeth L., The Printing Press as an Agent of Change: Communications and
Cultural Transformation in Early-Modern Europe, Cambridge UK: Cambridge
University Press, 1979.

Eloundou, Tyna, Sam Manning, Pamela Mishkin and Daniel Rock, "An Early Look at the Labor
Market Impact Potential of Large Language Models,” arXiv, 2303.10130, 2023, doi:
https://doi.org/10.48550/arXiv.2303.10130.
Gee, James Paul, The Anti-Education Era: Creating Smarter Students through Digital Learning,
New York NY: Palgrave Macmillan, 2013.
Gibson, James J., "The Theory of Affordances,” pp.67-82 in Perceiving, Acting, and Knowing:
Toward an Ecological Psychology, edited by Robert Shaw and John Bransford, Hillsdale NJ:
Lawrence Erlbaum Associates, 1977.
Goddard, Henry H., Human Efficiency and Levels of Intelligence, Princeton NJ: Princeton
University Press, 1920.
Goody, Jack, The Logic of Writing and the Organization of Society, Cambridge UK: Cambridge
University Press, 1986.
Graff, Harvey J., The Legacies of Literacy: Continuities and Contradictions in Western Culture
and Society, Bloomington IN: Indiana University Press, 1987.
Halliday, M.A.K., "Grammar and Daily Life: Concurrence and Complementarity,” pp.369-83 in
On Grammar: The Collected Works of M.A.K. Halliday, Volume 1, edited by Johnathon J.
Webster, London UK: Continuum, 2000 [2002].
Hao, Jiangang, Alina A. von Davier, Victoria Yaneva, Susan Lottridge, Matthias von Davier and
Deborah J. Harris, "Implications of LLM and Generative AI on Assessments,” In Review,
2024.
Herman, Edward S. and Noam Chomsky, Manufacturing Consent: The Political Economy of
Mass Media, New York NY: Pantheon Books, 1988 [2002].
Kalantzis, Mary, Diana Slade and Bill Cope, "Minority Languages and Mainstream Culture:
Problems of Equity and Assessment,” pp.196-213, 1984.
Kalantzis, Mary and Bill Cope, "On Globalisation and Diversity,” Computers and Composition,
31(4):402-11, 2006, doi: https://doi.org/10.1016/j.compcom.2006.09.002.
Kalantzis, Mary, Bill Cope, Eveline Chan and Leanne Dalley-Trim, Literacies, Cambridge UK:
Cambridge University Press, 2012 [2016].
Kalantzis, Mary and Bill Cope, Adding Sense: Context and Interest in a Grammar of Multimodal
Meaning, Cambridge UK: Cambridge University Press, 2020, doi:
https://doi.org/10.1017/9781108862059.
Kalantzis, Mary and Bill Cope, "After Language: A Grammar of Multiform Transposition,”
pp.34-64 in Foreign Language Learning in the Digital Age: Theory and Pedagogy for
Developing Literacies, edited by Christiane Lütge, London: Routledge, 2022, doi:
https://doi.org/10.4324/9781003032083-4.
Kalantzis, Mary and Bill Cope, "Multiliteracies: Life of an Idea,” International Journal of
Literacies, 30(2):17-89, 2023, doi: https://doi.org/10.18848/2327-0136/CGP/v30i02/17-
89.
Kalantzis-Cope, Phillip, "Whose Data? Problematizing the ‘Gift' of Social Labour,” Global
Media and Communication, 12(3):295-309, 2016.
Kendeou, Panayiota, Kristen L. McMaster, Danielle S. McNamara and Bess Casey Wilke,
"Literacy,” pp.553-75 in Handbook of Educational Psychology, edited by Paul A. Schutz
and Krista R. Muis, New York: Routledge, 2023, doi:
https://doi.org/10.4324/9780429433726-28.
Klein, Naomi, "AI Machines Aren’t ‘Hallucinating,’ But Their Makers Are,” The Guardian, 8
May 2023, https://www.theguardian.com/commentisfree/2023/may/08/ai-machines-
hallucinating-naomi-klein.

Kress, Gunther, "Design and Transformation: New Theories of Meaning,” pp.153-61 in
Multiliteracies: Literacy Learning and the Design of Social Futures, edited by Bill Cope
and Mary Kalantzis, London: Routledge, 2000.
Labov, William, Language in the Inner City: Studies in the Black English Vernacular,
Philadelphia PA: University of Pennsylvania Press, 1972.
Lessig, Lawrence, Free Culture, New York NY: Penguin Press, 2004.
Lévi-Strauss, Claude, Tristes Tropiques, Harmondsworth UK: Penguin, 1955 [1976].
Lewis, Patrick, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman
Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel and
Douwe Kiela, "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks,”
arXiv, 2005.11401, 2020, doi: https://doi.org/10.48550/arXiv.2005.11401.
Li, Junyou, Qin Zhang, Yangbin Yu, Qiang Fu and Deheng Ye, "More Agents Is All You Need,”
arXiv, 2402.05120, 2024, doi: https://doi.org/10.48550/arXiv.2402.05120.
Magee, Liam, Lida Ghahremanlou, Karen Soldatic and Shanthi Robertson, "Intersectional Bias
in Causal Language Models,” arXiv, 2107.07691, 2021, doi:
https://doi.org/10.48550/arXiv.2107.07691.
Márquez, Gabriel García, Love in the Time of Cholera, New York: Alfred A. Knopf, 1988.
McGinn, Colin, Mindsight: Image, Dream, Meaning, Cambridge MA: Harvard University Press,
2004.
Mollick, Ethan R. and Lilach Mollick, "Assigning AI: Seven Approaches for Students, with
Prompts,” SSRN, 2023, doi: http://dx.doi.org/10.2139/ssrn.4475995
Munn, Luke, Liam Magee and Vanicka Arora, "Unmaking AI Imagemaking: A Methodological
Toolkit for Critical Investigation,” arXiv, 2307.09753:1-14, 2023a, doi:
https://doi.org/10.48550/arXiv.2307.09753.
Munn, Luke, Liam Magee and Vanicka Arora, "Truth Machines: Synthesizing Veracity in AI
Language Models,” AI and Society, 2023b, doi: https://doi.org/10.1007/s00146-023-
01756-4.
New London Group, "A Pedagogy of Multiliteracies: Designing Social Futures,” Harvard
Educational Review, 66(1):60-92, 1996, doi:
https://doi.org/10.17763/haer.66.1.17370n67v22j160u.
OECD, OECD Employment Outlook 2023: Artificial Intelligence and the Labour Market, Paris
FR: OECD, 2023, doi: https://doi.org/10.1787/08785bba-en.
Peters, Michael A. and Tina (A.C.) Besley, Building Knowledge Cultures: Education and
Development in the Age of Knowledge Capitalism, Lanham MD: Rowman & Littlefield,
2006.
Phillips, Christopher J., The New Math: A Political History, Chicago: University of Chicago
Press, 2014.
Phillipson, Robert, Linguistic Imperialism, Oxford: Oxford University Press, 1992.
Sharples, Mike, "Automated Essay Writing: An AIED Opinion,” International Journal of
Artificial Intelligence in Education, 32:1119–26, 2022, doi:
https://doi.org/10.1007/s40593-022-00300-7.
Shen, Xinyue, Zeyuan Chen, Michael Backes, Yun Shen and Yang Zhang, ""Do Anything Now":
Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language
Models,” arXiv, 2308.03825, 2023, doi: https://doi.org/10.48550/arXiv.2308.03825.
Siemens, George, Fernando Marmolejo-Ramos, Florence Gabriel, Kelsey Medeiros, Rebecca
Marrone, Srecko Joksimovic and Maarten de Laat, "Human and Artificial Cognition,”
Computers and Education: Artificial Intelligence, 3:1-9, 2022, doi:
https://doi.org/10.1016/j.caeai.2022.100107.
Tsien, Tsuen-hsuin and Joseph Needham, Science and Civilisation in China, Vol. 5: Chemistry
and Chemical Technology, Pt. I: Paper and Printing, Cambridge UK: Cambridge
University Press, 1985.
Tzirides, Anastasia O., Akash K. Saini, Bill Cope, Mary Kalantzis and Duane Searsmith, "Cyber-
Social Research: Emerging Paradigms for Interventionist Education Research in the
Postdigital Era,” pp.86-102 in Constructing Postdigital Research edited by Petar Jandrić,
Alison MacKenzie and Jeremy Knox, Cham CH: Springer, 2023a, doi:
https://doi.org/10.1007/978-3-031-35411-3_5.
Tzirides, Anastasia O., Gabriela Zapata, Akash Saini, Duane Searsmith, Bill Cope, Mary
Kalantzis, Vania Carvalho de Castro, Theodora Kourkoulou, John Jones, Rodrigo
Abrantes da Silva, Jen Whiting and Nikoleta Polyxeni Kastania, "Generative AI:
Implications and Applications for Education,” arXiv, 2305.07605, 2023b, doi:
https://doi.org/10.48550/arXiv.2305.07605.
Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez,
Lukasz Kaiser and Illia Polosukhin, "Attention Is All You Need,” arXiv, 1706.03762,
2017 [2023], doi: https://doi.org/10.48550/arXiv.1706.03762.
Vygotsky, Lev Semyonovich, Thought and Language, Cambridge, MA: MIT Press, 1934 [1986].
Waquet, Françoise, Latin, Or the Empire of the Sign, London: Verso, 2001.
Weizenbaum, Joseph, "ELIZA—A Computer Program for the Study of Natural Language
Communication Between Man and Machine,” Communications of the ACM, 9(1):36-45,
1966.
Yang, Ke, Jiateng Liu, John Wu, Chaoqi Yang, Yi R. Fung, Sha Li, Zixuan Huang, Xu Cao,
Xingyao Wang, Yiquan Wang, Heng Ji and Chengxiang Zhai, "If LLM Is the Wizard,
Then Code Is the Wand: A Survey on How Code Empowers Large Language Models to
Serve as Intelligent Agents,” arXiv, 2401.00812, 2024, doi:
https://doi.org/10.48550/arXiv.2401.00812.
Zapata, Gabriela C., Akash K. Saini, Bill Cope, Mary Kalantzis and Duane Searsmith, "Peer-
Reviews and AI Feedback Compared: University Students’ Preferences,” EdArXiv
Preprints, 2024, doi: https://doi.org/10.35542/osf.io/uy8qp.
Zhang, Yiyuan, Kaixiong Gong, Kaipeng Zhang, Hongsheng Li, Yu Qiao, Wanli Ouyang and
Xiangyu Yue, "Meta-Transformer: A Unified Framework for Multimodal Learning,”
arXiv, 2307.10802, 2023, doi: https://doi.org/10.48550/arXiv.2307.10802.
Zuboff, Shoshana, The Age of Surveillance Capitalism: The Fight for a Human Future and the
New Frontier of Power, New York: Public Affairs, 2019.
