Weekly Readings June 21 2025
What can the “cockroaches” of the ad world teach about dealing with AI?
A rosé-soaked meeting in Cannes is like a postcard from the future.
When advertising executives describe themselves as cockroaches, they are not being self-deprecating.
Admen have shown a remarkable ability to survive what look like extinction-level events. Copywriters
adapted to radio in the 1920s; artists embraced TV in the 1950s. Agencies clung on in the early 2000s as
ads moved online. This week, in the face of another technological revolution, the admen steadfastly
held their annual jamboree on the French Riviera.
The latest upheaval, brought by artificial intelligence (AI), is testing the cockroaches as never before.
Advertising is one of the sectors most radically affected by AI so far. As such, adland offers a postcard
from the future for other industries. Three lessons stand out.
The first is that the moat between human workers and chatbot rivals is narrower than most people
think. Creative work is often seen as immune from automation. Large language models (LLMs) are
designed to predict the most likely answer, which is often the opposite of the most original one. The
best ads remain too weird and wonderful for any machine to have dreamt up: consider the campaign
that attached step-counters to chickens to advertise free-range eggs.
Yet this week in Cannes TikTok, Meta, Google and other ad platforms showed off AI-powered features
that can create passable video or rewrite ad copy at the click of a button. Their output will not win any
awards. That does not matter. Most of the $1trn that is spent on ads each year goes towards
workmanlike campaigns, rather than Cannes trophy-bait. Sam Altman’s prediction that AI will one day
be able to do 95% of marketing may sound like boosterism for his firm, OpenAI. But the inspired human-
made content that people present as a counter-argument is firmly within the remaining 5%. Robots will
content themselves with the rest.
Another lesson is that the biggest companies have the most to gain. This runs counter to a popular
narrative, that AI will democratise skills and intelligence. It is true that the new tools from Meta and co
will allow millions of micro-businesses to produce video ads of a quality that was once out of their reach,
and translate text into several languages. Global campaigns can now be launched online for hundreds of
dollars; TV-worthy commercials are being put together for a few thousand.
But take a step back and it is clear that the serious money is being made by the giants. The selling of ads
was already becoming more concentrated: four tech firms that accounted for a third of the global ad
market five years ago now account for half of it. And America’s biggest companies are ramping up
their AI investment at a faster rate than the rest. No wonder: AI requires computing muscle and large
data sets, both of which are expensive. Whereas human intelligence is more or less randomly
distributed, the artificial kind can be bought. Rather than democratise access to intelligence, AI may
allow the richest to hoard it.
The last lesson from adland is that AI’s spread will have unpredictable consequences. Some advertisers
are shifting their budgets from TV to the humble outdoor billboard. Why? In part because AI has made it
possible to infer from vast data sets whether consumers who saw the ad bought the product, allowing
marketers to measure the campaign’s effectiveness rather than guess at it. Another unexpected winner
is old-school public relations. As consumers switch from search-engines to chatbots, brands need to
persuade LLMs to speak highly of them. The most effective way to do that is to influence the sources
that the model pays most attention to, such as news articles. In the AI age, high-tech “search-engine
optimisation” may be less effective than offline schmoozing (or so, at least, marketers can insist when
presenting their post-Cannes expenses claims).
Adland is an outlier in important ways. Ad spending is highly cyclical, so the industry has benefited more
than most from the AI-fuelled boom of recent years. The big tech firms that are active in ads also
happen to be leaders in AI, and have used ads to test their newest products. And not everyone has the
admen’s knack for survival. But the rest of the business world should pay attention to the cockroaches
of Cannes. The revolution in adland is a taste of what is to come.
This article appeared in the Leaders section of the print edition under the headline “Computers v
cockroaches”.
Underpinning the digital economy is a deep foundation of open-source software, freely available for
anyone to use. The majority of the world’s websites are run using Apache and Nginx, two open-source
programs. Most computer servers are powered by Linux, another such program, which is also the basis
of Google’s Android operating system. Kubernetes, a program widely used to manage cloud-computing
workloads, is likewise open-source. The software is maintained and improved upon by a global
community of developers.
China, which had long stood at the periphery of that community, has in recent years become an integral
part of it. After America and India, it is now home to the largest group of developers on GitHub, the
world’s biggest repository of open-source software. Chinese tech giants, including Alibaba, Baidu and
Huawei, have become prolific open-source funders and contributors. China has been particularly active
in the development of open-source artificial-intelligence (AI) models, including those from DeepSeek,
an AI startup that shook the world in January when it released the cutting-edge models it had developed
on a shoestring. According to Artificial Analysis, a website, 12 of the 15 leading open-source AI models
are Chinese.
This newfound interest in open-source has been fuelled by America’s efforts to hobble its rival. Curbing
China’s access to code that is readily available online is tricky for a foreign government. Ren Zhengfei,
Huawei’s founder, told People’s Daily, a Communist Party mouthpiece, that American tech restrictions
were nothing to fear since “there will be thousands of open-source software [programs] to meet the
needs of the entire society.”
Yet the rise in China of open-source, which relies on transparency and decentralisation, is awkward for
an authoritarian state. If the party’s patience with the approach fades, and it decides to exert control,
that could hinder the course of innovation at home and make it harder to export Chinese technology
abroad.
China’s open-source movement first gained traction in the mid-2010s. Richard Lin, co-founder of
Kaiyuanshe, a local open-source advocacy group, recalls that most of the early adopters were
developers who simply wanted free software. That changed when they realised that contributing to
open-source projects could improve their job prospects. Big firms soon followed, with companies like
Huawei backing open-source work to attract talent and cut costs by sharing technology.
Momentum gathered in 2019 when Huawei was, in effect, barred by America from using Android. That
gave new urgency to efforts to cut reliance on Western technology. Open-source offered a faster way
for Chinese tech firms to take existing code and build their own programs with help from the country’s
vast community of developers. In 2020 Huawei launched OpenHarmony, a family of open-source
operating systems for smartphones and other devices. It also joined others, including Alibaba, Baidu and
Tencent, to establish the OpenAtom Foundation, a body dedicated to open-source development. China
quickly became not just a big contributor to open-source programs, but also an early adopter of such software. JD.com, an e-commerce firm, was among the first to deploy Kubernetes.
AI has lately given China’s open-source movement a further boost. Chinese companies, and the
government, see open models as the quickest way to narrow the gap with America. DeepSeek’s models
have generated the most interest, but Qwen, developed by Alibaba, is also highly rated, and Baidu has
said it will soon open up the model behind its Ernie chatbot.
China’s enthusiasm for open technology is also extending to hardware. Unitree, a robotics startup based
in Hangzhou, has made its training data, algorithms and hardware designs available for free, which may
help it to shape global standards. Semiconductors offer another illustration. China is dependent on
designs from Western chip firms. As part of its push for self-sufficiency, the government is urging firms
to adopt RISC-V, an open chip architecture developed at the University of California, Berkeley.
Many Chinese firms also hope that more transparent technology will help them win acceptance for their
products abroad. That may not happen. Huawei’s operating system has found few users elsewhere.
Although some Western companies have been experimenting with DeepSeek’s models, an executive at
a global enterprise-software firm says that many clients outside China will not touch the
country’s AI tools. Some fear disruption from future American restrictions. Others worry about
backdoors hidden in the code that might allow them to be spied on.
China’s open-source ambitions could be derailed in other ways, too. Qi Ning, a Chinese software
engineer, points out that at international open-source conferences, attendees increasingly avoid naming
Chinese collaborators, as they worry about reputational risk or political blowback.
Version control
America’s government may also make life difficult for Chinese open-source developers. Fearing
nefarious meddling in the world’s code, it could seek to cut China off from GitHub, which is owned by
Microsoft. Mr Qi says many Chinese developers worry about “access issues in the future”. China’s
government has promoted Gitee, a domestic alternative. But few local coders use it. Last year some
American lawmakers argued for restricting China’s access to RISC-V—though Andrea Gallo, head of the
Swiss body that oversees the technology, contends that this is not feasible as it is a public standard,
much like USB.
Yet it is China’s own government that poses the biggest threat to the country’s open-source experiment,
despite supporting it in principle. In 2021 the government restricted access to GitHub, concerned that
the platform could be used to host politically sensitive content. Developers quickly turned to virtual
private networks (which mask a user’s location) to regain access, but the episode rattled many. In 2022
the government announced that all projects on Gitee would be subject to official review, and that
coders would need to certify compliance with Chinese law.
A similar pattern is playing out in AI. Chinese law prohibits models from generating content that
“damages the unity of the country and social harmony”. In 2023 Hugging Face, a Franco-American
platform for sharing open-source AI models, became inaccessible from within China.
China’s open-source movement is organic, driven by developers and tech firms. The government has so
far encouraged it because it serves its objectives of accelerating domestic innovation and reducing
reliance on Western technology. If China’s leaders constrain the culture of freedom and
experimentation on which open technology relies, however, they will limit its potential.
This article appeared in the Business section of the print edition under the headline “People’s code”.
The dizzying array of letters splattered across the page of one of Jonathan Roberts’s visual-reasoning
questions resembles a word search assembled by a sadist. Test-takers aren’t merely tasked with finding
the hidden words in the image, but with spotting a question written in the shape of a star and then
answering that in turn.
The intention of Mr Roberts’s anthology of a hundred questions is not to help people pass the time on
the train. Instead, it is to provide cutting-edge artificial-intelligence (AI) models like o3-pro, June’s top-
tier release from OpenAI, with a test worthy of their skills.
There is no shortage of tests for AI models. Some seek to measure general knowledge, others are
subject-specific. There are those that aim to assess everything from puzzle-solving and creativity to
conversational ability. But not all of these so-called benchmarking tests do what they claim to. Many
were hurriedly assembled, with flaws and omissions; were too easy to cheat on, having filtered into the
training data of AI models; or were just too easy for today’s “frontier” systems.
ZeroBench, the challenge launched by Mr Roberts and his colleagues at the University of Cambridge, is
one prominent alternative. It is targeted at large multimodal models—AI systems that can take images
as well as text as input—and aims to present a test that is easy(ish) for the typical person and impossible
for state-of-the-art models. For now, no large language model (LLM) can score a single point. Should
some upstart one day do better, it would be quite an achievement.
ZeroBench isn’t alone. EnigmaEval is a collection of more than a thousand multimodal puzzles
assembled by Scale AI, an AI data startup. Unlike ZeroBench, EnigmaEval doesn’t try to be easy for
anyone. The puzzles, curated from a variety of pre-existing online quizzing resources, start at the
difficulty of a fiendish cryptic crossword and get harder from there. When advanced AI systems are
pitted against the hardest of these problems, their median score is zero. A frontier model from
Anthropic, an AI lab, is the only model to have got a single one of these questions right.
Other question sets attempt to track more specific abilities. METR, an AI-safety group, for instance,
tracks the length of time it would take people to perform individual tasks that AI models are now
capable of (Anthropic is the first to break the hour mark). Another benchmark, the brashly named
“Humanity’s Last Exam”, tests knowledge, rather than intelligence, with questions from the front line of
human knowledge garnered from nearly a thousand academic experts.
One of the reasons for the glut of new tests is a desire to avoid the mistakes of the past. Older
benchmarks abound with sloppy phrasings, bad mark schemes or unfair questions. ImageNet, an early
image-recognition data set, is an infamous example: a model that describes a photograph of a mirror in
which fruit is reflected is penalised for saying the picture is of a mirror, but rewarded for identifying a
banana.
It is impossible to ask models to solve corrected versions of these tests without compromising
researchers’ ability to compare them with models that took the flawed versions. Newer tests—produced
in an era when AI research is flush with resources—can be laboriously vetted to spot such errors ahead
of production.
The second reason for the rush to build new tests is that models have learned the old ones. It has
proved hard to keep any common benchmark out of the training data used by labs to train their models,
resulting in systems that perform better on the exams than they do in normal tasks.
The third, and most pressing, issue motivating the creation of new tests is saturation—AI models coming
close to getting full marks. On a selection of 500 high-school maths problems, for example, o3-pro is
likely to get a near-perfect score. But as o1-mini, released nine months earlier, scored 98.9%, the results
do not offer observers a real sense of progress in the field.
This is where ZeroBench and its peers come in. Each tries to measure a particular way in which AI capabilities are
approaching—or exceeding—those of humans. Humanity’s Last Exam, for instance, sought to devise
intimidating general-knowledge questions (its name derives from its status as the most fiendish such
test it is possible to set), asking for anything from the number of tendons supported by a particular
hummingbird bone to a translation of a stretch of Palmyrene script found on a Roman tombstone. In a
future where many AI models can score full marks on such a test, benchmark-setters may have to move
away from knowledge-based questions entirely.
But even evaluations which are supposed to stand the test of time get toppled overnight. ARC-AGI, a
non-verbal reasoning quiz, was introduced in 2024 with the intention of being hard for AI models.
Within six months, OpenAI announced a model, o3, capable of scoring 91.5%.
For some AI developers, existing benchmarks miss the point. OpenAI’s boss Sam Altman hinted at the
difficulties of quantifying the unquantifiable when the firm released its GPT-4.5 in February. The system
“won’t crush benchmarks”, he tweeted. Instead, he added, before publishing a short story the model
had written, “There’s a magic to it I haven’t felt before.”
Some are trying to quantify that magic. Chatbot Arena, for example, allows users to have blind chats
with pairs of LLMs before being asked to pick which is “better”—however they define the term. Models
that win the most matchups float to the top of the leaderboard. This less rigid approach appears to
capture some of that ineffable “magic” that other ranking systems cannot. Such leaderboards can be gamed too, however, with more ingratiating models scoring higher with seducible human users.
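For readers curious about the mechanics, rankings built from such pairwise votes are commonly computed with an Elo-style update, in which each human preference nudges the winner’s score up and the loser’s down. The sketch below is illustrative only: the starting rating of 1,000 and the K-factor of 32 are assumptions, not Chatbot Arena’s actual parameters.

```python
# Minimal sketch of an Elo-style rating built from pairwise "which is better?"
# votes, the general approach behind crowd-voted leaderboards such as
# Chatbot Arena. Starting ratings (1000) and K-factor (32) are assumptions.

def expected_score(r_winner: float, r_loser: float) -> float:
    """Probability that the eventual winner was favoured to win."""
    return 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400))

def record_vote(ratings: dict, winner: str, loser: str, k: float = 32.0) -> None:
    """Shift both models' ratings after one human preference vote."""
    surprise = 1.0 - expected_score(ratings[winner], ratings[loser])
    ratings[winner] += k * surprise  # bigger gain for an upset win
    ratings[loser] -= k * surprise   # symmetric loss for the other model

ratings = {"model_a": 1000.0, "model_b": 1000.0}
record_vote(ratings, winner="model_a", loser="model_b")
print(sorted(ratings.items(), key=lambda kv: -kv[1]))  # leaderboard order
```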
Others, borrowing an argument familiar to anyone with school-age children, question what any test can
reveal about an AI model beyond how good it is at passing that test. Simon Willison, an
independent AI researcher in California, encourages users to keep track of the queries that
existing AI systems fail to fulfil before posing them to their successors. That way users can select models
that do well at the tasks that matter to them, rather than high-scoring systems ill-suited to their needs.
All this assumes that AI models are giving the tests facing them their best shot. Sandbagging, in which
models deliberately fail tests in order to hide their true capabilities (for example, to prevent themselves from being deleted), has been observed in a growing number of models. In a report published in May by researchers at MATS, an AI-safety group, top LLMs were able to identify when
they were being tested almost as well as the researchers themselves. This too complicates the quest for
reliable benchmarks.
That being said, the value to AI companies of simple leaderboards which their products can top means
the race to build better benchmarks will continue. ARC-AGI 2 was released in March, and still eludes
today’s top systems. But, aware of how quickly that might change, work on ARC-AGI 3 has already
begun.
This article appeared in the Science & technology section of the print edition under the headline “Extra
credit”.
A decade ago Nature, a scientific publisher, began tallying the contributions made by researchers at
different institutions to papers published across a set of 145 respected journals. When the first
such Nature Index was published in 2016, the Chinese Academy of Sciences (CAS) ranked first, but
American and European institutions dominated the top ten. Harvard placed second, with Stanford
and MIT fifth and sixth; the French National Centre for Scientific Research (CNRS) and the German Max
Planck Society were third and fourth; Oxford and Cambridge took ninth and tenth (seventh and eighth
place went, respectively, to the Helmholtz Association of German Research Centres and the University
of Tokyo).
Gradually, however, the table has turned. In 2020 Tsinghua University, in Beijing, entered the top ten. By
2022 Oxford and Cambridge were out, replaced by two Chinese rivals. Come 2024 only three Western
institutions remained in the top ten: Harvard, CNRS and the Max Planck Society. This year, Harvard ranks
second and Max Planck ninth. Eight of the top ten are Chinese.
The shift reflects a real and rapid improvement in China’s research capabilities. Over the past decade the
country has increased its spending on research and development by roughly 9% annually in real terms.
In 2023, adjusting for purchasing power, China outspent both America and the European Union on
combined government and higher-education R&D. The country has also drawn back many Chinese
researchers who were once based abroad, a cohort known as haigui (sea turtles), a homophone for
“returning from across the sea”.
All this has paid off. The country now publishes more high-impact papers (those in the most-highly cited
1%) than either America or Europe. In fields like chemistry, engineering and materials science the
country is now considered a world leader. China also produces a huge volume of high-quality computer-
science research. Zhejiang University, fourth in the 2025 index, was the alma mater of Liang Wenfeng,
the founder of DeepSeek, China’s cutting-edge artificial-intelligence (AI) company.
Yet the way the rankings are created plays to China’s strengths. The journals included in the index are
chosen to be representative of top-tier research across the natural sciences, with the composition
regularly tweaked to reflect the state of the field. A growing number of publications in chemistry and
physical-science journals has led to their share increasing to just over half those used in the 2025 index.
Papers from health and biological-science journals, however, which remain an area of Western
dominance, account for only 20% of the index.
China’s research centres also tumble down the table when the studies under consideration are limited
to those published in Nature and Science, the two journals widely regarded as the most
prestigious. CAS is the only institution in that country near the top of that leaderboard, placing fourth.
Observers should treat these rankings with caution. Although the Nature Index is a useful measure of an
institution or country’s scientific might, its assessments are inevitably incomplete. Plenty of valuable
research is published in lower-tier journals, and world-changing innovation will not always come from
high-scoring institutions. That being said, Zhejiang, Peking and Tsinghua universities have earned their
place with CAS among the world’s best.
This article appeared in the Science & technology section of the print edition under the headline “Are
Chinese institutions world-beating?”
If finance has a single rule, it is that arbitrage should keep prices in line. If they do stray from
fundamentals, so the argument goes, savvy investors should step in to correct them.
All good in theory. In practice, less so. Markets can be swept by sentiment, detaching valuations from
fundamentals. Economists have meticulously documented persistent distortions. Purely mechanical flows,
for instance, move markets even when they are known to investors in advance and unrelated to
earnings prospects. When a stock is added to an index, its price inflates. Predictable dividend
reinvestments also push up prices. Why does this happen? And who, in time, might correct the market?
Ask on Wall Street for the identity of such arbitrageurs, and you get the usual suspects. Hedge funds and
quant shops, armed with analysts and algorithms, are the most natural candidates. The industry has
ballooned from overseeing $1.4trn to $4.5trn in assets over the past decade, and is well positioned to
spot mispricings. Others suggest short-sellers, ever alert to signs of froth, or retail investors, now keen
dip-buyers. One candidate gets mentioned rather less often: staid corporates.
Such businesses are normally seen as passive capital-raisers, not active market participants and certainly
not market disciplinarians. Even though they can act on perceived mispricings, firms typically focus more
on expanding their own business than on searching for alpha. Bosses have operational backgrounds.
They are more fluent in capital spending than capital markets. And when financial officers do wade into
the market—to issue or buy back shares, for example—valuation is just one of many considerations,
alongside avoiding taxes, ensuring a healthy credit rating and making sure the firm does not take on too
much leverage.
And yet a growing body of work suggests that corporations, far from being passive observers, are some
of the market’s most effective arbitrageurs. In 2000 Malcolm Baker of Harvard University and Jeffrey
Wurgler, then of Yale University, found a tight connection between firms’ net equity issuance and
subsequent stockmarket returns. Years in which companies issued relatively more stock were typically
followed by weaker market performance. More tellingly, companies seemed to issue precisely when
valuations were rich, and especially when other frothy signals, such as buoyant consumer sentiment,
were drawing attention.
Timing the market is impressive; out-trading the professionals is even more so. Yet firms that issue or
retire their own shares routinely do exactly that. In 2022 David McLean of Georgetown University and
co-authors showed that corporate-share sales and buy-backs forecast future returns more accurately
than the trades of banks, hedge funds, mutual funds and wealth managers.
What explains this prowess? Part of the answer lies in firms’ access to private information. Few are
better placed to forecast a company’s future cashflows than insiders. When a company begins buying
back its own shares—or employees convert their options into stockholdings—investors should pay
attention.
But informational advantages go only so far. They do not explain why firm-level issuance predicts
aggregate stockmarket returns. And firms’ decisions are publicly disclosed: if they were merely signals of
private insight, copycat investors ought quickly to arbitrage away any returns. Instead, the success of
companies may reflect not just what they know, but what they are able to do. They are unusually well
placed to act on mispricings.
Start with short-selling. Firms have a natural way to take a contrarian view: when they believe their
shares are overpriced, they can issue more of them. For a hedge fund to express a similar view, it must
sell short the stock or purchase more complex products, such as put options. These strategies are not
only expensive, requiring the payment of borrowing fees or option premiums, but also expose the
investor to large losses and margin calls if the stock price rises. Risks become particularly acute during
bouts of volatility, such as in January 2021, when retail investors sent GameStop’s share price to
astonishing heights. Hedge funds hesitated to short-sell for fear of making losses as investors piled in.
GameStop’s boss, by contrast, simply issued new shares.
Companies also operate across markets. Almost every business finances itself with some combination of
debt and equity. If one becomes unusually expensive, it can easily switch to the other. Yueran Ma of the
University of Chicago finds that firms routinely move towards whichever market looks cheap. Such
flexibility is rarely available to institutional investors, which are constrained by benchmarks and
mandates. Only 28 of Vanguard’s 267 funds can trade both bonds and stocks, for instance.
Last, businesses benefit from insulation. They may face unhappy shareholders, but they do not face
redemptions. When institutional investors mess up, their own investors pull out, forcing them to sell at
just the wrong time.
Firm hand
The agility this engenders makes companies valuable providers of liquidity, too. As passive investing has
grown to make up a fifth of the market, so has demand for stocks in the big indices. Who meets that
demand, helping anchor prices? Marco Sammon of Harvard and John Shim of the University of Notre
Dame suggest it is, once again, companies. Intermediaries such as active managers and pension funds
buy alongside their passive peers. Firms step in to take the other side of the trade by issuing new shares.
Similarly, when governments flood the market with short-term debt, firms respond by issuing longer-
dated bonds.
As asset managers become more passive, specialised or tied up by mandates, it is the firms they invest
in that keep the market ticking. So thank your nearest chief financial officer.
This article appeared in the Finance & economics section of the print edition under the headline “The
real wolves of Wall Street”.
Pity the ambitious youngster. For decades the path to a nice life was clear: go to university, find a
graduate job, then watch the money come in. Today’s hard-working young, however, seem to have
fewer options than before.
Go into tech? The big firms are cutting jobs. How about the public sector? Less prestigious than it used
to be. Become an engineer? Lots of innovation, from electric vehicles to renewable energy, now
happens in China. A lawyer? Artificial intelligence will soon take your job. Don’t even think about
becoming a journalist.
Across the West, young graduates are losing their privileged position; in some cases, they have already
lost it. Jobs data hint at the change. Matthew Martin of Oxford Economics, a consultancy, has looked at
Americans aged 22 to 27 with a bachelor’s degree or more. For the first time in history, their
unemployment rate is now consistently higher than the national average. Recent graduates’ rising
unemployment is driven by those who are looking for work for the first time.
The trend is not just apparent in America. Across the European Union the unemployment rate of young
folk with tertiary education is approaching the overall rate for the age group. Britain,
Canada, Japan—all appear to be on a similar path. Even elite youngsters, such as MBA graduates, are
suffering. In 2024, 80% of Stanford’s business-school graduates had a job three months after leaving,
down from 91% in 2021. At first glance, the students eating al fresco at the school’s cafeteria look
happy. Look again, and you can see the fear in their eyes.
Until recently the “university wage premium”, the amount by which graduates out-earn non-graduates, was growing. More recently, though, it has shrunk, including in America, Britain and Canada. Using data
on young Americans from the New York branch of the Federal Reserve, we estimate that in 2015 the
median college graduate earned 69% more than the median high-school graduate. By last year, the
premium had shrunk to 50%.
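For clarity, the premium quoted here is just the percentage gap between the two medians; the notation below is ours, not the New York Fed’s.

```latex
% Wage premium as used above: median graduate wage w_g versus median
% high-school wage w_h, expressed as a percentage gap.
\text{premium} = \left(\frac{w_g}{w_h} - 1\right) \times 100\%
% A graduate median 1.69 times the high-school median gives the 69% figure
% for 2015; a ratio of 1.50 gives last year's 50%.
```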
Jobs are also less fulfilling. A large survey suggests that America’s “graduate satisfaction gap”—how
much more likely graduates are to say they are “very satisfied” with their job than non-graduates—is
now around three percentage points, down from a long-run advantage of seven.
Is it a bad thing if graduates lose their privileges? Ethically, not really. No group has a right to
outperform the average. But practically, it might be. History shows that when brainy people—or people
who think they are brainy—do worse than they think they ought to, bad things happen.
Peter Turchin, a scientist at the University of Connecticut, argues that “elite overproduction” has been
the proximate cause of all sorts of unrest over the centuries, with “counter-elites” leading the charge.
Historians identify “the problem of an excess of educated men” as contributing to Europe’s revolutions
of 1848, for instance. Luigi Mangione would be a member of the counter-elite. Mr Mangione, a
University of Pennsylvania graduate, should be living a prosperous life. Instead, he is on trial for the
alleged murder of the chief executive of a health insurer. More telling is the degree to which people
sympathise with his alienation: Mr Mangione has received donations of well over $1m.
Why are graduates losing their privileges? Maybe the enormous expansion of universities lowered
standards. If ivory towers admit less-talented applicants, and then do a worse job of teaching them,
employers might over time expect fewer differences between the average graduate and the average
non-graduate. A recent study, by Susan Carlson of Pittsburg State University and colleagues, suggests
that many students today are functionally illiterate. A worrying number of English majors struggle to
understand Charles Dickens’s “Bleak House”. Many are bamboozled by the opening line: “Michaelmas
term lately over, and the Lord Chancellor sitting in Lincoln’s Inn Hall.”
Certainly some universities do offer rubbish courses to candidates who should not be there. On the
other hand, there is little correlation between the number of graduates and the wage premium over the
long term: both grew in America in the 1980s, for instance. Moreover, talk to students at most
universities, especially elite ones, and you will be disabused of the notion that they are stupid. Those at
Stanford are ferociously intelligent. Many at Oxford and Cambridge once lounged around, and even
celebrated a “gentleman’s third”, if they were so honoured. No longer.
A new paper by Leila Bengali of the San Francisco branch of the Fed, and colleagues, is another reason
to question the graduates-are-thick explanation. They find that the change in the university wage
premium mainly “reflects demand factors, specifically a slowdown in the pace of skill-biased
technological change”. In plain English, employers can increasingly get non-graduates to do jobs that
were previously the preserve of graduates alone.
This is especially true for those jobs that require the rudimentary use of technology. Until relatively
recently, many people could get to grips with a computer only by attending a university. Now everyone
has a smartphone, meaning non-graduates are adept with tech, too. The consequences are clear. In
almost every sector of the economy, educational requirements are becoming less strenuous, according
to Indeed, a jobs website. America’s professional-and-business services industry employs more people
without a university education than it did 15 years ago, even though there are fewer such people
around.
Employers have also trimmed jobs in graduate-friendly industries. Across the EU the number of 15-to-
24-year-olds employed in finance and insurance fell by 16% from 2009 to 2024. America has only slightly
more jobs in “legal services” than in 2006. Until recently, the obvious path for a British student hoping to
make money was a graduate scheme at a bank. Since 2016, however, the number of twentysomethings
in law and finance has fallen by 10%. By the third season of “Industry”, a television drama about
graduates at a London bank, a big chunk of the original cast has been pushed out (or has died).
It is tempting to blame AI for these waning opportunities. The tech looks capable of automating entry-
level “knowledge” work, such as filing or paralegal tasks. Yet the trends described in this piece started
before ChatGPT. Lots of contingent factors are responsible. Many industries that traditionally employed
graduates have had a tough time of late. Years of subdued activity in mergers and acquisitions have
trimmed demand for lawyers. Investment banks are less go-getting than before the global financial crisis
of 2007-09.
So is college worth it? Americans seem to have decided not. From 2013 to 2022 the number of people
enrolled in bachelor’s programmes fell by 5%, according to data from the OECD. Yet in most rich
countries, where higher education is cheaper because the state plays a larger role, youngsters are still
funnelling into universities. Excluding America, enrolment across the OECD rose from 28m to 31m in the
decade to 2022. In France the number of students went up by 36%; in Ireland by 45%. Governments are
subsidising useless degrees, encouraging kids to waste time studying.
Students also may not be picking the right subjects. Outside America, the share of students in arts, humanities and
social sciences mostly grows. So, inexplicably, does enrolment in journalism courses. If these trends
reveal young people’s ideas about the future of work, they truly are in trouble.
This article appeared in the Finance & economics section of the print edition under the headline
“Crammed and damned”.
Books about geniuses tend to fall into predictable categories. There is hagiography, along the lines of
“How Picasso revolutionised art”. There are takedowns (“Picasso was a monster”). And there are how-to
manuals (“How you can become the new Picasso”). “The Genius Myth” by Helen Lewis is more original
and painfully timely. This is the high age of the genius, readers may conclude—but not in a good way.
If you have a brainwave in a forest, and no one to share it with, are you a genius? Not according to Ms
Lewis, a British journalist, because genius is a social status. You are one because you are different from
others—ascribed a place “somewhere between secular saint and superhero”—and because others say
so. Genius is a story as much as an achievement, requiring canny reputation management. Selection
criteria vary, so it has a political dimension, offering a way to elevate favoured groups.
For the Romantics, recounts Ms Lewis, genius was linked to passion, insanity and illness. Victorian
researchers thought it could be analysed and quantified, an approach that persisted in the 20th-century
interest in IQ. The corollary of this pseudoscientific genius-ology was a sense of the worthlessness of
those at the bottom of the scale—and an enthusiasm for eugenics. The cranks who championed these
ideas had a habit of discrediting them, and their own pretensions to genius, with bogus data.
Three uncomfortably familiar motifs crop up in this witty survey. One is “the deficit model of genius”,
whereby “exceptional talent extorts a price.” Sometimes that is paid by the geniuses, in ostracism,
alcoholism or depression. Often, through selfishness and worse, it is extracted from those around them,
including put-upon spouses and uncredited collaborators. The genius label “becomes a licensing scheme
for their eccentricities” and “a shield against questions”, Ms Lewis writes, name-checking Michael
Jackson and Roman Polanski.
A second archetype is the genius as rebel: dissidents who face down stale orthodoxies and are
vindicated by history. Told something is impossible, they prove it isn’t. “He knew he was right,” Ms Lewis
summarises, “and he was!” She mentions Galileo and the Impressionists, derided at their first exhibition
in 1874. Making “a fetish of contrarianism”, this model of genius is dangerous because, after all,
conventional wisdom is often wise.
Third and equally perilous is the enduring delusion that genius is “a transferable skill”: ie, the
assumption that accolades in one walk of life make someone an authority in others. This encourages the
anointed to sound off on subjects far beyond their competence. In reality, “A self-image as a ‘clever
person’ simply makes you more likely to hold your incorrect opinions extremely forcefully.” Genius, the
author urges, should properly be imputed to works, like paintings or inventions, rather than people.
A conviction that contrarianism is a mark of greatness. The belief that obnoxious behaviour is a price
worth paying. Faith that a person distinguished in one field will be right about everything. Does all this
remind you of anyone?
“He’s one of our great geniuses,” Donald Trump once proclaimed of Elon Musk, “and we have
to protect our genius.” Mr Musk, notes Ms Lewis, “performs the cultural role of genius”, sleeping under
his desk and hoping to die on Mars. Lionised for his triumphs with electric cars and space travel, he
has fallen into the genius trap, inferring “that he is therefore a special person”—qualified, for instance,
to remake the federal government. She describes his botched takeover of Twitter, but her book went to press
before his rift with Mr Trump.
Indeed, in her spiky, clarifying book, Ms Lewis refers to Mr Trump only fleetingly, possibly because she
doubts he belongs alongside Albert Einstein, Vincent van Gogh and Thomas Edison. But plenty of his
supporters consider him a genius, seeing his talent as an excuse for his vulgarities. Mr Trump has called
himself “a very stable genius”, posing as a crusader against the deep state and the mainstream media’s
groupthink. His political success rests in part on the myth of transferable genius: supposedly a star in
business, and on TV, he was bound to make a fine president.
This article appeared in the Culture section of the print edition under the headline “The age of the
genius”.