Illustration image for AI (Kirill Kudryavtsev/AFP/Getty Images)
Rising AI uptake and technical performance are set to transform business operations
and reshape the workforce
Global corporate investment in artificial intelligence (AI) reached a record USD252bn in 2024, up 26%
from the previous year. The surge comes as a growing number of studies point to tangible firm-level
productivity gains from AI use, though it is less clear when, and to what extent, these will translate
to the macroeconomic level. Meanwhile, scientists claim at least two AI models
have passed the 'Turing test', indicating they can converse like humans.
What next
While the debate in some technology circles over the achievement of Artificial General Intelligence
(AGI) -- or Superintelligence -- is largely sterile, recent technological development and growing
business use mean AI-backed productivity gains could accelerate in the coming years. However,
corporate adoption at the expense of junior workers might also create a shortfall of skilled managers in
the future. In the global superpower struggle for AI dominance, US protectionism could inadvertently
give China an advantage.
Subsidiary Impacts
◦ Despite growing hype over AGI, it remains an ill-defined concept.
◦ Corporate AI investment is likely to increase further in the coming years.
◦ US protectionism could boost China's relative position in the global AI race.
Analysis
Recent months have seen a proliferation of forecasts by technology-industry leaders -- including those
of OpenAI's Sam Altman, Anthropic's Dario Amodei and Google DeepMind's Demis Hassabis -- about
the likelihood of achieving AGI within the next few years.
The term AGI, however, remains poorly defined -- it is loosely understood as AI that surpasses human
cognitive abilities across a variety of fields. Hence, even though leaders such as Altman, Amodei and
Hassabis speak from a position of profound technical knowledge, the discussion lacks any clarity that
would help governments and businesses prepare for 'AGI' impact.
Global investment and trade war will shape AI future
Wednesday, May 28, 2025
Oxford Analytica Daily Brief®
© Oxford Analytica 2025. All rights reserved
No duplication or transmission is permitted without the written consent of Oxford Analytica
Contact us: T +44 1865 261600 or oxan.to/contact
Business adoption
In any case, business AI adoption continues to accelerate. In total, 78% of 1,491 executives from 101
countries and companies of different industries and sizes surveyed by McKinsey in mid-2024 said their
organisation was using AI in at least one business function -- up from 55% one year earlier.
The share of organisations reporting they used generative AI -- which creates content, including text,
images or audio, in response to prompts -- increased from 33% in 2023 to 71% last year.
Such figures indicate that businesses are adopting AI much more quickly than they did with previous
disruptive technologies. Moreover, as technological progress accelerates and fewer than half of
companies surveyed by McKinsey currently use AI in three or more business functions, corporate
adoption of AI is poised to expand significantly in the coming years.
In a projection pointing in that direction, US bank Morgan Stanley calculates that cumulative global
investment in AI infrastructure -- encompassing data centres (including chips and servers), power and
grid -- will surpass USD3tn by 2028.
>USD3tn
Projected cumulative global investment in AI infrastructure by 2028
Technological progress
Software development that makes AI useful for business relies on hardware progress.
A potential landmark in hardware evolution came in January, when the Korea Advanced Institute of
Science and Technology (KAIST) announced the development of a self-learning 'memristor' or 'memory
resistor'.
Industry insiders consider memristors a leading hardware candidate for emulating human-brain
synapses in computers due to their ability to compute and store data simultaneously.
KAIST's announcement was significant because the existence of self-learning memristors hints at
computers that can learn like human brains. However, the technology still faces challenges to its
implementation -- such as latency problems and difficulties in producing memristors at scale.
Furthermore, while data centres using NVIDIA graphics processing units (GPUs), which are at the heart
of the AI revolution, benefit from extensive software libraries and trained engineers, there is currently no
mature software 'stack' or wider associated ecosystem to incorporate memristors into AI workflows.
Hence, any significant real-world impact of KAIST's announcement is likely years away.
Turing test
A different indicator of progress, focused on models already in wide deployment, was the pre-print
publication of an article (pdf) by Cameron Jones and Benjamin Bergen, two scholars from the
Department of Cognitive Science at the University of California San Diego, asserting that some Large
Language Models (LLMs) had passed the Turing test.
This test -- also known as 'the imitation game' -- was proposed in 1950 by UK mathematician and
computer scientist Alan Turing to assess whether a machine could display intelligent behaviour that is
indistinguishable from that of a human.
It involves a human judge who engages in simultaneous text-based conversations with a person and a
machine -- both of which seek to convince the judge that they are the human participant. If the human
judge is unable to distinguish reliably between their responses, it is said that the machine has passed
the Turing test.
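For readers who want the mechanics, the set-up can be sketched in a few lines of code; the judge and witness functions below are placeholders standing in for real participants, not an implementation of the study:

```python
import random

def run_imitation_game(judge_pick, human_reply, machine_reply,
                       questions=("Where are you from?",
                                  "What do you do for fun?")):
    """One round of Turing's imitation game: both hidden witnesses
    answer the same questions; the judge then names which transcript
    is the human's. Returns True if the machine fooled the judge."""
    # Randomly assign the human and the machine to slots A and B.
    slots = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        slots = {"A": machine_reply, "B": human_reply}

    # Each witness answers every question; the judge sees only text.
    transcripts = {s: [(q, reply(q)) for q in questions]
                   for s, reply in slots.items()}

    verdict = judge_pick(transcripts)        # judge returns "A" or "B"
    return slots[verdict] is machine_reply   # machine 'passed' this round
```

A machine 'passes' when, over many such rounds, judges name it as the human at or above the rate they name the real person.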
When prompted to act as a young, introverted person knowledgeable about internet culture and using
slang, OpenAI's GPT-4.5 model convinced the judge it was a human 73% of the time -- performing
significantly better than the real person in the test. Meanwhile, Meta's LLaMa-3.1 achieved the same
outcome 56% of the time -- a level similar to the one observed for the human participant.
Both, therefore, can be considered to have passed the Turing test under those circumstances.
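What makes 73% notable is its distance from the 50% a judge guessing at random would achieve. A quick exact-binomial check illustrates this; the 100-trial sample size is an illustrative assumption, not the study's actual figure:

```python
from math import comb

def binomial_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance a judge guessing
    at random would label the machine 'human' at least k times."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# Hypothetical: 73 'human' verdicts out of 100 interrogations.
p_value = binomial_tail(100, 73)
print(f"P(>=73/100 by chance) = {p_value:.2e}")
```

At any plausible sample size in that range, a 73% rate is far beyond what chance guessing could produce.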
The idea that passing the Turing test demonstrates machine
intelligence is now largely discredited
Without that prompt, though, both models were significantly less convincing, as were the other two
models included in the study, OpenAI's GPT-4o and ELIZA, an early chatbot built in the 1960s.
While the Turing test was for decades assumed by many in the field as a way to demonstrate that AGI
or machine intelligence had been achieved, this idea is now mostly discredited, with the test criticised
for:
• assessing not machine intelligence so much as human gullibility, notably the human tendency to
attribute intelligence to systems that merely mimic human mannerisms;
• focusing on behaviour in conversation, rather than an actual ability to think; and
• concentrating exclusively on conversational ability, leaving out the many other facets of human
intelligence, such as visual-spatial reasoning.
Despite this, the existence of AI systems able to pass for people under some circumstances
sharpens the ethical dilemmas facing humanity regarding the nature and consequences of human-AI
relations, accountability for AI actions and, more broadly, how to address the growing risk of
intentional or unintentional deception and manipulation of people by these systems, or by those
deploying them.
AI and the future of work
Rapid AI progress also fuels concerns over the extent to which LLMs might increasingly replace
human workers in tasks based on short conversations. Two and a half years after the launch of
ChatGPT, the answer to that question remains unclear:
• Studies have arrived at contradictory results, suggesting variously that less specialised and
more specialised jobs would be particularly affected (see INTERNATIONAL: AI will reshape worker
performance gaps - March 10, 2025).
• While passing the Turing test could arguably be presented as a measure of substitutability, it
does not address concerns such as AI 'hallucinations' -- when systems make up information
that sounds accurate but is completely false.
Indeed, Lloyd's of London recently launched an insurance product covering the costs companies
face when they lose court cases brought by parties harmed by erroneous information or other
underperformance of the AI systems they deploy.
Last year, for example, a court ordered Air Canada to honour a discount that its customer service
chatbot had pledged but that the airline had not planned to offer.
Beyond the discussion about the extent to which AI will be able to replace human labour, there is
substantial debate in academic, corporate and policy circles over the degree to which the technology
will transform work done by humans, and the nature of that change.
Labour value
In a recent article, Venkat Ram Reddy Ganuthula, of the Indian Institute of Technology Jodhpur,
proposes (pdf) the concept of "Agency-Driven Labour Theory (ADLT)" as a new theoretical framework
for understanding human work in AI-augmented environments.
Ganuthula argues that traditional labour theories have focused on specialisation and the relationship
between task execution and time, connecting human effort to value creation.
According to the scholar, this understanding of "specialisation and skill development as key drivers of
productivity" was relevant for the world of the Industrial Revolution, but does not account for changes in
the age of AI.
ADLT, instead, argues that human labour value increasingly derives from people's ability to make
"informed judgements, provide strategic direction, and design operational frameworks for AI systems"
-- in short, to orchestrate the work of systems combining both AI and human workers.
While not eliminating the importance of skill and specialisation, this may change the nature of abilities
required from workers.
Forthcoming bottleneck?
Tech entrepreneur Scott Werner recently argued that, as AI-driven automation becomes widespread,
human judgement will increasingly become a bottleneck in modern economies.
He defines judgements as calls on whether solutions proposed by AI systems align with the objectives
of the project at hand and "solve the right problems".
As technology analyst Azeem Azhar notes, much office-based work has traditionally been based on a
model in which junior employees 'generate' and their more senior peers 'judge'.
Currently, however, AI is taking over many 'generation' tasks, such as drafting documents, writing
computer code and helping create PowerPoint presentations. Since experience performing those
tasks allows workers to hone their skills and become good 'judges' when they reach more senior roles,
a paucity of junior positions may lead to a shortage of capable senior workers a few years down the
line.
AI may already be creating difficulties in the job market for recent
university graduates
Companies already seem to be limiting hiring at entry levels, as suggested by recent US data showing
a growing gap between the unemployment rate of recent college graduates and that of workers in
general. While there is no definitive proof that the replacement of young workers by AI is causing
this, it may be a contributing factor.
AI and protectionism
AI agents can work independently to achieve a programmed goal by completing tasks and solving
problems without constant human guidance or intervention. In theory, their rise could accelerate the
change towards AI-based 'generation' tasks at businesses, though it is unclear whether they will ever
be given the final call in sensitive areas.
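Under the hood, such agents amount to a plan-act-observe loop that runs until the goal is met or a step budget is exhausted. A schematic sketch, in which `plan_next_step`, `execute` and `is_done` are hypothetical stand-ins for a real model, real tools and a real success check:

```python
def run_agent(goal, plan_next_step, execute, is_done, max_steps=10):
    """Generic agent loop: repeatedly plan a step towards `goal`,
    execute it, and feed the observation back into planning."""
    history = []
    for _ in range(max_steps):
        if is_done(goal, history):
            return history                   # goal achieved autonomously
        step = plan_next_step(goal, history)  # e.g. an LLM call
        observation = execute(step)           # e.g. a tool or API call
        history.append((step, observation))
    return history                   # budget exhausted; needs human review
```

Real agent frameworks wrap this loop with tool schemas, guardrails and human-approval gates, which is precisely where the 'final call' question arises.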
However, the global trade war could slow down the development of AI -- agentic or otherwise --
especially in the United States.
The value chains behind AI data centres -- spanning everything from GPUs through servers, networking
equipment and industrial cooling -- are profoundly global, covering manufacturing and assembly in
geographies as varied as China, the Czech Republic, Malaysia, the Philippines, South Korea, Taiwan
and beyond.
Tariffs on those components or the raw materials that go into them increase the upfront cost of
building the data-centre capacity needed to develop and run AI software.
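The arithmetic is straightforward: duties on each imported component category feed through to total capital expenditure in proportion to that category's cost share. A toy calculation, with all cost shares and tariff rates below hypothetical, for illustration only:

```python
def tariffed_capex(base_cost, tariff_by_component):
    """Upfront data-centre cost after applying a tariff rate to the
    imported share of each component category."""
    extra = sum(share * rate for share, rate in tariff_by_component.values())
    return base_cost * (1 + extra)

# Hypothetical build: component -> (share of capex, tariff rate)
components = {
    "chips_servers": (0.60, 0.00),  # assume ITA keeps IT gear duty-free
    "steel_racks":   (0.10, 0.25),
    "copper_wiring": (0.10, 0.10),
    "transformers":  (0.20, 0.15),
}
print(f"Capex multiplier: {tariffed_capex(1.0, components):.3f}")
```

Even with IT equipment itself duty-free, tariffs on the 'non-tech' balance of materials raise the total build cost by several percent in this illustrative case.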
Non-tech protectionism could slow down AI development in the United
States
At present, it is unclear what the 'final' status of tariffs on US technology-equipment imports will
be, after the announcements, pauses and carve-outs of recent months and weeks.
In theory, as a signatory of the WTO's Information Technology Agreement (ITA), the United States is
committed to importing such equipment free of tariffs or other charges.
However, even if Washington finally respects all its ITA commitments, tariffs on other relevant goods --
including steel and aluminium used in data-centre racks, copper used for electrical wiring and cooling
systems, and electrical infrastructure, such as transformers and power distribution units -- could slow
down data-centre construction in the United States (see UNITED STATES: Tariffs will hurt tech supply
chains - April 8, 2025).
This could give China an advantage in the two countries' contest for AI leadership, not least
as the rise of DeepSeek has demonstrated the Asian giant's ability to catch up with US rivals despite
being deprived of access to the most advanced US-designed chips (see CHINA: US curbs will drive
Huawei chip innovation - May 14, 2025 and CHINA: US chip curbs slow but do not stop AI
progress - May 9, 2025).