IDC FutureScape
IDC FutureScape: Worldwide Artificial Intelligence and
Automation 2025 Predictions
Ritu Jyoti Lily Phan Shane Rau Matt Arcaro
Anne Cheng Daeil Chun Arnal Dayaratna Maureen Fleming
Andrew Gens Deepika Giri Nancy Gohring Jennifer Hamel
Heather Hershey Nobuko Iisaka Kathy Lange Shari Lava
Melih Murat Michele Rosen Peter Rutten David Schubmehl
Hayley Sutherland Neil Ward-Dutton Mary Wardley Madhumitha Sathish
Raghunandhan Kuppuswamy
IDC FUTURESCAPE FIGURE
FIGURE 1
IDC FutureScape: Worldwide Artificial Intelligence and Automation 2025 Top
10 Predictions
Note: Marker number refers only to the order the prediction appears in the document and does not indicate rank or
importance, unless otherwise noted in the Executive Summary.
Source: IDC, 2024
October 2024, IDC #US51666724
EXECUTIVE SUMMARY
According to IDC's Worldwide AI and Generative AI Spending Guide, 2024: Release V2,
August 2024 — which tracks artificial intelligence (AI) software, hardware, and services
across industries and use cases — organizations worldwide are expected to invest $235
billion in AI solutions in 2024. This spending is expected to grow to $632 billion by 2028,
a compound annual growth rate (CAGR) of 29.0% for 2023–2028. According to the same
guide, organizations worldwide are expected to invest $40.5 billion in generative AI
(GenAI) solutions in 2024, with GenAI spending expected to exceed $202 billion in 2028,
a CAGR of 59.2% for 2023–2028.
For overall AI, this is almost three times the five-year CAGR of 9.8% for worldwide IT
spending (hardware [less devices and network], IT services, and software) over the
same period; for GenAI, it is more than six times that rate.
In addition, worldwide intelligent process automation (IPA) software will reach $102.4
billion in 2028, growing at a CAGR of 24.3% from 2023 to 2028.
In this study, IDC's global team of analysts describes the key drivers affecting IT and
business decision-makers responsible for this spending and for the effective use of the
associated solutions. The study also presents the top 10 predictions affecting AI and
automation initiatives through 2029.
Each prediction is assessed based on its impact (a mix of cost and complexity to
address) and the time frame to reach the stated adoption level. This study also offers
IDC analysts' guidance to IT and business decision-makers as they develop or revise
their strategies and create resource allocation plans for investment in AI and
automation.
The following 10 predictions represent the expected trends with the greatest potential
impact on artificial intelligence and automation initiatives:
▪ Prediction 1: By 2026, 90% of enterprise use cases for LLMs will be dedicated to
training SLMs because of cost, performance, and expanded deployment options.
▪ Prediction 2: By 2027, AI adoption barriers will become indistinct due to AI
infrastructure commoditization, advanced LC/NC tools, and security frameworks,
leading to reduction of AI build costs by nearly 80%.
▪ Prediction 3: By 2026, 20% of frustrated knowledge workers with no
development experience will take charge of transforming how they work by
building their own agentic workflows, improving cycle times by 40%.
▪ Prediction 4: By 2025, 50% of organizations will use enterprise agents
configured for specific business functions, instead of focusing on individual
copilot technologies to achieve faster business value from AI.
▪ Prediction 5: By 2028, 80% of foundation models used for production-grade use
cases will include multimodal AI capabilities to deliver improved use case
support, accuracy, depth of insights, and intermode context.
▪ Prediction 6: By 2028, 80% of RAG implementations will be embedded in
generative AI–based features and products, increasing standardization and
transparency about the use of contextual data for such applications.
▪ Prediction 7: By 2028, 80% of foundation models used by enterprises will be
from a maximum of three providers as the market will consolidate due to
unsustainable business models.
▪ Prediction 8: By 2026, 65% of enterprises will adopt hybrid edge-cloud
inferencing as organizations fully integrate edge into the cloud infrastructure and
management strategy.
▪ Prediction 9: By 2027, a third of new AI applications will contain a diversified set
of chained traditional and GenAI models and business rules, outpacing the
development of new singular AI model applications.
▪ Prediction 10: By 2027, 80% of critical AI decisions will require human oversight
supported by visual explainability dashboards, potentially slowing processes but
enhancing accountability.
The IDC study provides IDC's top 10 predictions for artificial intelligence and
automation in 2025 and beyond.
"AI agents are set to revolutionize various industries by enhancing efficiency, improving
customer experiences, and enabling new business models," says Ritu Jyoti, GVP/GM, AI
and Data Research at IDC. "With technological innovations, responsible technology
usage, and workplace transformation, AI adoption barriers will continue to diminish,
enabling AI to transform the enterprise operating model."
IDC FUTURESCAPE PREDICTIONS
Summary of External Drivers
▪ AI-driven business models — Moving from AI experimentation to monetization
▪ The drive to automate — Toward a data-driven future
▪ Future proofing against environmental risks — ESG operationalization and
risk management
▪ AI-driven workplace transformation — Building tomorrow's workforce today
▪ Regulatory flux — Navigating compliance challenges in a shifting policy
landscape
▪ Responsible and human-centric technology — Ethics in the enterprise
▪ Battling against technical debt — Overcoming hurdles to IT modernization
Predictions: Impact on Technology Buyers
Prediction 1: By 2026, 90% of Enterprise Use Cases for LLMs Will
Be Dedicated to Training SLMs Because of Cost, Performance,
and Expanded Deployment Options
IDC's recent Global GenAI Technology Trends Survey found that 25% of surveyed
enterprise respondents had already deployed small language models (SLMs), with a
further 17% evaluating them and 13% testing small models.
An SLM is so called because it is trained on a smaller data set and has fewer
parameters — numbered in the millions or billions rather than trillions. While large
language models (LLMs) are designed for multipurpose use cases, trained on a wide
variety of subjects and capabilities, a small model is trained on domain-specific
knowledge and is tailored for a specific business need.
Enterprises will select small models for several benefits including:
▪ Improved accuracy: Because small models are trained on an extensive corpus
of domain-specific information, they tend to perform better than an LLM on
tasks associated with that domain. For instance, a finance-, health-, or
legal-focused SLM is likely to return more accurate responses in those
domains than an LLM.
▪ Ability to deploy on resource-constrained devices: With fewer parameters
than an LLM, small models require fewer resources and, as such, can be run in
on-premises datacenters or potentially on devices such as PCs.
▪ Sustainability and cost improvements: Because the models require fewer
resources for fine-tuning and inferencing, enterprises may realize reduced costs
and sustainability gains.
Because of these benefits, the predominant enterprise use of LLMs in the future will
be to train small models.
The implications of a preference for small models are notable, including that they will
contribute to a shift toward open models. Proprietary, commercial LLMs typically
restrict customers from using the model to train a small language model that is
designed for commercial use.
Associated Drivers
▪ AI-driven business models — Moving from AI experimentation to monetization
▪ Future proofing against environmental risks — ESG operationalization and
risk management
▪ Battling against technical debt — Overcoming hurdles to IT modernization
IT Impact
▪ Enterprises will be challenged to acquire the skills needed to develop and fine-
tune a host of small language models, with expertise related to AI engineering,
governance, security, and so forth required. Organizations with existing in-house
expertise will have an opportunity for competitive advantage.
▪ The use of SLMs opens the door to new deployment possibilities, requiring
expertise, adding complexity, and demanding additional workload management.
Guidance
▪ Recognize that the future enterprise will likely juggle scores of SLMs, applying the
right model to the right job and harnessing multiple models to accomplish a
single task. Begin to examine technologies designed to support the interaction of
models as well as model orchestration.
▪ Prepare to embrace open models, since open models will most likely be used
to build smaller models.
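The model orchestration described in the guidance above — applying the right model to the right job — can be sketched as a simple router. This is an illustrative stub only: the model callables and keyword routes are hypothetical stand-ins for real inference endpoints, which the document does not specify.

```python
# Hypothetical sketch: routing prompts across domain-specific SLMs, with a
# general-purpose LLM as the fallback. The model functions are stubs standing
# in for real inference calls.

from typing import Callable, Dict

def finance_slm(prompt: str) -> str:
    return f"[finance-slm] {prompt}"

def legal_slm(prompt: str) -> str:
    return f"[legal-slm] {prompt}"

def general_llm(prompt: str) -> str:
    return f"[general-llm] {prompt}"

# Map domain keywords to the specialized model best suited to them.
ROUTES: Dict[str, Callable[[str], str]] = {
    "invoice": finance_slm,
    "revenue": finance_slm,
    "contract": legal_slm,
    "clause": legal_slm,
}

def route(prompt: str) -> str:
    """Send the prompt to the first matching domain SLM, else the general LLM."""
    lowered = prompt.lower()
    for keyword, model in ROUTES.items():
        if keyword in lowered:
            return model(prompt)
    return general_llm(prompt)

print(route("Summarize this contract clause"))  # handled by the legal SLM
print(route("Plan a product launch"))           # falls back to the general LLM
```

In practice the routing decision would itself be made by a classifier or an orchestration layer rather than keyword matching, but the structure — many narrow models behind one dispatch point — is the same.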
Prediction 2: By 2027, AI Adoption Barriers Will Become Indistinct
Due to AI Infrastructure Commoditization, Advanced LC/NC
Tools, and Security Frameworks, Leading to Reduction of AI Build
Costs by Nearly 80%
As organizations transition from experimenting with generative AI to larger-scale
enterprise implementations, the biggest blocker to achieving scale is the
unpredictable increase in infrastructure costs. The second blocker is that risks
around security, brand, and regulations deter large-scale adoption of AI, forcing
enterprises to turn to costly private AI solutions. Finally, the scarcity of talent with AI
skills prevents organizations from expanding their AI initiatives.
By 2027, these barriers to AI adoption will become indistinct due to the
commoditization of AI infrastructure, the evolution of low-code/no-code (LC/NC) and
agentic tools from vendors that will bridge the AI talent gap, and highly secure and
trusted deployment frameworks. Together these will drive a drastic reduction in AI
build costs by nearly 80%, resulting in the large-scale adoption of AI. As a result,
about 80% of AI build costs will be allocated to building secure and trusted enterprise-
grade data platforms that harmonize high-quality data sets from across the enterprise
landscape to power AI applications. There are multiple reasons for this:
▪ Commoditization of AI infrastructure, especially the cost of compute (GPU/TPU),
networking, and storage, will come down drastically. This will eliminate the
return on investment (ROI) quandary that we face today.
▪ AI infrastructure as a service (AIaaS) will become predominant, affording tech
buyers elasticity and scalability without massive up-front investments.
▪ Vendors will focus on delivering secure AI capabilities to address regulatory and
brand risks.
▪ The evolution of agents that are semiautonomous/fully autonomous will shorten
the AI application development cycle and help maximize time to value.
Once the aforementioned barriers are addressed, data will remain a crucial element for
AI and generative AI success. Consequently, AI initiatives will increasingly focus on
enterprise data integration, data quality, and governance to ensure that AI systems
provide genuine competitive advantage.
Associated Drivers
▪ AI-driven workplace transformation — Building tomorrow's workforce today
▪ The drive to automate — Toward a data-driven future
▪ Battling against technical debt — Overcoming hurdles to IT modernization
IT Impact
▪ Planning and sizing AI infrastructure: Ensure that AI infrastructure planning
and sizing consider the evolving nature of both technologies and models.
Implementing elastic scaling will help optimize costs effectively.
▪ Addressing talent gaps: Choose tools and technologies with low-code or no-
code capabilities to help bridge talent gaps and streamline development.
Guidance
▪ Prioritize real-time AI solutions by integrating edge AI where applicable,
leveraging reduced infrastructure costs to enhance decision-making speed and
operational efficiency. Ensure your team is prepared to handle the shift from
cloud-based AI to distributed, real-time systems, and enhance data usability
and value for AI applications.
▪ Adopt AI infrastructure as a service to achieve scalability and resource
optimization, enabling flexible deployment. Regularly review and adjust resource
usage to match evolving business demands, avoiding unnecessary costs while
maintaining efficiency.
▪ Leverage specialized AI models to address specific industry challenges efficiently.
Invest in continuous model training and refinement processes, ensuring that AI
tools remain relevant and competitive as new, specialized models emerge in
your sector. This will help optimize AI investments and minimize technical debt.
▪ Embrace low-code/no-code tools and agentic workflows to streamline
development and reduce manual intervention. Foster collaboration between IT
and business teams to integrate agents seamlessly into your processes, ensuring
that AI is aligned with business goals and operational needs.
Prediction 3: By 2026, 20% of Frustrated Knowledge Workers With
No Development Experience Will Take Charge of Transforming
How They Work by Building Their Own Agentic Workflows,
Improving Cycle Times by 40%
Over the next three years, we will see a transformative shift in how knowledge workers
approach their daily and project-oriented tasks and workflows. As frustration with
inefficient processes and lack of technological support mounts, workers will quickly
realize they can harness the new capabilities of LLMs to automate and augment
portions of their jobs.
By 2026, 20% of knowledge workers will take charge of their work transformation.
Despite lacking formal development experience, they will harness the power of large
language models conversationally to create personalized, agentic workflows.
Workers will describe their tasks, processes, problems, and goals in plain language. The
LLMs will then interpret these requirements and generate the necessary code, scripts,
or automation routines — executed either by the emerging capabilities of the LLMs
themselves or as Python — creating the worker's personal AI agents.
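The loop just described — a plain-language request interpreted by a model and turned into executable steps — can be sketched as follows. This is an illustrative sketch only: `fake_llm_plan` is a keyword-based stand-in for a real model call, and the tool names are invented for the example.

```python
# Minimal agentic loop: a stubbed "LLM" maps a plain-language request to a
# sequence of tool calls, and each tool's output feeds the next.

from typing import Callable, Dict, List

def summarize(text: str) -> str:
    # Toy summarizer: keep the first 30 characters.
    return text[:30] + "..."

def extract_numbers(text: str) -> str:
    # Keep only whitespace-separated numeric tokens.
    return " ".join(t for t in text.split() if t.isdigit())

TOOLS: Dict[str, Callable[[str], str]] = {
    "summarize": summarize,
    "extract_numbers": extract_numbers,
}

def fake_llm_plan(request: str) -> List[str]:
    """Stand-in for the LLM: turn a plain-language request into tool names."""
    plan = []
    if "summary" in request or "summarize" in request:
        plan.append("summarize")
    if "numbers" in request:
        plan.append("extract_numbers")
    return plan

def run_agent(request: str, data: str) -> str:
    result = data
    for tool_name in fake_llm_plan(request):
        result = TOOLS[tool_name](result)  # each step feeds the next
    return result

print(run_agent("extract the numbers", "items 12 and 34 shipped"))  # "12 34"
```

A real deployment would replace `fake_llm_plan` with a model call that emits the plan, and the tools with the worker's actual scripts or connectors; the control flow, however, stays this simple.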
As vendors begin to offer AI agents and agentic workflow capabilities, knowledge
workers may be able to use platforms already adopted at work.
The resulting solutions will enable AI agents to perform tasks both autonomously and
interactively. The impact will be substantial, with these workers reporting a 40%
improvement in cycle times for their core responsibilities with improved quality. This
efficiency boost will stem from several factors, including the use of LLMs to:
▪ Draft reports, presentations, and documents.
▪ Generate multiple solution approaches for complex problems, offering diverse
perspectives and innovative ideas that humans can evaluate and build upon.
▪ Analyze data, extract key insights, and present them.
▪ Automate routine aspects of knowledge work, such as summarizing meetings or
organizing information.
As success stories proliferate, organizations will grapple with the implications of this AI-
driven, grassroots innovation movement. Progressive companies will embrace and
support these efforts. More conservative firms may resist, citing concerns over AI
reliance and data security.
While this trend promises significant productivity gains, it also poses challenges.
Organizations will need to balance the benefits of empowered, efficient workers against
potential risks related to AI governance and process consistency. Knowledge workers
are unlikely to rely entirely on AI-generated solutions, but quality will need to be
checked using similar review processes in use today.
Associated Drivers
▪ AI-driven workplace transformation — Building tomorrow's workforce today
▪ The drive to automate — Toward a data-driven future
▪ Battling against technical debt — Overcoming hurdles to IT modernization
IT Impact
▪ IT will likely be tapped as a shared service for support and for skills
enablement in prompt engineering and Python.
▪ As part of overall GenAI enablement, user-created AI agents will need to be
included for monitoring.
▪ Organizations that manage citizen developer programs are likely to extend their
programs to support worker-created and -managed AI agents and agentic
workflows.
Guidance
▪ Given the ease of use and value to knowledge workers, it makes more sense to
support rather than prohibit use of GenAI to automate and augment knowledge
work, especially when so many employees work at home. Business and IT need
to develop a plan and program for safe employee enablement.
▪ Organizations should consider implementing a platform that enables
experimentation and creation of AI agents by business users while providing
common governance and assurance capabilities. If you don't know where to
start, look to service providers for advice and solutions, as they likely have
already developed such platforms for their internal use.
▪ The successful formation of citizen developer programs is an example of how
this could work. Teams assigned to building the program should consider
hackathons, communities for collaboration, and training programs.
Prediction 4: By 2025, 50% of Organizations Will Use Enterprise
Agents Configured for Specific Business Functions, Instead of
Focusing on Individual Copilot Technologies to Achieve Faster
Business Value from AI
The fast rise of copilots from the generative AI boom in 2022 is quickly giving way to AI
agents. Copilots have been game changing and have been rapidly infused into all types
of enterprise software to assist end users by generating suggestions and
recommendations via chat capabilities. AI agents, which are also LLM powered, go a
step further by taking action. They are fully automated software components
empowered to use knowledge and skills to assess a situation and act independently,
without human intervention. The promise of AI agents is exciting because they fill the
gap in converging AI and automation technologies, unlocking the next level of
automation. Until now, the technology of agents has lagged the vision for them, but
that is quickly changing.
In just a few short months, AI agents have gone from a promising concept to a
deployable reality. What is clear is that while agents can act autonomously, the
parameters governing what they can access and do must be set by those in the
organization who understand both the business process and the data the agent is using.
While agents need the ability to work across the enterprise, using agents with a smaller
scope, such as a microservice component approach, makes it faster to configure
business function–specific processes and increases developer agility. A smaller agent
scope enables agents to be reused, so that there is some level of output consistency no
matter how and when that agent is invoked. This approach balances the ability for the
agent to determine the best action and outcome while still ensuring the organizational
process is followed. This approach also makes it easier for agents to be updated by
developers independently of one another, increasing stability and agility. And because
agents are executing tasks in the business process, configuring small scope agents to
execute roles in the business process is optimal for moving from enterprise automation
to enterprise orchestration.
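The small-scope, reusable agent approach described above — each agent limited to a business function and to the data it is allowed to touch — can be sketched as a scoped component. The class, agent names, and data-source labels here are hypothetical illustrations, not drawn from any product.

```python
# Hypothetical sketch: a small-scope agent declared with an allow-list of data
# sources. The governance parameter (allowed_sources) is set by those who
# understand the business process, and the agent refuses anything outside it.

from dataclasses import dataclass, field
from typing import Set

@dataclass
class ScopedAgent:
    name: str
    allowed_sources: Set[str] = field(default_factory=set)

    def act(self, source: str, record: str) -> str:
        # Enforce the agent's declared scope before doing any work.
        if source not in self.allowed_sources:
            raise PermissionError(f"{self.name} may not read {source}")
        return f"{self.name} processed {record} from {source}"

# A reusable, function-specific agent: invoked the same way wherever it is used.
invoice_agent = ScopedAgent("invoice-agent", {"erp"})

print(invoice_agent.act("erp", "INV-001"))
# invoice_agent.act("crm", "LEAD-7") would raise PermissionError: out of scope
```

Because each agent carries its own scope, it can be updated or reused independently of the others — the microservice-style property the prediction highlights.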
Associated Drivers
▪ AI-driven workplace transformation — Building tomorrow's workforce today
▪ The drive to automate — Toward a data-driven future
▪ Battling against technical debt — Overcoming hurdles to IT modernization
IT Impact
▪ Prioritize key use cases for agents in what could quickly become an area of
large demand in a fast-changing environment.
▪ Remember that with deployment comes the need to develop governance
processes to support agents and agent life-cycle management, control access to
data and knowledge created by individual agents based on their roles and
responsibilities, and optimize technology run costs.
▪ Acknowledge the need for developer resources to build AI agent configuration
and deployment skills with access to the right development tools. Low-code
agent configuration by business users will need to be possible alongside agent
development to get the right mix of capabilities to enable agentic workflows.
Guidance
▪ Select initial use cases with fewer pathways to start. As the organization builds AI
agent–based capabilities, starting with internal use cases with simple choices
makes it easier to learn how to interact with and govern agents. Consider
engaging a services partner with business functional expertise to help you get
started quickly and effectively.
▪ Assess current and near-future AI agent capabilities from current AI and
automation suppliers. As agents and copilots proliferate, it will become very easy
for capabilities to become diffused. This kind of sprawl will likely limit benefits of
these tools and delay ROI. Seek advice from your current suppliers' partner
ecosystem and services providers on best-fit products and strategy.
▪ Invest in AI literacy and skills training to improve familiarity with AI agent
capabilities across your organization. Consider organizational structures such as
centers of excellence (COE) and work with a services partner that augments your
internal AI skills.
Prediction 5: By 2028, 80% of Foundation Models Used for
Production-Grade Use Cases Will Include Multimodal AI
Capabilities to Deliver Improved Use Case Support, Accuracy,
Depth of Insights, and Intermode Context
Today's AI ecosystem has experienced a rapid acceleration and advancement in AI
foundation models driven by providers including Anthropic, AWS, Google, Meta,
Microsoft, and OpenAI. Although these models do offer multimodality, this functionality
is largely composited, with the model's primary function focused on the generation of
text (i.e., next-word prediction) across a broad range of languages (e.g.,
spoken, written, and programming). These frontier model providers understand that
multimodality remains a critical foundational building block that needs to be sufficiently
addressed to progress their systems from use cases focused on ad hoc interactions
and workflows to greater and more consistent user collaboration, agentic AI, and
(potentially even) AGI ambitions. These providers continue to experiment and invest in
innovative methods to more comprehensively integrate multimodal capabilities within
future model iterations/releases.
As IDC anticipates the technology's progression over the next three years, it expects
that more seamless integration and availability of multimodal capabilities within
foundation models will increasingly become the norm. These more capable models will
be able to leverage modalities including audio, data (structured and unstructured),
images, text, and video to deliver improved use case support, accuracy, depth of
insights, and intermode context. Companies such as Meta are already pioneering this
space with models like ImageBind, which can process multiple input modalities
simultaneously. These AI systems will seamlessly integrate and analyze data from
various sources simultaneously, providing a critical boost to AI's use case applicability
and the potential range of actionable insights.
For technology buyers, the material organizational impact and value improvement from
multimodal foundation models will simply be too big to ignore. Adopting organizations
will be able to take advantage of the "learned" subtleties and interplay across multiple
data modalities to drive greater accuracy, improve the contextual relevance of their
data, and derive greater, more in-depth data insights. This will be particularly beneficial
in fields and use cases that skew less generalizable (i.e., that cannot be easily
referenced and understood through a typical broad model training data set) and that
require more specific and interrelated data relationships and understanding (e.g.,
within healthcare, finance, and customer service). One additional benefit to highlight
is that these future
multimodal models (because of their ability to process and analyze diverse data types)
will reduce the need for extensive data preprocessing and normalization, thereby
streamlining workflows and improving efficiency.
Associated Drivers
▪ AI-driven business models — Moving from AI experimentation to monetization
▪ The drive to automate — Toward a data-driven future
▪ AI-driven workplace transformation — Building tomorrow's workforce today
IT Impact
▪ This evolution in multimodal foundation and embedding models will push the
boundaries on both infrastructure memory and compute requirements.
Although there will be opportunities for optimization, the near-term focus will be
on addressing functionality and business requirements.
▪ As organizations process and leverage a broader range of their data (i.e.,
ingesting increasing data scale/amounts as well as more/new data modalities),
there will be a need to consider potential data privacy and security impacts.
▪ Organizations will need to continue to address multimodal model evaluation,
testing, and validation issues including bias, hallucinations, explainability, risk
mitigation, and benchmarking/accuracy. Although these model capabilities will
increase, they will derive from increasingly complex models, making such
evaluation harder.
Guidance
▪ Assess your compute and data infrastructure availability and requirements to
determine your ability to support the next generation of multimodal foundation
models. In many cases, organizations implementing multimodal foundation
models will be forced to pursue a cloud-based or hybrid strategy to manage this
stepwise increase in compute and memory requirements.
▪ Implement clear process and user guidelines, along with strict technology/tool
controls and configurations to calculate and mitigate data privacy and security
risks. Often this will require identifying limits on selecting and using first- and
third-party data with these models.
▪ Identify and select an LLMOps platform that enables your organization to better
understand, experiment, measure, validate, and optimize your production
multimodal models. Having these insights into all aspects of the model life cycle
will be critical for an organization to successfully deploy and scale its multimodal
AI initiatives.
Prediction 6: By 2028, 80% of RAG Implementations Will Be
Embedded in Generative AI–Based Features and Products,
Increasing Standardization and Transparency About the Use of
Contextual Data for Such Applications
As organizations scrambled to adopt generative AI for business use cases in 2023 and
2024, the need to supplement LLMs' pretrained knowledge with more current and
domain-specific knowledge became a clear priority. From early on, connecting
proprietary data sources to LLMs has been a key aspect of making this technology
successful and useful in enterprise settings. In August 2023, an IDC survey showed that
83% of IT leaders believe that GenAI models that leverage their own business' data will
give them a significant advantage over their competitors (source: IDC's GenAI ARC
Survey, August 2023; n = 1,363).
As more vendors begin offering generative AI capabilities for business use, there is a
common need to support retrieval-augmented generation (RAG) architectures. As a
result, RAG has become somewhat of a battleground in 2024. Many vendors across
different markets — from search vendors to conversational AI vendors to cloud
hyperscalers to database vendors to emerging "RAG as a service" vendors — are
offering vector database capabilities and products paired with vector embedding and
search algorithms.
Given the prevalence of RAG across so many different types of offerings, from "build it
yourself" components to built-in capabilities in virtual agents and other conversational
and question-answering systems, IDC expects that RAG as a standalone technology will
become largely commoditized over the next few years.
As enterprises mature in their use of generative AI, they will increasingly take a more
focused, scaled-out, and outcomes-based approach, making the "embedded RAG"
alternatives to broad generative AI platforms and RAG pipelines especially attractive.
IDC expects that by 2028, 80% of RAG implementations will be embedded in generative
AI–based features and products, increasing standardization and transparency about
the use of contextual data for such applications.
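The RAG pattern discussed above — retrieving contextual enterprise data and injecting it into the model's prompt — can be sketched minimally as follows. This is an illustrative stub: word-overlap scoring stands in for the vector embeddings and vector database a real implementation would use, and the documents are invented for the example.

```python
# Minimal sketch of the retrieval step in a RAG pipeline. Word-overlap scoring
# approximates what embedding similarity search would do in production.

from typing import List

DOCUMENTS = [
    "Refund requests must be filed within 30 days of purchase.",
    "Enterprise support tickets are answered within four hours.",
    "All employees must complete annual security training.",
]

def score(query: str, doc: str) -> int:
    """Count shared lowercase words between the query and a document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: List[str], k: int = 1) -> List[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query, DOCUMENTS))
    # Retrieved context is injected ahead of the user question, grounding
    # the model's answer in enterprise data rather than pretrained knowledge.
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("When are refund requests due?"))
```

In the "embedded RAG" products the prediction anticipates, this entire pipeline — embedding, retrieval, and prompt assembly — disappears behind the vendor's feature; the buyer's remaining responsibility is the quality and governance of the data sources feeding it.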
Associated Drivers
▪ AI-driven business models — Moving from AI experimentation to monetization
▪ Battling against technical debt — Overcoming hurdles to IT modernization
▪ Regulatory flux — Navigating compliance challenges in a shifting policy
landscape
IT Impact
▪ IT will need to understand the technical requirements for different RAG use
cases, including what data sources the system will need access to,
accuracy/freshness/completeness of that data, and any security/compliance
aspects.
▪ IT may be called upon to reconcile and rationalize different organizational
approaches to the RAG aspect of generative AI.
▪ IT will be the "last line of defense" to assess the vendors' ability to deliver on
their promises with RAG and GenAI.
Guidance
▪ Work with LOBs to reach agreement on, and ensure access to, required data
sources, and work to assess the state of such data sources before considering
RAG implementation. If the vendor is offering embedded RAG capabilities that
use the vendor's own proprietary data source, carefully assess whether that data
will adequately meet your organization's needs. Ensure that the vendor is
providing you with the appropriate security and compliance assurances, and
check data ownership agreements regardless of whose data is being used for
RAG.
▪ Examine planned and current generative AI use cases to determine where best
to leverage GenAI features and products with embedded RAG or whether there
are any areas where a partial or full "build it yourself" approach may be best.
This will also depend on aspects such as how much customization/control the
organization wants over such features, as well as resource requirements for
supporting the development and implementation of standalone or DIY RAG.
▪ Consider whether detailed preprocessing/post-processing on organizational
documents will be needed and whether tools for advanced preprocessing may
be required from a vendor solution. Speak with reference customers if possible,
and carefully assess vendor capabilities against organizational needs and
priorities.
Prediction 7: By 2028, 80% of Foundation Models Used by
Enterprises Will Be From a Maximum of Three Providers as the
Market Will Consolidate Due to Unsustainable Business Models
One of the remarkable attributes about the maturation of generative AI technologies is
the rapid proliferation of foundation models such as large language models, small
language models, vision models, multimodal models, and variational generative
adversarial networks. As of September 2024, foundation models have been created by
an expansive universe of vendors that include OpenAI, Microsoft, Google, Amazon,
Meta, IBM, Mistral, Anthropic, NVIDIA, Stability AI, Cohere, Alibaba, and Baidu. While a
multitude of vendors are currently developing foundation models, IDC envisions
significant consolidation of the set of vendors that develop and productize
foundation models for business use by 2028.
This consolidation will occur for three reasons:
▪ Open models such as Meta's Llama 3 family and the IBM Granite family are
rapidly catching up to, or already at parity with, their proprietary
counterparts from a functionality standpoint.
▪ The costs of developing and updating these models are prohibitively expensive
for most organizations because of the compute resources required.
▪ Advances in model transformation technologies such as model distillation,
pruning, quantization, fine-tuning, and retrieval augmented generation attenuate
the need for net-new models.
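To make one of these transformation techniques concrete, the sketch below illustrates symmetric post-training int8 quantization of a weight tensor. It is a minimal, from-scratch illustration of the general idea, not any vendor's production pipeline; the function names and the toy weight matrix are illustrative.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization: map float weights onto [-127, 127]."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Storage drops roughly 4x (float32 -> int8); rounding error is bounded by half a step.
max_err = np.abs(w - w_hat).max()
```

The same idea, applied layer by layer to a large model (often with per-channel scales and calibration data), is one reason existing models can be shrunk and redeployed rather than replaced with net-new ones.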
Taken together, these three reasons explain why the business models that justify
investments in the development of new large language models are unsustainable.
Technology suppliers will find it difficult to realize a return on their model
investments because the value of foundation models is ultimately derived from the
products and services that are created around those models as opposed to the models
themselves. Put differently, organizations are willing to pay for the use of foundation
models as a means toward creating innovative digital solutions rather than to access
the models themselves. Given the quality of open models such as Llama 3.1 and Mixtral
8x7B, technology suppliers will find it challenging to successfully commercialize
foundation models. This difficulty will correspondingly lead to a dramatic consolidation
of the foundation model landscape by 2028.
Another important inhibitor to the successful commercialization of foundation models
is the scarcity and cost of GPUs and associated accelerated compute infrastructures
that are required to train these models. For example, the cost of training and
operationalizing Meta's Llama 3 model is estimated in the hundreds of millions of
dollars: it was reportedly trained on 24,000 NVIDIA H100 chips, each estimated to
cost approximately $30,000, which amounts to roughly $720 million in infrastructure
alone. Very few organizations can afford to build models of a comparable scale, even
as the costs of accelerated compute infrastructure decrease over the next three to
five years.
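The infrastructure arithmetic above is simple to reproduce. The figures below are the estimates cited in the text, not confirmed prices:

```python
# Back-of-envelope accelerator cost for training a frontier model
# (figures are the estimates cited above, not confirmed prices).
num_gpus = 24_000          # NVIDIA H100 chips reportedly used for Llama 3
cost_per_gpu = 30_000      # approximate unit cost in USD
infra_cost = num_gpus * cost_per_gpu
print(f"${infra_cost:,}")  # $720,000,000
```

Note that this counts only the accelerators; networking, power, facilities, and the repeated training runs required to reach a releasable model add substantially to the total.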
Associated Drivers
▪ AI-driven business models — Moving from AI experimentation to monetization
▪ The drive to automate — Toward a data-driven future
▪ AI-driven workplace transformation — Building tomorrow's workforce today
IT Impact
▪ The consolidation of the foundation model landscape will catalyze a transition
away from a focus on foundation models toward the ecosystem of technologies
that build on these models to create value. This ecosystem features applications
that leverage foundation models such as GitHub Copilot and OpenAI's ChatGPT.
▪ Model selection processes will become simplified because developers and AI
engineers will focus on a more curated set of models to use for generative AI
development initiatives.
Guidance
▪ Augment your familiarity with development frameworks and platforms that
enable the development of foundation model–powered digital solutions.
Examples of such frameworks include LangChain, LlamaIndex, and Spring AI.
▪ Deepen your organization's adoption of practices and technologies that
orchestrate and automate interactions between and among foundation models
and complementary technologies, such as MLOps and LLMOps. These practices
and technologies will be central to coordinating models so that they work
together in a synergistic, optimized way.
▪ Pay close attention to the adoption of models as well as their extensibility for
differentiated use cases. This attention will position your organization to adopt
the models that are likely to lead the market as it consolidates.
Prediction 8: By 2026, 65% of Enterprises Will Adopt Hybrid Edge-
Cloud Inferencing as Organizations Fully Integrate Edge into the
Cloud Infrastructure and Management Strategy
The growing adoption of hybrid edge-cloud inferencing is a response to the need for
efficient, scalable data processing across cloud, edge, and datacenter environments.
According to IDC's research, many organizations report that edge is fully integrated
into their cloud infrastructure and management strategy. The advantage
of hybrid inferencing is its flexibility, allowing data processing to occur where it's most
efficient — often near the data source, whether on the cloud, edge, or in the
datacenter. This allows for rapid inferencing where data is generated, such as in IoT
devices, while offloading more intensive tasks to the cloud, where computational power
and scalability are greater.
In scenarios leveraging retrieval-augmented generation, hybrid edge-cloud inferencing
becomes even more powerful for industries that require real-time insights and up-to-
date information. RAG enhances decision-making by sourcing relevant data from both
cloud and edge environments dynamically, offering a way to maximize both edge and
cloud resources. Meanwhile, inferencing without RAG still benefits from hybrid models
by utilizing local processing to minimize latency and cloud-based AI models for deeper
analysis. Because edge computing aligns closely with business priorities such as data
security and operational resilience, edge inferencing is becoming a critical
component for enterprises. In addition, hybrid edge-cloud inferencing can reduce
costs, enhance operational efficiency, and improve responsiveness.
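A placement policy of the kind described above can be sketched in a few lines. The tier names, thresholds, and request fields below are illustrative assumptions, not a reference architecture:

```python
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    latency_budget_ms: int   # how quickly the caller needs an answer
    payload_mb: float        # size of the input data
    needs_retrieval: bool    # whether RAG over a central document store is required

def route(req: InferenceRequest) -> str:
    """Illustrative hybrid placement policy: latency-critical work stays at the
    edge; retrieval-heavy or compute-intensive jobs are offloaded to the cloud."""
    if req.needs_retrieval:
        return "cloud"       # central RAG index and larger models live in the cloud
    if req.latency_budget_ms < 50:
        return "edge"        # respond near the data source (e.g., an IoT gateway)
    if req.payload_mb > 100:
        return "cloud"       # offload large, compute-intensive batches
    return "edge"

print(route(InferenceRequest(latency_budget_ms=20, payload_mb=1.0, needs_retrieval=False)))  # edge
```

In practice such policies are enforced by orchestration layers rather than hand-written conditionals, but the trade-off they encode (latency and data locality at the edge versus retrieval and scale in the cloud) is the same.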
From an IT perspective, hybrid edge-cloud inferencing creates both challenges and
opportunities. It is imperative to note that cloud is the preferred operating model for
edge. Organizations are prioritizing investment in cloud service provider edge solutions
alongside server and storage deployments at the edge. IT departments will need to
invest in technology and skilled personnel as management strategies evolve to handle
the complexity of hybrid environments and to ensure seamless integration across
cloud, edge, and datacenter infrastructures.
Associated Drivers
▪ AI-driven business models — Moving from AI experimentation to monetization
▪ The drive to automate — Toward a data-driven future
▪ AI-driven workplace transformation — Building tomorrow's workforce today
IT Impact
▪ Managing this hybrid infrastructure will require IT teams to oversee seamless
communication between these environments, balancing network efficiency and
security concerns. This will drive investment in skilled personnel and
infrastructure.
▪ Orchestration tools will become essential, enabling automated deployment and
updates of AI models across the edge and cloud environments.
▪ Hybrid inferencing will allow IT to optimize performance while minimizing the
infrastructure challenges posed by traditional, isolated systems.
Guidance
▪ As edge computing becomes more critical to business operations, robust
cybersecurity strategies will be necessary to protect data and ensure operational
resilience. This shift to hybrid edge-cloud inferencing requires unified
management systems and robust security protocols to safeguard data, especially
as more businesses use edge for mission-critical operations.
▪ Enterprises will need to invest in new tools to monitor and secure these
distributed systems, creating additional demands on IT teams to manage
performance, scaling, and risk.
Prediction 9: By 2027, a Third of New AI Applications Will Contain
a Diversified Set of Chained Traditional and GenAI Models and
Business Rules, Outpacing the Development of New Singular AI
Model Applications
The union of predictive and generative AI within enterprise applications is poised to
revolutionize numerous industries, creating more powerful AI systems capable of
solving more challenging problems than any single AI model–based application could
tackle alone. The combination of a diversified set of chained traditional and generative
AI models in multiple modalities, along with business rules and other forms of
automation, will create a powerful synergy, enabling organizations to extract greater
value from their data, make more informed decisions, and develop more innovative,
tailored solutions for real-world problems. By combining the strengths of all types of AI,
organizations can gain unprecedented insights and capabilities, leading to concrete
business impacts and revenue streams.
This shift will enable businesses to provide ultra-personalized recommendations,
customer service, and industry-specific solutions that surpass what traditional
single-model apps can offer. The combination will broaden the scope of AI-based applications
and accelerate more data-driven business processes. In healthcare, for example,
predictive models can analyze patient data and identify potential health risks.
Generative AI can then be used to create synthetic patient data for training and testing
new medical treatments. Chained models can also be used to develop more accurate
drug discovery pipelines by predicting the properties of new molecules and simulating
their interactions with biological targets.
In financial services, the blend of models can address personalized financial planning,
fraud detection, customer service, and risk assessment. Predictive analytics can
forecast market trends and customer behavior, while generative AI can create
recommendations for investment strategies, simulate economic scenarios, or generate
synthetic data to train fraud detection models and test security systems. Some or all
of these applications can be driven through a natural language interface.
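The financial services chain described above can be sketched as a pipeline of a predictive scorer, a business rule, and a generative step. Every component below is a hypothetical stand-in (the scoring formula, threshold, and message template are invented for illustration; in practice the generative step would be an LLM call behind a prompt template):

```python
from typing import Optional

def predict_churn_risk(features: dict) -> float:
    """Stand-in for a traditional predictive model (e.g., a gradient-boosted scorer)."""
    score = 0.8 * features["missed_payments"] / 10 + 0.2 * (1 - features["tenure_years"] / 20)
    return max(0.0, min(1.0, score))

def business_rule_gate(risk: float) -> bool:
    """Business rule: only high-risk customers trigger a generated outreach."""
    return risk >= 0.5

def generate_outreach(risk: float) -> str:
    """Stand-in for a generative step producing tailored customer communication."""
    return f"Draft retention offer for customer at {risk:.0%} churn risk."

def pipeline(features: dict) -> Optional[str]:
    """Chain: predictive score -> business rule -> generative output."""
    risk = predict_churn_risk(features)
    return generate_outreach(risk) if business_rule_gate(risk) else None

print(pipeline({"missed_payments": 8, "tenure_years": 2}))
```

The value of the chain is the division of labor: the predictive model quantifies, the rule encodes policy, and the generative model produces the tailored artifact only when the first two stages warrant it.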
But to realize the full potential of these applications, which address complex business
processes and decisioning with mixtures of models chained into logical sequences,
organizations must overcome challenges such as data quality,
privacy, ethical considerations, and computational resources. They will encounter data
format incompatibility, model output integration and formatting issues, performance
bottlenecks, a plethora of tools and frameworks to manage and orchestrate, and
challenges in explaining, interpreting, and debugging the applications.
Associated Drivers
▪ AI-driven business models — Moving from AI experimentation to monetization
▪ The drive to automate — Toward a data-driven future
▪ Responsible and human-centric technology — Ethics in the enterprise
IT Impact
▪ Chained multimodal and multimodel applications may require significant
investments in IT infrastructure and tooling, for both the build and inferencing
environments, to orchestrate and execute complex business applications
involving multiple data pipelines, workflows, and integrations with operational
systems while ensuring security, scalability, and compliance.
▪ Model management platforms may need to be enhanced to monitor and
optimize these model chains, ensuring smooth interactions between the
application components. There will be a growing need for explainability,
transparency, and contextual accuracy in how the applications perform and
execute with model and concept drift taken into consideration.
▪ Applications with chained models can open new business opportunities by
enabling the development of innovative products and services.
▪ The development and deployment of chained model applications will require
additional collaboration among resources with expertise in traditional AI/ML,
generative AI, ML engineering, data management, and software development.
Guidance
▪ Organizations should focus on mastering predictive AI– or GenAI-based solutions
individually before attempting to chain mixed models in applications. A
sequential approach allows for a deeper understanding of each technology's
capabilities and limitations, enabling organizations to build maturity and
experience.
▪ Organizations should start by mapping the application's anticipated workflow
and identifying data sources and integration points between models and
operational systems. They should validate the flow's inputs and outputs for format
compatibility and choose functionally aligned tools and frameworks for
developing and orchestrating the workflow. These planning steps will help
minimize pitfalls associated with delivering complex AI-based applications.
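The format-compatibility check recommended above can be automated before any model is invoked. The sketch below validates that each stage of a hypothetical workflow produces the fields the next stage consumes; the stage names and field names are illustrative, not a standard schema:

```python
# Illustrative check that each stage's outputs cover the next stage's inputs.
workflow = [
    {"name": "predictive_scorer", "in": {"features"},     "out": {"risk_score"}},
    {"name": "genai_summarizer",  "in": {"risk_score"},   "out": {"summary_text"}},
    {"name": "crm_writeback",     "in": {"summary_text"}, "out": set()},
]

def validate_flow(stages):
    """Return (producer, consumer, missing_fields) tuples for every broken handoff."""
    problems = []
    for producer, consumer in zip(stages, stages[1:]):
        missing = consumer["in"] - producer["out"]
        if missing:
            problems.append((producer["name"], consumer["name"], missing))
    return problems

print(validate_flow(workflow))  # an empty list means every handoff is compatible
```

Running a check like this at design time, and again in CI whenever a stage's contract changes, catches the data-format incompatibilities cited earlier before they surface as runtime integration failures.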
Prediction 10: By 2027, 80% of Critical AI Decisions Will Require
Human Oversight Supported by Visual Explainability Dashboards,
Potentially Slowing Processes but Enhancing Accountability
By 2027, the use of visual explainability dashboards for critical AI decisions is expected
to become standard practice, highlighting the growing demand for transparency and
accountability in AI systems. As businesses continue to innovate, the shift toward
incorporating human oversight via advanced visualization tools will have a significant
impact on AI adoption and implementation across multiple sectors. A visual
explainability dashboard is a tool that provides a clear and understandable
visualization of how an AI model makes its decisions.
The rapid advancement of both traditional and generative AI technologies has resulted
in complex decision-making processes that frequently operate as "black boxes." This
complexity has raised concerns about hallucinations, bias, fairness, and unintended
consequences in critical scenarios. To effectively address these issues, organizations
will invest heavily in developing and deploying visual explainability tools that explain AI
algorithms' decisions in a visually understandable format. These dashboards will use
techniques such as LIME (Local Interpretable Model-agnostic Explanations), SHAP
(SHapley Additive exPlanations), and attention visualization to provide users with real-
time insights into how AI makes decisions. These techniques will enable nontechnical
stakeholders to understand and validate AI outputs using simple visual formats such as
heat maps, decision trees, and feature importance graphs.
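Methods such as SHAP estimate how much each input feature contributes to a prediction; a minimal from-scratch cousin of these techniques, permutation importance, conveys the same intuition and is sketched below. The model and data are a toy linear example invented for illustration, not any of the named libraries:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
true_weights = np.array([3.0, 0.5, 0.0])   # feature 0 matters most, feature 2 not at all
y = X @ true_weights + rng.normal(scale=0.1, size=500)

def model(inputs):
    """Toy 'trained' linear model standing in for a real black box."""
    return inputs @ true_weights

def permutation_importance(model, X, y):
    """How much error (MSE) rises when each feature's link to y is broken."""
    base_mse = np.mean((model(X) - y) ** 2)
    importances = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # shuffle one column only
        importances.append(np.mean((model(Xp) - y) ** 2) - base_mse)
    return np.array(importances)

imp = permutation_importance(model, X, y)
# Expected ordering: feature 0 >> feature 1 > feature 2 (near zero).
```

A feature importance bar chart built from values like `imp` is exactly the kind of artifact a visual explainability dashboard would render for nontechnical stakeholders; LIME and SHAP produce richer, per-prediction attributions on the same principle.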
Data scientists and AI/ML engineers will need to strike a balance between
interpretability and performance metrics, which can slow down some aspects of
deployment due to trade-offs between model complexity and accuracy. Nonetheless,
the long-term benefits are significant. Enhanced accountability is expected to increase
trust in AI systems in sensitive domains such as healthcare and finance, where
reliability could accelerate AI adoption. Furthermore, real-time bias detection
capabilities promise more equitable outcomes for automated decisions. As this trend
accelerates, new roles such as "AI explainability engineers" and "algorithmic auditors"
are expected to emerge: specialists who develop these visual tools and interpret their
output.
Associated Drivers
▪ AI-driven workplace transformation — Building tomorrow's workforce today
▪ Regulatory flux — Navigating compliance challenges in a shifting policy
landscape
▪ Responsible and human-centric technology — Ethics in the enterprise
IT Impact
▪ Using interpretable ML techniques such as LIME and SHAP within existing AI
pipelines is critical for increasing transparency. This strategic emphasis on
explainability will not only increase user trust but also facilitate the widespread
adoption of AI technologies.
▪ Innovative approaches, such as federated learning and differential privacy
techniques, are necessary to balance explainability and data privacy.
Implementing standardized APIs for explainability features will ensure
interoperability among AI systems, promoting seamless integration and usability.
▪ IT will need to incorporate these types of applications into their overall
operational procedures and systems. Organizations will want to ensure that
these types of applications are used in conjunction with both custom AI
applications as well as with embedded AI in enterprise applications.
Guidance
▪ Incorporate considerations of explainability across all stages of the AI life cycle,
from initial data handling to final model deployment and monitoring. Adopting
this strategy is vital for maintaining the robustness and relevance of AI systems.
▪ Prioritize continuous education and training initiatives that focus on emerging
techniques and best practices in explainable AI for both data scientists and AI
developers as well as the end-user community.
▪ Establish cross-functional teams composed of data scientists, domain experts,
and ethicists to delve into and apply insights from visual explainability. This will
significantly influence the speed and effectiveness of AI adoption.
▪ Incorporate these types of tools into the organization under the guidance of the
teams or groups that oversee responsible AI within the organization.
ADVICE FOR TECHNOLOGY BUYERS
To maximize the realization of artificial intelligence and automation initiatives,
technology leaders should focus on the following:
▪ Build expertise. Develop a deep understanding of AI technologies across all
levels of the organization. This includes training for executives, managers, and
employees to understand AI's capabilities and limitations.
▪ Invest in talent. Hire and train employees with the necessary skills to work with
AI. This includes data scientists, AI specialists, and other tech-savvy professionals.
▪ Establish governance. Implement strong governance frameworks to manage AI
initiatives. This includes setting up ethical guidelines, ensuring data privacy, and
creating processes for monitoring AI systems.
▪ Integrate AI into business strategy. Align AI initiatives with the organization's
strategic goals. Identify high-impact areas where AI can drive significant value,
and start with pilot projects to demonstrate success.
▪ Foster a culture of innovation. Encourage a culture that embraces change and
innovation. This involves promoting collaboration, experimentation, and
continuous learning.
▪ Focus on human–AI collaboration. Emphasize the augmentation of human
capabilities with AI rather than replacement. Highlight how AI can enhance
productivity and creativity, allowing employees to focus on higher-value tasks.
▪ Prioritize the adoption of SLMs for domain-specific tasks to improve accuracy,
reduce costs, and enhance deployment flexibility. Begin examining technologies
that support model interaction and orchestration, and consider adopting open
models to build SLMs.
▪ Leverage AIaaS to achieve scalability and resource optimization.
▪ Embrace low-code/no-code tools and agentic workflows to bridge talent gaps
and streamline development, ensuring AI is aligned with business goals and
operational needs.
▪ Support the creation and use of AI agents and agentic workflows by knowledge
workers to improve productivity.
▪ Develop a plan for safe employee enablement, including platforms for
experimentation and common governance capabilities, and expand existing
citizen developer programs to include agentic automation.
▪ Invest in hybrid edge-cloud inferencing to optimize data processing across cloud,
edge, and datacenter environments.
▪ Implement robust cybersecurity strategies and unified management systems to
protect data and ensure operational resilience while investing in tools to monitor
and secure distributed systems.
By taking these steps, enterprises can effectively prepare for the transformative impact
of AI and position themselves for future success.
EXTERNAL DRIVERS: DETAIL
AI-Driven Business Models — Moving from AI
Experimentation to Monetization
▪ Description: As the generative artificial intelligence (GenAI) hype settles into a
new digital business reality, it's critical for both tech buyers and vendors to prove
that "AI is real," can be monetized, and is leading to concrete business impact
and revenue streams. While tech buyers' GenAI attention in the initial AI
everywhere stages primarily focused on efficiency and automation-oriented use
cases, the longer-term ambition is to leverage AI (including GenAI) to enable new
business models and open new revenue streams. At the same time, after all the
initial excitement and rush to new launches/announcements, it's time for tech
vendors to capitalize on 2023–2024 AI investments, move customers' POCs to
concrete multiyear deals, and unlock exponential AI monetization. While they
implement this, companies must keep in mind that AI is not without risks,
especially when it comes to ethical AI and data privacy. Enterprises need to
carefully consider the best use cases to implement AI effectively and to the
benefit of the organization.
▪ Context: With intelligence becoming a key source of value creation, we are in the
midst of an "intelligence revolution," in which AI and automation-oriented
technology are major accelerators of business change. GenAI especially is a
transformative force. This branch of AI enables machine-driven autonomous
creation of new content, from images to music to even written text, with
remarkable accuracy. Current business applications of GenAI include content
and code generation, as well as personalized recommendations, but it is evolving
quickly.
The Drive to Automate — Toward a Data-Driven Future
▪ Description: Broader automation use cases — which are different from just AI
and generative AI — are now ubiquitous. Automating tasks that require human
judgment and decision-making is becoming a key area of development.
However, thoughtful implementation is crucial. This requires careful data
management, quality, governance, and storage. Data quality and governance will
become paramount as organizations strive to maintain accuracy in automation
tools and comply with increasingly stringent regulations like GDPR and CCPA.
Efficient storage and retrieval of vast data sets are also essential, prompting IT to
explore scalable solutions like object storage or data lakes. As more employees
access data tools and insights, fostering a culture of data sharing will be key.
Breaking down data silos will be crucial for achieving a unified view for
automation processes. This also means that while data generally becomes more
open and accessible, protecting key information related to health, for example,
becomes central to value and risk. Provided that data is thoughtfully managed,
and silos are appropriately broken down, hyperautomation, the combination of
multiple automation tools and technologies, may become more prevalent. This
approach, which aims to automate as many processes as possible within an
organization, can greatly improve efficiency and agility.
▪ Context: Businesses are rethinking how to employ automation to maximize
operational efficiency — from automating assembly in manufacturing to
identifying opportunities for food waste reduction in hospitality to improved CX
in digital banking. And as data is embedded in the core of strategic capability for
every organization, automation has become critical to scaling a digital business.
This is evident in three domains: IT automation, process automation, and value
stream automation — leading to autonomous operations, digital value
engineering, and innovation velocity. From healthcare robotics to real-time data
analytics, the applications are extensive.
Future Proofing Against Environmental Risks — ESG
Operationalization and Risk Management
▪ Description: Although the topic is often politicized, it is undeniable that risks
are multiplying in the form of extreme weather — droughts, floods, and irregular
weather patterns in general are disrupting supply chains and wreaking economic
havoc all over the world, increasing insurance/reinsurance costs. Accounting for
this risk is increasingly seen as an imperative part of businesses' risk
management strategy. Decreasing environmental footprints is also part of many
businesses' efforts to become responsible enterprises. Frameworks such as
environmental, social, and governance (ESG) support actions to achieve
sustainability and contribute to a better future. In addition, ESG-related laws that
oblige companies to account for this risk are increasing, including the EU's
Corporate Sustainability Reporting Directive (CSRD) and Sustainable Finance
Disclosure Regulation (SFDR), the SEC's approved climate disclosure rule, and
Japan's GX Basic Policy. Many companies are now actively
operationalizing ESG with AI-informed carbon accounting software and carbon
budgets, and by building sustainability requirements into the requests for proposals (RFPs) they
send to tech suppliers. In addition, many now have positions such as chief
sustainability officer or are integrating sustainability into the responsibilities of
the C-suite. They are also engaging in initiatives such as energy efficiency in
technology. This is often an ecosystemwide initiative, helping further advance
meaningful risk management and development of best practices around
climate/ESG.
▪ Context: Businesses are increasingly beholden to climate/ESG. More and more
customers care about whether the companies they deal with behave sustainably
and deliver sustainable products and services. ESG can also be a cost-saving
measure and hedge against risks. Yet, despite much progress, there is still work
to be done, especially in complying with carbon footprint measuring and
achieving high-quality data. As laws and regulations — as well as investment
opportunities — amp up around ESG, the IT industry will increasingly require
green talent and skills and better data modeling of ESG metrics to achieve
maximum benefit.
AI-Driven Workplace Transformation — Building Tomorrow's
Workforce Today
▪ Description: There are many pressures in the labor market, ranging from skills
shortages to long-term demographic shifts. To increase automation and AI
capabilities, digital skills are now in high demand, but the current supply of such
skill sets does not match this demand. Despite talk about automation replacing
jobs, company growth depends more on reskilling to effectively make use of
these investments. Expertise in security, cloud, and IT service management
alongside AI skills are crucial. But enterprises can't live on IT skills alone —
human-centric skills are also important, perhaps even more so than ever.
Without proper socialization, awareness, and cross-organizational support, we
may not see the innovation and productivity that GenAI and AI initiatives
promise, and the overall enterprise IT strategy will be slow to deliver its needed
results. To succeed, enterprises must also be open to organizational change and
models that allow for greater trust and growth in their employees. Leaders must
be accountable for laying the groundwork of communication, collaboration,
creativity, and continuous learning, which will need to be pervasive for engineers
and HR analysts alike. Compounding these pressures are long-term demographic
shifts: declining and aging populations mean that the labor market is getting
tighter, leaving businesses with a smaller pool of personnel to draw on. We
have already seen talent shortages impacting businesses' operations. This will
only get more competitive in the future. Business leaders are starting to fight
against this, but success hinges on the ability of the enterprise to adopt better
organizational strategies and models that allow for a more productive,
collaborative, and learning-focused workplace.
▪ Context: The workplace has been shifting for some time, especially due to new
modes of working, and the rise of AI and automation only further facilitates this
shift. In the context of talent shortages, demographic changes, and other issues
such as ESG concerns and ethical AI, it is clear that reskilling, upskilling, and
overall transformation of workplace design are taking center stage. C-suite
leaders and their teams must collaborate to recalibrate work culture,
augmentation, and space/place planning to enable more secure, dynamic, and
refined organizations of the future.
Regulatory Flux — Navigating Compliance Challenges in a
Shifting Policy Landscape
▪ Description: With frontier technologies like generative AI, geopolitical concerns,
and cyber-risks, the tech legal landscape is rapidly changing. The tech regulatory
landscape is shifting, from privacy/cybersecurity laws such as NIS 2 in the EU to
various policies incentivizing nearshoring of critical technologies such as South
Korea's tax incentives for the "K-Semiconductor Belt." Beyond that, however, are
laws that fundamentally can change the market landscape in technology. The
EU's Digital Services Act (DSA) and Digital Markets Act (DMA) aim to increase
transparency and accountability for online platforms and attempt to prevent
anticompetitive behavior from "gatekeepers," or large online platforms of
significance. In China, a number of firms have incurred major fines and
penalties for anticompetitive practices, breaches of data security, and violations of
consumer privacy rights. Other emerging efforts in jurisdictions like the United States, India,
and Australia mean that tech giants may find themselves facing stricter
compliance challenges. Regulations, however, are notably inconsistent in their
rollout. While some regulations lag behind technology development — especially
notable in the case of artificial intelligence across many jurisdictions — others
lead, such as tariffs on imports. Regulations also are of course subject to political
change. More than 70 countries worldwide are set to vote in 2024, and polls
predict sweeping change in political agendas. These changes are not only going
to impact society and the economy in the short term but may also have wide-
reaching, long-term effects.
▪ Context: Businesses must navigate an increasing number of regulatory rules.
Even if it is not always the primary focus, tech is often a crucial part of these
regulations. Most of these rules are intended to hedge against risks, but some
are entrenched in geopolitical divides, so those firms that stay ahead of the
game and build resiliency will be best equipped to comply with these regulations.
Moreover, regulations and policies are not always simply restraints — they are
also often springboards for investment, with many regulations proposing tax
subsidies and other kinds of incentives.
Responsible and Human-Centric Technology — Ethics in
the Enterprise
▪ Description: Enterprises are increasingly conscious of the broader societal
impacts of their business models and of certain technologies, especially
emerging technologies. Most topical at the moment is AI. AI may provide lower-
cost, higher-value solutions, but it has significant ethical (and incipient legal)
implications that companies will increasingly need to adapt to. There are
significant questions over issues like copyright, trust, safety, and misinformation
distribution. Beyond that, organizations must grapple with issues like privacy and
consent around data, reproduction of biases and toxicity, generation of harmful
content, insufficient security against third-party manipulation, and accountability
and transparency of processes. As a result, countries around the world are keen
to regulate AI, from the EU to Brazil to China. Aside from AI, new emerging
technologies like quantum also have ethical challenges, and new branches such
as quantum ethics are being developed. In quantum ethics, questions remain about
how to ensure equity, transparency, and appropriate usage of quantum computing,
given its power to crack encryption.
Roboethics grapples with the ethical questions that the use of robotics poses,
especially in healthcare, military applications, and other domains. And
beyond emerging technologies, supply chain ethics are also being questioned, as
many raw materials such as critical minerals are mined under circumstances that
may implicate human rights questions, and jurisdictions from Canada to the EU
to Japan have created laws requiring more stringent oversight of suppliers.
Businesses are also still grappling with inclusivity and corporate responsibility.
Having a diverse workforce can broaden the range of skill sets available to a
business, and promoting corporate responsibility can be a way
to attract and retain talent. And though these issues are often politicized, neglect
of ethics in the business isn't just a moral quandary either — it is increasingly
viewed as a significant business risk that can mean less trust, less control, and
less ability to advance technologies in an optimal way.
▪ Context: AI is bringing the "S" (social) and "G" (governance) in ESG to the
forefront of conversation in a way that is distinct from conversations around "E,"
the environment. Businesses are increasingly discussing AI ethics due to rising
public and regulatory scrutiny, concerns about privacy and bias, and high-profile
AI missteps. Adhering to ethical standards enhances reputation, builds consumer
trust, and ensures sustainable, responsible innovation. This shift underscores the
importance of developing and using AI technologies ethically and transparently.
©2024 IDC #US51666724 26
Battling Against Technical Debt — Overcoming Hurdles to
IT Modernization
▪ Description: As technology becomes increasingly central to business operations,
the role of IT leadership is evolving into business leadership, highlighting the
critical importance of managing technical debt. This debt, exacerbated by the
rapid advancements and growing complexity of IT systems, not only inflates
maintenance costs but also poses significant challenges to operational efficiency,
profitability, and market adaptability. Accumulated technical debt manifests in
software bugs, security vulnerabilities, and system inefficiencies, leading to
increased operational costs, data breaches, and a loss of customer trust. For
developers, working with outdated systems diminishes morale and productivity,
while businesses face hurdles in adapting to new technologies or market
demands swiftly. Specifically, in the realm of AI, "data debt" — stemming from
poor data quality, inadequate architecture, and insufficient documentation —
complicates maintenance, reduces system flexibility, and hampers accurate
decision-making. These issues, along with the struggle to maintain legacy
systems and navigate technical heterogeneity, slow down development
processes, delaying the launch of new features or products. Technical debt also
has a cascading effect (e.g., cloud laggards will become AI laggards).
▪ Context: In recent years, technical debt has become a growing concern due to accelerated
digital transformation, increased reliance on complex software systems, and the
urgent need for rapid innovation. The pressure to deliver software quickly often
leads to compromises in code quality, resulting in a backlog of maintenance
issues. Businesses face mounting pressure to remediate outdated code and past
quick fixes in order to maintain system reliability, security, and scalability amid evolving
technological demands. As systems become more complex, the cost and effort to
address these issues escalate, impacting operational efficiency and innovation.
LEARN MORE
Related Research
▪ Pricing and Packaging Strategies for Generative AI (IDC #US52530924, September
2024)
▪ The Arrival of AI Agents: How GenAI and Automation Technologies Come Together (IDC
#EUR152585024, September 2024)
▪ Market Analysis Perspective: Worldwide Enterprise Intelligence Services, 2024 (IDC
#US51423724, September 2024)
▪ Critical External Drivers Shaping Global IT and Business Planning, 2025 (IDC
#US52438224, August 2024)
▪ Market Analysis Perspective: Worldwide AI Platforms and GenAI Software Services,
2024 (IDC #US52517524, August 2024)
▪ Generative AI Copilot Forecast (IDC #US52541024, August 2024)
▪ Tech Buyers Introduction to AI Agents and Agentic Workflows (IDC #US52518424,
August 2024)
▪ Navigating the AI Regulatory Landscape: Differing Destinations and Journey Times
Exemplify Regulatory Complexity (IDC #EUR151900724, March 2024)
▪ The Rise of Vector and Graph Databases in Generative AI Implementations (IDC
#AP51580924, March 2024)
▪ IDC PlanScape: Next Wave of Foundation Models and AI Platforms (IDC
#US51957624, March 2024)
ABOUT IDC
International Data Corporation (IDC) is the premier global provider of market intelligence, advisory
services, and events for the information technology, telecommunications, and consumer technology
markets. With more than 1,300 analysts worldwide, IDC offers global, regional, and local expertise on
technology, IT benchmarking and sourcing, and industry opportunities and trends in over 110 countries.
IDC's analysis and insight help IT professionals, business executives, and the investment community to
make fact-based technology decisions and to achieve their key business objectives. Founded in 1964, IDC
is a wholly owned subsidiary of International Data Group (IDG, Inc.).
Global Headquarters
140 Kendrick Street
Building B
Needham, MA 02494
USA
508.872.8200
Twitter: @IDC
blogs.idc.com
www.idc.com
Copyright and Trademark Notice
This IDC research document was published as part of an IDC continuous intelligence service, providing
written research, analyst interactions, and web conference and conference event proceedings. Visit
www.idc.com to learn more about IDC subscription and consulting services. To view a list of IDC offices
worldwide, visit www.idc.com/about/worldwideoffices. Please contact IDC report sales at
+1.508.988.7988 or www.idc.com/?modal=contact_repsales for information on applying the price of this
document toward the purchase of an IDC service or for information on additional copies or web rights.
Copyright 2024 IDC. Reproduction is forbidden unless authorized. All rights reserved.