
April 30, 2025

Are We Putting the Agentic Cart Before the LLM Horse?

Agentic AI has been cast as the next big wave of technological innovation that will fundamentally transform how work gets done. Powered by a new class of increasingly accurate and reliable reasoning models, AI agents will automate a large swath of tasks that currently require the human touch, we’re told. The story sounds compelling, but how much of it is real versus a technological fantasy?

There’s no doubt that companies are investigating AI and investing large sums in AI projects. Some of these projects are succeeding, but arguably most are not. That’s not, in and of itself, cause for alarm, as many new technologies face challenges during the early stages of adoption. The big question is how quickly we’ll overcome these challenges and what AI adoption will ultimately look like in the enterprise.

At this point, if you’re an IT vendor or an IT consultant and AI is not part of your strategy, you’re not likely to get many returned calls.

“AI is front and center of all our discussions,” says Ram Palaniappan, CTO at TEKsystems, an IT consultancy with $7 billion in global revenue. “If you are positioning without an AI-first approach, customers don’t want to hear you…They feel that you are somewhere legacy.”

TEKsystems is working with many large global firms to help them build out their AI systems. Much of the work involves using large language models (LLMs) to provide more customized experiences in areas like customer service, he says. The consultancy uses tools like LangChain and LlamaIndex to automate some of those generative AI workflows.

However, some customers already are asking for assistance with developing AI agents. That space is not as well defined as the LLM space, he says, and it will take some time for the tools to mature.


“What we are seeing is that usage of agentic AI is slowly evolving, the tools in that space are evolving. The integrations, the open standards for communication–those things are evolving,” Palaniappan tells BigDATAwire in an interview. “I would say that there are some leading indicators primarily from the adoption perspective, but at the same time there is a catch-up game to meet those requirements.”

Julian LaNeve, the CTO of Astronomer, has also noticed an uptick in discussion around agentic AI. As the company behind Apache Airflow, Astronomer is all about getting data where it needs to go, whether that’s to a data warehouse for ad hoc analytics or to a reasoning model for a prediction and an action.

However, LaNeve is not convinced that some of these early agentic AI use cases are worth the time and expense of working with complex and error-prone technology. For instance, the CTO of one Astronomer customer told him that he wanted to build “a multi-agent swarm” to help automate the company’s support ticketing system. That struck LaNeve as overkill.

“All you need to do is classify the support ticket and then auto-draft a response for it. It’s a simple workflow,” he says. “It’s easy to get excited about what these LLMs can do. But it’s like people jump straight off the deep end to go try to get maximum potential out of them before doing the simple and obvious thing.”

LaNeve understands the big benefit that LLMs bring compared to how natural language processing (NLP) used to be done. Instead of building out a machine learning team and training a custom model on the terms common in that company, it’s much cheaper and easier to use a pre-built LLM to classify, and even potentially respond to, things like IT support tickets.

“The simplest example is prompt chaining,” he tells BigDATAwire. “So you can use an LLM as the first step of the pipeline and the second step of the pipeline and third step and eventually you do something with it. A good example of that is LlamaIndex or LangChain or something like that.”
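
In code, the kind of prompt chaining LaNeve describes can be little more than two function calls, with the first model’s output feeding the second prompt. The sketch below is illustrative only: call_llm is a hypothetical stand-in for whichever LLM client or framework a team already uses, and the ticket categories and prompt wording are invented for the example.

```python
# A minimal prompt-chaining sketch for the support-ticket example.
# call_llm() is a hypothetical placeholder for a real LLM client
# (OpenAI, Anthropic, a local model, etc.); swap in the actual call.

CATEGORIES = ["billing", "bug report", "feature request", "access issue"]

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text response."""
    raise NotImplementedError("wire this up to your LLM provider of choice")

def classify_ticket(ticket_text: str) -> str:
    # Step 1 of the chain: the LLM picks a category.
    prompt = (
        "Classify this support ticket into exactly one of these categories: "
        f"{', '.join(CATEGORIES)}.\n\nTicket:\n{ticket_text}\n\nCategory:"
    )
    return call_llm(prompt).strip().lower()

def draft_reply(ticket_text: str, category: str) -> str:
    # Step 2 of the chain: the first LLM's output feeds the second prompt.
    prompt = (
        f"This is a {category} ticket. Draft a short, polite reply for a "
        f"human agent to review before sending.\n\nTicket:\n{ticket_text}"
    )
    return call_llm(prompt)

def handle_ticket(ticket_text: str) -> dict:
    # Classify first, then draft; a human still reviews the result.
    category = classify_ticket(ticket_text)
    return {"category": category, "draft": draft_reply(ticket_text, category)}
```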

But in some cases, even tools like LangChain and LlamaIndex can be overkill, he says. LaNeve has seen many Astronomer customers build solid AI workflows using Apache Airflow.

“It’s a flexible enough workflow orchestration platform that whether you’re calling out to data tools, ML tools, AI tools, the principles are still very much the same,” he says. “We’ve seen a lot of people productionize these things with little to no effort. I’ve seen teams spit out new LLM workflows multiple times a day, and it adds up super quickly. Each individual LLM workflow might, in and of itself, not be that interesting. Maybe it gives you like an extra 1% to 5% efficiency. You’re taking a very specific thing and starting to automate it. But when you’re able to go build a dozen of those every week, it starts to add up very, very quickly.”
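
As a rough illustration of the pattern LaNeve describes, the same classify-and-draft workflow could run as an ordinary Airflow DAG using the TaskFlow API of Airflow 2.x. This is a hedged sketch, not Astronomer’s recommended blueprint: the DAG id, schedule, ticket source, and hard-coded placeholder values stand in for real ticketing-system and LLM calls.

```python
# A minimal Apache Airflow (2.x TaskFlow API) sketch of a scheduled
# classify-then-draft LLM workflow. All names and values are invented
# for illustration; the task bodies are placeholders for real API/LLM calls.
from datetime import datetime

from airflow.decorators import dag, task

@dag(
    dag_id="support_ticket_triage",
    schedule="@hourly",
    start_date=datetime(2025, 1, 1),
    catchup=False,
)
def support_ticket_triage():

    @task
    def fetch_tickets() -> list[dict]:
        # Placeholder: pull new tickets from the ticketing system's API.
        return [{"id": 1, "text": "I can't log in to my account."}]

    @task
    def classify(tickets: list[dict]) -> list[dict]:
        # Placeholder for an LLM call that attaches a category to each ticket.
        for ticket in tickets:
            ticket["category"] = "access issue"
        return tickets

    @task
    def draft_replies(tickets: list[dict]) -> list[dict]:
        # Placeholder for an LLM call that drafts a reply for human review.
        for ticket in tickets:
            ticket["draft"] = "Hi, sorry for the trouble. Try resetting your password."
        return tickets

    # Chain the tasks: fetch -> classify -> draft.
    draft_replies(classify(fetch_tickets()))

support_ticket_triage()
```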

The sudden obsession with agentic AI workloads strikes LaNeve as a classic case of technologists chasing new technology instead of looking at how it can solve actual business problems. Since all LLMs and reasoning models are prone to hallucinations, you also increase the odds of errors creeping into your workflows when you take humans entirely out of the loop, as many want to do with agentic AI.

“I wouldn’t go as far to say I’m anti agent, in the long term,” he says. “But I am anti starting with agents before you go get real value out of these single workflow use cases.”

Related Items:

Reporter’s Notebook: AI Hype and Glory at Nvidia GTC 2025

Can You Afford to Run Agentic AI in the Cloud?

When GenAI Hype Exceeds GenAI Reality

 
