Do we need Large Language Models for agentic AI?

Prateek Majumder

Data & Tech @ RIL | Ex Developer @ TCS Digital

Large Language Models (LLMs) have rightly captured the spotlight for their broad capabilities and near-human performance across tasks. But as we step into the era of agentic AI, where systems execute a small set of specialized tasks repeatedly, the question arises: do we always need LLMs?

A new perspective argues no. In fact, Small Language Models (SLMs) may be:

✅ Sufficiently powerful for specialized agent workflows
✅ More economical to deploy at scale
✅ Naturally better suited for repetitive, narrow tasks

For scenarios that demand broader conversational intelligence, the answer may lie in heterogeneous agentic systems, where multiple models (big and small) work together seamlessly. This vision not only makes agentic AI more efficient but also redefines how we think about scaling intelligence: sometimes, smaller is smarter.

Read the paper here: https://lnkd.in/ghK5PfeR
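To make the heterogeneous idea concrete, here is a minimal sketch of a router that dispatches narrow, repetitive tasks to a small model and open-ended requests to a large one. The task names, model tiers, and routing rule are illustrative assumptions, not taken from the paper:

```python
# Hypothetical set of narrow, repetitive task types an SLM handles well.
NARROW_TASKS = {"extract_entities", "classify_intent", "fill_template"}

def route(task_type: str) -> str:
    """Pick a model tier for a task (illustrative policy, not from the paper)."""
    return "slm" if task_type in NARROW_TASKS else "llm"

def run_agent_step(task_type: str, payload: str) -> str:
    """Dispatch one agent step to the chosen model tier."""
    model = route(task_type)
    # A real system would call the chosen model's API here;
    # this sketch just tags the payload to show the dispatch.
    return f"[{model}] {payload}"

# Narrow task goes to the small model, open-ended chat to the large one.
print(run_agent_step("classify_intent", "book a flight"))
print(run_agent_step("open_conversation", "what should I read next?"))
```

The point of the sketch is only that the routing layer, not any single model, decides where intelligence is spent.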

Debdaru Dasgupta

Building Software Applications...

1mo

I believe we would still need Generative AI as a dependency in agentic AI systems too, since much of the planning and execution phases depends on it; SLMs won't be enough.

