Large Language Models (LLMs) have rightly captured the spotlight for their broad capabilities and near-human performance across tasks. But as we step into the era of agentic AI—where systems focus on executing a small set of specialized tasks repeatedly—the question arises: do we always need LLMs? A new perspective argues no. In fact, Small Language Models (SLMs) may be:

✅ Sufficiently powerful for specialized agent workflows
✅ More economical to deploy at scale
✅ Naturally better suited for repetitive, narrow tasks

For scenarios that demand broader conversational intelligence, the answer may lie in heterogeneous agentic systems—where multiple models (big and small) work together seamlessly. This vision not only makes agentic AI more efficient but also redefines how we think about scaling intelligence: sometimes, smaller is smarter.

Read the paper here: https://lnkd.in/ghK5PfeR
Do we need Large Language Models for agentic AI?
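What might such a heterogeneous setup look like in practice? Here is a minimal SLM-first sketch; the call_slm / call_llm functions, the confidence score, and the escalation threshold are placeholder assumptions for illustration, not anything prescribed by the paper:

```python
from dataclasses import dataclass

@dataclass
class ModelResult:
    text: str
    confidence: float  # heuristic score from the model or an external verifier

def call_slm(prompt: str) -> ModelResult:
    """Placeholder for a small, specialized model (e.g. a fine-tuned 1-8B model)."""
    return ModelResult(text=f"[SLM answer to: {prompt}]", confidence=0.92)

def call_llm(prompt: str) -> ModelResult:
    """Placeholder for a large generalist model, invoked only when needed."""
    return ModelResult(text=f"[LLM answer to: {prompt}]", confidence=0.99)

def answer(prompt: str, escalation_threshold: float = 0.7) -> str:
    """SLM-first routing: try the cheap specialist, escalate only on low confidence."""
    result = call_slm(prompt)
    if result.confidence < escalation_threshold:
        result = call_llm(prompt)
    return result.text

print(answer("Extract the due date from: 'Invoice #12, payable by 2024-03-01'"))
```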
More Relevant Posts
-
🚀 Just published my new article on Medium!

SLM vs LLM: Why the Future of AI Is About Working Together, Not Competing

In this post, I explore how combining Small Language Models (SLMs) with Large Language Models (LLMs) can cut costs, save tokens, and build smarter AI systems.

Would love to hear your thoughts! Do you see SLMs shaping the future of enterprise AI?

#AI #LLM #SLM #ArtificialIntelligence #FutureOfWork
-
✨ Small Language Models as the Future of Agentic AI

The video provides a strong position statement arguing that Small Language Models (SLMs) are the future of agentic AI, despite the current dominance of Large Language Models (LLMs). The authors contend that SLMs are sufficiently powerful, more economical, and operationally more suitable for the specialized and repetitive tasks common in AI agents. They provide arguments grounded in modern SLM capabilities and inference efficiency, advocating for a shift to SLM-first architectures or heterogeneous systems that use LLMs only when necessary. Furthermore, the paper outlines a conversion algorithm to help developers migrate existing LLM-based agents to more efficient SLM solutions and discusses barriers to adoption such as industry inertia and infrastructure investment.

https://lnkd.in/gUVb8mA7
✨ Small Language Models as the Future of Agentic AI
https://www.youtube.com/
-
What if our language models could understand nuance like we do? Precision in AI-generated text is about to get a big boost. Semantic fusion adds a new layer to language models by incorporating subtle, interpretable features like tone, part of speech, and sentiment strength for each word. This approach lets AI control text generation more precisely, like adjusting punctuation or sentiment on demand, while keeping the model efficient and simple. I see this as a step toward AI that writes with far more personality and intention, making human-machine communication smoother and more natural. How would more controllable AI-generated language help in your work or daily life?
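The post doesn't give implementation details, but one plausible reading is that each token's learned embedding is concatenated with a small vector of interpretable, per-word features. The toy feature extractors below (punctuation flag, sentiment strength, noun-like suffix) are my own stand-ins, purely for illustration:

```python
import numpy as np

POSITIVE = {"great", "good", "love"}
NEGATIVE = {"bad", "awful", "hate"}

def word_features(word: str) -> np.ndarray:
    """Toy interpretable features: punctuation flag, sentiment strength, noun-ish suffix."""
    w = word.lower()
    sentiment = 1.0 if w in POSITIVE else -1.0 if w in NEGATIVE else 0.0
    is_punct = 1.0 if word in {".", ",", "!", "?"} else 0.0
    noun_like = 1.0 if w.endswith(("tion", "ment", "ness")) else 0.0
    return np.array([is_punct, sentiment, noun_like], dtype=np.float32)

def fuse(token_embedding: np.ndarray, word: str) -> np.ndarray:
    """Concatenate the learned embedding with interpretable per-word features,
    so later layers (or a generation controller) can condition on them explicitly."""
    return np.concatenate([token_embedding, word_features(word)])

rng = np.random.default_rng(0)
sentence = ["I", "love", "this", "presentation", "!"]
fused = [fuse(rng.normal(size=8).astype(np.float32), w) for w in sentence]
print(fused[1].shape)  # (11,) = 8-dim embedding + 3 interpretable features
```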
-
A major paradigm shift is imminent in the deployment of AI agents! 🤯

I highly recommend diving into the new position paper, "Small Language Models are the Future of Agentic AI" by Belcak et al., which challenges the established, LLM-centric status quo. The authors argue that SLMs are fundamentally more suitable and economical for the vast majority of agent operations, grounding their case in capability, operational suitability, and economic impact.

Key Takeaways:
🧠 Modern SLMs are sufficiently powerful to handle agentic tasks, often achieving performance comparable to or exceeding much larger, previous-generation LLMs in critical areas like commonsense reasoning, tool calling, and instruction following.
💰 Serving a typical SLM is 10–30 times cheaper (in latency, energy, and FLOPs) than serving larger LLMs. This efficiency enables rapid fine-tuning agility and flexible edge deployment (see the back-of-envelope sketch below).
⚙️ Since most agentic subtasks are repetitive, specialized, and non-conversational, SLMs are inherently more flexible and predictable. The authors advocate for heterogeneous agentic systems—using SLMs by default and invoking LLMs selectively and sparingly—for maximal efficiency.
🗺️ The work outlines a detailed LLM-to-SLM agent conversion algorithm and highlights the profound implications this shift has for promoting responsible and sustainable AI deployment.

This paper is a must-read for anyone concerned with the long-term infrastructure and costs of scaling agentic systems.

Read the paper here: https://lnkd.in/eT_B_JAq

#AI #AgenticAI #SLMs #LLMs #MachineLearning #ArtificialIntelligence #AIEconomics #NVIDIAresearch 🔍 #WithNotebookLM
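As a rough illustration of where a factor in the 10–30x range can come from, here is a back-of-envelope comparison using the common "~2 FLOPs per parameter per generated token" approximation for dense decoders; the model sizes are example figures of my own, not numbers taken from the paper:

```python
def flops_per_token(params_billion: float) -> float:
    """Rough forward-pass cost for a dense decoder: ~2 FLOPs per parameter per token."""
    return 2.0 * params_billion * 1e9

slm_size, llm_size = 7.0, 175.0  # illustrative sizes in billions of parameters
ratio = flops_per_token(llm_size) / flops_per_token(slm_size)
print(f"{llm_size:.0f}B vs {slm_size:.0f}B: ~{ratio:.0f}x more compute per generated token")
# ~25x, in the same ballpark as the 10-30x serving-cost gap cited above
```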
-
The standard view holds that "LLM generalist models will always retain the advantage of universally better performance on language tasks." But as the authors argue in the paper shared above, Small Language Models can perform just as well on these agentic tasks. Well, let's test it!
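In the spirit of "let's test it", here is a bare-bones harness for a side-by-side check on your own task set; the two model stubs and the substring-match scoring are placeholder assumptions to be swapped for real endpoints and real metrics:

```python
from typing import Callable, Dict, List

def evaluate(model: Callable[[str], str], cases: List[Dict[str, str]]) -> float:
    """Fraction of cases where the model output contains the expected answer."""
    hits = sum(1 for case in cases if case["expected"] in model(case["prompt"]))
    return hits / len(cases)

def slm_stub(prompt: str) -> str:   # replace with a call to your small model
    return "Paris" if "France" in prompt else "unsure"

def llm_stub(prompt: str) -> str:   # replace with a call to your large model
    return "Paris" if "France" in prompt else "Berlin"

cases = [{"prompt": "What is the capital of France?", "expected": "Paris"}]
print("SLM accuracy:", evaluate(slm_stub, cases))
print("LLM accuracy:", evaluate(llm_stub, cases))
```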
-
Fascinating. A new study finds that humans can read analog clocks with 89.1 percent accuracy, but the best AI model only manages 13.3 percent: https://lnkd.in/ebnjxvAt
-
The Next Wave in Agentic AI: Small Language Models (SLMs)

NVIDIA Research highlights a critical point: not every agentic AI task requires a massive LLM. For many routine, narrow, and specialized workflows, SLMs deliver faster performance, lower cost, and easier fine-tuning.

I see this as the natural evolution of enterprise AI. The real opportunity is in hybrid systems—deploying SLMs for scale and efficiency, while reserving LLMs for complex reasoning. This pragmatic approach will shape how organizations design agent ecosystems that are scalable, sustainable, and business-ready.

Curious how others are thinking about balancing LLMs and SLMs in production?

👉 Full research here: https://lnkd.in/euDKVixx

#AI #AgenticAI #EnterpriseAI #SLM #NVIDIA
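One pragmatic way to express that hybrid split is a plain routing table mapping task types to model tiers; the task names and model identifiers below are invented for illustration only:

```python
# Hypothetical task-to-model assignment for a hybrid agent ecosystem.
ROUTING_TABLE = {
    "extract_fields":   "slm-finetuned-3b",   # narrow, repetitive -> small model
    "classify_intent":  "slm-finetuned-3b",
    "summarize_ticket": "slm-generalist-8b",
    "multi_step_plan":  "llm-frontier",       # open-ended reasoning -> large model
}

def pick_model(task_type: str) -> str:
    # Default unknown or novel tasks to the large model until an SLM is validated.
    return ROUTING_TABLE.get(task_type, "llm-frontier")

print(pick_model("extract_fields"))   # slm-finetuned-3b
print(pick_model("novel_request"))    # llm-frontier
```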
-
Agentic SLM Sandbox is a small-model-first testbed for building and evaluating agentic systems. It includes minimal controllers for LM-first and controller-first orchestration, tool-calling scaffolds, a secure logger to collect agent traces, and a repeatable LLM→SLM migration workflow (task clustering, model selection, PEFT fine-tuning, routing, and iterative evals). Inspired by "Small Language Models are the Future of Agentic AI" [link in comments] and its LLM-to-SLM agent conversion algorithm. Read the paper [link in comments] for more details.
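A skeleton of that migration workflow might look roughly like the sketch below; every function body is a stub standing in for the real clustering, model selection, fine-tuning, routing, and evaluation logic, and the model names are placeholders:

```python
from typing import Dict, List

def cluster_tasks(agent_traces: List[dict]) -> Dict[str, List[dict]]:
    """Task clustering: group logged agent calls into recurring task types (stub)."""
    clusters: Dict[str, List[dict]] = {}
    for trace in agent_traces:
        clusters.setdefault(trace.get("task_type", "misc"), []).append(trace)
    return clusters

def select_slm(cluster_name: str) -> str:
    """Model selection: pick a candidate small model for the cluster (stub)."""
    return "slm-base-3b"

def peft_finetune(model_name: str, examples: List[dict]) -> str:
    """PEFT fine-tuning on the cluster's collected traces (stub)."""
    return f"{model_name}-peft"

def evaluate_against_llm(model_name: str, examples: List[dict]) -> float:
    """Iterative evals: score the tuned SLM against the original LLM outputs (stub)."""
    return 0.93

def migrate(agent_traces: List[dict], quality_bar: float = 0.9) -> Dict[str, str]:
    """Routing: switch a task cluster to its tuned SLM only once it clears the bar."""
    routing: Dict[str, str] = {}
    for name, examples in cluster_tasks(agent_traces).items():
        tuned = peft_finetune(select_slm(name), examples)
        if evaluate_against_llm(tuned, examples) >= quality_bar:
            routing[name] = tuned
        # otherwise keep the cluster on the existing LLM and revisit with more data
    return routing

print(migrate([{"task_type": "extract_fields", "prompt": "...", "llm_output": "..."}]))
```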
-
I just finished reading "How Far Are We From AGI: Are LLMs All We Need?" by Tao Feng et al., and it really reshaped how I think about the future of AI and its use.

Link: https://lnkd.in/gycwzYRb

These were the key concepts I took away from the paper:
1. AGI is not just a bigger model: AGI faces many challenges (reasoning, memory, interfaces, safety, and adaptability) that cannot be solved by LLMs alone.
2. Evaluation matters: Beyond the basic metrics we use today, AGI pushes us to create evaluations that are far more accurate and accountable.
3. Beyond LLMs: LLMs are foundational, but the paper argues that AGI must also learn from and interact with a real-world environment.

While reading the paper, I ran into questions that excited me. Are we limiting our imagination by framing AGI as something built around LLMs, or could it turn out to be very different from what we currently understand? Is it truly upsetting that projects like deepfakes, image generation, and other AI technologies are still at the forefront of AI?

LLMs are incredible; they are the foundation that gave us ChatGPT and other models, and going forward we will keep using them, integrating them, and building most things with them. But this paper isn't about where we are now; it's about understanding the next stage of AI development. If we look only at LLMs, which is what we have today, we may get left behind in the race toward AGI.

#AGI #LLMs #AIResearch #GenerativeAI #FutureofAI 🤖
-
Imagine talking to a machine… And it answers back.💡 Thanks to #LLMs (Large Language Models), we can now interact conversationally with the #digital replica of an object or system. 🗣️ Instead of dashboards and code, you just ask questions — and the #DigitalTwin responds with insights, actions, and solutions. This is the future of human–machine collaboration. 🚀 Learn more at the link in bio. #FifthIngenium #TechGlossary #DigitalTwinAgent #AI #ConversationalAI
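A toy illustration of that pattern (not FifthIngenium's implementation): the twin state, the prompt format, and the ask_llm call below are all assumed placeholders, showing only how a question can be grounded in the twin's current data rather than a dashboard:

```python
twin_state = {
    "asset": "pump-07",
    "vibration_mm_s": 4.2,
    "bearing_temp_c": 81,
    "last_service": "2024-11-02",
}

def ask_llm(prompt: str) -> str:
    """Placeholder for whatever LLM endpoint backs the conversational layer."""
    return "Bearing temperature is elevated; consider scheduling an inspection."

def ask_twin(question: str) -> str:
    # Ground the model in the twin's current state instead of raw dashboards.
    prompt = f"Digital twin state: {twin_state}\nQuestion: {question}\nAnswer briefly:"
    return ask_llm(prompt)

print(ask_twin("Does pump-07 need maintenance soon?"))
```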
-
Building Software Applications...
1mo
I believe we would still depend on Generative AI in Agentic AI models too, since a lot of the planning and execution phases depend on it; SLMs won't be enough.