How Robotics is Evolving With New Technologies

Explore top LinkedIn content from expert professionals.

  • View profile for Varun Grover

    Product Marketing Leader at Rubrik | AI & SaaS GTM | LinkedIn Top Voice | Creator🎙️

    9,294 followers

    Physical AI is becoming real—and fast. 🦾

    At GTC 2025, NVIDIA didn’t just launch new chips or tools. They showed us how AI is evolving beyond language and vision—into machines that can act in the real world. Here’s what you need to know about physical AI:

    1️⃣ It’s not about one robot. It’s about transferable intelligence.
    The big leap isn’t hardware—it’s the idea that a single model can power many robots. Trained on both real and synthetic data, foundation models like GR00T can learn general skills—like grasping, walking, or organizing—and adapt to new environments. It’s the same shift we saw in NLP: one model, many use cases.

    2️⃣ Simulation is more than a test environment—it’s a learning engine.
    With realistic physics, sensors, lighting, and even human avatars, today’s simulators are rich enough to train robots from scratch. This dramatically reduces the cost of failure, accelerates iteration, and unlocks edge-case training you’d never risk in real life.

    3️⃣ The AI stack is converging—from perception to motion.
    Historically, vision, planning, and control lived in silos. Now, we’re seeing unified models that combine them—so robots can see, understand, and act in milliseconds. That unlocks autonomy that’s adaptive, not brittle.

    4️⃣ Edge deployment isn’t optional—it’s foundational.
    Robots don’t have time to wait for cloud inference. Running large models locally—with fast, efficient chips—means faster reactions, safer systems, and more robust performance. This is especially critical in healthcare, manufacturing, and logistics.

    5️⃣ Physical AI is becoming infrastructure.
    From humanoids in factories to autonomous X-rays in hospitals, the same core ingredients are emerging:
    • Generalist models
    • Simulation pipelines
    • Edge AI hardware
    • Domain-specific fine-tuning

    The implication? We’re not just building robots. We’re building a new interface between AI and the real world.

    Why it matters: Most people still think of AI as something that writes text or generates images. But the next wave is embodied. AI that moves. That helps. That does. Physical AI isn’t a product category. It’s a shift in what AI can be.

    #PhysicalAI #GTC2025 #EmbodiedAI #Simulation #EdgeAI #Robotics #AIInfrastructure #Autonomy #DigitalTwins #AIforRealWorld
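The "simulation as a learning engine" point can be sketched in miniature: a toy reinforcement-learning loop in which a simulated robot learns to reach a target purely by trial and error, where failures cost nothing. Everything below (the 1-D track, the reward values, the Q-learning hyperparameters) is an invented illustration, not NVIDIA's stack.

```python
# Toy sketch of training a robot policy entirely in simulation.
# All details (1-D track, rewards, hyperparameters) are invented for
# illustration -- real systems use physics simulators such as Isaac Sim.
import random

random.seed(0)                 # reproducible demo
TARGET, TRACK_END = 5, 9       # reach cell 5 on a 0..9 track
ACTIONS = (-1, +1)             # step left / step right

def sim_step(pos, action):
    """Simulated 'physics': move one cell, clipped to the track."""
    nxt = min(TRACK_END, max(0, pos + action))
    reward = 1.0 if nxt == TARGET else -0.01   # failures are cheap in sim
    return nxt, reward, nxt == TARGET

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1):
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        pos = 0
        for _ in range(50):
            # Explore occasionally, otherwise act greedily.
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q.get((pos, x), 0.0))
            nxt, r, done = sim_step(pos, a)
            best_next = max(q.get((nxt, b), 0.0) for b in ACTIONS)
            q[(pos, a)] = q.get((pos, a), 0.0) + alpha * (
                r + gamma * best_next - q.get((pos, a), 0.0))
            pos = nxt
            if done:
                break
    return q

q = train()
# Greedy policy recovered from the learned values:
policy = {s: max(ACTIONS, key=lambda a: q.get((s, a), 0.0)) for s in range(TARGET)}
```

After a few thousand simulated episodes the greedy policy steps right from every cell left of the target; learning the same behavior by crashing real hardware thousands of times is exactly the expensive alternative the post describes.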

  • View profile for Harsha Srivatsa

    AI Product Builder @ NanoKernel | Generative AI, AI Agents, AIoT, Responsible AI, AI Product Management | Ex-Apple, Accenture, Cognizant, Verizon, AT&T | I help companies build standout Next-Gen AI Solutions

    11,178 followers

    This article is a writeup of collaborative learning and doing with the brilliant robotics engineer Dr. Karthika Balan. For the past year or so, I have been doing this to uplevel myself in the new and niche field of Physical AI + Robotics. Why? Because I like to stretch myself and believe in continuous upleveling. Besides, it is an awesome experience working and learning with Dr. Balan.

    The fundamental question I focused on was: how do we design humanoid robots that evolve alongside exponential AI breakthroughs rather than becoming obsolete with each advancement?

    The traditional approach to robotics—design, build, deploy, replace—creates an inherent disconnect between AI's rapid evolution and hardware's static nature. Organizations are forced into an impossible choice: wait for the "perfect" AI before building, or build now and accept rapid obsolescence. 90% of today's humanoid robots will be obsolete by 2027. The AI revolution is leaving robotics behind. While AI capabilities double every few months, the approach to humanoid robots remains stuck in the past. We design, build, deploy, and replace—creating billion-dollar investments that become outdated before they leave the lab.

    What if humanoid robots could evolve as quickly as the AI that powers them? Dr. Balan and I brainstormed and developed EVOLVE—a revolutionary framework that transforms robots from static products into living platforms that continuously absorb AI breakthroughs. The results? Organizations implementing this Progressive Systems Design approach could see 80% ROI over five years versus just 30% with traditional methods.

    Our fledgling work on Project COMPANION—humanoid robot design companions to tackle the loneliness epidemic among elders—promises to achieve what was previously impossible: robots that form genuine emotional connections, reducing loneliness by 43% and improving wellbeing by 37% among seniors.

    The future belongs not to humanoid robots designed as machines, but to those designed as continuously evolving products.

  • View profile for Andrea L. Thomaz

    Founder CEO Diligent Robotics Inc.

    3,664 followers

    Robotics is certainly having its moment. Reading this Financial Times piece, I couldn’t help but reflect on how far we’ve come—and the exciting challenges ahead. The rapid advancements in physical AI it highlights resonate deeply with both my work at Diligent and my academic research into human-robot interaction.

    There are many inspiring demonstrations and examples of robot dexterity in the article (presented beautifully, by the way). And while it’s fun to see robots flipping pancakes or tying shoelaces as demonstrations of how functional these new capabilities are, the real test will come when the rubber hits the road in a real environment. Speaking from experience in bringing research to product, the truly complex things that mobile manipulation robots need to handle lie in their ability to navigate dynamic, unpredictable environments and work with humans, not just for them. This is where some of the most interesting aspects of embodied AI come into play.

    At Diligent, we’ve spent years honing Moxi’s ability to operate seamlessly alongside healthcare teams in busy hospitals—managing lab and pharmacy workflows, badging into secure areas, and even riding elevators with people autonomously. These environments aren’t static. They require robots that can adapt in real time, respond to people in the environment, and handle the complexities of the physical world. At Diligent we focus not on building robots that imitate humans, but on designing general-purpose robots that complement and enhance what humans do best. This is where I see the future of robotics being most exciting: the future is People + Robots.

    The article’s discussion of advances in teaching robots dexterity really highlights the importance of robust datasets in shaping these capabilities. At Diligent, every delivery our fleet of Moxi robots makes contributes to a growing knowledge base that informs smarter, more adaptable systems. This ability to leverage a fleet of robots in the world is going to be a key factor in putting dexterous robots to work in the real world.

    Having been in the field of robot learning for years, it’s incredibly exciting to see the surge of advances in dexterity and robot capability today. For companies and founders working to productize these advances, the focus needs to remain clear: creating technology that serves human needs with precision, empathy, and practicality. https://lnkd.in/gC6nvf3T

  • View profile for Ulrik Stig Hansen

    President & Co-Founder of Encord

    12,815 followers

    What's a $50 trillion opportunity in technology? Physical AI for industry and robotics, according to Jensen Huang.

    The last decade in AI was about understanding the digital world (language, images, code); the NEXT DECADE will be about grounding AI in the physical world—mastering space, time, objects, and actions. We’re already seeing the landscape accelerate dramatically with pioneering foundation models from NVIDIA (Cosmos and GR00T N1), Physical Intelligence (Pi0), Archetype AI (Newton), Covariant (RFM-1), Google DeepMind (Gemini Robotics), and Agibot (GO-1).

    Unlike LLMs, which are confined to digital spaces, physical AI faces formidable challenges:
    - Physical environments are noisy, unpredictable, and full of edge cases
    - Data collection for robotics is exponentially more expensive and time-consuming
    - Real-world robots must navigate physics constraints that don't exist in digital realms
    - Systems must adapt in real time to unexpected situations without causing harm

    These challenges are precisely why we need to evolve beyond static models toward "adaptive intelligence"—systems capable of learning from feedback, adjusting to errors, and iterating in real time as they encounter the unpredictability of physical environments. Building such adaptive systems requires platforms that facilitate the intersection of multimodal understanding, simulation, reinforcement learning, and continuous data alignment—creating a virtuous cycle where real-world experience constantly improves AI performance.

    At Encord, we're building precisely this infrastructure to close the loop between physical-world interactions and AI systems, creating the feedback mechanisms essential for physical AI to thrive. What innovative approaches are you most excited about seeing emerge in this space?

  • View profile for Mark Johnson

    We partner with your team to build tech that delivers on a specific business problem you are trying to solve

    30,730 followers

    Hello 👋 from the Automate Show in downtown Detroit. I’m excited to share with you what I’m learning.

    Robotics is undergoing a fundamental transformation, and NVIDIA is at the center of it all. I've been watching how leading manufacturers are deploying NVIDIA's Isaac platform, and the results are staggering: Universal Robots' UR15 cobot now generates motion faster with AI. Vention is democratizing machine motion for businesses. KUKA has integrated AI directly into its controllers.

    But what's truly revolutionary is the approach:

    1. Start with a digital twin
    In simulation, companies can deploy thousands of virtual robots to run experiments safely and efficiently. The majority of robotics innovation is happening in simulation right now, allowing for both single- and multi-robot training before real-world deployment.

    2. Implement "outside-in" perception
    Robots perceive the world "inside-out" through their own onboard sensors. The game-changer is adding "outside-in" perception—like an air traffic control system for robots. This dual approach is solving industrial automation's biggest challenges.

    3. Leverage generative AI
    Factory operators can now use LLMs to manage operations with simple prompts: "Show me if there was a spill" or "Is the operator following the correct assembly steps?" Pegatron is already implementing this with just a single camera.

    They're creating an ecosystem where partners can integrate cutting-edge AI into existing systems, helping traditional manufacturers scale up through unprecedented ease of use.

    The most powerful insight? Just as ChatGPT reached 100 million users in about two months, robotics adoption is about to experience its own inflection point. The barriers to entry are falling. The technology is becoming accessible even for mid-sized and smaller companies. And the future is being built in simulation before transforming our physical world.

    Michigan Software Labs Forbes Technology Council Fast Company Executive Board
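The "digital twin first" workflow in step 1 can be sketched with a toy parameter sweep: evaluate robot configurations across thousands of virtual trials, then promote only the winner to hardware. The slip and timing models below are invented purely for illustration; a real pipeline would use a physics-accurate twin (e.g. in Isaac Sim).

```python
# Toy "digital twin" parameter sweep: test robot configurations on
# thousands of virtual trials before any real hardware is at risk.
# The slip/timing model is invented for illustration only.
import random

random.seed(1)  # reproducible demo

def virtual_trial(speed):
    """One simulated pick: faster cycles save time but slip more often."""
    slip_prob = 0.02 + 0.15 * speed        # invented failure model
    cycle_time = 4.0 / (0.5 + speed)       # invented seconds per pick
    failed = random.random() < slip_prob
    return cycle_time, failed

def evaluate(speed, trials=5000):
    """Monte Carlo estimate of cycle time and failure rate for a setting."""
    total_time, failures = 0.0, 0
    for _ in range(trials):
        t, failed = virtual_trial(speed)
        total_time += t
        failures += failed
    return total_time / trials, failures / trials

# Sweep candidate speeds entirely in simulation.
candidates = (0.2, 0.5, 0.8)
results = {s: evaluate(s) for s in candidates}

# Pick the setting with the best cycle time, penalizing failures heavily.
best = min(candidates, key=lambda s: results[s][0] * (1 + 10 * results[s][1]))
```

Running 15,000 virtual picks takes milliseconds and risks nothing; the same sweep on a physical cell would take days and break parts, which is the economic argument behind simulation-first deployment.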

  • Check out this craziness led by the brilliant Dr. Jim Fan at NVIDIA: they taught robots how to move like LeBron, Ronaldo, and Kobe using reinforcement learning. Here's what they solved, in non-tech terms:

    First: What's reinforcement learning, exactly?
    Reinforcement learning (RL) is AI tech—in this case, tech that lets robots learn through trial and error, similar to human learning. Robots attempt movements, get feedback on their success, and adjust their behavior to maximize the right outcomes. The process keeps going until the robot achieves the right movement patterns.

    What's NVIDIA's amazing achievement?
    The robotics team taught robots to replicate movements of Ronaldo, LeBron James, and Kobe Bryant. They're so fluid and natural that the robotics folks actually SLOW DOWN the videos so you can see how good the movements are.

    What's the big technical challenge?
    Teaching robots to move naturally in the physical world has traditionally been a huge challenge for two main reasons:
    1. Real-world robot training is both expensive and potentially risky
    2. Computer simulations struggle to perfectly replicate real-world physics

    How did they solve it?
    NVIDIA developed ASAP (Aligning Simulation and Real-World Physics), a sophisticated three-step system:
    1. Simulation training: the team created a virtual environment where robots could practice movements thousands of times, learning to mimic specific athletic movements
    2. Real-world testing: these simulated movements are then attempted by physical robots, with the results recorded
    3. AI-powered adaptation: the system learns from any discrepancies between simulation and reality, continuously improving the accuracy of virtual training

    What does this all mean?
    This is a huge advancement in robotics because they're successfully combining:
    - Traditional physics-based simulations refined over decades
    - Modern AI capabilities that can adapt to real-world complexities

    This is tech that bridges the gap between simulation and reality. It opens new possibilities for robotic applications that require sophisticated, human-like movement patterns.

    Follow Jim Fan. Follow him here and on X. Follow him wherever you can find him. He's a treasure.
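The three-step loop described above can be sketched in a few lines: run matching rollouts in an idealized simulator and a mismatched "real" world, fit a correction from the discrepancies, then deploy with the correction applied. The dynamics and the scalar-gain fit below are invented stand-ins for what is, in the actual work, a learned delta-action model.

```python
# Hedged sketch of the sim-to-real correction idea (NOT NVIDIA's code):
# learn a residual from sim/real discrepancies, then compensate at deploy time.

def sim_step(pos, action):
    """Idealized simulator dynamics: actions take full effect."""
    return pos + action

def real_step(pos, action):
    """Invented real-world mismatch: friction eats 20% of every action."""
    return pos + 0.8 * action

# 1) Simulation training produced a plan; replay it in both worlds and record.
actions = [0.5, 1.0, -0.3, 0.7]
sim_traj, real_traj = [0.0], [0.0]
for a in actions:
    sim_traj.append(sim_step(sim_traj[-1], a))
    real_traj.append(real_step(real_traj[-1], a))

# 2) Adaptation: fit a scalar gain to the discrepancies (least squares).
num = sum(a * (s1 - s0) for a, s0, s1 in zip(actions, real_traj, real_traj[1:]))
den = sum(a * a for a in actions)
gain = num / den  # recovers the hidden 0.8 attenuation

# 3) Deploy with corrected commands so reality tracks the simulated plan.
pos = 0.0
for a in actions:
    pos = real_step(pos, a / gain)
# pos now matches the simulator's intended final position
```

The design point is that the simulator never has to be perfect; it only has to be close enough that the residual between sim and real is learnable from recorded rollouts.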

  • View profile for Uche Okoroha, JD

    The Most Advanced Tax Credit Platform 👉 𝗧𝗮𝘅𝗥𝗼𝗯𝗼𝘁.𝗰𝗼𝗺 | CEO & Co-Founder | Leveraging AI to Deliver Tax Incentives | R&D Tax Credit | Employee Retention Credit (ERTC) | Dog dad 🐶

    9,761 followers

    The AI race just took a sharp turn—straight into the world of robots.

    Google, OpenAI, Meta, and Amazon aren’t just building smarter chatbots anymore—they're quietly (and not-so-quietly) making massive moves in robotics. Not just software. Hardware too. We're talking robotic arms that can "think," household bots that learn your routines, and warehouse automation that adapts in real time—all powered by next-gen AI models.

    This isn’t theoretical.
    ➡️ Amazon is already deploying AI-powered robots across its fulfillment centers
    ➡️ Google DeepMind is training robots with reinforcement learning that mimics human behavior
    ➡️ Meta is exploring AI agents that understand and interact with physical environments
    ➡️ OpenAI, backed by Microsoft, is investing in robotics startups aiming to merge GPT-like intelligence with real-world machines

    The vision? Smart machines that see, learn, adapt, and execute—in homes, factories, and even the streets.

    Having worked closely with AI startups, I’ve seen firsthand how quickly the line between digital and physical intelligence is blurring. What felt like sci-fi just two years ago is now something founders are actively pitching—and building. This shift could be as transformational as the rise of the smartphone. Maybe even more.

    Curious to hear—where do you think robotics will impact daily life first? Home, healthcare, manufacturing... or somewhere else? Drop your thoughts in the comments. 👇

  • View profile for Gajen Kandiah

    AI-First CEO | Scaling Global Tech | Ex-President & COO, Hitachi Digital

    20,770 followers

    The innovation around #AI coming out of Massachusetts Institute of Technology's Department of Electrical Engineering and Computer Science continues to fascinate. This latest story by @Jennifer Chu explains their work to connect robot motion data with the “common sense knowledge” of large language models (#LLMs) to enable self-correction and improved task performance.

    The development, which enables robots to "physically adjust to disruptions within a subtask so that the robot can move on without having to go back and start a task from scratch...," could have far-reaching impact across a range of industries. Again, fascinating. https://lnkd.in/e96_eh-y

    Hitachi Digital Hitachi Digital Services Frank Antonysamy

  • View profile for Robert Little

    Chief of Robotics Strategy | MSME

    36,998 followers

    The day of programming your robot may be coming to an end. It could be a year away, or it might take a decade, but the shift is undeniable.

    As The New Yorker article highlights, a future generation of robots will no longer rely on explicit programming for each task. Instead, they’ll learn like humans—through observation, interaction, and experience. This transformation is being driven by advancements in AI and sensory feedback, particularly from vision and force. Companies like Intrinsic, Physical Intelligence, and Pittsburgh’s Skild AI are leading the charge. By leveraging AI models that integrate sensory data, these innovators are creating adaptable robots capable of taking on complex, diverse tasks without human intervention.

    We seem so close yet so far away from success: “Speculating on the future of A.I.-powered robots is like trying to imagine the Industrial Revolution from the perspective of a nineteenth-century hatmaker,” writes author James Somers.

    ATI Industrial Automation supports this transition through advanced robotic force sensors. Additionally, Celera Motion, A Novanta Company, empowers robots with powerful, compact servo drives that improve performance and flexibility.

    New Yorker article: https://lnkd.in/eRRuKwPT

  • View profile for Ruth Morales Zimmerman

    Investor | VC | Advisor | TEDx-Speaker | 30k+ followers

    31,922 followers

    🌟 While AI's impact on software is widely recognized, robotics is making strides just as impressive—often under the radar. The next big investment opportunities will be at the crossroads of robotics and AI. As a VC, are you prepared to explore this dynamic fusion?

    🔍 For instance, Google DeepMind has recently unveiled an AI-powered robot that achieves amateur human-level performance in table tennis. 🤖🏓

    As a VC investor, I'm closely watching this convergence because the potential is massive. Here are a few key areas where the fusion of AI and robotics can create significant value:

    Healthcare: Imagine robots with advanced AI assisting in surgeries, rehabilitation, and patient care, enhancing precision and personalization in medical services.
    Manufacturing: AI-driven robots can revolutionize production lines, improving automation, quality control, and flexibility in manufacturing processes.
    Logistics: In warehousing and supply chains, AI-enabled robots can optimize sorting, packing, and inventory management, leading to greater efficiency and cost savings.
    Consumer Products: From smart home devices to personal assistants, AI-powered robots can make daily life more convenient and tailored to individual needs.
    Agriculture: AI and robotics can transform farming with automated planting, harvesting, and crop monitoring, supporting more sustainable and efficient agricultural practices.

    If you're working on a startup at the forefront of this exciting intersection, I’d love to hear about the unique challenges you’re addressing and how your technology integrates AI and robotics to solve them. The opportunities in this space are immense, and I’m eager to explore potential collaborations and investments that could drive the next wave of innovation.

    #AI #Robotics #Innovation #Investment #FutureOfTech #VC #ArtificialIntelligence
