Large Language Models (LLMs) are being optimized with Chain of Thought (CoT) reasoning; however, it’s crucial to identify flawed reasoning pathways. Together with The University of Texas at Austin, we’re proposing SEAL (Steerable rEAsoning caLibration), a training-free method for calibrating reasoning paths. Read the blog to dive deeper into the research. https://intel.ly/45u0rtb #AIResearch #LLM #TechnicalDeepDive
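The post describes SEAL as a training-free calibration of reasoning paths. As a rough illustration of the general idea only (a hypothetical numpy sketch, not Intel's actual implementation; all names here are invented), one can contrast hidden states from productive "execution" reasoning steps with those from redundant "reflection" steps to obtain a steering vector that is applied at decode time:

```python
import numpy as np

def steering_vector(exec_states, reflect_states):
    # Contrast the mean hidden state of "execution" reasoning steps
    # against the mean hidden state of "reflection/transition" steps
    # to get a steering direction in the model's latent space.
    return exec_states.mean(axis=0) - reflect_states.mean(axis=0)

def steer(hidden, v, alpha=1.0):
    # Nudge a decoding-time hidden state toward execution-style
    # reasoning; alpha controls the calibration strength.
    return hidden + alpha * v
```

Because the vector is computed once from existing activations and simply added during decoding, no gradient updates are needed, which is what makes an approach like this training-free.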
About us
See what it means to be at the vanguard of computer science research, a hub for academic and industry collaboration, and a leader in visionary thinking about technology, the sciences, society, and culture. Intel Labs, the research arm of Intel Corporation, is inventing tomorrow's technology to make our lives easier, more enjoyable, and more productive.
- Website: http://intel.com/labs
- Industry: Research Services
- Company size: 10,001+ employees
- Headquarters: Hillsboro, OR
- Type: Public Company
- Specialties: Research, Microprocessor, Computer Architecture, Circuits, Comms, Systems, Platforms, Energy, Nanotechnology, Ethnography, Security, Quantum Computing, Neuromorphic Computing, Artificial Intelligence, and Computer Science Research
Locations
- Primary: 2111 NE 25th Ave, Hillsboro, OR 97124, US
Updates
-
The development and optimization of large language models (LLMs) is happening at such a fast pace that it can be tricky to keep up! In case you missed it, here’s some of the exciting work our researchers have been up to lately, in partnership with our friends at Hugging Face.
🚀 Advanced Universal Assisted Generation Techniques: a new method for Universal Assisted Generation (UAG) that lets any small LM serve as the assistant, delivering greater speed-ups.
🤯 Faster Decoding with Any Assistant Model: extends assisted generation to work with a small language model from any model family.
🏃 Faster Assisted Generation with Dynamic Speculation: a novel method that accelerates text generation by up to 2.7x.
⏩ Speed Up AI with Speculative Decoding: accelerate any LLM by using smaller models to optimize inference speed and cost across platforms.
Which area of development do you think will have the biggest impact on LLMs moving forward? Read more about our latest research: https://intel.ly/3J49zgu #LargeLanguageModels #AIResearch #Developer
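All of the techniques above build on the same core idea: a small draft model proposes several tokens cheaply, and the large target model verifies them, so each expensive verification step can yield multiple accepted tokens. A minimal greedy-only toy sketch (illustrative function names, not the Hugging Face or Intel implementation, which verifies all drafts in a single batched forward pass):

```python
def speculative_decode(target_next, draft_next, prompt, k=4, max_new=8):
    """Toy greedy speculative decoding.

    `draft_next` / `target_next` map a token sequence to the next token.
    The draft model proposes k tokens; the target checks each one and
    keeps the agreed prefix, plus one corrected token on a mismatch.
    """
    out = list(prompt)
    while len(out) - len(prompt) < max_new:
        # Draft phase: propose k tokens with the cheap model.
        draft = []
        for _ in range(k):
            draft.append(draft_next(out + draft))
        # Verify phase: accept while target agrees; on disagreement,
        # take the target's token and end the round.
        accepted = []
        for tok in draft:
            t = target_next(out + accepted)
            accepted.append(t)
            if t != tok:
                break
        out += accepted
    return out[:len(prompt) + max_new]
```

When draft and target agree often, each round emits up to k tokens per "expensive" verification pass; when the draft is poor, the loop degrades gracefully to one correct target token per round, so output quality is unchanged either way.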
-
Today, we’re proud to announce our 2024 Outstanding Researcher Award winners! Our list of honorees includes professionals in various research fields, from energy-efficient magneto-electric spin-orbit devices to AI-enabled high-throughput electron tomography. Read the blog to learn about each award recipient and see the incredible breakthroughs being developed at Intel Labs. https://intel.ly/4lSvsxN #Research #Technology #Awards
-
The development of computer-use AI agents is unlocking new functionality for users, but it’s also presenting a new attack surface for malicious actors. This new style of AI assistant, such as the open-source UI-TARS model, monitors your screen and can control inputs to perform a requested task. At Intel Vision 2025, our researchers demonstrated a proof-of-concept that shows how these models can be forced to execute arbitrary commands by displaying an adversarial image on the screen. We’ve open-sourced this research to allow the community to validate the issue and assist in developing countermeasures. Learn more: https://intel.ly/44RKXjG #AIResearch #CyberSecurity #ComputerUseAI #AgenticAI
-
Researchers from Intel Labs and the Weizmann Institute of Science introduced a major advance in speculative decoding at the International Conference on Machine Learning (ICML). This new technique addresses a core inefficiency of GenAI by using speculative decoding to predict longer spans of text per step, reducing resource consumption and computation cycles per output and delivering up to 2.8x faster inference without loss of output quality. Read the newsroom article to learn more about this incredible breakthrough and its real-world benefits. https://intel.ly/45m0xUI #AIResearch #LLM #GenerativeAI
-
Whether mobile or stationary, the top priority for robots is human safety. That’s why we’ve developed novel 3D perception algorithms that expand robots’ ability to navigate 3D spaces while maintaining safe interactions with the people around them. They're available for free with the Intel Robotics SDK and packaged with RealSense cameras. https://intel.ly/4lP3WS8 #Robots #AIResearch #Industrial
-
This week, Senior Fellow Dr. Pradeep Dubey was awarded the Distinguished Alumni Award by the Birla Institute of Technology, Mesra. Dr. Dubey’s contributions to engineering, technology, and entrepreneurship have had a significant impact on his community and light the path for university students who follow in his footsteps. Congratulations, Dr. Pradeep K. Dubey! #Engineering #Technology #Education
-
We’re presenting our latest research at the International Conference on Machine Learning (ICML) 2025: six works at the main conference (including two spotlight papers and an oral presentation), and two papers at related workshops. Read the blog to see the complete list of works we’re bringing to ICML 2025. https://intel.ly/44N2Tul #AIResearch #MachineLearning #Events
-
Congratulations are in order for Souvik Kundu, who has won the DAC Under-40 Innovators Award, the INNS Aharon Katzir Young Investigator Award, and the 2025 CPAL Rising Star Award for his work on foundation model efficiency. Souvik Kundu’s work on AI-assisted automation brings large-model intelligence to small devices, enabling models to run at the edge on both personal computers and mobile devices and allowing users to scale quickly at lower cost. Read more: https://intel.ly/3TxIuEv #InnovatorsAward #Research #Innovation
-
While large, pre-trained models have achieved outstanding results in general language sequence modeling, an alternative architecture, Selective Structured State Space Models (such as Mamba), has emerged to address inefficiencies in Transformers. In this article, you’ll learn how the novel Mamba-Shedder compression method demonstrates that redundant components can be removed with only a minor impact on model performance. A few other highlights:
👍 Efficient pruning techniques
👍 Accelerated inference
👍 Recovery tuning
Read more here: https://intel.ly/3U9jraZ #ArtificialIntelligence #DeepLearning #Developer
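The pruning idea behind a method like Mamba-Shedder can be illustrated with a toy sketch (hypothetical names and a greatly simplified loop; the real method operates on Mamba blocks and SSM components and follows removal with recovery tuning): score each component by how much quality drops when it is removed, and shed only the ones whose removal is nearly free.

```python
def shed_components(components, evaluate, tol=0.01):
    """Toy one-shot structured pruning.

    `evaluate(kept)` returns a quality score (higher is better) for a
    model containing only `kept` components. Any component whose removal
    costs less than `tol` quality is shed; the rest are retained.
    """
    kept = list(components)
    base = evaluate(kept)
    for c in list(kept):
        trial = [x for x in kept if x != c]
        if base - evaluate(trial) < tol:
            kept = trial
            base = evaluate(kept)  # re-baseline after each removal
    return kept
```

The key observation the article highlights is that for state-space models, a surprising number of components fall under this tolerance, which is what enables accelerated inference with only a minor accuracy impact.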