Wednesday, Oct. 29, 8:00 p.m. ET
From automated arms to four-legged robots and helpful humanoids, robots roamed the expo hall at NVIDIA GTC Washington, D.C.
Wednesday, Oct. 29, 7:30 p.m. ET
At GTC Washington, D.C., chief information officers (CIOs) from across the U.S. spoke in a panel discussion on how AI is being applied across public safety, transportation and operations to improve efficiency.
Moderated by Jumbo Edulbehram, director of smart spaces and global public sector for NVIDIA, the panel included Marcie Kahbody, CIO for Caltrans and agency information officer at the California State Transportation Agency; Jorge Cardenas, CIO of the City of Brownsville, Texas; Nicole Coghlin, CIO of the City of Cary, North Carolina; and Melissa Kraft, CIO of the City of Frisco, Texas.
Interstate 5 through Sacramento is one of California’s most dangerous highways, said Kahbody. Even with the amount of data available, it’s very difficult for current analytics to uncover why — but with new tools, there’s hope for better data insights, she said.
Cardenas pointed out that cities like Brownsville can better understand traffic patterns if they can simulate events like a road closure or an emergency.
Frisco is one of the safest cities in the United States, said Kraft, and that’s thanks in part to a program called Situation Awareness for Emergency Response, which uses cameras and advanced technology integrated into the city’s 9-1-1 system.
Edulbehram emphasized that the efforts underway to adopt and use AI technologies in towns and cities across the country are leading to profound changes in the municipalities themselves.
“It’s super interesting to learn how AI is really being applied to the public realm,” Edulbehram said. And “don’t be fooled” by the sizes of some of these cities, he added, as they’re adopting new technology to offer services that are cost-effective, efficient and data-driven, aiming to “do more with less.”
Wednesday, Oct. 29, 7:00 p.m. ET
Amid massive server racks, life-size humanoid robots and widescreen displays, an array of diminutive AI supercomputers stole the show at GTC Washington, D.C.
NVIDIA DGX Spark — built for developers, researchers and creators — delivers supercomputer-class performance in a compact desktop form factor. It features an NVIDIA GB10 Grace Blackwell Superchip, delivering up to 1 petaflop of AI performance at FP4 precision — plus 128GB of unified CPU-GPU memory so developers can prototype, fine-tune and run inference locally.
At the show, attendees flocked to booths showing DGX Spark systems in action, running LLMs and accelerated Python workflows.
Winners of a developer social media contest received NVIDIA DGX Spark systems.
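To give a rough sense of what 128GB of unified memory makes possible for local prototyping, here is a back-of-envelope sizing sketch in Python (the parameter counts and precisions below are illustrative assumptions, not measured figures):

```python
# Rough sizing: can a given model's weights fit in DGX Spark's 128 GB of
# unified CPU-GPU memory? Weights only -- this ignores KV cache and
# activations, so real headroom is smaller. Illustrative, not a benchmark.

def model_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB for a model at a given precision."""
    bytes_per_weight = bits_per_weight / 8
    return params_billions * 1e9 * bytes_per_weight / 1e9

UNIFIED_MEMORY_GB = 128  # DGX Spark's unified CPU-GPU memory

for params in (8, 70, 120):
    for bits in (16, 4):  # FP16 vs. 4-bit (e.g., FP4) quantization
        need = model_memory_gb(params, bits)
        verdict = "fits" if need < UNIFIED_MEMORY_GB else "does not fit"
        print(f"{params}B params @ {bits}-bit: ~{need:.0f} GB -> {verdict}")
```

By this estimate, a 70B-parameter model quantized to 4 bits needs roughly 35 GB for weights, comfortably within the unified memory budget, while the same model at FP16 would not fit.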
Wednesday, Oct. 29, 6:30 p.m. ET
From GTC DC, a coalition of researchers, founders and NVIDIA leaders argued that open-source AI is America’s fastest path to innovation, efficiency and trust.
If you want to solve the world’s hard problems, you start in the open.
That was the distilled, non-negotiable takeaway from the GTC DC panel, “Open-Source AI 101: Enabling American Innovation.” A brain trust of developers, researchers and capital experts made the case that transparent collaboration is the new path to national advantage. Forget the closed-off labs of the past — the engine powering today’s breakthroughs is open to everyone.
The panel was an all-star lineup of open-source AI evangelists: NVIDIA’s Ankit Patel and Bryan Catanzaro, AI2/UW’s Noah Smith and OSS Capital founder Joseph Jacks. They cut through the noise with clarity, defining what “open source” means for innovation at speed.
Efficiency Unlocks Innovation
The central theme is simple: open models drastically accelerate innovation. They slash the time and cost it takes a startup, university lab or government agency to build a critical application. It is, as the panelists argued, the most efficient way to scale the sheer volume of AI expertise required to solve hard problems.
Offering the venture capital perspective, Jacks was direct: open models unlock “huge amounts of leverage and efficiency in R&D because they didn’t have to build the pre-training models.”
This democratization of capability means innovation can now come from anywhere. As Patel noted, this current AI boom is built on a proven foundation: “How many of you remember Linux and PHP and MySQL and nginx and all that… that was the foundation, the open-source foundation of the internet era, right?”
Trust and Cooperation: The Killer Features
The discussion in D.C. always returns to trust, and here, open source delivers a crucial feature: transparency. For an industry fostering rapid change, the ability to inspect the AI model’s inner workings is the critical layer of accountability.
AI2/UW’s Smith gave a striking metaphor to describe the futility of closed AI for scientific progress: “Trying to study the state of artificial intelligence scientifically from proprietary models is like trying to do astronomy from pictures in the sky printed in a newspaper, you can’t do it, right?”
NVIDIA’s Catanzaro argued that the core dangers of AI are fundamentally dangers of control, making openness a safer societal bet.
“I think one of the big dangers of closed systems is about control,” Catanzaro said. “And since AI is a technology about ideas, I think that inherently it is safer” when open.
Catanzaro later expanded on this idea, emphasizing the benefit of having diverse perspectives collaborate on one model: “One of the things that we’ve definitely seen over the past few years [in the] development of AI is that openness has really moved the field along.”
The message from the panelists to D.C. policymakers was unambiguous: Open-source AI creates an ecosystem that is more resilient, more innovative and, by being transparent, more trustworthy.
Wednesday, Oct. 29, 6:00 p.m. ET
At GTC Washington, D.C., NVIDIA announced NVIDIA NVQLink, an open system architecture for tightly coupling the extreme performance of GPU computing with quantum processors to build accelerated quantum supercomputers.
In the expo hall, accelerated quantum computing was a centerpiece of the NVIDIA booth.
Wednesday, Oct. 29, 4:30 p.m. ET
Manufacturers are facing significant obstacles in U.S. reshoring, including supply chain issues, lack of skilled labor and the complexity of integrating advanced technology into existing facilities.
In a GTC Washington, D.C., panel today titled “Shaping the Future of U.S. Manufacturing With AI,” leaders in semiconductor, pharmaceutical, robotics and industrial manufacturing discussed how AI and digital twins — powered by the NVIDIA Omniverse and Isaac platforms — are the primary drivers for overcoming such challenges.
Moderated by Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA, the panel featured leaders from Agility Robotics, Caterpillar, Lilly, Siemens and TSMC.
“Physical AI allows us to go into unstructured environments to have flexibility without correspondingly having this vastly increased cost of deployment and uncertainty in the edge cases,” Agility Robotics’ Velagapudi said. “This revolution in physical AI is really going to allow us to push into areas that previously would have never made business sense.”
Agility’s humanoid robot, Digit, automates highly repetitive manufacturing processes so human workers can focus on higher-skill tasks.
Such advanced robotics effectively deploys complicated sensor perception systems “at points in your workflow where you may never have had observability before … allowing us to do things that we would have never imagined before,” Caterpillar’s Hootman said.
The benefits across industries are immense — even potentially life-saving.
For example, Lilly is using AI and digital twins to refine and improve steps in manufacturing pharmaceuticals, Prucka said.
Gratzke said Siemens Industry is using AI and robotics across its facilities, including through an industrial copilot that enables human operators to converse in their native language with machines to help troubleshoot, ramp up operations and optimize processes.
And in semiconductor manufacturing, NVIDIA technologies significantly boost efficiency, lower costs and save time.
“AI, no doubt in my mind, is the most consequential technology of our time, maybe of all time,” TSMC’s Zhang said. “And in the United States, we’re going to turn on lots of advanced silicon right here to support the AI revolution. This is really the best time for the semiconductor industry, yet the best day is still ahead.”
Wednesday, Oct. 29, 4:00 p.m. ET
A lively panel this morning at GTC Washington, D.C., including NVIDIA cofounder Chris Malachowsky, discussed the role of public-private collaboration in expanding AI education and infrastructure across the U.S.
Joining Malachowsky were Aruna Miller, lieutenant governor of Maryland; Sam Liccardo, U.S. representative from California; and Charlie Wuischpard, vice president of Americas enterprise sales at NVIDIA.
The discussion focused on the critical role of universities in AI development for their students and communities. The talk touched on Malachowsky and NVIDIA’s donation of $50 million several years ago to the University of Florida — Malachowsky’s alma mater — for an AI supercomputer that would infuse AI throughout UF’s curriculum and help address a wide range of challenges across the state.
Liccardo said that the federal government should be giving state tax credits to companies that want to invest in state universities, specifically when it comes to upskilling the next generation of workers and reskilling the current generation.
Miller said that Maryland is home to 57 colleges and universities and is focused on aerospace, biotech, cybersecurity and AI. The state is pursuing building an AI factory, Miller added, because it believes that innovation can be scaled faster and further when universities, the private sector and the government come together.
Wednesday, Oct. 29, 2:30 p.m. ET
Training generalist robots that can reason and perform diverse tasks in complex environments requires large, physically accurate and diverse datasets that aren’t readily available in the public domain.
Announced today at NVIDIA GTC Washington, D.C., NVIDIA Isaac GR00T-Dreams — a synthetic data generation and neural simulation framework built on NVIDIA Cosmos world foundation models — offers a way to help robots learn beyond the limits of real-world experience. By producing “dreams” — imagined world states and action trajectories — it enables developers to generate data and actions for robots to learn new tasks in varied environments.
Conventional simulators for generating synthetic data require experts to painstakingly build virtual environments. GR00T-Dreams changes that by allowing developers to “dream” new training scenarios from just a single image and simple natural language instructions.
GR00T-Dreams includes two main modalities:
Once scenarios are generated, GR00T-Dreams passes them through Cosmos Reason, a reasoning model that filters out flawed or low-quality “bad dreams.” The remaining data forms coherent action trajectories that fuel the post-training of vision language action models, such as the GR00T N model family.
These models integrate visual understanding, natural language comprehension and physical control, allowing robots to interpret instructions and act intelligently in complex environments.
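The data flow described above — generate candidate “dreams,” filter out low-quality ones with a reasoning model, keep the rest for post-training — can be sketched in plain Python. Every function, class and field name below is hypothetical, chosen purely for illustration; none reflect the actual GR00T-Dreams or Cosmos APIs.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of the pipeline described above. All names are
# invented for illustration and do not mirror NVIDIA's real API.

@dataclass
class Dream:
    instruction: str
    frames: list        # imagined world states
    actions: list       # candidate action trajectory
    quality: float      # stand-in for a reasoning-model quality score

def generate_dreams(image: str, instruction: str, n: int) -> list[Dream]:
    """Stub for the world-model step: one image + text -> n candidate dreams."""
    rng = random.Random(42)  # deterministic for reproducibility
    return [
        Dream(instruction,
              frames=[f"{image}-state{t}" for t in range(4)],
              actions=[rng.uniform(-1, 1) for _ in range(4)],
              quality=rng.random())
        for _ in range(n)
    ]

def filter_bad_dreams(dreams: list[Dream], threshold: float) -> list[Dream]:
    """Stub for the reasoning-model filter that discards 'bad dreams'."""
    return [d for d in dreams if d.quality >= threshold]

dreams = generate_dreams("kitchen.png", "place the cup on the shelf", n=8)
training_data = filter_bad_dreams(dreams, threshold=0.5)
print(f"kept {len(training_data)} of {len(dreams)} dreams for post-training")
```

The surviving trajectories would then feed the post-training of vision language action models such as the GR00T N family.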
Wednesday, Oct. 29, 2:00 p.m. ET
At GTC Washington, D.C., AWS announced two updates that make it easier for customers to harness NVIDIA-powered AI acceleration.
NVIDIA Listed in AWS Marketplace for US Intelligence Community
NVIDIA AI Enterprise is now listed in the AWS Marketplace for the U.S. Intelligence Community (ICMP). ICMP is a curated digital catalog from AWS that makes it easy to discover, purchase and deploy software packages and applications from vendors that specialize in supporting government customers.
NVIDIA AI Enterprise is an end-to-end, cloud-native platform that delivers accelerated tools, libraries and microservices for rapid development, deployment and scaling of AI applications. It unlocks full-stack innovation, accelerating time to mission by powering integrated AI workflows and providing NVIDIA enterprise-grade support to U.S. federal government customers via AWS ICMP.
Extending NVIDIA Run:ai Support to Amazon EKS Hybrid Nodes
NVIDIA Run:ai now supports Amazon EKS Hybrid Nodes, making it easier to manage and scale GPU resources across both cloud and on-premises environments through a single platform.
With this integration, organizations can improve efficiency by maximizing GPU usage, simplifying operations and enabling AI workloads to run where they deliver the most value.
Along with support for AWS Outposts racks and Local Zones, NVIDIA Run:ai offers the flexibility and control needed to accelerate AI initiatives across distributed and hybrid infrastructures.
Wednesday, Oct. 29, 11:00 a.m. ET
New Earth-2 tools enable rapid, AI-powered simulation, and the MITRE Corporation is deploying an Earth-2 model to transform sparse observations into comprehensive weather states for risk analysis.
The NVIDIA Earth-2 platform is helping weather agencies, climate tech startups and researchers around the world develop high-resolution, AI-augmented models and forecasts of global weather and climate. New Earth-2 tools and research unveiled at GTC Washington, D.C., focus on fast, accurate simulation of extreme weather events to support better risk analysis and disaster preparedness.
The GTC expo hall showcases a new workflow demo in NVIDIA Omniverse that speeds the generation of realistic extreme weather simulations from hours to seconds, enabling users to simulate storms in any location of interest, unlocking interactive climate informatics.
The workflow uses Earth-2 tools including cBottle — short for Climate in a Bottle — a foundation model by NVIDIA Research that can generate accurate, high-resolution climate states thousands of times faster and with more efficiency than traditional numerical models.
Disaster preparedness agencies, insurance firms, policymakers and weather agencies such as the National Weather Service could harness AI tools like these to quickly study the potential impact of unprecedented weather scenarios.
“Being able to generate realistic and physically consistent weather scenarios would help the National Weather Service rapidly test new operational tools and train forecasters on unseen weather events — as well as test communication of probabilistic information with ensemble models,” said Monica Youngman, chief scientist at the U.S. National Weather Service.
A new paper by the NVIDIA Climate and Weather Simulation Research Lab, published Tuesday in the Journal of Advances in Modeling Earth Systems, addresses the challenge of score-based data assimilation, where an AI model uses a limited set of weather observation data to generate a higher-resolution visualization — filling gaps between data points based on a powerful underlying generative model.
In one example, this allows users to harness data from as few as 50 weather stations in Oklahoma to model winds and precipitation anywhere in the state.
Creating a weather map of the continental U.S. at extremely high resolution using traditional data assimilation techniques takes millions of CPU hours per year running on supercomputers. The researchers’ approach tapped the NVIDIA PhysicsNeMo framework to dramatically speed up processing time to just one hour on a single GPU — democratizing sophisticated atmospheric modeling previously available only to major weather centers.
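As a toy illustration of the idea behind score-based data assimilation — a generative prior filling in a field between sparse observations — the NumPy sketch below samples a smooth 1-D “weather field” from an analytic Gaussian prior via Langevin dynamics while clamping a few observed “station” values each step. The paper’s actual method uses learned diffusion-model scores on real atmospheric data; this is only a crude analogue under stated assumptions.

```python
import numpy as np

# Toy analogue of generative data assimilation: sample a smooth 1-D field
# from a Gaussian prior, hard-conditioning on sparse observations. The real
# system uses a learned score-based model, not an analytic Gaussian prior.

rng = np.random.default_rng(0)
n = 64
xs = np.arange(n, dtype=float)

# Smooth prior covariance (RBF kernel) plus jitter for numerical stability.
cov = np.exp(-((xs[:, None] - xs[None, :]) ** 2) / (2 * 8.0**2)) + 0.1 * np.eye(n)
cov_inv = np.linalg.inv(cov)

def score(x: np.ndarray) -> np.ndarray:
    # Score of the zero-mean Gaussian prior: grad log p(x) = -cov^{-1} x
    return -cov_inv @ x

obs_idx = np.array([5, 20, 40, 60])        # sparse "weather stations"
obs_val = np.array([1.0, -0.5, 0.8, 0.2])  # their observed values

x = rng.standard_normal(n)
step = 0.01
for _ in range(3000):
    # Unadjusted Langevin step toward the prior, plus injected noise...
    x = x + step * score(x) + np.sqrt(2 * step) * rng.standard_normal(n)
    x[obs_idx] = obs_val                   # ...then clamp to observations

print("reconstructed field at stations:", np.round(x[obs_idx], 3))
```

The result is a full field consistent with the smooth prior that passes exactly through the station observations, which is the qualitative behavior the generative assimilation approach scales up to continental grids.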
MITRE, a not-for-profit corporation that manages various federally funded research and development centers for the U.S. government, is using the model to improve the starting conditions for weather forecasts. This involves taking scattered data from weather stations and creating complete atmospheric pictures that help forecast models make better predictions.
“Score-based data assimilation marks a step-change for weather prediction: a new class of generative neural networks that turn limited observations into high-fidelity atmospheric states at scale,” said Alex Philp, senior principal for strategic outreach at MITRE. “We’re seeing strong interest across public and private sectors because better state initialization benefits everything downstream — numerical, hybrid and AI weather models alike. When the starting point is closer to reality, every forecast gets better.”
Get started with Earth-2.
See notice regarding software product information.
Wednesday, Oct. 29, 10:00 a.m. ET
The NVIDIA Blackwell architecture marks the next evolution in AI infrastructure, driving a new era of reindustrialization and innovation. Engineered across America — from silicon fabrication in Arizona and Indiana to assembly in Texas and California — NVIDIA Blackwell exemplifies large-scale precision engineering.
With 130 trillion transistors and 1.2 million components, NVIDIA Blackwell stands as both a technological and industrial breakthrough — made in America, made for the world.
Tuesday, Oct. 28, 2:00 p.m. ET
In his keynote address, NVIDIA founder and CEO Jensen Huang laid out the blueprint for America’s AI century. From massive GPU deployments and quantum breakthroughs to secure government AI factories, robotics and autonomous mobility, each announcement builds America’s AI backbone:
Learn more by reading the NVIDIA GTC Washington, D.C., blogs and press releases and watching the replay.
The lights dim, the crowd settles and the energy in the room is unmistakable. Huang steps onto the stage for his keynote address at GTC Washington, D.C., at the Walter E. Washington Convention Center in the nation’s capital.
The presentation begins with a video, a tribute to American innovation and American innovators, past, present and future. This isn’t just another tech talk — it’s a declaration, a blueprint for U.S. leadership in AI infrastructure and innovation.
“Washington, D.C., welcome to GTC,” Huang declared to cheers from the crowd. “It’s hard not to be sentimental and proud of America, I’ll tell you that.”
Huang opened by thanking NVIDIA’s lineup of partners. “We couldn’t do what we do without NVIDIA’s ecosystem of partners,” Huang said, nodding to GTC’s reputation as “the Super Bowl of AI.”
For decades, CPU performance scaled like clockwork — then Dennard scaling broke, and Moore’s law could not continue forever. NVIDIA’s answer: parallelism, GPUs and accelerated computing.
“We invented this computing model because we wanted to solve problems that general-purpose computers could not,” Huang said. “We observed that if we could add a processor that takes advantage of more and more transistors, apply parallel computing, add that to a sequential processing CPU, we could extend the capabilities of computing well beyond — and that moment has really come.”
Accelerated computing begins with NVIDIA CUDA‑X libraries across the stack — from cuDNN and TensorRT‑LLM for deep learning, to RAPIDS (cuDF/cuML) for data science, cuOpt for decision optimization, cuLitho for computational lithography, CUDA‑Q and cuQuantum for quantum and hybrid quantum‑classical computing, and more.
“This really is the treasure of our company,” Huang said, before teeing up a video showing what CUDA-X can do.
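To give a flavor of the data science layer of that stack: RAPIDS cuDF mirrors the pandas API, so familiar dataframe code can run GPU-accelerated with minimal changes (for example, via the cudf.pandas accelerator). The snippet below uses plain pandas so it runs anywhere; on a system with cuDF installed, the same operations can execute on the GPU.

```python
import pandas as pd

# A small dataframe workload of the kind RAPIDS cuDF accelerates.
# Written in pandas here so it runs on any CPU; cuDF exposes the same
# API, so this code is representative of GPU-accelerated usage.

df = pd.DataFrame({
    "sensor": ["a", "a", "b", "b", "b"],
    "reading": [1.0, 3.0, 2.0, 4.0, 6.0],
})

# Group-by aggregation: mean reading per sensor.
means = df.groupby("sensor")["reading"].mean()
print(means)  # a -> 2.0, b -> 4.0
```

The appeal of the CUDA-X approach is exactly this drop-in quality: the same high-level code, with the heavy lifting moved to accelerated kernels underneath.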
Telecommunications is the lifeblood of our economy and our national security, Huang said.
Yet wireless technology around the world today is largely “deployed on foreign technology, our fundamental communication technology, built on foreign technology, and that has to stop — and we have an opportunity to do that.”
Time to get “back into the game,” Huang declared.
Huang announced a U.S.‑anchored AI‑native wireless stack for 6G, NVIDIA ARC, built on the NVIDIA Aerial platform and powered by accelerated computing — and Nokia is going to integrate NVIDIA’s technology.
“We’re going to partner with Nokia and they’re going to make NVIDIA ARC their future base stations,” Huang said.
Forty years ago, quantum physicist Richard Feynman imagined a new kind of computer that could simulate nature directly, because it runs on quantum principles.
It is now possible to make one logical qubit, or quantum bit, that’s coherent, stable and error corrected, Huang said. But these qubits are “incredibly fragile,” creating a need for powerful technology to do quantum error correction and infer what the qubit’s state is.
To connect quantum and GPU computing, Huang announced NVIDIA NVQLink — a quantum‑GPU interconnect that enables real‑time CUDA‑Q calls from QPUs with latency as low as about four microseconds.
“Just about every single DOE lab [is] working with our ecosystem of quantum computer companies and these quantum controllers so that we can integrate quantum computing into the future of science,” Huang said as he stood in front of a slide highlighting support from 17 quantum computing companies and several U.S. Department of Energy labs.
America’s national labs are entering a new era of scientific discovery, powered by unprecedented investments in AI infrastructure, Huang said, announcing that the DOE is partnering with NVIDIA to build seven new supercomputers to enhance the future of science.
NVIDIA is working with the U.S. Department of Energy and Oracle to build the DOE’s largest AI supercomputer at Argonne National Laboratory.
Key news:
“AI is not a tool. AI is work,” Huang declared. “For the first time, technology is now able to do work and help us be more productive. This shift — from tools to AI workers — is creating entirely new forms of computing and, with them, new jobs and industries.”
AI factories aren’t just data centers; they’re purpose-built platforms for generating, moving and serving tokens at massive scale.
“And then, because AI is such a large problem, we scale it up,” Huang explained. “We created a whole computer … for the first time, that has scaled up into an entire rack. That’s one computer, one GPU. And then we scale it out by inventing a new AI Ethernet technology,” he added, referring to NVIDIA Spectrum-X.
As these factories rise, they’re powering new careers in AI engineering, robotics, quantum science and digital operations — roles that didn’t exist just a few years ago.
“This virtuous cycle is now spinning,” Huang said. “What we need to do is drive the cost down tremendously, so that one, the user experience is better … and two, we keep this virtuous cycle going by driving its cost down.”
The solution: “extreme codesign,” Huang said, designing new fundamental computer architecture, including new chips, systems, software, models and applications, all at the same time.
Emphasizing the physicality of these systems, Huang brought some of the gear on stage. In addition, he revealed the new NVIDIA BlueField-4 DPU, the processor powering the operating system of AI factories with a 64-core NVIDIA Grace CPU and NVIDIA ConnectX-9, delivering about 6x the compute of BlueField-3.
Huang also introduced Omniverse DSX, a comprehensive blueprint for designing and operating 100 megawatt to multi‑gigawatt AI factories — validated at the AI Factory Research Center in Manassas, Virginia.
“AI infrastructure is an ecosystem-scale challenge requiring hundreds of companies to collaborate. The NVIDIA Omniverse DSX is a blueprint for building and operating gigascale AI factories,” Huang said. “With DSX, NVIDIA partners around the world can build and bring up AI infrastructure faster than ever.”
Open source and open models drive innovation for startups, enterprises and researchers worldwide, Huang explained. NVIDIA contributes across model families and data — hundreds of open models and datasets this year alone.
NVIDIA model families include Nemotron (for agentic and reasoning AI), Cosmos (for synthetic data generation and physical AI), Isaac GR00T (for robotics skills and generalization) and Clara (for biomedical workflows) to power agentic AI, robotics and scientific breakthroughs.
“We are dedicated to this, and the reason for that is because science needs it, researchers need it, startups need it and companies need it,” Huang said, receiving wide applause from the crowd.
Huang then went on to highlight the work of AI startups built on NVIDIA, as well as work from Google, Microsoft Azure, Oracle, ServiceNow, SAP, Synopsys, Cadence, CrowdStrike and Palantir.
Huang announced NVIDIA is partnering with CrowdStrike to make cybersecurity “speed of light,” enabling enterprises to deploy specialized security agents from cloud to edge using NVIDIA Nemotron‑based models and NVIDIA NeMo tooling.
He also announced that NVIDIA and Palantir are integrating accelerated computing, CUDA‑X libraries and Nemotron open models into Palantir Ontology to enable “data processing at a much, much larger scale and with more speed.”
Physical AI is powering America’s reindustrialization — transforming factories, logistics and infrastructure with robotics and intelligent systems. In a video, Huang highlighted how partners are putting it to work.
“The factory is essentially a robot that’s orchestrating robots to build things that are robotic,” he said. “The amount of software necessary to do this is so intense that unless you could do it inside a digital twin, the hopes of getting this to work is nearly impossible.”
From the stage, Huang called out the work of Foxconn, which is using Omniverse tools to design and validate a new Houston facility for manufacturing NVIDIA AI infrastructure systems; Caterpillar, which is also incorporating digital twins for manufacturing; Figure AI, founded three and a half years ago by Brett Adcock to build humanoid robots for the home and workforce and now worth almost $4 billion; Johnson & Johnson; and Disney, which is using Omniverse to train the “cutest robot ever.”
Huang announced that Uber and NVIDIA are collaborating to build the backbone for autonomous mobility — targeting about 100,000 autonomous vehicles, with scaling starting in 2027. NVIDIA DRIVE AGX Hyperion 10 is the level‑4 reference architecture: safe, scalable, software‑defined — unifying human and robot drivers on one network.
“In the future, you’ll be able to hail up one of these cars,” Huang said. “The ecosystem is going to be incredibly rich, and we’ll have Hyperion or robotaxi cars all over the world.”
Key facts:
“The Age of AI has begun. Blackwell is its engine. Made in America, made for the world,” Huang concluded. “Thank you for allowing us to bring GTC to Washington, DC. We’re going to do it hopefully every year, and thank you all for your service and making America great again.”
Learn more by reading the NVIDIA GTC Washington, D.C., blogs and press releases.
Tuesday, Oct. 28, 4:30 p.m. ET
Physics simulation is essential for teaching robots how to move and interact with the world — but the complexity of modern robots pushes current simulation technology to its limits.
Newton is an open-source, extensible physics engine for advancing robot learning, developed by NVIDIA, Google DeepMind and Disney Research and managed by the Linux Foundation. Built on NVIDIA Warp and OpenUSD, and compatible with robot learning frameworks such as MuJoCo Playground and NVIDIA Isaac Lab, Newton enables robots to learn how to handle complex tasks with greater precision.
Showcased at NVIDIA GTC Washington, D.C., this week, Newton now comprises multiple solvers that teach robots how to interact with different types of materials. The cloth simulator helps robots learn to handle fabrics and flexible materials for tasks like folding laundry. The granular materials solver helps robots practice working with sand, soil and other loose materials in real time, preparing them for outdoor tasks on uneven or shifting ground.
The rigid body solver handles the precise movements robots need for working with solid objects and their own mechanical joints, enabling smooth coordination between different parts of a robot’s body. This allows developers to simulate complex scenarios — like picking up a solid cup filled with liquid while standing on sand — within the same virtual environment.
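For a sense of what a rigid-body solver does at its core, the sketch below integrates a single body falling under gravity with a simple ground contact, using semi-implicit Euler, the workhorse scheme in many physics engines. This is a toy illustration only, not Newton’s actual API or solver machinery.

```python
# Toy rigid-body integration: semi-implicit (symplectic) Euler for one body
# under gravity, with a crude ground contact. Illustrates the kind of loop
# a rigid-body solver runs; not Newton's real implementation.

GRAVITY = -9.81   # m/s^2
DT = 0.001        # simulation timestep, s
RESTITUTION = 0.5 # fraction of speed kept after a bounce

def step(pos: float, vel: float) -> tuple[float, float]:
    """Advance one timestep with semi-implicit Euler and ground contact."""
    vel += GRAVITY * DT            # integrate velocity first...
    pos += vel * DT                # ...then position with the new velocity
    if pos < 0.0:                  # ground contact: clamp and damp
        pos, vel = 0.0, -RESTITUTION * vel
    return pos, vel

pos, vel = 1.0, 0.0                # drop a body from 1 m at rest
for _ in range(2000):              # simulate 2 s of motion
    pos, vel = step(pos, vel)
print(f"height after 2 s: {pos:.3f} m")
```

Updating velocity before position (rather than the reverse) keeps the integrator stable over long runs, which is why engines favor it over naive explicit Euler.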
Tuesday, Oct. 28, 3:30 p.m. ET
The NVIDIA GH200 Grace Hopper Superchip has aced key financial services inference benchmarks on real-time market data.
STAC — an organization that creates industry-standard benchmarks for technology firms and financial institutions — recently performed a STAC-ML Markets (Inference) audit on the NVIDIA GH200 Grace Hopper Superchip in a Supermicro server to test latency, throughput, energy, space and algorithm performance across varying model architectures.
The GH200 Superchip — part of NVIDIA’s full-stack inference platform being showcased this week at NVIDIA GTC Washington, D.C. — achieved a new world record, surpassing STAC-ML’s previous best-reported result on field-programmable gate arrays, demonstrating:
STAC also audited the STAC-AI LLM Inference benchmark — an industry-standard test for financial institutions running high-throughput data analytics and large-scale LLM inference workloads — on the GH200 NVL2 and HPE servers with eight NVIDIA H200 NVL GPUs.
The audit ran workloads using the open-source Llama-3.1-8B and Llama-3.1-70B models on EDGAR filings, achieving record results:
Read the full reports for the results:
Tuesday, Oct. 28, 1:30 p.m. ET
NVIDIA is rolling out new open models as part of NVIDIA Clara, a family of models, tools and recipes built to accelerate scientific discovery, analyze medical images and provide a foundational understanding of human health, biology and chemistry.
Clara powers the entire early drug discovery pipeline — from predicting protein structures to designing molecules that can be synthesized in a lab.
Clara includes CodonFM, a model that learns the rules of RNA to reveal how changes in its code can improve therapeutic design.
Codeveloped with Arc Institute, the CodonFM model will be used by Therna Biosciences, Greenstone Biosciences, Moonwalk Biosciences and the Stanford RNA Medicine program to refine their RNA datasets to better map medicine design.
NVIDIA will contribute open models like CodonFM to the Chan Zuckerberg Initiative’s virtual cells platform, accelerating open-source collaboration and model evaluation. NVIDIA also codeveloped cz-benchmarks to create community-driven standards for virtual cell models.
Another Clara model is La-Proteina, which creates 3D protein structures atom by atom, at double the length of previous comparable models and at greater speed, enabling the design of better medicines, enzymes and materials.
Clara also includes Reason, a vision language model that advances explainable AI in radiology and imaging; Segment for interactive 3D segmentation and annotation; and Generate for producing high-quality synthetic CT and MR images.
NVIDIA researchers collaborated with National Institutes of Health (NIH) clinicians to capture the human expert reasoning process, bringing transparency and interpretability to medical AI. NIH is integrating Clara Reason models into radiology workflows to aid report drafting, explain findings and support clinician training.
Kitware has integrated NVIDIA Clara open models into its VolView platform, bringing advanced medical AI to an interactive, web‑based visualization environment.
This enables researchers, developers and innovators to experiment with state-of-the-art AI capabilities for segmentation, reasoning and generative workflows within their existing imaging and data ecosystems.
With physical AI and simulation, Johnson & Johnson MedTech is advancing development of the MONARCH Platform — a first-to-market innovation in robotic-assisted bronchoscopy that’s also cleared for use in robotic-assisted urologic procedures in the United States.
Using NVIDIA Isaac for Healthcare, powered by the NVIDIA Omniverse and Cosmos platforms being showcased at the NVIDIA GTC Washington, D.C., conference this week, J&J’s teams will use virtual environments to simulate how the MONARCH Platform for Urology may perform — testing everything from device setup to patient interaction, all before even entering an operating room.
Kidney stones affect nearly one in nine Americans, resulting in up to 2 million emergency department visits each year. In addition, many patients require multiple procedures — 10% even return within a month, and half experience recurrence within a decade. In treating conditions like kidney stones, clinicians endure long, repetitive endoscopic procedures that contribute to fatigue and strain, with more than 60% of endourologists reporting orthopedic injuries.
With Omniverse, engineers and clinicians can codesign and test digital twins of both anatomy and the operating room — optimizing layout, ergonomics and procedural flow without occupying physical space or equipment. Design reviews that once took months or even years can now happen in hours, enabling faster iteration.
Meanwhile, Cosmos generates realistic synthetic datasets to train perception and navigation models for vision, tracking and automation — compressing months of data collection into hours. This continuous feedback loop between virtual and physical systems accelerates validation, shortens time to clinical readiness and strengthens efficiency in the operating room, where every second counts.
A simulation-first approach will help Johnson & Johnson MedTech teams evaluate multiple design variations, test new instruments virtually and integrate clinician feedback. It also has the potential to transform training for the MONARCH Platform for Urology, set to launch commercially in the U.S. in 2026, by enabling clinicians to practice complex scenarios in high-fidelity, physically accurate anatomical simulations before interacting with patients.
It’s almost time for the GTC Washington, D.C., keynote by NVIDIA CEO Jensen Huang. Tune in now to the pregame show — hosted by Altimeter’s Brad Gerstner, Moor Insights and Strategy’s Patrick Moorhead and CNBC’s Kristina Partsinevelos — as industry titans from across the ecosystem share insights into the state of AI.
NVIDIA founder and CEO Jensen Huang even made a special appearance with Brett Adcock, CEO of Figure AI.
The keynote will begin at 12 p.m. ET, with live coverage here on the blog.
Monday, Oct. 27, 12:30 p.m. ET
A panel discussion today at GTC Washington, D.C., highlighted a public-private initiative to invigorate the economy of Rancho Cordova, California, with a focus on AI.
To bolster innovation, the city is working with the Human Machine Collaboration Institute and NVIDIA on an economic development strategy that includes AI education initiatives, investments in AI infrastructure and support for small businesses adopting AI.
“We’re the city of Rancho Cordova. We are not experts in artificial intelligence. We are not experts in semiconductors,” said Micah Runner, city manager of Rancho Cordova. “And so you find good partners, and you have to trust them in the space that they know and they’re responsible for.”
The city sees its strategy as one that can be replicated nationwide.
“There are only 20 large cities in the country, but there are tens of thousands of medium and small cities — and I think we have to figure out how to create an equal playing field for all of those communities,” Runner said.
Monday, Oct. 27, 9 a.m. ET
GTC Washington, D.C., begins with a series of full-day developer workshops at the Walter E. Washington Convention Center covering agentic AI, accelerated data science, large language models and physical AI.
Meanwhile, an NVIDIA Gear Store truck filled with swag — including the popular engineering ruler — is making stops at three universities around Washington, D.C.
Locations include:
Setup is underway for the main event — NVIDIA CEO Jensen Huang’s keynote — taking place Tuesday at 12 p.m. ET.
Friday, Oct. 24, 5 p.m. ET
Next week, Washington, D.C., becomes the center of gravity for artificial intelligence. NVIDIA GTC Washington, D.C., lands at the Walter E. Washington Convention Center Oct. 27-29 — and for those who care about where computing is headed, this is the moment to pay attention.
The headline act: NVIDIA founder and CEO Jensen Huang’s keynote address on Tuesday, Oct. 28, at 12 p.m. ET. Expect more than product news — expect a roadmap for how AI will reshape industries, infrastructure and the public sector.
Before that, the pregame show kicks off at 8:30 a.m. ET with Brad Gerstner, Patrick Moorhead and Kristina Partsinevelos offering sharp takes on what’s coming.
But GTC offers more than a keynote. It provides full immersion: 70+ sessions, hands-on workshops and demos covering everything from agentic AI and robotics to quantum computing and AI-native telecom networks. It’s where developers meet decision-makers, and ideas turn into action. Exhibits-only passes are still available.
Bookmark this space. Starting Monday, NVIDIA will live-blog the news, the color and the context, straight from the floor.