At the 2026 CES event in Las Vegas, NVIDIA unveiled a comprehensive suite of open-source physical AI models, simulation frameworks, and advanced hardware modules designed to accelerate the development of robotics and autonomous vehicles. Attendees saw not only new technology but also clear enthusiasm from industry players already integrating these advances into their operations. Companies ranging from healthcare robotics firms to construction equipment manufacturers are deploying NVIDIA’s new AI and hardware platforms to power intelligent, adaptable machines for real-world tasks. As more organizations turn to autonomous solutions, the accessibility and scale of these tools are likely to attract further industry interest.
Compared with previous announcements, this unveiling extends NVIDIA’s momentum toward open-ecosystem collaboration and resource sharing. In earlier years, NVIDIA focused largely on closed development for certain hardware and software stacks, targeting select industry partners and research labs. The current approach, highlighted by partnerships with Hugging Face and the release of foundation AI models on public platforms, signals a more community-driven strategy. At the same time, the performance leap delivered by new modules like the Jetson T4000 arrives as several rivals work to close the hardware gap in high-performance robotics computing.
What Are NVIDIA’s New Physical AI Models for Robotics?
NVIDIA introduced open models such as Cosmos Transfer 2.5, Cosmos Predict 2.5, and Cosmos Reason 2 to streamline the development of robots with reasoning, planning, and perception abilities. These foundation models aim to help robots better understand their environments, adapt to diverse tasks, and handle complex workflows without the resource-intensive pretraining that previously put such systems out of reach for smaller organizations.
“Breakthroughs in physical AI — models that understand the real world, reason and plan actions — are unlocking entirely new applications,”
said Jensen Huang, CEO of NVIDIA. The company claims these models, which are publicly available on Hugging Face, provide a bridge between highly specialized robots and emerging generalist machines, marking a step toward “generalist-specialist” robotic capabilities that combine versatility with subject-matter expertise.
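Since these checkpoints are distributed through Hugging Face, pulling one down is a standard Hub workflow. The sketch below uses the huggingface_hub client; the repo ID shown is an assumption based on NVIDIA’s public naming pattern, so the exact identifiers should be confirmed on the model cards.

```python
# Minimal sketch: fetching a Cosmos checkpoint from the Hugging Face Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="nvidia/Cosmos-Reason1-7B",  # assumed repo ID; check huggingface.co/nvidia for the Cosmos Reason 2 release
    allow_patterns=["*.json", "*.safetensors"],  # fetch configs and weights, skip auxiliary assets
)
print(f"Checkpoint downloaded to: {local_dir}")
```

From there, the downloaded weights can be loaded with whichever inference stack NVIDIA documents for each model family.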
How Do NVIDIA’s Simulation and Compute Frameworks Support Developers?
To address the complexity of real-world deployment, NVIDIA launched frameworks such as Isaac Lab-Arena, for benchmarking and training robot policies on standardized tasks, and the OSMO orchestration platform, for managing development workflows across different compute infrastructures. These tools run anywhere from on-premises workstations to public cloud systems, shortening the research-to-production timeline.
“Isaac Lab-Arena is the world’s first collaborative system for large-scale robot policy evaluation and benchmarking to address this critical gap,”
stated Rev Lebaredian, NVIDIA’s Vice President for Omniverse and simulation. The integration of these solutions into the Hugging Face LeRobot library gives millions of developers immediate access to the latest AI and simulation resources through a unified suite.
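Because the new tooling surfaces through LeRobot, a developer’s first contact with it looks like ordinary LeRobot usage. The sketch below loads a public demo dataset to show the pattern; it assumes a recent lerobot release (the import path has moved between versions) and illustrates the library’s general workflow rather than Isaac Lab-Arena’s specific API.

```python
# Minimal sketch of pulling a robot-learning dataset through LeRobot.
# "lerobot/pusht" is a public demo dataset; NVIDIA's assets would be
# fetched the same way via their own repo IDs.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset  # import path varies across lerobot versions

dataset = LeRobotDataset("lerobot/pusht")
print(f"Episodes: {dataset.num_episodes}, frames: {dataset.num_frames}")

sample = dataset[0]  # a dict of tensors: camera frames, robot state, actions
print(sorted(sample.keys()))
```

The idea is that datasets collected on hardware and policies trained in simulation move through one unified interface, which is what the integration into the LeRobot library provides.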
Which Companies Are Adopting the Latest NVIDIA Tools and Hardware?
A variety of firms are advancing autonomous systems by utilizing NVIDIA’s GR00T models and Jetson Thor processing modules. LEM Surgical is enhancing its Dynamis robot, while Boston Dynamics, Franka Robotics, RLWRLD, and Caterpillar are adopting these solutions across humanoid, industrial, and construction robotics. Other brands, such as Archer and AGIBOT, use the new IGX Thor and Isaac platforms for safety and simulation in aviation, healthcare, and manufacturing. Support from industry partners like Microsoft Azure, Advantech, ADLINK, and Lucid also demonstrates the broad utility envisioned for these technologies.
NVIDIA has also launched the Alpamayo family of reasoning-based models and simulation tools for autonomous vehicles, supporting perception, planning, and agent-style decision-making. With these, automakers, mobility firms, and research groups such as Jaguar Land Rover, Uber, and Berkeley DeepDrive can fast-track the deployment of SAE Level 4 autonomy and beyond. The Jetson T4000 module, aimed at affordable, scalable performance, rounds out the hardware updates, while a focus on flexible, community-accessible tools continues to distinguish NVIDIA’s strategy from earlier product cycles.
Looking at the current landscape, NVIDIA’s release underscores the shift in robotics AI toward openness and rapid iteration. By lowering entry barriers and offering cross-platform support, NVIDIA makes it easier for companies and researchers to experiment with advanced reasoning and planning systems and deploy them in physical machines. For stakeholders building the next wave of intelligent automation, familiarity with models such as Cosmos and Alpamayo, and tools like Isaac Lab-Arena and OSMO, will become increasingly useful, not only for access to state-of-the-art algorithms but also for participation in the growing ecosystem around robotics and AI at large.
