Train robot policies in simulation for real-world adaptability.
While preprogrammed robots are useful for specific, repetitive tasks, they have a key drawback: they operate on fixed instructions within set environments, which limits their ability to adapt to unexpected changes.
AI-driven robots overcome these limitations through simulation-based learning, letting them autonomously perceive, plan, and act in dynamic conditions. They can acquire and refine new skills by using learned policies—sets of behaviors for navigation, manipulation, and more—to improve their decision-making across various situations before being deployed into the real world.
Flexibility and Scalability
The “sim-first” approach enables training hundreds or thousands of robot instances in parallel. Developers can iterate on, refine, and deploy robot policies for real-world scenarios using a variety of data sources, from real robot-captured data to synthetic data generated in simulation. This works for any robot embodiment, including autonomous mobile robots (AMRs), robotic arms, and humanoid robots.
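To illustrate the parallel-instance idea, here is a toy, NumPy-only sketch that steps thousands of simulated robots with one batched call. The simple Euler integration is a placeholder for real GPU-accelerated physics; none of the names below are Isaac APIs.

```python
import numpy as np

def step_batch(positions, velocities, actions, dt=0.02):
    """Advance N simulated robot instances in one vectorized step.

    positions, velocities, actions: arrays of shape (N, dof).
    A real simulator integrates full rigid-body physics on the GPU;
    this sketch uses simple Euler integration as a stand-in.
    """
    velocities = velocities + actions * dt   # apply commanded accelerations
    positions = positions + velocities * dt  # integrate positions
    return positions, velocities

# Simulate 4096 robot instances with 7 joints each, in parallel.
n_envs, dof = 4096, 7
pos = np.zeros((n_envs, dof))
vel = np.zeros((n_envs, dof))
act = np.ones((n_envs, dof))  # constant commanded acceleration

for _ in range(100):
    pos, vel = step_batch(pos, vel, act)

print(pos.shape)  # every instance advanced with one batched call per step
```

The key point is that adding more robot instances grows the batch dimension, not the number of simulation calls, which is what makes training at this scale tractable.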
Accelerated Skill Development
Train robots in physically accurate simulation environments, helping them adapt to new task variations and reducing the sim-to-real gap without the need to reprogram the physical robot’s hardware.
Safe Proving Environment
Test potentially hazardous scenarios without risking human safety or damaging equipment.
Reduced Costs
Avoid the burden of real-world data collection and labeling costs by generating large amounts of synthetic data, validating trained robot policies in simulation, and deploying on robots faster.
Robot learning algorithms can help robots generalize learned skills and improve their performance in changing or novel environments. Two widely used techniques are reinforcement learning, in which a robot improves through trial and error guided by reward signals, and imitation learning, in which it replicates expert demonstrations.
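As a concrete, simplified illustration of reinforcement learning, the following pure-Python sketch trains a tabular Q-learning policy on a toy corridor task. Real robot policies use deep networks over continuous states, but the update rule is the same core idea; the environment and hyperparameters here are invented for illustration.

```python
import random

# Toy 1-D corridor: the agent starts at cell 0 and is rewarded
# for reaching cell 4. It learns a value Q(state, action) for
# every state-action pair by trial and error.

N_STATES, GOAL = 5, 4
ACTIONS = [1, -1]  # step right / step left
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)

for episode in range(200):
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        # Q-learning update: move Q toward reward + discounted future value
        best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)  # learned policy: the preferred action in each cell
```

After training, the greedy policy moves right from every cell, i.e., the robot has learned the behavior purely from reward signals rather than explicit programming.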
A typical end-to-end robot workflow involves data processing, model training, validation in simulation, and deployment on a real robot.
Data Processing: To bridge data gaps, use a diverse set of high-quality data that combines internet-scale data, synthetic data, and real robot data.
Training and Validating in Simulation: Robots need to be trained for task-defined scenarios, which requires accurate virtual representations of real-world conditions. NVIDIA Isaac™ Lab, an open-source framework for robot learning, can help train robot policies using reinforcement learning and imitation learning techniques in a modular approach.
Isaac Lab is natively integrated with NVIDIA Isaac Sim™—a reference robotic simulation application built on the NVIDIA Omniverse™ platform—using GPU-accelerated NVIDIA PhysX® physics and RTX™ rendering for high-fidelity validation. This unified framework lets you rapidly prototype policies in lightweight simulation environments before deploying to production systems.
Deploying Onto the Real Robot: The trained robot policies and AI models can be deployed on NVIDIA Jetson™ on-robot computers, which deliver the performance and functional safety needed for autonomous operation.
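The stages above can be sketched as a simple pipeline of placeholder functions. The function names, arguments, and return values are illustrative only, not actual Isaac APIs; in practice each stage maps to a real tool (data pipelines, Isaac Lab training, Isaac Sim validation, Jetson deployment).

```python
# Hypothetical end-to-end sim-to-real pipeline; bodies are stand-ins.

def process_data(real_logs, synthetic_sets, web_data):
    """Merge heterogeneous data sources into one training dataset."""
    return real_logs + synthetic_sets + web_data

def train_policy(dataset, epochs=10):
    """Stand-in for RL/imitation-learning training in simulation."""
    return {"weights": len(dataset) * epochs}  # dummy policy artifact

def validate_in_sim(policy, n_trials=100):
    """Roll the policy out in simulation before touching hardware."""
    return policy["weights"] > 0  # dummy success criterion

def deploy(policy):
    """Export the validated policy for the on-robot computer."""
    return f"deployed policy ({policy['weights']} params)"

dataset = process_data(["teleop_log"], ["synthetic_batch"], ["web_video"])
policy = train_policy(dataset)
if validate_in_sim(policy):
    print(deploy(policy))
```

The ordering matters: validation happens in simulation, so a policy only reaches physical hardware after it has passed its simulated trials.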
While imitation learning lets humanoid robots develop new skills by replicating expert demonstrations, collecting real-world datasets is often expensive and labor-intensive.
To overcome this challenge, developers can use the NVIDIA Isaac GR00T-Mimic and GR00T-Dreams blueprints—built on NVIDIA Cosmos™—to produce large, diverse synthetic motion datasets for training.
These datasets can then be used to train the Isaac GR00T N open foundation models within Isaac Lab, enabling generalized humanoid reasoning and robust skill acquisition.
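To make the imitation-learning idea concrete, here is a minimal behavior-cloning sketch: supervised regression from demonstrated states to expert actions. A hypothetical linear expert stands in for real (synthetic or teleoperated) demonstration data, and a least-squares fit stands in for deep-network training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert: action = -2 * state (a simple stabilizing law).
states = rng.uniform(-1.0, 1.0, size=(500, 1))
expert_actions = -2.0 * states

# Behavior cloning = supervised regression on (state, action) pairs:
# fit a policy that reproduces the expert's action for each state.
w, *_ = np.linalg.lstsq(states, expert_actions, rcond=None)

print(round(float(w[0, 0]), 3))  # recovers the expert gain, -2.0
```

Because the learner only ever sees (state, action) pairs, the quality and diversity of the demonstration dataset directly bounds the quality of the cloned policy, which is why large synthetic motion datasets are so valuable.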
Use Isaac Lab to conduct high-fidelity physics simulations, perform reward calculations, and enable perception-driven reinforcement learning (RL) within modular, customizable environments.
Start by configuring a wide variety of robots in varying environments, defining RL tasks, and training models using GPU-optimized libraries such as RSL RL, RL-Games, SKRL, and Stable Baselines3—all supported natively by Isaac Lab.
Isaac Lab offers flexible task workflows—either direct or manager-based—so you have control over the complexity and automation of your training jobs. Additionally, NVIDIA OSMO—a cloud-native orchestration platform—enables efficient scaling and management of complex, multi-stage, and multi-container robotics workloads across multi-GPU and multi-node systems. This can significantly accelerate the development and evaluation of robot learning policies.
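As an illustration of the kind of vectorized reward calculation an RL task defines, here is a NumPy sketch for a simple reaching task, computed for a whole batch of parallel environments at once. The weights, shapes, and function name are illustrative assumptions, not Isaac Lab's API.

```python
import numpy as np

def reach_reward(ee_pos, goal_pos, actions,
                 dist_weight=1.0, action_penalty=0.01):
    """Per-environment reward, shape (N,): reward closing the distance
    to the goal while penalizing large actions."""
    dist = np.linalg.norm(ee_pos - goal_pos, axis=-1)
    effort = np.sum(actions ** 2, axis=-1)
    return -dist_weight * dist - action_penalty * effort

ee = np.zeros((8, 3))    # 8 envs, 3-D end-effector positions
goal = np.ones((8, 3))   # shared goal position
act = np.zeros((8, 6))   # 6-DOF joint actions

r = reach_reward(ee, goal, act)
print(r.shape, round(float(r[0]), 3))  # (8,) -1.732 (= -sqrt(3))
```

Shaping terms like the action penalty are typical design levers: they trade off task completion against smooth, low-effort motions, and tuning them is a large part of defining an RL task.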
NVIDIA RTX PRO™ 6000 Blackwell Series GPUs accelerate physical AI by running every robot development workload, from training and synthetic data generation to robot learning and simulation.