Newton 1.0 GA Released
At GTC 2026, NVIDIA announced Newton 1.0 GA, a production-ready, GPU-accelerated physics simulator purpose-built for industrial robotics. Where traditional physics engines force a trade-off between speed and realism, Newton is designed to deliver both, enabling robots to learn dexterous manipulation and locomotion tasks with greater precision and efficiency.
Modular Architecture with Multiple Solvers
Newton's strength lies in its extensible, modular framework that unifies multiple physics solvers and simulation components behind a consistent API. Rather than locking developers into a single solver or scene format, it supports:
- MuJoCo Warp (MJWarp): Google DeepMind's widely used MuJoCo physics engine, GPU-accelerated for scale. New optimizations deliver a 252x speedup for locomotion tasks and a 475x speedup for manipulation tasks on the NVIDIA RTX PRO 6000 Blackwell.
- Kamino: Disney Research's advanced rigid-body solver, which excels at complex mechanisms such as robotic hands, legged systems, and closed-loop linkages with passive actuation, letting designers simulate these systems without restructuring them for solver compatibility.
- Collision detection: SDF-based collision captures complex CAD geometries with high fidelity. Hydroelastic contacts model continuous pressure distributions for realistic tactile interaction, critical for sim-to-real transfer.
- Deformable simulation: the Vertex Block Descent (VBD) solver handles cables, cloth, and rubber parts; the implicit Material Point Method (iMPM) supports granular materials for rough-terrain scenarios.
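Newton's actual SDF collision pipeline runs on the GPU via Warp, but the core idea can be illustrated in a few lines of plain Python: query a signed distance function, treat negative values as penetration, and use the SDF gradient as the contact normal. The function names here are illustrative, not Newton's API.

```python
import math

def sphere_sdf(p, center, radius):
    """Signed distance from point p to a sphere: negative inside,
    zero on the surface, positive outside."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def contact(p, center, radius):
    """Return (penetration_depth, contact_normal) if p penetrates
    the sphere, else None."""
    d = sphere_sdf(p, center, radius)
    if d >= 0.0:
        return None  # no contact
    # The gradient of a sphere SDF points radially outward,
    # which is exactly the contact normal we need.
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    norm = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
    return -d, (dx / norm, dy / norm, dz / norm)
```

The same query pattern extends to arbitrary CAD geometry: precompute an SDF grid for the mesh, then sample and differentiate it at candidate contact points.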
Integration with NVIDIA Ecosystem
Newton integrates natively with NVIDIA Isaac Sim 6.0 and Isaac Lab 3.0, streamlining workflows from robot description to trained policies. The framework ingests common robot description formats (MJCF, URDF, OpenUSD) and acts as a common data layer, making it straightforward to connect existing robot assets and workflows.
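To make the "robot description to data layer" step concrete, here is a minimal sketch of what an importer extracts from a URDF file: link names and joint topology, which any simulator then maps onto its internal model. This uses only the Python standard library and is not Newton's actual loader.

```python
import xml.etree.ElementTree as ET

# A tiny two-link URDF, inlined for the example.
URDF = """<robot name="two_link_arm">
  <link name="base"/>
  <link name="upper_arm"/>
  <joint name="shoulder" type="revolute">
    <parent link="base"/>
    <child link="upper_arm"/>
    <axis xyz="0 0 1"/>
  </joint>
</robot>"""

def describe(urdf_text):
    """Extract link names and joint topology from a URDF string --
    the structure an importer maps onto a simulator's data layer."""
    root = ET.fromstring(urdf_text)
    links = [link.get("name") for link in root.findall("link")]
    joints = [(j.get("name"), j.get("type"),
               j.find("parent").get("link"), j.find("child").get("link"))
              for j in root.findall("joint")]
    return links, joints
```

Because MJCF and URDF are both XML, an importer for either format follows this same shape; OpenUSD carries richer scene data but serves the same role as a source of truth for the kinematic tree.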
Developer Impact
Key capabilities include a stable API for modeling, solving, and sensing; flexible collision detection pipelines reusable for custom solvers; and a Warp-based tiled camera sensor supporting high-throughput rendering (RGB, depth, albedo, normals, instance segmentation). Teams can mix and match components while maintaining a consistent simulation stack for reinforcement and imitation learning workflows.
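The value of a consistent simulation stack is that the training loop never changes when a component is swapped. The sketch below illustrates the pattern with two toy integrators behind one `step()` interface; the class and function names are illustrative and do not reflect Newton's actual API.

```python
# Illustrative sketch: two interchangeable integrators behind one
# step() interface, mirroring how a unified stack lets solvers be
# swapped without touching the surrounding loop.

GRAVITY = -9.81  # m/s^2

class ExplicitEuler:
    def step(self, state, dt):
        x, v = state
        # Position advances with the *old* velocity.
        return x + v * dt, v + GRAVITY * dt

class SemiImplicitEuler:
    def step(self, state, dt):
        x, v = state
        v = v + GRAVITY * dt   # update velocity first...
        return x + v * dt, v   # ...then position uses the new velocity

def simulate(solver, state=(10.0, 0.0), dt=0.01, steps=100):
    """Run any solver exposing step(state, dt) for a fixed horizon."""
    for _ in range(steps):
        state = solver.step(state, dt)
    return state
```

An RL training loop built on this shape can switch between, say, a rigid-body solver for locomotion and a deformable solver for cable handling by changing only the object passed to `simulate`.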