
Now that we’re learning more about Large Language Model AI, it has become clear that the data a model is trained on is one of the most important factors in how it performs. That makes this robot AI trainer intriguing.

Universal Robots and Scale AI Launch Imitation Learning System to Accelerate AI Model Training, Bridging the ‘Lab-to-Factory’ Gap

Universal Robots (UR) unveiled the UR AI Trainer on March 16 at GTC 2026 in Silicon Valley. Developed in collaboration with Scale AI, the AI Trainer marks a tectonic shift as robots move from pre-programmed applications to fully AI-driven tasks. These systems are powered by robust data generated in AI training cells where robots imitate humans.

“Our customers, ranging from large enterprises to AI research labs, are no longer just asking for AI features,” said Anders Beck, VP of AI Robotics Products at Universal Robots. “They need a way to collect high-fidelity, synchronized robot and vision data to train AI models on the same robots they intend to deploy. Our AI Trainer is the industry’s first direct lab-to-factory solution for AI model training.”  

Alongside the new AI Trainer, Universal Robots’ GTC booth will showcase a state-of-the-art robotic foundation model from Generalist AI, a UR preferred model partner. Leveraging this model, two UR robots will complete a complex smartphone packaging task that was not feasible before recent advances in the field of Physical AI.


AI robotics training is often hindered by fragmented hardware and low-fidelity data capture. Much of today’s training data is collected on research robots not suited for production environments, and many systems rely only on visual feedback, making delicate or contact-rich tasks difficult. “The AI Trainer directly addresses these barriers,” said Beck. “By utilizing our unique Direct Torque Control and force feedback features, we give developers direct influence over how the robot physically interacts with the world, training on the same robust hardware used in over 100,000 industrial deployments.”

The AI Trainer allows human operators to guide UR robots through tasks in a leader-follower setup while automatically capturing high-quality multimodal data for robotics AI development. Operators physically guide a “leader” robot through a task while a synchronized “follower” robot mirrors the motion in real time. During each demonstration, the system records synchronized motion, force, and visual data, producing the structured datasets required to train Vision-Language-Action (VLA) models.
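The leader-follower recording loop described above can be sketched in a few lines. This is a minimal illustration, not UR or Scale AI code: all class and function names here (`DemoStep`, `DemoRecorder`, `capture_demonstration`) are hypothetical, and a real system would command physical arms and read back sensor state rather than simulate them.

```python
from dataclasses import dataclass, field

@dataclass
class DemoStep:
    """One synchronized sample from a demonstration (hypothetical schema)."""
    t: float                  # timestamp within the demonstration
    leader_joints: list       # joint angles of the human-guided leader arm
    follower_joints: list     # joint angles of the mirroring follower arm
    wrench: list              # force/torque reading at the wrist
    frame_id: int             # index of the synchronized camera frame

@dataclass
class DemoRecorder:
    steps: list = field(default_factory=list)

    def record(self, t, leader_joints, wrench, frame_id):
        # The follower mirrors the leader in real time; here we copy the
        # joint state, whereas a real cell would command the follower arm
        # and read back its actual measured positions.
        follower_joints = list(leader_joints)
        self.steps.append(
            DemoStep(t, list(leader_joints), follower_joints,
                     list(wrench), frame_id)
        )

def capture_demonstration(recorder, trajectory, hz=10):
    """Simulate guiding the leader arm through a trajectory at a fixed rate."""
    for i, (joints, wrench) in enumerate(trajectory):
        recorder.record(t=i / hz, leader_joints=joints,
                        wrench=wrench, frame_id=i)
    return recorder.steps

# Example: a 3-step demonstration with dummy joint angles and wrenches.
trajectory = [
    ([0.0] * 6, [0.0] * 6),
    ([0.1] * 6, [0.5] * 6),
    ([0.2] * 6, [1.0] * 6),
]
steps = capture_demonstration(DemoRecorder(), trajectory)
print(len(steps), steps[-1].t)  # prints: 3 0.2
```

The key property such a dataset needs is that every motion, force, and vision sample shares a common timestamp, which is what lets a VLA model learn contact-rich behavior rather than vision-only imitation.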

Deployed on UR’s AI Accelerator platform, the UR AI Trainer combines UR robots with Scale AI software to enable data capture on UR robots in production and at scale, creating a continuous feedback loop that drives ongoing optimization of physical AI systems.


With GTC as the official launch pad, attendees will be able to experience the system first-hand at UR’s booth, guiding two UR3e ‘leader’ robots that provide haptic input to control two UR7e ‘follower’ robots. The setup enables visitors to perform an advanced smartphone packaging task with haptic feedback for imitation learning and VLA training, with demonstration data recorded in real time on Scale’s stack and replayable directly on the AI Trainer.

The process of capturing robot training data for AI models is further showcased through a demo that illustrates the same smartphone packaging task, trained entirely in simulation: built in NVIDIA Omniverse and leveraging Isaac Sim, the virtual setup allows attendees to control a simulated bi-manual UR3e system with real-time haptic feedback, using two Haply Inverse3 devices as ‘leaders’ in a physics-accurate simulation.


Universal Robots is also exploring the use of the NVIDIA Physical AI Data Factory Blueprint to automate and scale its synthetic data generation, transforming world-scale compute into a production engine for high-quality robotic training data. 

Complementing the two data-capture demonstrations, Generalist’s showcase highlights how advances in data collection and AI models translate into real-world robotic performance. In the first public demonstration of Generalist’s embodied foundation models, two UR7e robots autonomously execute a complex smartphone packaging task, demonstrating dexterity, coordination, and contact-rich manipulation in a real-world environment. The demonstration shows how scaled, high-quality training data combined with frontier model architectures can enable robust physical AI systems beyond the lab.

