by Gary Mintchell | Mar 30, 2026 | Business, Capital Projects, Robots
FANUC recently announced a significant investment to create production capacity for robot manufacturing in the US.
The past ten years brought more announcements of investments in US manufacturing and infrastructure than just about any comparable period. Most never came to pass. That includes the multiple billions announced for building out data center infrastructure to power AI LLMs. (Most of those will never be built as this technology levels off.)
One of my favorite analysts, Samantha Mou, Senior Analyst at market intelligence firm Interact Analysis, offered comments on the announcement that I find relevant.
- FANUC America’s $90M investment is part of a growing trend where robot manufacturers are bringing production closer to key markets, and the US is becoming a critical destination. Interact Analysis expects the industrial robot market here to see steady growth over the next five years, driven by reshoring initiatives and policies like tariffs, which are forcing robot makers to rethink their manufacturing strategies.
- FANUC isn’t alone in this shift. Just last year, Yaskawa attracted attention by announcing plans for US-based production for robots and motion control components. As the largest robot supplier in the U.S. by market share, FANUC’s push toward local production aligns naturally with its market leadership and customer proximity strategy.
- That said, questions remain about the depth of localization. It is possible that the new facility will primarily support assembly instead of full-scale manufacturing. Given that FANUC produces its core motion control components in Japan, and with limited domestic supply of key parts such as precision gearboxes in the US, it is likely that critical components will continue to be imported, with final robot assembly conducted locally.
I’m always happy to see news of investment in manufacturing. But experience has made me skeptical about the real impact. We can hope.
Click on the Follow button at the bottom of the page to subscribe to a weekly email update of posts. Click on the mail icon to subscribe to additional email thoughts.
by Gary Mintchell | Mar 18, 2026 | Generative AI, Robots
Now that we’re learning more about Large Language Model AI, users have discovered that the data a model is trained on is one important key. That makes this robot AI trainer intriguing.
Universal Robots and Scale AI Launch Imitation Learning System to Accelerate AI Model Training, Bridging the ’Lab-to-Factory’ Gap
Universal Robots (UR) unveiled March 16 the UR AI Trainer at GTC 2026 in Silicon Valley. Developed in collaboration with Scale AI, the AI Trainer marks a tectonic shift as robots move from pre-programmed applications to fully AI-driven tasks. These systems are powered by robust data generated in AI training cells where robots imitate humans.
“Our customers, ranging from large enterprises to AI research labs, are no longer just asking for AI features,” said Anders Beck, VP of AI Robotics Products at Universal Robots. “They need a way to collect high-fidelity, synchronized robot and vision data to train AI models on the same robots they intend to deploy. Our AI Trainer is the industry’s first direct lab-to-factory solution for AI model training.”
Alongside the new AI Trainer, Universal Robots’ GTC booth will showcase a state-of-the-art robotic foundation model from Generalist AI, a UR preferred model partner. Leveraging this model, two UR robots will complete a complex smartphone packaging task, one that was impossible before recent advances in the field of Physical AI.
AI robotics training is often hindered by fragmented hardware and low-fidelity data capture. Much of today’s training data is collected on research robots not suited for production environments, and many systems rely only on visual feedback, making delicate or contact-rich tasks difficult. “The AI Trainer directly addresses these barriers,” said Beck. “By utilizing our unique Direct Torque Control and force feedback features, we give developers direct influence over how the robot physically interacts with the world, training on the same robust hardware used in over 100,000 industrial deployments.”
The AI Trainer allows human operators to guide UR robots through tasks in a leader-follower setup while automatically capturing high-quality multimodal data for robotics AI development. Operators physically guide a “leader” robot through a task while a synchronized “follower” robot mirrors the motion in real time. During each demonstration, the system records synchronized motion, force, and visual data, producing the structured datasets required to train Vision-Language-Action (VLA) models.
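The leader-follower capture pattern described above can be sketched in Python. This is a minimal illustration of the recording loop, not UR’s or Scale AI’s actual API: the arm interface (`read_joints`, `read_wrench`, `move_to`), the sample rate, and the stub classes are all hypothetical placeholders for a vendor SDK.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    """One synchronized record: motion, force, and visual data."""
    t: float        # timestamp (s)
    joints: list    # leader joint angles (rad), hypothetical 6-axis arm
    wrench: list    # force/torque at the tool flange (N, Nm)
    frame_id: int   # index of the camera frame captured at time t

class StubArm:
    """Stand-in for a robot arm; a real system would use the vendor SDK."""
    def __init__(self):
        self.joints = [0.0] * 6
    def read_joints(self):
        return list(self.joints)
    def read_wrench(self):
        return [0.0] * 6
    def move_to(self, joints):
        self.joints = list(joints)

def record_demo(leader, follower, camera_frames, n_steps):
    """Mirror the leader on the follower while logging synchronized samples."""
    dataset = []
    for i in range(n_steps):
        joints = leader.read_joints()   # operator physically guides the leader
        follower.move_to(joints)        # follower mirrors the motion
        dataset.append(Sample(
            t=i * 0.002,                # e.g. a 500 Hz control cycle (assumed)
            joints=joints,
            wrench=leader.read_wrench(),
            frame_id=next(camera_frames),
        ))
    return dataset
```

The key point the sketch makes is that every sample carries motion, force, and vision stamped on the same clock tick, which is what makes the resulting dataset usable for training contact-rich VLA policies.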
Deployed on UR’s AI Accelerator platform, the UR AI Trainer combines UR robots with Scale AI software to enable data capture on UR robots in production and at scale, creating a continuous feedback loop that drives ongoing optimization of physical AI systems.
With GTC as the official launch pad, attendees will be able to experience the system first-hand at UR’s booth as they guide two UR3e ‘leader’ robots providing haptic input to control two UR7e ‘follower’ robots. The setup enables visitors to perform an advanced smartphone packaging task with haptic feedback for imitation learning and VLA training, with demonstration data recorded in real time on Scale’s stack and replayable directly on the AI Trainer.
The process of capturing robot training data for AI models is further showcased through a demo that illustrates the same smartphone packaging task – just trained virtually: Built in NVIDIA Omniverse and leveraging Isaac Sim, the simulated setup allows attendees to control a virtual bi-manual UR3e system with real-time haptic feedback using two Haply Inverse3 devices as ‘leaders’, providing a physics-accurate simulation.
Universal Robots is also exploring the use of the NVIDIA Physical AI Data Factory Blueprint to automate and scale its synthetic data generation, transforming world-scale compute into a production engine for high-quality robotic training data.
Complementing the two data-capture demonstrations, Generalist’s showcase highlights how advances in data collection and AI models translate into real-world robotic performance. In the first public demonstration of Generalist’s embodied foundation models, two UR7e robots autonomously execute a complex smartphone packaging task, demonstrating dexterity, coordination, and contact-rich manipulation in a real-world environment. The demonstration shows how scaled, high-quality training data combined with frontier model architectures can enable robust physical AI systems beyond the lab.
by Gary Mintchell | Mar 18, 2026 | Education, Robots, Workforce
Here’s a free learning event in Reno concerning the value of robotics for expanding your manufacturing productivity.
As Northern Nevada’s manufacturing sector continues its rapid expansion, the region’s employers are confronting a growing challenge: a labor market that has nearly run out of slack. OnRobot will host the “Build your Automation Roadmap” event in Reno on April 9th, bringing practical automation solutions directly to the region’s manufacturing community.
The free, in-person event is designed for manufacturers in sectors such as metal fabrication, CNC machining, electronics, aerospace, food & beverage, and industrial equipment manufacturing – industries that form the backbone of the Reno manufacturing economy and are among the hardest hit by ongoing labor shortages.
Event Snapshot:
Build Your Automation Roadmap: Industrial Robots + Tooling
A hands-on event featuring live FANUC robot demonstrations, expert-led workshops, and real-world automation use cases.
April 9th, 2026, 12:00pm–4:00pm PT
Grand Sierra Resort, 2500 East 2nd St, Reno, NV
Northern Nevada’s manufacturing sector has grown faster than its available workforce can keep pace. With unemployment at just 4.0%, near the lowest levels on record, manufacturers face a shrinking pool of available workers even as industrial demand continues to grow.
A recent report from the University of Nevada, Reno, commissioned by the Nevada Office of Workforce Innovation, identified workforce gaps across every regional economic development authority in the state, with fabricated metal manufacturing, precision machining, and electronics among the most acutely affected sectors in the Reno area.
“Northern Nevada has become one of the most dynamic manufacturing regions in the country, but that growth is creating real pressure on employers who simply can’t hire fast enough to keep up,” said Kristian Hulgard, General Manager, Americas, at OnRobot. “Automation isn’t a future consideration for manufacturers here, it’s an immediate operational need. This event is about giving the region’s manufacturers a clear, practical path forward.”
At the event, attendees will see live demonstrations of FANUC robots equipped with OnRobot end-of-arm tooling for common applications such as machine tending, material handling, assembly, packaging, and quality inspection. Automation experts who help manufacturers move from curiosity to confident implementation will be on hand to share real-world experience in robotics, tooling, and integration.
Featured Speakers and Sessions:
Kristian Hulgard, General Manager, Americas, OnRobot – Opening keynote on macrotrends affecting U.S. automation growth, why adoption is accelerating across manufacturing, and what it means for Nevada operations.
Brian La Plante, District Manager, FANUC America – Live demonstrations of FANUC robots, identifying the practical approach to the automation journey.
Marc Magarin, President and Co-Founder, Nevatio Engineering – Learn how the right components, sourcing strategy, and distribution support keep automation projects on track.
Registration, including lunch, is free, but space is limited.
by Gary Mintchell | Mar 9, 2026 | Generative AI, Robots, Software
I walked into my local Starbucks this morning for my usual Doppio Espresso with cinnamon powder. I told my barista I was about to listen to a press conference on “physical AI.” “What do you think that is?” I asked her. “I don’t know. Maybe something like robots?” she countered. She saved me doing a deep dive with my buddy Claude.
The press conference was with ABB Robotics and NVIDIA, announcing an expansion (for a fee) of ABB’s RobotStudio software to incorporate AI models, establishing a new product called RobotStudio HyperReality, coming to a computer near you in a few months.
- ABB Robotics integrates NVIDIA Omniverse libraries into RobotStudio to deliver physical AI for industry, closing the gap from virtual training to real-world deployment with up to 99% accuracy
- New RobotStudio HyperReality, available second half of 2026, will fundamentally change how quickly and reliably manufacturers can scale production, reducing costs by up to 40% and accelerating time-to-market by 50%
- Full range and breadth of industrial applications, with real-world pilot being conducted by Foxconn in consumer electronics assembly
- At NVIDIA GTC, the robotic workforce company WORKR will showcase how it’s using the solution to help manufacturers across the U.S. address critical labor shortages
The collaboration focuses on combining ABB Robotics’ software programming, design and simulation suite, RobotStudio, with the physically accurate simulation power of NVIDIA Omniverse libraries to close technology’s long-standing ‘sim-to-real’ gap. Developers can simulate robots in digital twins and generate synthetic data to train their physical AI models, enabling businesses of all types and sizes to deploy AI-driven robotics for various industrial workflows.
Called RobotStudio HyperReality, the resulting physically accurate simulations and foundation models are continuously optimized with real-world data feedback, improving the system over time. These models can be used to train any number of ABB robots, anywhere in the world, with the reliability and accuracy demanded by industry.
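The loop the two paragraphs above describe, generating synthetic episodes in a digital twin, training on them, and folding real-world feedback back in, can be sketched as follows. This is a toy illustration of the pattern only: the function names, the domain-randomization parameters, and the trivial “training” step are all assumptions, not ABB’s or NVIDIA’s actual pipeline.

```python
import random

def generate_synthetic_episode(rng):
    """Simulated rollout in a digital twin with randomized scene parameters."""
    return {
        "lighting": rng.uniform(0.5, 1.5),   # domain randomization (assumed)
        "friction": rng.uniform(0.8, 1.2),
        "trajectory": [rng.random() for _ in range(5)],
    }

def train_step(model, episodes):
    """Toy 'training': fold the batch of trajectories into the policy."""
    flat = [x for ep in episodes for x in ep["trajectory"]]
    model["policy"] = sum(flat) / len(flat)
    return model

def sim_to_real_loop(n_rounds, episodes_per_round, seed=0):
    """Synthetic data -> train -> (real-world feedback) -> repeat."""
    rng = random.Random(seed)
    model = {"policy": 0.0}
    for _ in range(n_rounds):
        episodes = [generate_synthetic_episode(rng)
                    for _ in range(episodes_per_round)]
        model = train_step(model, episodes)
        # in production, deployment data from real robots would be
        # appended to the next round's training set here
    return model
```

The design point is that the simulator and the trainer sit in one closed loop, so each deployment round narrows the sim-to-real gap rather than requiring a fresh physical commissioning cycle.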
The long-standing deficit between simulation accuracy and real-world lighting, materials and environments is known as the ‘sim-to-real’ gap. For decades, this gap has limited the ability of manufacturers to design and develop advanced manufacturing processes in the virtual world.
By integrating NVIDIA Omniverse libraries into RobotStudio, ABB Robotics will deliver unprecedented robotics simulation and synthetic data generation capabilities that will allow intelligent robots to bridge this gap with up to 99 percent accuracy. ABB is the only robot manufacturer with a virtual controller running the same firmware as the hardware, ensuring near perfect correlation between simulation and real world performance. Combined with ABB Robotics’ Absolute Accuracy technology, which reduces positioning errors from 8–15 mm to around 0.5 mm, ABB delivers unmatched precision in both virtual and physical environments, making it suited to high-precision industrial-grade applications.
ABB Robotics is also assessing the potential to integrate the NVIDIA Jetson edge computing platform into its Omnicore controller to achieve real-time AI inference at the edge for its extensive robot portfolio. Today’s announcement builds upon ABB Robotics’ long-standing work with NVIDIA, including the previous integration of NVIDIA Jetson into ABB Robotics’ VSLAM autonomous mobile robots as well as the development of gigawatt-scale AI data centers.
RobotStudio HyperReality will serve industrial clients at any scale, across a breadth of industries and applications, with select customers already testing its capabilities ahead of a full release to ABB Robotics’ 60,000 RobotStudio customers worldwide in the second half of 2026.
Foxconn, the world’s largest electronics contract manufacturer, is piloting the first joint use case in consumer electronics assembly. Automating the assembly of a tiny piece in consumer electronics is challenging, as multiple device variants require different production methods and the delicate metal structure requires precise pick-and-place and assembly control, as well as fine-tuned setup, often demanding additional debugging time and engineering resources. Using RobotStudio HyperReality, Foxconn’s assembly robots are trained virtually, using synthetic data to perfect multiple real-world production processes in various scenarios, before moving them to the production line with 99 percent accuracy. By optimizing production lines virtually, Foxconn will reduce set-up times and costs by eliminating physical training and tests, and accelerate time-to-market for consumer electronics.
WORKR, a California-based robotic workforce company that delivers robotic manufacturing solutions to industry, is extending the reach of this technology to small and medium manufacturers across the United States. At NVIDIA GTC 2026 (March 16-19, San Jose, CA), WORKR will demonstrate AI-powered robotic systems built on ABB technology, trained with synthetic data using NVIDIA Omniverse libraries, and deployed without operators needing to know any programming. By combining ABB’s industrial grade robotics with its proprietary WorkrCore™ AI platform, the company is helping manufacturers address critical labor shortages with its robotic workforce that can learn new tasks in minutes and be operated by anyone.
by Gary Mintchell | Feb 11, 2026 | Operator Interface, Robots, Technology
Just heard of a company based both in San Francisco and Trondheim, Norway working in the robotic space. The problem it is solving is commanding industrial robots to perform pre-trained tasks without programming. Using AI training specific to the robot and applications, Trener Robotics’ Acteris platform allows operators to talk to robots in their own words to execute pre-trained skills.
I guess it’s inevitable that Alexa and Siri (hopefully better than the Apple version) gain industrial employment.
Trener Robotics announced this week that it has raised a $32 million Series A round of funding. Co-led by existing investor Engine Ventures and new investor IAG Capital Partners with participation from strategic investors Cadence and Geodesic Capital, through Nikon’s NFocus Fund, the new capital brings Trener Robotics’ total funding to over $38 million. It will be used to support training Trener Robotics’ Acteris platform on new industrial robot processes, expanding distribution into new markets, and hiring talent to address rapidly scaling demand.
Unlike brittle, narrowly scripted systems or research-first generalist platforms, Acteris is a practical, shop-floor-proven solution. Trener Robotics’ first focus area is robotic CNC machine-tending with other high-demand applications to follow in 2026. Manufacturers using Acteris gain:
- A groundbreaking agentic user interface that enables robots to be controlled through natural conversation, intuitive task sequences, and high-fidelity simulation. It empowers any user, regardless of robotics expertise, to effortlessly run high-performance robotic applications.
- Part identification and handling even under adverse conditions.
- Optimized robot motions that react to changes, delivering unprecedented robustness.
- Intelligent collision avoidance and enhanced safety features that mimic common sense.
- Real-time production dashboards for performance monitoring.
Trener Robotics has built rapid momentum with more than 15 solution and integration partners across Europe and the U.S. that now provide Acteris-powered turnkey solutions—including the robot, gripper, and software—all pre-integrated and production-ready. Acteris is currently directly compatible with ABB, Universal Robots, and FANUC, with more leading robot brands to follow.
by Gary Mintchell | Jan 9, 2026 | Robots, Technology
I haven’t been to a general technology trade show for years. Going to the manufacturing mecca known as Hannover costs more than I wish to pay for the experience. My company exhibited at the old Comdex in Las Vegas and Chicago back in the late 80s-early 90s. That folded. The Consumer Electronics Show (CES) replaced it as the huge tech show. I’ve never been. I’m not interested in TVs.
It’s still more expense and hassle than I wish to pursue to travel to Las Vegas for a huge show. However, more manufacturing technology companies exhibit there. I probably still won’t make the trip the first week of a new year. I do watch for news.
Something I never thought I’d see from CES was news about industrial robotics. This piece covers a collaboration among Universal Robots, Robotiq, and Siemens. One trend is growing collaboration among companies. Another is digital twin, or what once was called “cyber-physical systems.”
Universal Robots (UR), part of Teradyne Robotics, and Robotiq have unveiled a robotic palletizing solution at CES 2026 in collaboration with Siemens. The joint demonstration in Las Vegas highlights how advanced robotics and digital twin technology can accelerate industrial transformation for manufacturers worldwide.
The solution combines Robotiq’s PAL Ready palletizing cell with Universal Robots’ UR20 robot arm, integrated into Siemens’ automation hardware and new Digital Twin Composer software – launched at the event. Visitors to the Siemens booth #8725 in the LVCC North Hall will experience a digital-meets-physical showcase, where a fully simulated palletizing cell is rendered photo-realistically in real time and paired with a live hardware demonstration.
Designed to support a company’s operational needs, the system palletizes boxes of chips and beverages, leveraging digital twin analytics to optimize gripper performance and suction points dynamically. With data captured using Siemens’ Industrial Edge hardware and then streamed to Siemens’ Insights Hub Copilot, the demonstrator provides real-time insights into cell behavior, reinforcing the theme of ‘digital AI meets physical AI’ and presenting it in a real-time photorealistic environment built using Siemens’ new Digital Twin Composer software.