
I met a new neighbor the other day. We talked a bit about what we had done in our prior working lives. It turns out she has a friend who gave her a copy of his book, Software Test Attacks to Break Mobile and Embedded Devices by Jon Duncan Hagar, and she loaned it to me to read. It's 10 years old, but it seems quite contemporary (not that I've done any embedded systems programming in decades). The book is also thorough.

After I read through it, this press release about yet another report from a security company dropped into my mailbox. If reports like this don't scare you into taking action on software security, their authors have overestimated their impact. Using AI as a programming assistant is all the rage right now. Reports indicate that there are good uses, but also that you had best not ship AI-generated code as your final build.

This 2025 report investigates AI adoption and the security of AI-generated code in critical embedded systems. It is certainly timely.

RunSafe Security, a pioneer of cyberhardening technology for embedded systems across critical infrastructure, announced the release of its 2025 report, AI in Embedded Systems: AI Is Here. Security Isn’t. The report is a snapshot of how artificial intelligence (AI) usage is unfolding across embedded software development and provides insights into what the data means for engineering, product, and security leaders who are integrating AI into their workflows.

Surveying more than 200 professionals throughout the US, UK, and Germany who work on embedded systems in critical infrastructure, the report reveals that AI-generated code is already running in production across medical devices, industrial control systems, automotive platforms, and energy infrastructure. The report finds that AI has quickly moved from an experimental curiosity to an operational reality in embedded systems development. While adoption races forward, security concerns loom large. 

Here follows the obligatory quote.

“AI will transform embedded systems development with teams deploying AI-generated code at scale across critical infrastructure, and we see this trend accelerating,” said Joseph M. Saunders, Founder and CEO of RunSafe Security. “Our report reveals an industry at an inflection point, where transformation is happening faster than security practices have evolved. Organizations that navigate it successfully will be those that maintain the same rigor with AI-generated code that they’ve traditionally applied to human-written code while also recognizing that AI introduces new patterns, risks, and security requirements. At RunSafe Security, we provide greater visibility into software and risk so organizations can properly manage their security while deploying AI in embedded systems.”

RunSafe Security’s report highlights the following key findings:

  • AI is already widely used in embedded software development workflows:
      • 80.5% of respondents currently use AI tools in embedded development
      • 83.5% have deployed AI-generated code to production systems
      • 93.5% expect usage to increase over the next two years
  • Risk from AI-generated code is widely recognized, but framed as manageable if organizations modernize:
      • 53% of respondents cited security as their top concern with AI-generated code
      • 73% rated cybersecurity risk as moderate or higher
  • Runtime resilience is a central pillar of embedded security:
      • Runtime protection for AI-generated embedded software is rated “highly important” by most respondents
      • 91% of respondents plan to increase investment in embedded software security over the next two years
      • 60% already use runtime protections to address memory safety vulnerabilities

