
Siemens Xcelerator With Microsoft for Edge, Cloud, AI and Simulation

The third of Siemens’ pre-Hannover news releases concerns Xcelerator Edge with Microsoft Azure IoT Operations.

  • Siemens Industrial Edge works seamlessly with Microsoft Azure IoT Operations, making OT and IT data planes fully interoperable for manufacturing
  • Edge and cloud data integration enables adaptive production through AI- and digital-twin-powered solutions
  • Industrial customers benefit from improved machine performance, better product quality and reduced machine maintenance

Siemens announces an extended collaboration with Microsoft in the context of Siemens Xcelerator, Siemens’ open digital business platform, to simplify the integration of information technology (IT) and operational technology (OT) for enterprise customers. By combining Siemens Industrial Edge with Microsoft Azure IoT Operations, customers will benefit from complementary solutions that enable a seamless flow of data from production lines to the edge and to the cloud. This edge-to-cloud data integration enables AI- and digital-twin-powered solutions that improve machine performance and product quality and reduce machine maintenance.

A core component of the Azure adaptive cloud approach, Azure IoT Operations is designed to seamlessly integrate on-premises industrial edge solutions, like Siemens Industrial Edge, with the cloud, ensuring a continuous flow of data for smarter operations.

In this way, the powerful OT data plane provided by Siemens Industrial Edge works easily with Azure IoT Operations, to create an interoperable OT and IT data plane for manufacturing. The data layer from Siemens Industrial Edge effectively addresses mission-critical production applications such as virtualized control, low-latency closed-loop AI, executable digital twins, or production line-level analytics. It allows manufacturers to deploy responsive, reliable, flexible and secure applications to optimize their operations, reduce costs, and increase uptime and quality. By coupling with Azure IoT Operations, industrial producers can easily leverage this OT data in cloud-based, data-driven use cases to optimize production across sites and gain insights from advanced analytics.
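To make the edge-to-cloud data flow concrete, here is a minimal sketch of how an edge application might shape an OT sensor reading into a message for a cloud-facing broker. The asset ID, topic-free payload layout, and field names are all illustrative assumptions, not Siemens or Microsoft APIs:

```python
import json
from datetime import datetime, timezone

def build_telemetry(asset_id: str, readings: dict) -> str:
    """Shape an OT sensor reading into a JSON message an edge app
    might publish toward the cloud. The schema is an assumption for
    illustration, not the Siemens Industrial Edge or Azure format."""
    message = {
        "assetId": asset_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "readings": readings,
    }
    return json.dumps(message)

# Example: one vibration/temperature sample from a hypothetical CNC machine
payload = build_telemetry("line1/cnc-07", {"vibration_mm_s": 2.4, "temp_c": 61.5})
print(payload)
```

In a real deployment, a message like this would be published to the broker on the edge and picked up by cloud analytics, which is the interoperable OT/IT data plane the announcement describes.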

Siemens Expands Industrial Copilot with New Generative AI-Powered Maintenance Offering

This is the second of four Siemens news items. In the vein of “everyone in industrial software is Microsoft’s best friend,” Copilot headlines this news. And no news today is complete without mentioning generative AI.

  • The Siemens Industrial Copilot, a generative AI-based assistant, is empowering customers across the entire value chain – from design and planning to engineering, operations, and services
  • Siemens expands its Industrial Copilot offering with extended capabilities for Senseye Predictive Maintenance
  • The generative AI-powered solution will support every stage of the maintenance cycle, from repair and prevention to prediction and optimization

A glimpse of Siemens’ AI strategy:

The Siemens Industrial Copilot is revolutionizing industry by enabling customers to leverage generative AI across the entire value chain – from design and planning to engineering, operations, and services. For example, the generative AI-powered assistant empowers engineering teams to generate code for programmable logic controllers using their native language, speeding up SCL code generation by an estimated 60% while minimizing errors and reducing the need for specialized knowledge. This in turn reduces development time and boosts quality and productivity over the long term.

Siemens is developing a full suite of copilots to industrial-grade standards for the discrete and process manufacturing industries – and is now strengthening its Industrial Copilot offerings with the launch of an advanced maintenance solution, designed to redefine industrial maintenance strategies.

Bringing it to maintenance

The Senseye Predictive Maintenance solution powered by Microsoft Azure will be extended with two new offerings:

  • Entry Package: This predictive maintenance solution combines AI-powered repair guidance with basic predictive capabilities. It helps businesses transition from reactive to condition-based maintenance by offering limited connectivity for sensor data collection and real-time condition monitoring. With AI-assisted troubleshooting and minimal infrastructure requirements, companies can reduce downtime, improve maintenance efficiency, and lay the foundation for full predictive maintenance.
  • Scale Package: Designed for enterprises looking to fully transform their maintenance strategy, this package integrates Senseye Predictive Maintenance with the full Maintenance Copilot functionality. It enables customers to predict failures before they happen, maximize uptime, and reduce costs with AI-driven insights. Offering enterprise-wide scalability, automated diagnostics, and sustainable business outcomes, this solution helps companies move beyond traditional maintenance, optimizing operations across multiple sites while supporting long-term efficiency and resilience.
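The move from reactive to condition-based maintenance that the Entry Package describes can be illustrated with a toy threshold check on sensor data. The rolling-mean approach, window size, and vibration limit below are illustrative assumptions; Senseye’s actual models are far more sophisticated:

```python
from statistics import mean

def condition_alert(samples, window=5, limit=4.0):
    """Flag a maintenance condition when the rolling mean of the last
    `window` vibration samples (mm/s) exceeds `limit`. A toy sketch of
    condition-based monitoring; thresholds here are assumptions."""
    if len(samples) < window:
        return False  # not enough data for a stable estimate
    return mean(samples[-window:]) > limit

healthy = [2.1, 2.3, 2.2, 2.4, 2.2, 2.3]
degrading = healthy + [4.8, 5.1, 5.4, 5.9, 6.2]
print(condition_alert(healthy))    # False — vibration within limits
print(condition_alert(degrading))  # True — rising trend triggers the alert
```

The point of the sketch is the shift in posture: instead of repairing after failure, the system watches a condition signal and raises work before the asset breaks.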

A3 Expands Event Lineup with FOCUS: Intelligent Vision & Industrial AI Conference

Just in from The Association for Advancing Automation (A3) about a timely new conference. I’m not sure I can make it to Seattle for this one, but it looks like a good place to explore emerging topics.

The Association for Advancing Automation (A3), the leading voice in automation and robotics, today announced the launch of a new industry event, FOCUS: Intelligent Vision & Industrial AI Conference. Set to take place September 24-25, 2025, in Seattle, this conference will provide an in-depth look at the latest advancements in machine vision, imaging technologies, AI, and smart automation applications. Attendees will explore cutting-edge innovations in vision systems and imaging while also diving into real-world case studies on AI-driven automation across industries, including manufacturing, aerospace, agriculture, defense, energy, logistics and medical devices.

With AI-powered automation and vision systems rapidly improving quality control, predictive maintenance, and robotics capabilities, industrial leaders need actionable insights to stay ahead of the curve. Unlike broader industry conferences, the FOCUS: Intelligent Vision & Industrial AI conference will center specifically on the real-world applications of AI and vision technology, featuring expert-led sessions, in-depth case studies, and hands-on technology showcases.

Registration opens soon. Visit the FOCUS 2025 page and subscribe for updates to be among the first to know when registration goes live.

Honeywell Unveils AI Assistant For Industrial Operators

AI assistants are the new entry ticket for software developers. A tech writer I’ve followed for years recently posted a worry that Microsoft might bring back “Clippy” in AI guise. I’m glad to see Honeywell joining the trend; the company has been quiet for some time across its various divisions. The announcement that the company will be split into parts makes this news both poignant and essential.

These generative AI-based assistants are becoming an essential ingredient of the user interface as a new generation of operators and engineers enters the workforce.

Honeywell announced the latest release of Honeywell Forge Production Intelligence, which seamlessly integrates performance monitoring with a new generative AI assistant to help operators and production managers automate tasks and troubleshoot problems.

By leveraging advanced generative AI models, the platform’s new Intelligent Assistant is designed to enhance user experience by allowing engineers, plant managers and business leaders to access key insights through simple, natural language prompts. The tool will also enable industrials to visualize, trend, and troubleshoot production issues from Key Performance Indicator (KPI) deviation contributors and asset relationships.

The cloud-native platform merges performance monitoring with advanced analytics, enabling rapid root cause analysis of production issues. With the addition of the Intelligent Assistant, users can now summarize deviations and overall insights quicker and more effectively. The capability not only enhances AI insights with greater explainability and usability but also supports closed-loop collaboration workflows with case management integration.
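The idea of summarizing a KPI deviation by its contributors can be sketched in a few lines. The factor names, units, and flat contribution schema below are hypothetical for illustration and are not the Honeywell Forge data model:

```python
def rank_contributors(kpi_target, kpi_actual, contributions):
    """Rank the factors behind a KPI deviation. `contributions` maps a
    factor name to its share of the shortfall, in the KPI's own units.
    The schema is an assumption for illustration only."""
    deviation = kpi_target - kpi_actual
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return deviation, ranked

# Hypothetical throughput KPI: 45 units/h short of plan
dev, ranked = rank_contributors(
    kpi_target=950.0,
    kpi_actual=905.0,
    contributions={"feed pump A": 25.0, "reactor temp drift": 15.0, "packer jam": 5.0},
)
print(dev)        # 45.0
print(ranked[0])  # ('feed pump A', 25.0) — the biggest contributor
```

An assistant layered on top of a ranking like this can then answer a natural-language prompt such as “what drove today’s throughput miss?” with the top contributors rather than raw trends.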

Honeywell Forge Production Intelligence is part of Honeywell’s recently announced suite of AI-enabled solutions for industrials which also includes Experion Operations Assistant and Field Process Knowledge System.

Deepgram Achieves Key Milestone Toward Delivering Next-Gen, Enterprise-Grade Speech-to-Speech Architecture

While the tech giants are wrestling with their speech AI products, Deepgram seems to be delivering useful products for developers in a variety of applications. Following my last post about the company comes this news about speech-to-speech technology without intermediate text.

Deepgram announced a significant technical achievement in speech-to-speech (STS) technology for enterprise use cases. The company has successfully developed a speech-to-speech model that operates without relying on text conversion at any stage, marking a pivotal step toward the development of contextualized end-to-end speech AI systems. This milestone will enable fully natural and responsive voice interactions that preserve nuances, intonation, and emotional tone throughout real-time communication. When fully operationalized, this architecture will be delivered to customers via a simple upgrade from Deepgram’s existing industry-leading architecture. By adopting this technology alongside Deepgram’s full-featured voice AI platform, companies will gain a strategic advantage, positioning themselves to deliver cutting-edge, scalable voice AI solutions that evolve with the market and outpace competitors.

Existing speech-to-speech (STS) systems are based on architectures that process speech through sequential stages, such as speech-to-text, text-to-text, and text-to-speech. These architectures have become the standard for production deployments for their modularity and maturity, but eliminating text as an intermediary offers opportunities to improve latency and better preserve emotional and contextual nuances.
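The cascaded architecture described above, and what it loses, can be shown with stub stages standing in for the real models. Everything here is a toy: the dict "audio" format and the stage behaviors are assumptions for illustration:

```python
def cascaded_sts(audio_in):
    """Toy STS cascade. Only text passes between stages, so any
    prosody in `audio_in` (pitch, pacing, emotion) is discarded at
    the STT step — the limitation end-to-end models aim to remove."""
    def stt(audio):   # speech-to-text: keeps only the words
        return audio["words"]
    def llm(text):    # text-to-text: composes a reply
        return f"Reply to: {text}"
    def tts(text):    # text-to-speech: synthesizes with default delivery
        return {"words": text, "prosody": "default"}
    return tts(llm(stt(audio_in)))

out = cascaded_sts({"words": "where is my order?", "prosody": "anxious"})
print(out["prosody"])  # 'default' — the caller's anxious tone never survived
```

Each stage also adds its own latency, which is the other cost of the sequential design the article mentions.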

Meanwhile, multimodal LLMs like Gemini, GPT-4o, and Llama have evolved beyond text-only capabilities to accept additional inputs such as images, videos, and audio. However, despite these advancements, they struggle to capture the fluidity and nuance of human-like conversation. These models still rely on a turn-based framework, where audio input is tokenized and processed within a textual domain, restricting real-time interactivity and expressiveness.

To advance the frontier of speech AI, Deepgram is setting the stage for end-to-end STS models, which offer a more direct approach by converting speech to speech without relying on text. Recent research on speech-to-speech models, such as Hertz and Moshi, has highlighted the significant challenges in developing models that are robust and reliable enough for enterprise use cases. These difficulties stem from the inherent complexities of modeling conversational speech and the substantial computational resources required. Overcoming these hurdles demands innovations in data collection, model architecture, and training methodologies.

Deepgram is transforming speech-to-speech modeling with a new architecture that fuses the latent spaces of specialized components, eliminating the need for text conversion between them. By embedding speech directly into a latent space, Deepgram ensures that important characteristics such as intonation, pacing, and situational and emotional context are preserved throughout the entire processing pipeline. What sets Deepgram apart is its approach to fusing the hidden states—the internal representations that capture meaning, context, and structure—of each individual function: Speech-to-Text (STT), Large Language Model (LLM), and Text-to-Speech (TTS). This fusion is the first step toward training a single, controllable, true end-to-end speech model, enabling seamless processing while retaining the strengths of each best-in-class component. This breakthrough has significant implications for enterprise applications, facilitating more natural conversations while maintaining the control and reliability businesses require.

One of the requirements in enterprise speech-to-speech modeling is the ability to understand and troubleshoot each step of the process. This is particularly challenging when text conversion between steps isn’t involved, as verifying both the accuracy of the initial perception and the alignment of the spoken output with the intended response is not straightforward. Deepgram recognized this need and addressed it by designing a new architecture that enables debuggability throughout the entire process.

AI-Powered Asset Performance Management

AI figures in at least half of the news in my area of interest. No surprise, then, that Yokogawa has harnessed some AI expertise for its Asset Performance Management.

Yokogawa Electric and UptimeAI announced a strategic agreement aimed at enhancing asset performance management in industrial plants. The agreement is underscored by a capital investment in UptimeAI by Yokogawa.

Under the agreement, the companies will integrate UptimeAI’s AI-powered platform into Yokogawa’s OpreX Asset Health Insights service. The combined solution will provide customers in the oil and gas, chemicals, cement, power, and renewable energy industries with a seamless and powerful approach to optimize plant operations, reliability, and maintenance.

Specifically, the bundled offering will merge the capabilities of OpreX Asset Health Insights as an OT/IT data enablement engine with UptimeAI’s flagship modules, “AI Expert: Generative AI” and “AI Expert: Reliability & Process,” bringing advanced LLM-based AI agents, subject matter knowledge, self-learning workflows, maintenance analysis, and industrial asset library models into a comprehensive AI assistant for plant operators. This solution will enable users to achieve a significant positive return on investment in a short period of time by reducing maintenance and operational costs with predictive insights, root cause analysis, and recommendations driven by automated learning processes.
