by Gary Mintchell | Apr 13, 2026 | Data Management, Enterprise IT, Manufacturing IT, Operations Management
I wrote a (sort of) long post Friday laying out strategy and practice definitions of Smart Manufacturing. I used Claude.ai for the research. I also wanted to see what Claude would write if I told it to pull all the research together into an essay in the style of The Manufacturing Connection.
It did write—3,000 words.
What did I discover about the process?
- I asked for citations; Claude provided several
- With every question, Claude was always most agreeable, never questioning my request but proceeding to tell me a story about the new research
- When I asked about writing in my style, Claude was most complimentary
- When I asked about holarchy of holons as a philosophical model, it interestingly returned the Purdue Enterprise Reference Architecture, aka The Pyramid model (without citing it)
- It did what I asked as a loyal copy editor, not as a collaborator
- On another project, where I had received a press release disguised as an article, Claude identified that the cited example was actually not relevant to the point and provided an alternative example, which is leading to further research on the subject; it can be helpful
Smart Manufacturing
Smart Manufacturing is a continuing evolution of better data for improved management with smoother processes in manufacturing.
The head of the product center of the manufacturing company where I worked in 1975 picked me (among other tasks) to become the czar of data. My task (and I chose to accept it) was to verify the accuracy of all data generated by product development and to provide it in the correct, usable format to the various consumers—in my case, manufacturing operations, costing, procurement, and accounting.
By 1976, we were exploring how we could use the IBM System/3 minicomputer the company owned to help with this task. I believe this is called digitalization 😉
Fifty years later, I’ve witnessed the explosion of digital technology—sensors, networks, compute power, edge, IT applications like containers and databases, data science. Now CESMII wants to provide an open standard API to help connect all this (something its predecessor the SMLC proposed a decade ago).
Smart Manufacturing is not a thing—it’s a journey!
by Gary Mintchell | Apr 2, 2026 | Enterprise IT, Manufacturing IT
AI may be the sexy new kid on the block, but high availability servers still keep business running. This webinar from DH2i discusses its newest releases and how to ensure SQL Server high availability across Windows, Linux, and Kubernetes.
This demo-driven event on April 16 at 12:00pm EDT is intended to provide IT teams with a practical, real-world look at how to simplify and strengthen Microsoft SQL Server high availability across increasingly complex, multi-platform environments.
Details:
- SQL Server K8s scale-up AND scale-down automation
- Granular database-level monitoring with more predictable and reliable failover
- Seamless integration with K8s StatefulSets for streamlined pod management
- Optimized security & performance for heterogeneous environments
Click on the Follow button at the bottom of the page to subscribe to a weekly email update of posts. Click on the mail icon to subscribe to additional email thoughts.
by Gary Mintchell | Mar 27, 2026 | Enterprise IT, Security
The accumulation, retention, and analysis of data continue to provide an important foundation for digital transformation, as well as a threat vector for malicious hackers. News from companies combating the problem forms a core of any coverage these days. This news contains details about the launch of another IT solution.
DH2i, a leading provider of always-secure and always-on IT solutions, announced the general availability (GA) launch of DxEnterprise v26.0 and DxOperator v2, featuring enhancements to high availability (HA), disaster recovery (DR), and operational resilience capabilities for SQL Server deployments across Windows, Linux, and Kubernetes environments. Together, the releases introduce meaningful advances in availability group (AG) protection, security controls, observability, and automation for both traditional and containerized SQL Server deployments.
In today’s enterprises, a perfect storm has emerged where applications have become direct revenue channels, infrastructure complexity has increased while IT staffing has not, modernization initiatives are no longer optional, security and compliance requirements are tightening, and software update velocity has accelerated. Together, these forces expose the limits of traditional HA approaches. What once worked for small, static clusters no longer scales when SQL Server deployments span hybrid, multi-platform, and containerized environments that demand continuous availability, stronger safeguards, and higher levels of automation. DxEnterprise v26.0 and DxOperator v2 address these challenges head-on.
DxEnterprise v26.0 focuses on improving cluster resilience, visibility, and administrative confidence through enhanced monitoring, stronger safeguards against split-brain scenarios, expanded credential support, and platform modernization. DxOperator v2 extends those capabilities into Kubernetes environments, giving users greater control over scale, updates, and network configuration for SQL Server AGs running in containers.
What’s New in DxEnterprise v26.0
- Deeper SQL Server and Availability Group Intelligence
  - Database-level health monitoring is now enabled by default, allowing faster detection of issues affecting individual databases within an AG
  - Split-brain scenarios are prevented via automatic per-availability-group quorum enforcement by demoting or shutting down replicas when quorum requirements are not met
  - Improved replica connectivity alerts provide real-time notification when replicas disconnect or when SQL Server replica configurations diverge from expected cluster state
- Improved Security and Credential Resilience
  - Support for secondary SQL Server backup credentials enables automatic fallback if primary authentication fails, reducing downtime caused by credential changes or expirations
  - Administrative sessions are automatically disconnected when the cluster passkey changes, ensuring only authorized users with current credentials retain access
  - The DxAdmin user interface now includes clearer prompts, stronger validation, and improved feedback for passkey configuration
- Greater Stability and Observability
  - Core monitoring services, including DxLMonitor, DxCMonitor, DxStorMonitor, and DxHealthMonitor, have received reliability and stability improvements to reduce unexpected restarts and improve overall cluster resilience
  - Basic anonymous telemetry is now available to help improve product quality and diagnostics, with opt-out configuration for customers who prefer not to participate
- Platform and Usability Enhancements
  - DxEnterprise’s Linux version now runs on the .NET 8.0 runtime, delivering improved performance, security, and long-term support alignment
  - Virtual hosts can now be renamed using a new rename-vhost command, simplifying cluster management and reorganization
  - Additional safeguards prevent accidental overwriting of existing data stores during SQL Server high availability virtualization
  - Enhancements to DxCLI and DxPS improve command-line usability, including human-readable XML output and new PowerShell cmdlets
  - The DxCollect utility now includes expanded command-line options for more targeted diagnostics and log collection
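The per-availability-group quorum idea can be sketched generically. The sketch below is an illustrative model of majority-quorum enforcement, not DH2i's actual implementation: a primary replica stays writable only while it can see a majority of its AG peers, otherwise it demotes itself, so two sides of a network partition can never both accept writes.

```python
# Illustrative sketch of per-availability-group quorum enforcement.
# Hypothetical helper names and logic; not DH2i's actual implementation.

def has_quorum(visible_replicas: int, total_replicas: int) -> bool:
    """A partition retains quorum only with a strict majority of replicas."""
    return visible_replicas > total_replicas // 2

def next_role(current_role: str, visible: int, total: int) -> str:
    """Demote a primary that loses quorum so a split-brain cannot form."""
    if current_role == "primary" and not has_quorum(visible, total):
        return "demoted"  # stop accepting writes; shut down or rejoin later
    return current_role

# In a 3-replica AG split 2/1, only the majority side keeps its primary:
# next_role("primary", 2, 3) -> "primary"
# next_role("primary", 1, 3) -> "demoted"
```

The strict-majority test is what makes the rule safe: at most one side of any partition can hold a majority, so at most one primary survives.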
What’s New in DxOperator v2
- Flexible Scaling Up and Down
  - Availability group clusters can now be expanded or reduced dynamically
  - Unlike the previous version, DxOperator v2 can safely de-configure and remove replicas from a running cluster, enabling true scale-down operations
- Automated Rolling Updates
  - Administrators can automate rolling updates of SQL Server or DxEnterprise container images, allowing pods to be updated one at a time without manual intervention
  - Updates can also be performed manually when desired, giving operators full control over rollout strategy
  - DxOperator does not automatically check for new container versions, ensuring that administrators remain in control of when and how updates are applied
- Advanced Network and Service Configuration
  - Flexible service templates allow load balancers and other network services to be fully specified and automatically deployed per availability group replica
  - This enables more consistent connectivity across different Kubernetes environments and cloud providers
- Redesigned Custom Resource and StatefulSet Adoption
  - The custom resource definition has been redesigned for greater flexibility and now leverages Kubernetes StatefulSets
  - By delegating pod creation, storage allocation, and rolling upgrades to Kubernetes, DxOperator v2 simplifies internal logic while benefiting from native Kubernetes reliability and lifecycle management
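The one-pod-at-a-time rollout pattern described above can be sketched in a few lines. This is a generic model of the technique, with hypothetical helper names; it is not DxOperator's code, and in practice Kubernetes StatefulSets handle the ordering, storage, and restart mechanics:

```python
# Generic sketch of a one-pod-at-a-time rolling update, the pattern that
# DxOperator v2 and Kubernetes StatefulSets automate. Hypothetical helper
# names (update_pod, is_healthy); not the actual operator code.

from typing import Callable, List

def rolling_update(pods: List[str],
                   update_pod: Callable[[str], None],
                   is_healthy: Callable[[str], bool]) -> List[str]:
    """Update each pod in turn, halting if an updated pod fails its health check."""
    updated = []
    for pod in pods:
        update_pod(pod)          # pull the new container image, restart the pod
        if not is_healthy(pod):  # wait for readiness before touching the next pod
            raise RuntimeError(f"{pod} unhealthy; halting rollout")
        updated.append(pod)
    return updated
```

Stopping at the first unhealthy pod is the key property: a bad image takes down one replica, not the whole availability group.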
by Gary Mintchell | Dec 2, 2025 | Data Management, Enterprise IT, Generative AI, Internet of Things, Operations Management
User studies remain one of the primary ways software companies can gain insight and achieve some public recognition. Most of the studies emanate from cybersecurity protection developers. This one comes from a software company with which I’ve had little contact. There was a woman I knew from one company who came to SAS for a while. We had occasional conversations before she left SAS.
SAS develops software applications. I’ve never had a handle on its business. It now bills itself as a global leader in data and AI. This study was conducted by the research firm IDC. And we have the acronym AIoT—or the convergence of AI and IoT. Somehow I feel that concatenating acronyms is the beginning of the end times 😉
Key findings from the IDC InfoBrief, How AIoT Is Reshaping Industrial Efficiency, Security, and Decision-Making, sponsored by SAS, include:
This one should surprise no one. Everyone discusses predictive maintenance.
Predictive maintenance dominates current AIoT use. Nearly 71% of organizations use AIoT for predictive maintenance, the most widely adopted use for manufacturing/industrial and energy companies surveyed. IT automation (53%) and supply and logistics (47%) were the next most cited uses for AIoT.
Executives continue to dream of significant cost reductions from AI.
AIoT drives tangible business value. 54% of respondents anticipate major cost savings, 52% predict smarter and faster innovation, and 49% expect streamlined operations from their investment in AIoT. Additionally, 63% believe AIoT will boost productivity and competitiveness.
Managers continue to see AI as an aid to overcome the current skills gap of employees.
Skills gap emerges as the top challenge. The skills gap is the biggest barrier to AIoT success, outpacing legacy system integration and data quality issues as the most significant roadblock. Other challenges include high implementation costs, business process misalignment and cultural resistance. Addressing these issues is essential to unlocking AIoT’s full potential.
Some actually use the technology!
Heavy AIoT users see greater value. Organizations using AIoT heavily are twice as likely to report benefits that significantly exceed expectations as those that only use the technology sparingly. Strikingly, less than 3% say the value of AIoT “did not meet expectations.”
The IDC research is based on a global survey of more than 300 industrial executives in the manufacturing and energy industries.
And from the company:
SAS IoT solutions combine AI, machine learning and edge-to-cloud integration, enabling analysis of high-volume, high-velocity data. And joining AI with these IoT solutions extends the value of existing infrastructure investments and digitally transforms the workforce by shifting from manual oversight to intelligent orchestration.
Other organizations benefiting from SAS IoT and streaming analytics for improved asset reliability, enhanced product quality and increased efficiency across connected systems include:
- Georgia-Pacific
- Jakarta Smart City
- Lloyd’s List
- Lockheed Martin
- Town of Cary (North Carolina)
- Volvo Trucks and Mack Trucks
- wienerberger
by Gary Mintchell | May 12, 2025 | Enterprise IT, Generative AI
We have passed through the valley of the shadow of the Large Language Model version of AI. Now we have moved up a level to the gorge of Agentic AI. I believe I’ve written about three posts on that subject. Here is another company unveiling Agentic AI solutions.
Akka, the leader in helping enterprises deliver distributed systems that are elastic, agile, and resilient, announced new deployment options for its Akka solution, as well as new solutions to tackle the issues with deploying large-scale agentic AI systems for mission-critical applications. Already the standard for building resilient and elastic distributed systems with industry leaders like Capital One, John Deere, Tubi, Walmart, Swiggy, and many others, Akka now also gives enterprises unprecedented freedom to deploy Akka-based applications on the infrastructure of their choice. For the first time, developers now have two new options that enable them to leverage Akka to build distributed systems at scale and self-host their application or deploy their application across multiple regions automatically.
“Agentic AI has become a priority with enterprises everywhere as a new model that has the potential to replace enterprise software as we understand it today,” said Tyler Jewell, Akka’s CEO. “With today’s announcement, we’re making it easy for enterprises to build their distributed systems, including agentic AI deployments, without having to commit to Akka’s Platform. Now, enterprise teams can quickly build scalable systems locally and run them on any infrastructure they want.”
The agentic shift requires a fundamental architectural change from transaction-centered to conversation-centered systems. Traditional SaaS applications are built on stateless business logic executing CRUD operations against relational databases. In contrast, agentic services maintain state within the service itself and store each event to track how the service reached its current state.
As a result, developer teams experience very unpredictable behavior, limited planning and memory impacting agent effectiveness, hard failures at scale, opaque decision-making with zero transparency, and, perhaps most importantly, significant cost and latency concerns.
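The conversation-centered model described above is essentially event sourcing: rather than overwriting rows with CRUD operations, the service appends every event and derives its current state by replaying the log. A minimal sketch of that generic pattern (not the Akka SDK's actual API; the session and event names are hypothetical):

```python
# Minimal event-sourcing sketch of a conversation-centered service.
# Generic pattern for illustration; not the Akka SDK's actual API.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AgentSession:
    events: List[dict] = field(default_factory=list)  # append-only event log

    def record(self, event: dict) -> None:
        """Store the event itself; prior state is never overwritten."""
        self.events.append(event)

    @property
    def state(self) -> dict:
        """Current state is derived by replaying every recorded event."""
        s = {"messages": 0, "last_intent": None}
        for e in self.events:
            s["messages"] += 1
            s["last_intent"] = e.get("intent", s["last_intent"])
        return s

session = AgentSession()
session.record({"role": "user", "intent": "quote_request"})
session.record({"role": "agent", "intent": "quote_sent"})
# session.state -> {"messages": 2, "last_intent": "quote_sent"}
```

Because the log records how the service reached its current state, every decision can be audited and replayed, which addresses the opacity and debugging problems the paragraph describes.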
Today, Akka has introduced two new deployment capabilities:
- Self-managed Akka nodes – You can now run clusters of services that were built with the Akka SDK on any cloud infrastructure. The new version of the Akka SDK includes a self-managed build option that will create services that can be executed stand-alone. Your services are binaries packaged in Docker images that can be deployed in any container PaaS, on bare-metal hardware, VMs, edge nodes, or Kubernetes, without any Akka infrastructure or Platform dependencies. Your nodes have Akka clustering built in.
- Self-hosted Akka Platform regions – Teams can now run their own Akka Platform region without any dependency on Akka.io control planes. Services built with the Akka SDK have always been deployable onto Akka Platform, with Akka providing managed services through the company’s Akka Serverless and Akka BYOC offerings. Akka Platform provides fully automated operations, relieving admins of more than 30 maintenance, security, and observability duties. Both Serverless and BYOC federate multiple regions together using an Akka control plane hosted at Akka.io.
In contrast, self-hosted regions are Akka Platform regions with no Akka control plane dependency, which teams will install, maintain, and manage on their own. Self-hosted regions can be installed in any data center with orchestration, proxy, and infrastructure dependencies specified by Akka. Since Akka Platform is updated many times each week, the installation of self-hosted regions is executed in cooperation with Akka’s SRE team to ensure stability and consistency of a customer environment.
Akka, formerly known as Lightbend, is relied upon by industry titans and disruptors to build and run distributed applications that are elastic, agile, and guaranteed resilient.
by Gary Mintchell | Jan 28, 2025 | Cloud, Enterprise IT
I wrote about a company new to me from The Netherlands thanks to a media relations person I’ve known for quite some time. Leaseweb Global provides cloud services and Infrastructure as a Service. The company is back with an announcement that it is adding NVIDIA L4, L40S, and H100 NVL GPUs to its infrastructure portfolio.
Through this offering, Leaseweb notes it is meeting the compute needs of a wide variety of sectors – including the Artificial Intelligence (AI), Media & Entertainment and Gaming industries – at a price point that enables significant cost savings when compared to the wider marketplace.
Available across Leaseweb’s entire global network, spanning the European, North American and Asia Pacific regions, the expanded GPU offering supports customers with a scalable, efficient deployment framework optimized for high-performance computing (HPC), ranging from AI model training and video analytics to graphics processing and video rendering functionality. Leaseweb’s new NVIDIA GPU solution aims to help customers improve their operations, reduce costs, and enhance computational speed for demanding workloads. The announcement also underlines Leaseweb’s commitment to meeting the demand for powerful infrastructure solutions with industry benchmark performance chips that can be deployed within hours to ensure high availability service provision.
This marks the next step in Leaseweb’s journey to providing a complete AI offering for its customers, which will include integration into Leaseweb’s public cloud and broader set of infrastructure solutions. By providing a comprehensive, scalable solution for a wide variety of workloads, Leaseweb is reinforcing its position as a trusted partner for organizations focused on balancing price with performance and availability. With further plans to integrate this offering into its broader solutions suite, the company is strongly positioned to become a leading provider of GPU infrastructure, supporting customers as they invest in these transformational technologies.
“This announcement represents an important step for customers where GPU availability is increasingly important and will give organizations around the world the price/performance flexibility they need, as soon as they need it,” commented Liat Mendelson Honderdors, Principal Product Manager, AI and GPU at Leaseweb.
“Our customers value Leaseweb’s extensive industry expertise as they plan and deploy infrastructure for their most processor-intensive workloads. With considerations ranging from price and performance to data sovereignty and compliance, Leaseweb’s solutions and state-of-the-art global network means we are ideally suited to helping our customers grow their business and expand into new markets, even at hours’ notice. By incorporating best-in-class NVIDIA technology into our infrastructure portfolio, we’re laying the foundation for a broader solution set that will continue to evolve with customer needs,” Mendelson Honderdors concluded.