
High Availability, Disaster Recovery, and Operational Resilience Across SQL Server Environments

The accumulation, retention, and analysis of data continues to provide an important foundation for digital transformation, as well as a threat vector for malicious hackers. News from companies combatting the problem forms a core of any coverage these days. This post contains details about the launch of another IT solution.

DH2i, a leading provider of always-secure and always-on IT solutions, announced the general availability (GA) of DxEnterprise v26.0 and DxOperator v2, featuring enhancements to high availability (HA), disaster recovery (DR), and operational resilience capabilities for SQL Server deployments across Windows, Linux, and Kubernetes environments. Together, the releases introduce meaningful advances in availability group (AG) protection, security controls, observability, and automation for both traditional and containerized SQL Server deployments.

In today’s enterprises, a perfect storm has emerged where applications have become direct revenue channels, infrastructure complexity has increased while IT staffing has not, modernization initiatives are no longer optional, security and compliance requirements are tightening, and software update velocity has accelerated. Together, these forces expose the limits of traditional HA approaches. What once worked for small, static clusters no longer scales when SQL Server deployments span hybrid, multi-platform, and containerized environments that demand continuous availability, stronger safeguards, and higher levels of automation. DxEnterprise v26.0 and DxOperator v2 address these challenges head-on.

DxEnterprise v26.0 focuses on improving cluster resilience, visibility, and administrative confidence through enhanced monitoring, stronger safeguards against split-brain scenarios, expanded credential support, and platform modernization. DxOperator v2 extends those capabilities into Kubernetes environments, giving users greater control over scale, updates, and network configuration for SQL Server AGs running in containers.

What’s New in DxEnterprise v26.0 

  • Deeper SQL Server and Availability Group Intelligence
      • Database-level health monitoring is now enabled by default, allowing faster detection of issues affecting individual databases within an AG
      • Split-brain scenarios are prevented via automatic per-availability-group quorum enforcement, which demotes or shuts down replicas when quorum requirements are not met
      • Improved replica connectivity alerts provide real-time notification when replicas disconnect or when SQL Server replica configurations diverge from expected cluster state
  • Improved Security and Credential Resilience
      • Support for secondary SQL Server backup credentials enables automatic fallback if primary authentication fails, reducing downtime caused by credential changes or expirations
      • Administrative sessions are automatically disconnected when the cluster passkey changes, ensuring only authorized users with current credentials retain access
      • The DxAdmin user interface now includes clearer prompts, stronger validation, and improved feedback for passkey configuration
  • Greater Stability and Observability
      • Core monitoring services, including DxLMonitor, DxCMonitor, DxStorMonitor, and DxHealthMonitor, have received reliability and stability improvements to reduce unexpected restarts and improve overall cluster resilience
      • Basic anonymous telemetry is now available to help improve product quality and diagnostics, with opt-out configuration for customers who prefer not to participate
  • Platform and Usability Enhancements
      • DxEnterprise’s Linux version now runs on the .NET 8.0 runtime, delivering improved performance, security, and long-term support alignment
      • Virtual hosts can now be renamed using a new rename-vhost command, simplifying cluster management and reorganization
      • Additional safeguards prevent accidental overwriting of existing data stores during SQL Server high availability virtualization
      • Enhancements to DxCLI and DxPS improve command-line usability, including human-readable XML output and new PowerShell cmdlets
      • The DxCollect utility now includes expanded command-line options for more targeted diagnostics and log collection
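DH2i does not publish the internals of its quorum logic, but the per-availability-group enforcement described above follows the familiar majority-quorum pattern, which can be sketched as follows (all names and the decision logic here are illustrative, not DxEnterprise's actual implementation):

```python
# Minimal sketch of majority-quorum enforcement for an availability group.
# Hypothetical names and logic; not DxEnterprise's actual implementation.

def has_quorum(reachable_replicas: int, total_replicas: int) -> bool:
    """Quorum requires a strict majority of the configured replicas."""
    return reachable_replicas > total_replicas // 2

def enforce_quorum(role: str, reachable: int, total: int) -> str:
    """When quorum is lost, a primary must step down so that two
    partitioned nodes can never both act as primary (split-brain)."""
    if has_quorum(reachable, total):
        return role
    return "demoted" if role == "primary" else role
```

For example, in a three-replica AG, a primary that can reach only itself (1 of 3) loses quorum and is demoted, while a primary that can still reach one secondary (2 of 3) retains a majority and keeps its role.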

What’s New in DxOperator v2 

  • Flexible Scaling Up and Down
      • Availability group clusters can now be expanded or reduced dynamically
      • Unlike the previous version, DxOperator v2 can safely de-configure and remove replicas from a running cluster, enabling true scale-down operations
  • Automated Rolling Updates
      • Administrators can automate rolling updates of SQL Server or DxEnterprise container images, allowing pods to be updated one at a time without manual intervention
      • Updates can also be performed manually when desired, giving operators full control over rollout strategy
      • DxOperator does not automatically check for new container versions, ensuring that administrators remain in control of when and how updates are applied
  • Advanced Network and Service Configuration
      • Flexible service templates allow load balancers and other network services to be fully specified and automatically deployed per availability group replica
      • This enables more consistent connectivity across different Kubernetes environments and cloud providers
  • Redesigned Custom Resource and StatefulSet Adoption
      • The custom resource definition has been redesigned for greater flexibility and now leverages Kubernetes StatefulSets
      • By delegating pod creation, storage allocation, and rolling upgrades to Kubernetes, DxOperator v2 simplifies internal logic while benefiting from native Kubernetes reliability and lifecycle management
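DxOperator's actual custom resource schema is not reproduced here, but the general pattern of delegating scaling and rolling updates to a StatefulSet is a standard Kubernetes technique: the operator only has to emit small patches against the StatefulSet, and Kubernetes handles ordered pod creation, removal, and one-at-a-time restarts. A minimal sketch of the patches such a controller might generate (field names follow the Kubernetes StatefulSet API; the scenario is hypothetical):

```python
# Illustrative sketch: an operator delegating scale and rolling updates to
# a Kubernetes StatefulSet by emitting strategic-merge patches.
# The scenario is hypothetical; not DxOperator's actual code or resources.

def scale_patch(replicas: int) -> dict:
    """Patch that grows or shrinks the cluster by changing the StatefulSet
    replica count; Kubernetes adds or removes pods one ordinal at a time."""
    return {"spec": {"replicas": replicas}}

def rolling_update_patch(container: str, image: str) -> dict:
    """Patch that swaps the container image; with the default RollingUpdate
    strategy, the StatefulSet then restarts pods one at a time."""
    return {
        "spec": {
            "template": {
                "spec": {
                    "containers": [{"name": container, "image": image}]
                }
            }
        }
    }
```

The design benefit mentioned above follows directly: because the StatefulSet controller already implements ordered scale-down and rolling restarts, the operator's own logic reduces to deciding *what* the desired state is, not *how* to converge on it.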

Click on the Follow button at the bottom of the page to subscribe to a weekly email update of posts. Click on the mail icon to subscribe to additional email thoughts.

Study Shows Business Value Gained From Artificial Intelligence-Internet of Things Convergence

User studies remain one of the primary ways software companies can gain insight and achieve some public recognition. Most of the studies emanate from cybersecurity protection developers. This one comes from a software company with which I’ve had little contact, although I once knew a woman who moved to SAS from another company; we had occasional conversations before she left.

SAS develops software applications. I’ve never had a handle on its business. It now bills itself as a global leader in data and AI. This study was conducted by the research firm IDC. And we have the acronym AIoT—or the convergence of AI and IoT. Somehow I feel that concatenating acronyms is the beginning of the end times 😉

Key findings from the IDC InfoBrief, How AIoT Is Reshaping Industrial Efficiency, Security, and Decision-Making, sponsored by SAS, include:

This one should surprise no one. Everyone discusses predictive maintenance.

Predictive maintenance dominates current AIoT use. Nearly 71% of organizations use AIoT for predictive maintenance, the most widely adopted use for manufacturing/industrial and energy companies surveyed. IT automation (53%) and supply and logistics (47%) were the next most cited uses for AIoT.

Executives continue to dream of significant cost reductions from AI.

AIoT drives tangible business value. 54% of respondents anticipate major cost savings, 52% predict smarter and faster innovation and 49% expect streamlined operations from their investment in AIoT. Additionally, 63% believe AIoT will boost productivity and competitiveness.

Managers continue to see AI as an aid to overcome the current skills gap of employees.

Skills gap emerges as the top challenge. The skills gap is the biggest barrier to AIoT success, outpacing legacy system integration and data quality issues as the most significant roadblock. Other challenges include high implementation costs, business process misalignment and cultural resistance. Addressing these issues is essential to unlocking AIoT’s full potential.

Some actually use the technology!

Heavy AIoT users see greater value. Organizations using AIoT heavily are twice as likely to report benefits that significantly exceed expectations as those that only use the technology sparingly. Strikingly, less than 3% say the value of AIoT “did not meet expectations.”

The IDC research is based on a global survey of more than 300 industrial executives in the manufacturing and energy industries.

And from the company:

SAS IoT solutions combine AI, machine learning and edge-to-cloud integration, enabling analysis of high-volume, high-velocity data. And joining AI with these IoT solutions extends the value of existing infrastructure investments and digitally transforms the workforce by shifting from manual oversight to intelligent orchestration.

Other organizations benefiting from SAS IoT and streaming analytics for improved asset reliability, enhanced product quality and increased efficiency across connected systems include:

  • Georgia-Pacific
  • Jakarta Smart City
  • Lloyd’s List
  • Lockheed Martin
  • Town of Cary (North Carolina)
  • Volvo Trucks and Mack Trucks
  • wienerberger

Akka Launches New Deployment Options for Agentic AI at Scale  

We have passed through the valley of the shadow of the Large Language Model version of AI. Now we have moved up a level to the gorge of Agentic AI. I believe I’ve written about three posts on that subject. Here is another company unveiling Agentic AI solutions.

Akka, the leader in helping enterprises deliver distributed systems that are elastic, agile, and resilient, announced new deployment options for its Akka solution, as well as new solutions to tackle the issues with deploying large-scale agentic AI systems for mission-critical applications. Already the standard for building resilient and elastic distributed systems with industry leaders like Capital One, John Deere, Tubi, Walmart, Swiggy, and many others, Akka now also gives enterprises unprecedented freedom to deploy Akka-based applications on the infrastructure of their choice. For the first time, developers have two new options for building distributed systems at scale with Akka: self-hosting their application, or deploying it across multiple regions automatically.

“Agentic AI has become a priority with enterprises everywhere as a new model that has the potential to replace enterprise software as we understand it today,” said Tyler Jewell, Akka’s CEO. “With today’s announcement, we’re making it easy for enterprises to build their distributed systems, including agentic AI deployments, without having to commit to Akka’s Platform.  Now, enterprise teams can quickly build scalable systems locally and run them on any infrastructure they want.”  

The agentic shift requires a fundamental architectural change from transaction-centered to conversation-centered systems. Traditional SaaS applications are built on stateless business logic executing CRUD operations against relational databases. In contrast, agentic services maintain state within the service itself and store each event to track how the service reached its current state.
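The contrast between stateless CRUD and event-sourced, conversation-centered state can be sketched with the generic event-sourcing pattern (this illustrates the architectural idea, not Akka's actual SDK API):

```python
# Generic event-sourcing sketch: state lives in the service itself, and
# every event is stored, so the current state can always be explained and
# replayed. Illustrates the pattern only; not Akka's actual SDK.

class ConversationState:
    def __init__(self):
        self.events = []    # full history of how the state was reached
        self.messages = []  # current state, derived from the events

    def record(self, event: dict):
        self.events.append(event)  # persist the event first...
        self._apply(event)         # ...then update in-memory state

    def _apply(self, event: dict):
        if event["type"] == "message":
            self.messages.append(event["text"])

    def replayed_messages(self) -> list:
        """Rebuild state from stored events, the audit trail that a
        stateless CRUD service over a relational database does not keep."""
        fresh = ConversationState()
        for e in self.events:
            fresh._apply(e)
        return fresh.messages
```

A CRUD service would store only the latest row; here the event log itself is the source of truth, which is what makes agent decisions replayable and auditable.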

As a result, developer teams experience very unpredictable behavior, limited planning and memory impacting agent effectiveness, hard failures at scale, opaque decision-making with zero transparency, and, perhaps most importantly, significant cost and latency concerns. 

Today, Akka has introduced two new deployment capabilities:

  • Self-managed Akka nodes – You can now run clusters of services that were built with the Akka SDK on any cloud infrastructure. The new version of the Akka SDK includes a self-managed build option that will create services that can be executed stand-alone. Your services are binaries packaged in Docker images that can be deployed on any container PaaS, bare-metal hardware, VMs, edge nodes, or Kubernetes, without any Akka infrastructure or Platform dependencies. Your nodes have Akka clustering built in. 
  • Self-hosted Akka Platform regions – Teams can now run their own Akka Platform region without any dependency on Akka.io control planes. Services built with the Akka SDK have always been deployable onto Akka Platform, with Akka providing managed services through the company’s Akka Serverless and Akka BYOC offerings. Akka Platform provides fully automated operations, relieving admins of more than 30 maintenance, security, and observability duties. Both Serverless and BYOC federate multiple regions together by using an Akka control plane hosted at Akka.io.

In contrast, self-hosted regions are Akka Platform regions with no Akka control plane dependency, which teams will install, maintain, and manage on their own. Self-hosted regions can be installed in any data center with orchestration, proxy, and infrastructure dependencies specified by Akka. Since Akka Platform is updated many times each week, the installation of self-hosted regions is executed in cooperation with Akka’s SRE team to ensure stability and consistency of a customer environment.

Akka, formerly known as Lightbend, is relied upon by industry titans and disruptors to build and run distributed applications that are elastic, agile, and guaranteed resilient.

Leaseweb Boosts AI-focused Infrastructure Portfolio with Launch of New NVIDIA GPU Solutions

I previously wrote about a company new to me from The Netherlands, thanks to a media relations person I’ve known for quite some time. Leaseweb Global provides cloud services and Infrastructure as a Service. The company is back with an announcement that it is adding NVIDIA L4, L40S, and H100 NVL GPUs to its infrastructure portfolio.

Through this offering, Leaseweb notes it is meeting the compute needs of a wide variety of sectors – including the Artificial Intelligence (AI), Media & Entertainment and Gaming industries – at a price point that enables significant cost savings when compared to the wider marketplace.  

Available across Leaseweb’s entire global network, spanning the European, North American and Asia Pacific regions, the expanded GPU offering supports customers with a scalable, efficient deployment framework optimized for high-performance computing (HPC), ranging from AI model training and video analytics to graphics processing and video rendering functionality. Leaseweb’s new NVIDIA GPU solution aims to help customers improve their operations, reduce costs, and enhance computational speed for demanding workloads. The announcement also underlines Leaseweb’s commitment to meeting the demand for powerful infrastructure solutions with industry benchmark performance chips that can be deployed within hours to ensure high availability service provision.

This marks the next step in Leaseweb’s journey to providing a complete AI offering for its customers, which will include integration into Leaseweb’s public cloud and broader set of infrastructure solutions. By providing a comprehensive, scalable solution for a wide variety of workloads, Leaseweb is reinforcing its position as a trusted partner for organizations focused on balancing price with performance and availability. With further plans to integrate this offering into its broader solutions suite, the company is strongly positioned to become a leading provider of GPU infrastructure, supporting customers as they invest in these transformational technologies.

“This announcement represents an important step for customers where GPU availability is increasingly important and will give organizations around the world the price/performance flexibility they need, as soon as they need it,” commented Liat Mendelson Honderdors, Principal Product Manager, AI and GPU at Leaseweb.

“Our customers value Leaseweb’s extensive industry expertise as they plan and deploy infrastructure for their most processor-intensive workloads. With considerations ranging from price and performance to data sovereignty and compliance, Leaseweb’s solutions and state-of-the-art global network means we are ideally suited to helping our customers grow their business and expand into new markets, even at hours’ notice. By incorporating best-in-class NVIDIA technology into our infrastructure portfolio, we’re laying the foundation for a broader solution set that will continue to evolve with customer needs,” Mendelson Honderdors concluded.

Leaseweb Launches Highly Efficient Virtual Private Server Infrastructure

Technology advancements and innovations plus applications solving bigger problems have led my work, research, and writing from virtual PLCs to virtual servers. That’s why I have loved technology ever since I was 14 and soldering resistors, capacitors, and coils into circuits.

Here is a Dutch company (the product is available globally, of course) that has launched a “highly efficient” Virtual Private Server (VPS) solution. 

Some of the year-end prognoses sent my way (many somewhat self-serving) predicted less reliance on cloud and a return to more on-prem solutions. Perhaps this company provides help both ways.

Their blurb touts, “Powered by High Performance CPUs, Local NVMe Storage and Lightning-fast 10 Gbps Uplink Speed, Packages Start at Just €3.99/month.”

Leaseweb Global, a cloud services and Infrastructure as a Service (IaaS) provider, announced on January 14 the launch of a new highly efficient Virtual Private Server (VPS) solution. Designed for businesses that need a combination of exceptional price-performance, fast local storage, and easy deployment, Leaseweb VPS packages start at just €3.99/month to deliver affordable solutions that don’t compromise on quality.

Leaseweb’s new VPS solution provides customers with the flexibility to expand their infrastructure as their business needs grow. Delivered via a low-touch, self-service portal, it requires limited technical expertise for setup or management, enabling users to configure their server, monitor resources and manage snapshots with ease. This makes it ideal for businesses seeking a straightforward, scalable and efficient hosting service, as well as those looking for an entry-level solution to Leaseweb Public Cloud.

Technical specifications keep advancing.

With lightning-fast 10Gbps uplink speed, and powered by high performance processors and local NVMe storage, the Leaseweb VPS solution provides ample compute, RAM and generous traffic across all packages. In addition, built-in security and reliability features, including firewalls, DDoS protection and ISO-certified data centers, offer peace of mind and comprehensive protection for all customers. For those customers wanting to include backup, this is available as an add-on service.

“Our new VPS solution has been designed from the ground up to offer the ideal balance of performance, usability and cost,” said Mathijs Heikamp, Director Product Management at Leaseweb Global. “By combining the latest hardware, advanced automation and an intuitive self-service portal, we’re delivering a cloud infrastructure solution that can effortlessly adapt to customer requirements.”

Here is information about Leaseweb since this company is new to me—and perhaps to you.

Leaseweb is a leading Infrastructure as a Service (IaaS) provider serving a worldwide portfolio of 20,000 customers ranging from SMBs to Enterprises. Services include Public Cloud, Private Cloud, Dedicated Servers, Colocation, Content Delivery Network, and Cyber Security Services supported by exceptional customer service and technical support. With more than 80,000 servers, Leaseweb has provided infrastructure for mission-critical websites, Internet applications, email servers, security, and storage services since 1997. The company operates 28 data centers in locations across Europe, Asia, Australia, and North America, all of which are backed by a superior worldwide network with a total capacity of more than 10 Tbps.

Leaseweb offers services through its various Leaseweb Sales Entities which are Leaseweb Netherlands B.V., Leaseweb USA, Inc., Leaseweb Singapore PTE. LTD, Leaseweb Deutschland GmbH, Leaseweb Australia Ltd., Leaseweb UK Ltd, Leaseweb Japan KK, Leaseweb Hong Kong LTD, and Leaseweb Canada Inc.

Stratus ztC Endurance Platform

I think this is the last of the meetings I had at Automation Fair last month. The team at Stratus discussed the ztC Endurance platform. Stratus is known for high-availability, redundant server and compute technology. This new platform enables organizations to run critical applications without downtime or data loss, in edge or data center environments, using intelligent, predictive fault tolerance based on Stratus’ redundant hardware architecture, hardened drivers, and the Stratus Automated Uptime Layer with Smart Exchange.

Both OT and IT teams face the challenge of delivering reliability to both centralized and distributed locations across their operations. They also may lack on-site technical staff needed to maintain complex infrastructure. Platforms running critical applications must be easy to deploy, easy to manage, and easy to service—and not just in data centers, but at the edge of corporate networks.

Stratus ztC Endurance provides continuous availability and ensures data integrity for mission-critical applications running at the edge, operations center, and data center. Delivering seven nines (99.99999%) uptime, its Automated Uptime Layer with Smart Exchange provides continual proactive health monitoring and automatically takes action to maintain system availability and protect against data loss when needed. Coupled with the platform’s modular design of hot-swappable customer replacement units (CRUs), ztC Endurance makes it easy for OT and IT teams to manage and support. ztC Endurance delivers the processing power and performance to host dozens of software applications as virtual machines (VMs), dramatically reducing the number of PCs or servers required for OT and IT teams to manage and maintain.
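"Seven nines" is a strong claim, and the arithmetic behind it is worth making concrete: at 99.99999% availability, the allowed downtime works out to roughly three seconds per year.

```python
# Allowed downtime per year at "seven nines" (99.99999%) availability.

availability = 0.9999999                 # 99.99999%
seconds_per_year = 365.25 * 24 * 3600    # about 31,557,600 seconds
downtime = (1 - availability) * seconds_per_year

print(f"{downtime:.2f} seconds of downtime per year")  # prints "3.16 seconds of downtime per year"
```

For comparison, the more common "five nines" (99.999%) allows a little over five minutes of downtime per year, so each additional nine tightens the budget by a factor of ten.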

Key Benefits

  • Seven nines availability for critical applications: Built-in computing fault tolerance delivers 99.99999% availability to run critical applications.
  • No loss of data: Redundant computing architecture combined with intelligent automated management prevents in-flight data loss and ensures data integrity.
  • “Zero touch” management and support: Modular design plus proactive remote health monitoring and self-healing simplifies system management and serviceability for both IT and OT teams.
  • Rapid modernization and workload consolidation: Modernize infrastructure and streamline operations by leveraging virtualization to consolidate multiple software workloads onto a single platform.
  • Multi-layered security: Supports multi-layered defense-in-depth approaches, with focus on both process and product security guidelines to ensure maximum protection.
  • Lower TCO: Reduce IT footprint and purchase fewer software licenses on a highly reliable platform with an expected 7-10 year lifespan, twice that of traditional servers.
