by Gary Mintchell | May 13, 2025 | Generative AI
Still more AI news. I guess you might as well buckle up and enjoy the ride. I remember many other memes that journalists have loved to perpetuate. I’m not saying there is no value to AI in its many guises, but it is far from living up to its hype, if indeed it ever will.
BitSeek introduced what it calls the first end-to-end decentralized AI infrastructure purpose-built for Web3. At the heart of the platform is BitSeek’s proprietary DeLLM (Decentralized Large Language Model) protocol, designed to deliver powerful AI without the tradeoffs of centralized control by a big tech corporation. By combining distributed compute, blockchain-native model governance, and privacy-preserving architecture, BitSeek lets users and developers own, control, and benefit from the AI systems they use.
The BitSeek AI tech stack includes a globally distributed computing network of independent nodes, a Blockchain Model Context Protocol (MCP) suite, and a data DAO. These components address two core limitations of the Web3 AI ecosystem: the lack of accessible decentralized LLMs and the absence of native on-chain interactions.
BitSeek delivers high-performance decentralized LLMs and a multi-blockchain MCP suite to power the next generation of intelligent Web3 agents, accelerating the evolution of the decentralized AI industry. Furthermore, BitSeek gives users and developers full control over AI data, computation, and monetization—marking a pivotal moment for Web3 artificial intelligence.
While decentralization has transformed finance and data in the crypto space, AI has remained centralized, in the hands of big tech. Most AI platforms claiming to be “decentralized” still rely on centrally hosted LLMs. BitSeek changes that by introducing a model atomization architecture: a novel approach that distributes open-source LLMs—such as DeepSeek R1 and Llama 3—across a decentralized, privacy-first network. This eliminates the need for centralized hosting, placing both the model and the data it processes in the hands of the community.
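BitSeek hasn’t published implementation details, but the core idea behind “model atomization” can be sketched: partition a model’s transformer layers across independent nodes so that no single host ever holds the full model, then pipeline activations from node to node at inference time. A minimal, hypothetical illustration in Python (the names and the even-split policy are invented here, not BitSeek’s protocol):

```python
# Hypothetical sketch of "model atomization": splitting an LLM's layers
# across independent nodes so no single host holds the complete model.
# The names and the even-split policy are invented for illustration only.
from dataclasses import dataclass


@dataclass
class Shard:
    node_id: str
    layers: range  # contiguous slice of the model's transformer layers


def shard_layers(num_layers: int, node_ids: list[str]) -> list[Shard]:
    """Split num_layers as evenly as possible across the given nodes."""
    per_node, remainder = divmod(num_layers, len(node_ids))
    shards, start = [], 0
    for i, node_id in enumerate(node_ids):
        count = per_node + (1 if i < remainder else 0)
        shards.append(Shard(node_id, range(start, start + count)))
        start += count
    return shards


# A 32-layer model (roughly Llama-3-8B scale) spread over four nodes; at
# inference, each node would run only its slice and forward intermediate
# activations to the node holding the next slice.
for shard in shard_layers(32, ["node-a", "node-b", "node-c", "node-d"]):
    print(f"{shard.node_id}: layers {shard.layers.start}-{shard.layers.stop - 1}")
```

The privacy claim rests on this split: each operator sees only the intermediate activations for its slice, never the whole model or the raw conversation, though the announcement does not describe how BitSeek handles routing or verification.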
For users, the DeLLM infrastructure preserves the key benefits of hosting an AI model locally: full data control, enhanced privacy for sensitive topics, and freedom from corporate surveillance.
Unlike commercial AI platforms that collect and monetize user data, the BitSeek model ensures personal information remains secure and customizable to individual needs. A recent survey shows 78% of users prefer AI that doesn’t analyze their data, and 80% favor decentralized, open-source models—reinforcing ideals that BitSeek champions.
In the BitSeek ecosystem, users retain ownership of every conversation they generate, deciding whether to keep their data private, monetize it through DataDAOs, or move it to another LLM platform. This approach gives participants direct authority over how their data is used, along with a meaningful stake in AI’s growth. Meanwhile, node operators earn tokenized rewards for providing the computational power behind the AI models, strengthening the network while keeping it decentralized.
BitSeek is a foundational infrastructure for the next generation of decentralized applications. Web3 developers can integrate DeLLM models into AI dApps, social protocols, and decentralized agents, without compromising on privacy or decentralization. The system is modular, scalable, and designed to evolve with open-source AI advances, including upcoming integrations with models like Qwen and open-weight variants of GPT.
Unlike other decentralized AI efforts, BitSeek decentralizes the model itself—delivering infrastructure-level transformation for the entire crypto space.
by Gary Mintchell | May 12, 2025 | Enterprise IT, Generative AI
We have passed through the valley of the shadow of the Large Language Model version of AI. Now we have moved up a level to the gorge of Agentic AI. I believe I’ve written about three posts on that subject. Here is another company unveiling Agentic AI solutions.
Akka, the leader in helping enterprises deliver distributed systems that are elastic, agile, and resilient, announced new deployment options for its Akka platform, along with new offerings to tackle the issues of deploying large-scale agentic AI systems for mission-critical applications. Already the standard for building resilient and elastic distributed systems at industry leaders like Capital One, John Deere, Tubi, Walmart, Swiggy, and many others, Akka now also gives enterprises unprecedented freedom to deploy Akka-based applications on the infrastructure of their choice. For the first time, developers have two new options for building distributed systems at scale with Akka: self-hosting their applications, or deploying them across multiple regions automatically.
“Agentic AI has become a priority with enterprises everywhere as a new model that has the potential to replace enterprise software as we understand it today,” said Tyler Jewell, Akka’s CEO. “With today’s announcement, we’re making it easy for enterprises to build their distributed systems, including agentic AI deployments, without having to commit to Akka’s Platform. Now, enterprise teams can quickly build scalable systems locally and run them on any infrastructure they want.”
The agentic shift requires a fundamental architectural change from transaction-centered to conversation-centered systems. Traditional SaaS applications are built on stateless business logic executing CRUD operations against relational databases. In contrast, agentic services maintain state within the service itself and store each event to track how the service reached its current state.
As a result, developer teams building agentic systems on that traditional architecture experience unpredictable behavior, limited planning and memory that impair agent effectiveness, hard failures at scale, opaque decision-making, and, perhaps most importantly, significant cost and latency concerns.
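The architectural contrast the release draws is essentially event sourcing versus CRUD. A minimal sketch of the event-sourced pattern, in plain Python rather than the Akka SDK (the event and state types are invented for illustration):

```python
# Minimal event-sourcing sketch: the service holds its own state and
# derives it from an append-only log of events, instead of mutating
# rows in a relational database. Generic illustration, not the Akka SDK.
from dataclasses import dataclass, field


@dataclass
class UserMessage:
    text: str


@dataclass
class AgentReply:
    text: str


@dataclass
class ConversationState:
    events: list = field(default_factory=list)  # append-only history

    def record(self, event) -> None:
        self.events.append(event)  # every step is kept, never overwritten

    def replay(self) -> list:
        """Rebuild the history showing how the agent reached its current state."""
        return list(self.events)


convo = ConversationState()
convo.record(UserMessage("Why did line 3 stop overnight?"))
convo.record(AgentReply("Checking the downtime events for line 3..."))
print(f"{len(convo.replay())} events in the conversation log")
```

Because every step is retained, the system can audit, resume, or replay a conversation—exactly what a stateless CRUD service cannot do.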
Today, Akka has introduced two new deployment capabilities:
- Self-managed Akka nodes – You can now run clusters of services built with the Akka SDK on any cloud infrastructure. The new version of the Akka SDK includes a self-managed build option that creates services that can be executed stand-alone. Your services are binaries packaged in Docker images that can be deployed in any container PaaS, on bare metal, in VMs, on edge nodes, or on Kubernetes, without any Akka infrastructure or Platform dependencies. Akka clustering is built into your nodes.
- Self-hosted Akka Platform regions – Teams can now run their own Akka Platform region without any dependency on Akka.io control planes. Services built with the Akka SDK have always been deployable onto Akka Platform, with Akka providing managed services through the company’s Akka Serverless and Akka BYOC offerings. Akka Platform provides fully automated operations, relieving admins of more than 30 maintenance, security, and observability duties. Both Serverless and BYOC federate multiple regions together using an Akka control plane hosted at Akka.io.
In contrast, self-hosted regions are Akka Platform regions with no Akka control plane dependency, which teams will install, maintain, and manage on their own. Self-hosted regions can be installed in any data center with orchestration, proxy, and infrastructure dependencies specified by Akka. Since Akka Platform is updated many times each week, the installation of self-hosted regions is executed in cooperation with Akka’s SRE team to ensure stability and consistency of a customer environment.
Akka, formerly known as Lightbend, is relied upon by industry titans and disruptors to build and run distributed applications that are elastic, agile, and guaranteed resilient.
by Gary Mintchell | Apr 25, 2025 | Generative AI, Manufacturing IT
AVEVA held its AVEVA World event a couple of weeks ago in San Francisco. I was not in attendance, and I didn’t see much news come out of it. Here is one piece I did see. Partnerships being the major trend lately, several were announced.
- AVEVA is partnering with Databricks to revolutionize industrial operations with a secure and open approach to data and AI.
- AVEVA is also announcing a strategic partnership with Track’em, a cutting-edge material tracking and mobility solution provider, to deliver real-time visibility and cost control in capital projects.
Parsing the marketing speak, the company is using generative AI for piping design. I’ve seen a few companies find a use for the hot new tech by assisting design engineers.
by Gary Mintchell | Apr 8, 2025 | Data Management, Generative AI
I’m not talking about the Johnny Rivers theme for a late Sixties Saturday afternoon spy TV show. We’re talking software agents. Some may be secret, but none are men.
I once had an annual meeting with the CTO of a large automation company where I shared (non-privileged) information I’d gathered about the market while trying to learn what technologies I should be watching for.
With artificial intelligence (AI) and Large Language Models (LLMs) grabbing the spotlight at center stage, I’m watching for what technologies will make something useful from all the hype.
I’m looking for a return to the spotlight of these little pieces of software called agents. John Harrington, Co-Founder and Chief Product Officer at HighByte, an industrial software company, believes that five or so years from now, LLMs won’t be the game-changer in manufacturing that many expect. Instead, Agentic AI is set to have a far bigger impact.
So, John and I had a brief conversation just before my last trip. It was timely due to the nature of my trip—to a software conference where LLMs and Agentic AI would be important topics—and not just in theory.
From Harrington, “Agentic AI is revolutionizing the tech industry by addressing AI’s biggest limitation—making decisions that are more human-like. AI agents are yet another application that analyzes and turns large amounts of data into actionable next steps, but this time they promise it will be different.”
He told me that Agentic AI will become more “human-like,” going beyond LLMs. HighByte started up as an Industrial DataOps play at a time when I was just hearing about DataOps from the IT companies I followed. I told the startup team that they were entering a good niche. They have been doing well since then. They have extended DataOps with namespace work and now with LLMs and agents.
“AI agents can enhance data operations by providing greater structure, but their success depends on analyzing contextualized data. Without proper context, the data they process lacks the depth needed for accurate insights and decision-making,” added Harrington.
Take an example. An agent can be a way to contextualize data and model an asset. Working with an LLM trained on data specific to the application, it can ask the LLM to scan the namespace to see if there are other assets in the database. HighByte’s software can work through OPC, and it also works with Ignition from Inductive Automation or the PI database. It looks for patterns and can propose options as the engineer goes in to configure the application.
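HighByte hasn’t published this workflow as code, so the following is only a sketch of the pattern Harrington describes: an agent scans the namespace for tags matching a known asset pattern and proposes a model for the engineer to confirm. All tag paths and names here are hypothetical:

```python
# Hypothetical sketch of an agent proposing asset models from a namespace.
# The tag paths and the pump-naming pattern are invented for illustration;
# this is not HighByte's API or actual product behavior.
import re

NAMESPACE = [
    "site1/area2/pump101/flow",
    "site1/area2/pump101/pressure",
    "site1/area2/pump102/flow",
    "site1/area2/pump102/pressure",
    "site1/area2/tank201/level",
]


def propose_pump_models(tags: list[str]) -> dict[str, list[str]]:
    """Group tags by asset wherever they match a pump-like naming pattern."""
    proposals: dict[str, list[str]] = {}
    for tag in tags:
        match = re.match(r"(.+/pump\d+)/(\w+)$", tag)
        if match:
            proposals.setdefault(match.group(1), []).append(match.group(2))
    return proposals


# The agent surfaces these as suggestions, not automatic changes:
for asset, attributes in propose_pump_models(NAMESPACE).items():
    print(f"Proposed asset model: {asset} with attributes {attributes}")
```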
Not shy in his forecast, Harrington says the future is agents. They can affect and act on data. They can reach out to a control engineer, an operator, or the quality group. It’s a targeted AI tool focused on one small thing. Perhaps there’s a maintenance agent, or one for OEE, or one for line quality on a work cell. Don’t think of monolithic code in the cloud. Rather, think of smaller routines that could even work together, helping the business like Jarvis in Iron Man. Data is food for these agents, and HighByte’s business is data.
I’ve been impressed with HighByte’s growth and sustainability, and also that they’ve managed to remain independent for so long. Usually software companies want to build fast and sell fast. Watch for more progress as HighByte marries agentic AI with data.
by Gary Mintchell | Mar 28, 2025 | Cloud, Edge, Generative AI
The third of Siemens’ pre-Hannover news releases concerns Xcelerator Edge with Microsoft Azure IoT Operations.
- Siemens Industrial Edge works seamlessly with Microsoft Azure IoT Operations, making OT and IT data planes fully interoperable for manufacturing
- Edge and cloud data integration enables adaptive production through AI- and digital-twin-powered solutions
- Industrial customers benefit from improved machine performance and product quality, and reduced machine maintenance
Siemens announces an extended collaboration with Microsoft in the context of Siemens Xcelerator, Siemens’ open digital business platform, to simplify the integration of information technology (IT) and operational technology (OT) for enterprise customers. By combining Siemens Industrial Edge with Microsoft Azure IoT Operations, customers will benefit from complementary solutions that enable a seamless flow of data from production lines to the edge and to the cloud. This edge-to-cloud data integration enables AI- and digital-twin-powered solutions that improve machine performance and product quality and reduce machine maintenance.
A core component of the Azure adaptive cloud approach, Azure IoT Operations is designed to seamlessly integrate on-premises industrial edge solutions, like Siemens Industrial Edge, with the cloud, ensuring a continuous flow of data for smarter operations.
In this way, the powerful OT data plane provided by Siemens Industrial Edge works easily with Azure IoT Operations to create an interoperable OT and IT data plane for manufacturing. The data layer from Siemens Industrial Edge effectively addresses mission-critical production applications such as virtualized control, low-latency closed-loop AI, executable digital twins, and production line-level analytics. It allows manufacturers to deploy responsive, reliable, flexible, and secure applications to optimize their operations, reduce costs, and increase uptime and quality. By coupling with Azure IoT Operations, industrial producers can easily leverage this OT data in cloud-based, data-driven use cases to optimize production across sites and gain insights from advanced analytics.
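Azure IoT Operations is built around an MQTT broker running at the edge, so at the transport level this “seamless flow of data” is largely publish/subscribe. A minimal sketch of an edge application publishing OT telemetry northbound with the paho-mqtt client (the broker address, topic, and payload below are placeholders, not Siemens’ or Microsoft’s actual configuration):

```python
# Minimal sketch: publish edge telemetry to an MQTT broker such as the one
# Azure IoT Operations runs at the edge. The broker host, port, topic, and
# payload are placeholders, not a real deployment's settings.
import json
import time

import paho.mqtt.client as mqtt

try:  # paho-mqtt 2.x requires an explicit callback API version
    client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
except AttributeError:  # fall back for paho-mqtt 1.x
    client = mqtt.Client()

client.connect("aio-broker.local", 1883)  # placeholder broker address
client.loop_start()

payload = {"machine": "press-12", "temperature_c": 71.4, "timestamp": time.time()}
info = client.publish("factory/line1/press-12/telemetry", json.dumps(payload), qos=1)
info.wait_for_publish()  # block until the QoS 1 handshake completes

client.loop_stop()
client.disconnect()
```

From the broker, cloud connectors can forward the same messages to Azure services for the cross-site analytics the release describes.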
by Gary Mintchell | Mar 28, 2025 | Generative AI, Software
This is the second of four Siemens news items. In the vein of “everyone in industrial software is Microsoft’s best friend,” Copilot headlines this news. And no news today is complete without mentioning generative AI.
- The Siemens Industrial Copilot, a generative AI-based assistant, is empowering customers across the entire value chain – from design and planning to engineering, operations, and services
- Siemens expands its Industrial Copilot offering with extended capabilities for Senseye Predictive Maintenance
- The generative AI-powered solution will support every stage of the maintenance cycle, from repair and prevention to prediction and optimization
A glimpse of Siemens’ AI strategy:
The Siemens Industrial Copilot is revolutionizing industry by enabling customers to leverage generative AI across the entire value chain – from design and planning to engineering, operations, and services. For example, the generative AI-powered assistant empowers engineering teams to generate code for programmable logic controllers using their native language, speeding up SCL code generation by an estimated 60% while minimizing errors and reducing the need for specialized knowledge. This in turn reduces development time and boosts quality and productivity over the long term.
Siemens is developing a full suite of copilots to industrial-grade standards for the discrete and process manufacturing industries – and is now strengthening its Industrial Copilot offerings with the launch of an advanced maintenance solution, designed to redefine industrial maintenance strategies.
Bringing it to maintenance
The Senseye Predictive Maintenance solution powered by Microsoft Azure will be extended with two new offerings:
- Entry Package: This predictive maintenance solution combines AI-powered repair guidance with basic predictive capabilities. It helps businesses transition from reactive to condition-based maintenance by offering limited connectivity for sensor data collection and real-time condition monitoring. With AI-assisted troubleshooting and minimal infrastructure requirements, companies can reduce downtime, improve maintenance efficiency, and lay the foundation for full predictive maintenance.
- Scale Package: Designed for enterprises looking to fully transform their maintenance strategy, this package integrates Senseye Predictive Maintenance with the full Maintenance Copilot functionality. It enables customers to predict failures before they happen, maximize uptime, and reduce costs with AI-driven insights. Offering enterprise-wide scalability, automated diagnostics, and sustainable business outcomes, this solution helps companies move beyond traditional maintenance, optimizing operations across multiple sites while supporting long-term efficiency and resilience.
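To make the reactive-to-condition-based shift concrete: condition-based maintenance means watching a signal against a learned healthy baseline and flagging drift before a breakdown, rather than fixing equipment after it fails. A generic toy sketch (not Senseye’s algorithm; the data and threshold are invented):

```python
# Generic condition-monitoring sketch: flag a reading that drifts more
# than three standard deviations from a learned healthy baseline.
# A toy illustration of condition-based maintenance, not Senseye's method.
from statistics import mean, stdev

# Vibration readings (mm/s) recorded while the machine was known healthy:
baseline = [4.1, 4.3, 4.0, 4.2, 4.4, 4.1, 4.2, 4.3]
mu, sigma = mean(baseline), stdev(baseline)


def check(reading: float) -> str:
    if abs(reading - mu) > 3 * sigma:
        return f"ALERT: {reading} mm/s deviates from baseline {mu:.2f} ± {sigma:.2f}"
    return f"OK: {reading} mm/s is within the normal range"


for reading in [4.2, 4.5, 6.8]:  # the last value suggests developing wear
    print(check(reading))
```

Real predictive-maintenance products layer far more on top (trend models, remaining-useful-life estimates, and the repair guidance described above), but the baseline-and-deviation idea is the starting point.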