
Podcast–Aras Community Event 2026–Rise of Agents

I’ve posted a podcast to both my podcast channel (subscribe on any podcast app) and my YouTube channel.

A first summary of my three days with the Aras community in Miami. These PLM events always take me back to the time when I did this sort of work manually, and to my first taste of computers digitizing the bill of materials as a first step in our data management journey. Unfortunately, that company hit a big bump in the road, and I was invited to try other things at other companies.

Aras product managers showed how LLMs trained on the data within the app, along with proper governance, worked with agents to perform a number of tasks, many of which would otherwise require days of painstaking work from a human.

While I heard from a market analyst who thought this was all painfully slow, I’d offer the thought that a company does not want to outpace its customers. Most will not want to jump into the deep end immediately.

As always, this podcast was sponsored by Ignition from Inductive Automation.

AI-ready at the Edge – Siemens Industrial Automation DataCenter with AI computing power and advanced cybersecurity

It had to happen: an industrial-strength data center designed for the industrial edge. From Siemens, of course. They’re unveiling it at Hannover next week. Unfortunately, I will not be in Hannover; I would need funding to cover the expense, and all my past contacts have gone in other directions. It is always a valuable experience.

More on the announcement:

  • Siemens is making its Industrial Automation DataCenter AI-ready for powerful AI applications in production environments
  • Siemens integrates accelerated AI computing power and advanced AI-specific cybersecurity from NVIDIA and Palo Alto Networks  
  • Single source – ready-to-operate, pre-configured and system-tested IT/OT platform for the production environment 

In partnership with NVIDIA and in collaboration with Palo Alto Networks, Siemens delivers secure NVIDIA computing infrastructure at the edge for powerful AI acceleration, alongside NVIDIA BlueField data processing units (DPUs) for intelligent real-time data processing and security from Palo Alto Networks Prisma AIRS. 

Delivered fully pre-installed, pre-configured, and system-tested from a single source, the turnkey solution combines high-performance virtualization for OT applications, backup and restore capabilities, data archiving, and an industrial demilitarized zone, effectively separating IT networks from OT environments. Through a strategic partnership with NVIDIA and collaboration with Palo Alto Networks, accelerated AI computing power and advanced AI-specific cybersecurity are now enabled directly at the edge.

This evolution addresses a critical industry need: implementing standardized, pre-integrated AI infrastructure poses significant challenges for many industrial companies. Building complex, high-performance, and secure AI-capable environments is very demanding, time-consuming, and costly – with integration, installation, and system engineering alone requiring up to 80 hours. Additional risks include compatibility issues and potential operational downtime. With the enhanced Siemens Industrial Automation DataCenter, customers benefit from real-time insights, optimized processes, and enhanced efficiency, yielding substantial gains in productivity and innovation. 

Siemens’ Remote Industrial Operations Services include continuous remote monitoring of IT/OT infrastructure, comprehensive cybersecurity measures, regular maintenance and preventive steps, as well as rapid support in the event of incidents. Siemens’ experts monitor and protect companies’ production environments around the clock from the Siemens OT Security Operations Center (SOC), which also reliably protects Siemens’ own facilities worldwide from cyber threats. 

The Remote Industrial Operations Services offer extends over the entire lifecycle of the Industrial Automation DataCenter and is also flexibly applicable to various IT systems and components in OT environments, including third-party components.

Cal Newport On Why AI Isn’t Making It Easier

Cal Newport, Computer Science PhD and Professor at Georgetown University, explains AI, LLMs, and the like better than anyone else I follow. His newsletter from last month, Why Hasn’t AI Made Work Easier?, explains some of the reports we’ve begun hearing through the media noise.

He writes:

I’ve been studying the intersection of digital technology and office work for quite some time. (I find it hard to believe that my book, Deep Work, just passed its ten-year anniversary!?) Here’s a pattern I’ve observed again and again:

(And, yes, I’ve lived through these…and more.)

  • A new technology promises to speed up some annoying aspects of our jobs.
  • Everyone gets excited about freeing up more time for deep work and leisure.
  • We end up busier than before without producing more of the high-value output that actually moves the needle.
  • This happened with the front-office IT revolution, and email, and mobile computing, and once again with video-conferencing.

Will AI be anything different?

I’m now starting to fear that we’re beginning to encounter the same thing with AI as well.

My worries were stoked, in part, by a recent article in the Wall Street Journal, titled “AI Isn’t Lightening Workloads. It’s Making Them More Intense.”

Based on some actual research:

The piece cites new research from the software company ActivTrak, which analyzed the digital activity of 164,000 workers across more than 1,000 employers. What makes the study notable is its methodology: it tracked individual AI users for 180 days before and after they began using these tools, providing clear insight into what changed. The results?

“ActivTrak found AI intensified activity across nearly every category: The time they spent on email, messaging and chat apps more than doubled, while their use of business-management tools, such as human-resources or accounting software, rose 94%.”

Ah, not everything was affected.

The one category where activity was not intensified, however, was deep work:

“[T]he amount of time AI users devoted to focused, uninterrupted work—the kind of concentration often required for figuring out complex problems, writing formulas, creating and strategizing—fell 9%, compared with nearly no change for nonusers.”

Why?

It’s not quite clear why AI tools are having this impact. One tantalizing clue, however, comes from Berkeley professor Aruna Ranganathan, who is quoted in the article saying: “AI makes additional tasks feel easy and accessible, creating a sense of momentum.”

I lived through these changes and concur:

This points toward a pattern similar to what happened when email first arrived. It was undeniably true that sending emails was more efficient than wrangling fax machines and voicemail. But once workers gained access to low-friction communication, they transformed their days into a furious flurry of back-and-forth messaging that felt “productive” in the abstract, activity-centric sense of that term, but ultimately hurt almost every other aspect of their jobs and made everyone miserable.

Click on the Follow button at the bottom of the page to subscribe to a weekly email update of posts. Click on the mail icon to subscribe to additional email thoughts.

Benedict Evans on OpenAI Business

I have cued up a couple of analyses on AI. As Om Malik has observed about the velocity of AI news, it’s hard to keep up. But the two I have remain relevant.

The first, from an analyst I’ve followed for many years, is Benedict Evans’s End of Network Effect essay, which looks at the business model (or lack thereof) of OpenAI.

He opens:

OpenAI has some big questions. It doesn’t have unique tech. It has a big user base, but with limited engagement and stickiness and no network effect. The incumbents have matched the tech and are leveraging their product and distribution. And a lot of the value and leverage will come from new experiences that haven’t been invented yet, and it can’t invent all of those itself. What’s the plan? 

He compares an OpenAI executive with Steve Jobs—where do you start when developing technology and a product?

“Jakub and Mark set the research direction for the long run. Then after months of work, something incredible emerges and I get a researcher pinging me saying: “I have something pretty cool. How are you going to use it in chat? How are you going to use it for our enterprise products?” 

– Fidji Simo, head of Product at OpenAI, 2026

“You’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try to sell it”

– Steve Jobs, 1997

Pretty damning.

Evans isolates four fundamental strategic questions.

Where’s the unique selling proposition?

First, the business as we see it today doesn’t have a strong, clear competitive lead. It doesn’t have a unique technology or product. The models have a very large user base, but very narrow engagement and stickiness, and no network effect or any other winner-takes-all effect so far that provides a clear path to turning that user base into something broader and durable. Nor does OpenAI have consumer products on top of the models themselves that have product-market fit. 

It’s very early in the market cycle.

Second, the experience, product, value capture and strategic leverage in AI will all change an enormous amount in the next couple of years as the market develops. Big aggressive incumbents and thousands of entrepreneurs are trying to create new features, experiences and business models, and in the process try to turn foundation models themselves into commodity infrastructure sold at marginal cost. Having kicked off the LLM boom, OpenAI now has to invent a whole other set of new things as well, or at least fend off, co-opt and absorb the thousands of other people who are trying to do that. 

They are all in the same boat.

Third, while much of this applies to everyone else in the field as well, OpenAI, like Anthropic, has to ‘cross the chasm’ across the ‘messy middle’ (insert your favourite startup book title here) without existing products that can act as distribution and make all of this a feature, and to compete in one of the most capital-intensive industries in history without cashflows from existing businesses to lean on. Of course, companies that do have all of that need to be able to disrupt themselves, but we’re well past the point that people said Google couldn’t do AI.  

Things are moving quickly right now.

The fourth problem is expressed in the quotes I used above. Mike Krieger and Kevin Weil made similar points last year: when you’re head of product at an AI lab, you don’t control your roadmap. You have very limited ability to set product strategy. You open your email in the morning and discover that the labs have worked something out, and your job is to turn that into a button. The strategy happens somewhere else. But where? 

The current market.

This means that most people don’t see the differences between model personality and emphasis that you might see, and most people aren’t benefiting from ‘memory’ or the other features that the product teams at each company copy from each other in the hope of building stickiness (and memory is stickiness, not a network effect). Meanwhile, usage data from a larger (for now) user base itself might be an advantage, but how big an advantage, if 80% of users are only using this a couple of times a week at most? 

Result?

In the meantime, when you have an undifferentiated product, early leads in adoption tend not to be durable, and competition tends to shift to brand and distribution. We can see this today in the rapid market share gains for Gemini and Meta AI: the products look much the same to the typical user (though people in tech wrote off Llama 4 as a fiasco, Meta’s numbers seem to be good), and Google and Meta have distribution to leverage. Conversely, Anthropic’s Claude models are regularly at the top of the benchmarks but it has no consumer strategy or product (Claude Cowork asks you to install Git!) and close to zero consumer awareness.

There is much more to his analysis. Definitely worth a read—and some thinking.


Potential AI Business Progression

I receive great value from the wisdom and generosity of Seth Godin. He has thought through this progression of business value. As you create products, companies, and personal value, consider it deeply and seriously: create value by connecting people.

From Seth Godin, Feb 13:

The first generation was built on large models, demonstrating what could be done and powering many tools.

The second generation is focused on reducing costs and saving time. Replacing workers or making them more efficient.

But you can’t shrink your way to greatness.

The third generation will be built on a simple premise, one that the internet has proven again and again:

Create value by connecting people.

We haven’t seen this yet, but once it gains traction, it’ll seem obvious and we’ll wonder how we missed it.

Create tools that work better when your peers and colleagues use them too. And tools that solve problems that people with resources are willing to pay for.

Problems are everywhere, yet we often ignore them.


Podcast–Why AI?

I’ve released a new podcast.

You can subscribe and download from your favorite podcast app or from my site.

It is also available on my YouTube channel.

Episode 275. Why AI in Manufacturing? Why not? I explore how new technologies for knowledge work, unlike those in manufacturing, create even more busywork, distracting us from our real work: thinking, deep work. Looking beyond the hype, AI tools are going to help us do things; we just don’t know exactly what works best. We must play with the tools to find the ways they can best help us.

We also need to consider the limits of text-based LLMs. Researcher Yann LeCun has examined the limits of these technologies. How can they handle tasks like bringing a robot into the house, for example, when they are limited to the digital and textual while the environment is analog? Won’t this take a system of models, not just one model?

Use them, but don’t be awed. Or bamboozled by CEOs.

This episode is sponsored by Inductive Automation.

