
AI and Jobs: hype vs. common sense analysis

My last post considered the future of Generative AI.

It has become impossible to keep thinking about AI at the pace the hype demands. While on vacation last week, a couple of news items popped across my screen during the brief times I was online.

These items concern the relationship between AI and jobs. One comes from a pessimist who sees only gloom. The news item from Axios was picked up by MAGA commentator Steve Bannon, among others; the fear seems to cross party and demographic lines. The Axios headline ("White Collar Bloodbath") was clearly designed for maximum fear, leading to clicks and views.

As much as I like Axios as a news source, I continue to detest its click-bait, journalistically over-hyped headlines. Still, the interview contends that entry-level white-collar jobs are ending (probably an exaggeration).

Dario Amodei — CEO of Anthropic, one of the world’s most powerful creators of artificial intelligence — has a blunt, scary warning for the U.S. government and all of us:

  • AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years, Amodei told us in an interview from his San Francisco office.
  • Amodei said AI companies and government need to stop “sugar-coating” what’s coming: the possible mass elimination of jobs across technology, finance, law, consulting and other white-collar professions, especially entry-level gigs.

Sure, we’ve seen clueless managers mandate that programmers use AI in order to get more work out of fewer programmers. AI is certainly a useful tool for programmers, but it doesn’t replace the vision of what to program or how best to assemble routines.

No good journalist writes a piece with a point-of-view. They must always “balance” views. In this case they quote Sam Altman:

The other side: Amodei started Anthropic after leaving OpenAI, where he was VP of research. His former boss, OpenAI CEO Sam Altman, makes the case for realistic optimism, based on the history of technological advancements.

“If a lamplighter could see the world today,” Altman wrote in a September manifesto — sunnily titled “The Intelligence Age” — “he would think the prosperity all around him was unimaginable.”

Remember that Altman’s job is to project optimism and a profitable future since he’s in capital-raising mode.

Meanwhile, economics Nobel Laureate Daron Acemoglu challenges the AI revolution narrative with data-driven predictions and business advice. He takes a common sense approach to statistics and analysis similar (but far superior) to what I used back in my analyst days (not working for a firm, but performing analysis for executive decision making). His analysis was reported in the MIT Sloan Management Review.

Nobel Prize-winning economist Daron Acemoglu challenges the AI revolution narrative with surprising data: AI will likely automate just 5% of tasks and add only 1% to global GDP this decade. And where the internet’s potential was clear early on, AI’s is not, and the technology has yet to deliver applications that can transform production or create valuable new services.

Acemoglu reveals which roles face automation risk while explaining why jobs requiring judgment and social intelligence remain safe. His advice for leaders:

  • Resist hype-driven investments and competitive pressure.
  • Focus on creating new services, not just cutting costs.
  • Use AI to augment human workers, not replace them.
  • Partner with skilled employees to identify valuable AI applications.

This research-backed perspective cuts through speculation while acknowledging AI’s potential when it’s deployed strategically.

What he did was step back and look at all jobs, then at the percentage that might be affected by AI one way or another, and then continue filtering. Rather than a “bloodbath”, he looks at the sectors that may actually be affected.
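Acemoglu’s filtering approach can be sketched as simple arithmetic. The percentages below are my own illustrative assumptions, not his published figures, though I chose them to land near his headline result of roughly 5% of tasks automated and about 1% added to GDP:

```python
# Back-of-envelope filter in the spirit of Acemoglu's analysis.
# Every number here is an assumption for illustration only.
share_exposed = 0.20          # fraction of all tasks exposed to AI at all
share_cost_effective = 0.25   # of those, fraction profitable to automate this decade
avg_cost_saving = 0.15        # average cost saving on a task actually automated

tasks_automated = share_exposed * share_cost_effective
gdp_effect = tasks_automated * avg_cost_saving

print(f"tasks automated: {tasks_automated:.1%}")  # 5.0%
print(f"GDP effect:      {gdp_effect:.2%}")       # 0.75%
```

The point is not the exact numbers but the structure: each filter multiplies down the headline figure, which is why starting from “all white-collar jobs” and ending at “bloodbath” skips several steps.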

Future of GenAI?

Cal Newport, computer science professor at Georgetown and prolific workflow and lifestyle writer, discussed another aspect of generative AI on his May 26 podcast. He currently thinks generative AI companies will start making money when people use their products several times daily, the way they use Google search. Responding to reports that OpenAI sells $20,000 licenses to companies for specialty uses, he argues that ChatGPT and its competitors must become so usable that they reach a critical mass of daily users.

That’s not even close to happening right now—at least according to an analyst report I frequently receive, purporting to show LLMs make little dent in Google search numbers.

Analyst Ben Evans recently released an essay with statistics from 2024 about daily active users (DAU) and weekly active users (WAU) of AI.

Generative AI chatbots might be a life-changing transformation in the nature of computing, that can replace all software, but so far, most of its users only pick it up every week or two, and far fewer have made it part of their lives. Is that a time problem or a product problem?

The proportion of daily users to weekly users is astonishingly small.

But another reaction is to say that even with those advantages, if this is a life-changing transformation in the possibilities of computing, why is the DAU/WAU (daily active users/weekly active users) ratio so bad? Something between 5% and 15% of people are finding a use for this every day, but at least twice as many people are familiar with it, and know how it works, and know how to use it… and yet only find it useful once a week. Again, you didn’t have to buy a thousand dollar device, so you’re not committed – but if this is THE THING – why do most people shrug?

Media coverage and hype could lead one to believe that everyone uses these tools. But, no.

DAU is everything. Sam Altman knows this – he was trying to build a social media app at the time, and yet the traction number he always gives is, well, ‘weekly active users’. That’s a big number (the latest is 1bn globally)… but then, why is he giving us that number instead of DAUs? If you’re only using ChatGPT once a week, is it really working for you?

Evans asks: is it a time problem or a product problem? We just have not found a sufficient use case to integrate ChatGPT into our daily workflows. That’s a problem for those companies.
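The stickiness metric Evans leans on is easy to compute. The numbers below are hypothetical, chosen only to match the 5–15% range he cites for chatbots and the much higher ratios of habitual products like search:

```python
def dau_wau_ratio(dau: int, wau: int) -> float:
    """Share of a product's weekly users who also show up on an average day."""
    return dau / wau

# Hypothetical figures for illustration -- not Evans's actual data.
chatbot = dau_wau_ratio(dau=30_000_000, wau=300_000_000)
search = dau_wau_ratio(dau=450_000_000, wau=500_000_000)

print(f"chatbot stickiness: {chatbot:.0%}")  # 10%
print(f"search stickiness:  {search:.0%}")   # 90%
```

A product people touch daily converts most of its weekly audience into a daily one; a product people shrug at doesn’t, no matter how large its weekly number sounds in a press release.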

Context-Driven MRO Optimization for Enterprises

AI Agents (aka AgenticAI) continue to find their way into the news. At least we are finding some reality amongst all the AI hype. This news concerns improving MRO operations.

[All hyperbole from the press release.]

Verusen launched its groundbreaking Explainability AI Agent for data and context-driven material and inventory optimization. This first-of-its-kind capability delivers unprecedented transparency into Verusen’s stocking policy recommendations, enabling procurement, operations, and supply chain teams to trust, understand, and confidently act on AI-driven insights, accelerating smarter execution and enterprise-wide alignment.

Verusen’s Material Graph — the world’s largest MRO materials knowledge base — has ingested over 41 million unique SKUs, $12 billion in annual inventory and spend, and all associated transactional POs. This powerful platform redefines how asset-intensive enterprises manage critical materials inventory, procurement, and risk across their global MRO supply chains.

By integrating Large Language Models (LLMs), Machine Learning, and Natural Language Processing technologies, Verusen transforms manual, disconnected inventory management practices into streamlined, context-rich optimization strategies—empowering teams to make smarter decisions faster while reducing risk and operational costs.

Enterprises adopting AI for MRO management often struggle with the “black box” problem—trusting recommendations without understanding the logic behind them. Verusen’s Explainability AI Agent eliminates this barrier by providing clear, concise insights into every recommendation’s rationale, supported by a powerful feedback loop that continuously learns and adapts based on user interactions.

Unlike traditional AI platforms—or even today’s general-purpose generative AI tools—Verusen’s Explainability Agent is task-driven to deliver clarifications and explanations to users. It examines model inputs, outputs, and logic to surface tailored insights directly within the platform, ensuring every decision is rooted in transparency and context.

Verusen’s Explainability AI Agent is part of the company’s broader commitment to responsible AI, ensuring that solutions are secure, accountable, and enterprise-ready from day one. Key pillars of Verusen’s responsible AI design include:

  • No exposure of Customer data to third-party LLMs
  • Built-in Explainability, not bolted-on as an afterthought
  • User-in-the-loop feedback models that improve recommendations over time

Siemens introduces AI agents for industrial automation

Finishing Agent Week at The Manufacturing Connection with this news from Siemens, introducing agents for automation on its Xcelerator platform. Siemens claims up to a 50% increase in productivity. I guess this would involve productivity across the five disciplines detailed below.

The agents are designed to work across its established Industrial Copilot ecosystem. 

This new technology represents a fundamental shift from AI assistants that respond to queries towards truly autonomous agents that proactively execute entire processes without human intervention. Siemens’ new AI agent architecture features a sophisticated orchestrator. Like a craftsman, it deploys a toolbox of specialized agents to solve complex tasks across the entire industrial value chain. These agents work intelligently and autonomously – understanding intent, improving performance through continuous learning, and accessing external tools and other agents as needed. Users retain complete control, selecting which tasks they wish to delegate to AI agents.
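The orchestrator pattern described here is a familiar one. Here is a minimal, hypothetical sketch of the idea—routing a request by intent to a specialized agent from a “toolbox”—and emphatically not Siemens’ actual API; all names below are invented for illustration:

```python
from typing import Callable

# Hypothetical specialized agents; a real system would call models or services.
def design_agent(task: str) -> str:
    return f"design: drafted CAD changes for {task!r}"

def maintenance_agent(task: str) -> str:
    return f"maintenance: scheduled inspection for {task!r}"

TOOLBOX: dict[str, Callable[[str], str]] = {
    "design": design_agent,
    "maintenance": maintenance_agent,
}

def orchestrate(task: str) -> str:
    # Crude keyword routing; a production orchestrator would classify intent
    # with an LLM and may chain several agents per task.
    for intent, agent in TOOLBOX.items():
        if intent in task.lower():
            return agent(task)
    return "escalate: no agent matched; hand back to a human"

print(orchestrate("maintenance check on pump 4"))
```

The interesting engineering questions—how intent is inferred, how agents hand work to each other, and where the human stays in the loop—are exactly what the press release glosses over.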

Siemens’ approach distinguishes between Industrial Copilots, the interfaces users interact with, and the AI agents that power them behind the scenes. Furthermore, the company is developing digital agents, and integrating physical agents, including mobile robots. This way, Siemens is creating a comprehensive multi-AI-agent system where agents are highly connected and work collaboratively. 

Here’s a question I often ask only to receive unsatisfactory answers. What makes your offering different?

What sets Siemens’ approach apart is the orchestration of these agents utilizing a comprehensive ecosystem. These agents not only work with other Siemens agents but also integrate with third-party agents, enabling unprecedented levels of interoperability.

The five parts of Siemens’ ecosystem:

  • Design Copilot: Currently available for NX CAD, helps users break new ground in creativity by accelerating the product design process. The AI-powered assistant enables users to ask questions in natural language, quickly access detailed technical insights, and streamline complex design tasks – all leading to significant efficiency gains in product development. Siemens is also currently developing a Hydrogen Configurator for designing hydrogen production plants.
  • Planning Copilot: Currently in pre-release with customer testimonials already available, this solution optimizes production planning, resource allocation, and scheduling through generative AI-powered insights, helping manufacturers maximize efficiency and minimize waste.
  • Engineering Copilot: Available for TIA Portal with Managed Service coming in 2025, it enables engineering without repetitive tasks. As the first generative AI-powered product for automation engineering, it empowers engineers to generate automation code through natural language inputs, speeding up SCL code generation while minimizing errors. In process industries, the copilot for P&ID Digitalization has already been tested by several customers. It’s an AI-assisted P&ID detection cloud service to digitalize and consolidate legacy P&ID diagrams.
  • Operations Copilot: Currently available for Insights Hub, the Copilot provides holistic insights into the entire plant. In addition, at the machine level, Siemens is planning to introduce an Operations Copilot for shop floor workers, which will be available by the end of 2025. This new product is designed to empower shop floor operators, service technicians, and maintenance engineers to work more efficiently by querying machine data and receiving error resolution guidance through natural language. The Operations Copilot can be easily implemented at the machine level to provide machine instructions and operator guidance. For the process industries, the generative AI-based assistant Simatic eaSie, enables technicians and maintenance personnel to access relevant plant and equipment data via chat or voice interaction. This makes operations and maintenance more reliable and safer both in the control room and in the field.
  • Services Copilot: The Maintenance Copilot Senseye provides maintenance teams with expert-level equipment diagnostics without the need for specialized technical knowledge. Recently expanded beyond predictive maintenance to cover the entire maintenance lifecycle, this solution supports everything from reactive repairs to predictive and preventive strategies, with pilot implementations demonstrating an average 25% reduction in reactive maintenance time.

Stochastic Parrot

Moira Gunn hosted linguistics professor Dr. Emily Bender on her podcast Tech Nation. Bender had released a book with Dr. Alex Hanna, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want.

My interest was piqued when they mentioned a 2021 paper by Bender et al. on language models called Stochastic Parrot.

Bender is one of the thinkers applying some common sense to cut through the AI hype, and I love that term. Much of generative AI and large language models is simply probability calculation based on learned text. In other words:

Stochastic—a random probability distribution that may be analyzed statistically but may not be predicted precisely—plus Parrot—to repeat something said by someone else without thought or understanding.

There are writers on both sides of the hype divide—the doomsayers and the optimistic hypesters—who have let imagination run amok. Shall we pull back a little and look for the applications where this will really help? Applications other than providing more words for marketers to stuff into a news release, that is.

BitSeek Introduces First Full-Stack Decentralized AI Infrastructure for Web3 AI

Still more AI news. I guess you might as well buckle up and enjoy the ride. I remember many other memes that journalists loved to perpetuate. I’m not saying there is no value in AI in its many guises, but it is far from living up to its hype, if indeed it ever will.

BitSeek introduced the first end-to-end decentralized AI infrastructure purpose-built for Web3. At the heart of the platform is the BitSeek proprietary DeLLM (Decentralized Large Language Model) protocol, designed to deliver powerful AI without the tradeoffs of being centralized and controlled by a big tech corporation. By combining distributed compute, blockchain-native model governance, and privacy-preserving architecture, BitSeek empowers users and developers to own, control, and benefit from the AI systems they use.

The BitSeek AI tech stack includes a globally distributed computing network of independent nodes, a suite of Blockchain Model-Context-Protocol (MCP), and a data DAO. These components address two core limitations of the Web3 AI ecosystem: the lack of accessible decentralized LLMs and the absence of native on-chain interactions.

BitSeek delivers high-performance decentralized LLMs and a multi-blockchain MCP suite to power the next generation of intelligent Web3 agents, accelerating the evolution of the decentralized AI industry. Furthermore, BitSeek gives users and developers full control over AI data, computation, and monetization—marking a pivotal moment for Web3 artificial intelligence.

While decentralization has transformed finance and data in the crypto space, AI has remained centralized and remains in the hands of big tech. Most AI platforms claiming to be “decentralized” still rely on centrally hosted LLMs. BitSeek changes that by introducing model atomization architecture: a novel approach that distributes open-source LLMs—such as DeepSeek R1 and Llama 3—across a decentralized, privacy-first network. This eliminates the need for centralized hosting, placing both the model and the data it processes in the hands of the community.

For users, the DeLLM infrastructure preserves the key benefits of hosting an AI model locally: full data control, enhanced privacy for sensitive topics, and freedom from corporate surveillance.

Unlike commercial AI platforms that collect and monetize user data, the BitSeek model ensures personal information remains secure and customizable to individual needs. A recent survey shows 78% of users prefer AI that doesn’t analyze their data, and 80% favor decentralized, open-source models—reinforcing ideals that BitSeek champions.

In the BitSeek ecosystem, users retain ownership of every conversation they generate, deciding whether to keep their data private, monetize it through DataDAOs, or move it to another LLM platform. This approach gives participants direct authority over how their data is used, along with a meaningful stake in AI’s growth. Meanwhile, node operators earn tokenized rewards for providing the computational power behind the AI models, strengthening the network while keeping it decentralized.

BitSeek is a foundational infrastructure for the next generation of decentralized applications. Web3 developers can integrate DeLLM models into AI dApps, social protocols, and decentralized agents, without compromising on privacy or decentralization. The system is modular, scalable, and designed to evolve with open-source AI advances, including upcoming integrations with models like Qwen and open-weight variants of GPT.

Unlike other decentralized AI efforts, BitSeek decentralizes the model itself—delivering infrastructure-level transformation for the entire crypto space.
