Why Should I Use Low-Code Software?

The beginnings of a trend in manufacturing software appeared on my horizon about midway through last year: the use of low-code software for application development. I first noticed it in some acquisitions in my market space. Recently I have begun working with a company called Quickbase, whose platform is built with low-code application development in mind.

[Note: In my work with Quickbase, I’m sometimes compensated for what I do. They do not dictate what I write or say.]

I recently had the opportunity to talk with two users of Quickbase’s platform for their manufacturing software needs. You can hear them, along with me, at the Quickbase Empower Virtual Customer Conference on May 8 (our session is at 11:30 am EDT, immediately following the keynotes). Their stories verified what I had begun to hear in my first encounters. Judging by their tone of voice, what really perked them up was the ability to respond rapidly to user requests for modifications to screens and reports.

That discussion spurred me on to some additional research on the topic. Following is a list of benefits I uncovered in my research. This is not a list specific to Quickbase, but a more generic list of benefits you might find with applications in a variety of areas. But check out Quickbase for your specific needs. I’m sure I’ll have more interviews in the future to take a deeper dive into Quickbase specifically. For now, I was interested in the approach itself. Feel free to contact me with additional thoughts, or with stories about how you have used low-code in engineering or manufacturing operations software.

  • Faster Development: Low-code platforms enable rapid application development by providing pre-built templates, drag-and-drop interfaces, and visual development tools. 
  • Reduced Costs: With low-code development, you can save on development costs by eliminating the need for hiring expensive developers with specialized coding skills. Additionally, the time saved in development translates to cost savings.
  • Increased Productivity: Low-code platforms allow both professional developers and citizen developers (non-technical users) to build applications. This democratization of app development increases productivity by enabling more people within an organization to contribute to development efforts.
  • Flexibility and Customization: While low-code platforms provide pre-built components and templates, they also offer the flexibility to customize applications according to specific business requirements. Developers can extend functionality by writing custom code when needed (see the sketch after this list).
  • Streamlined Maintenance: Low-code platforms often include built-in features for application monitoring, debugging, and performance optimization. This simplifies maintenance tasks and reduces the time required for ongoing support and updates.
  • Integration Capabilities: Many low-code platforms offer out-of-the-box integrations with popular third-party services, databases, and APIs. This makes it easier to connect your applications with other systems and data sources.
  • Scalability: Low-code platforms can scale with your business needs, allowing you to quickly add new features or expand functionality as your requirements evolve. This scalability helps future-proof your applications.
  • Accessibility: Low-code platforms often come with intuitive user interfaces and guided development processes, making app development accessible to a wider range of users, including those with limited technical expertise.
  • Faster Time-to-Market: By accelerating the development process and enabling iterative development cycles, low-code platforms help bring applications to market faster. This can give your business a competitive edge by allowing you to respond quickly to changing market demands.
  • Risk Reduction: Low-code platforms often come with built-in security features and compliance standards, reducing the risk of security vulnerabilities and ensuring regulatory compliance.
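To make the “custom code when needed” point concrete, here is a minimal sketch of what such an escape hatch might look like. This is purely illustrative; it is not Quickbase’s API, and the hook name and record fields are invented for the example.

```python
# Hypothetical custom-code hook attached to a low-code form.
# `on_record_save` and the work-order fields are invented; real platforms
# expose their own extension points.

import datetime

def on_record_save(record: dict) -> dict:
    """Custom rule a developer might add when the visual tools run out."""
    # Derived field: flag work orders exceeding standard cycle time by 25%.
    if record["actual_minutes"] > 1.25 * record["standard_minutes"]:
        record["status"] = "REVIEW"
    record["last_validated"] = datetime.datetime.now().isoformat()
    return record

print(on_record_save({"actual_minutes": 90, "standard_minutes": 60}))
```

The point is the division of labor: the platform handles forms, storage, and reports, while a few lines of conventional code cover the odd business rule the drag-and-drop tools can’t express.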

Overall, low-code application development software offers a compelling solution for businesses looking to rapidly build, deploy, and maintain applications with greater efficiency and flexibility.

AI Comes to Vision Software

News of vision software for guiding robots goes along with the burst of robot news. In this case, AI meets vision software.

AI software company Micropsi Industries today announced MIRAI 2, the latest generation of its AI-vision software for robotic automation. MIRAI 2 comes with five new features that enhance manufacturers’ ability to reliably solve automation tasks with variance in position, shape, color, lighting or background. 

What sets the MIRAI 2 AI-Vision Software apart from traditional vision solutions is the ability to operate with real factory data without the need for CAD data, controlled light, visual-feature predefinition or extensive knowledge of computer vision.

Gary Jackson, CEO of Micropsi Industries, noted, “Recognizing the complexities of implementing advanced AI in robotic systems, we’ve assembled expert teams that combine our in-house talent with select system integration partners to ensure that our customers’ projects are supported successfully, no matter how complex the requirements.”

MIRAI is an advanced AI-vision software system that enables robots to dynamically respond to varying conditions within their factory environment, including variance in position, shape, color, lighting and background.

The five new features that will be available to MIRAI 2 users are:

Robot skill-sharing: This new feature allows users to share skills between multiple robots, at the same site or elsewhere. If conditions are identical (lighting, background, etc.), very little or no additional training is required in additional installations. MIRAI can also handle small differences in conditions by recording data from multiple installations into a single, robust skill. 

Semi-automatic data recording: Semi-automatic training allows users to record episodes (of data) for skills without having to hand-guide the robot, reducing the workload on users and increasing the quality of the recorded data. MIRAI can now automatically record all the relevant data—users only need to prepare the training situations and corresponding robot target poses.

No F/T sensor: Training and running skills is now possible without ever connecting a Force/Torque sensor. This reduces cost, simplifies tool geometry and cabling setup, and overall makes skill applications more robust and easier to train.

Abnormal condition detection: MIRAI can now be configured to stop skills when unexpected conditions are encountered, allowing users to handle these exceptions in their robot program or alert a human operator.
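As a generic illustration of the idea (this is not Micropsi’s API; the class, features, and threshold are my own assumptions), an abnormal-condition check can be as simple as flagging inputs that sit far outside the distribution seen during training:

```python
# Generic sketch: halt a vision-guided skill when inputs look unlike
# anything seen in training. Not Micropsi's implementation.
import numpy as np

class AnomalyGate:
    """Flags feature vectors far outside the training distribution."""
    def __init__(self, training_features: np.ndarray, threshold_sigma: float = 4.0):
        self.mean = training_features.mean(axis=0)
        self.std = training_features.std(axis=0) + 1e-9  # avoid divide-by-zero
        self.threshold = threshold_sigma

    def is_abnormal(self, features: np.ndarray) -> bool:
        z = np.abs((features - self.mean) / self.std)
        return bool(z.max() > self.threshold)

gate = AnomalyGate(np.random.default_rng(0).normal(size=(500, 8)))
if gate.is_abnormal(np.full(8, 10.0)):
    print("Unexpected conditions: stop the skill and alert an operator.")
```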

Industrial PC: The MIRAI software can now be run on a selection of industrial-grade hardware for higher dependability in rough factory conditions.

Fero Labs Redefines Trust in AI for Industrial Live Predictions

Fero Labs has developed software to help certain types of process manufacturing plants improve quality output economically when given a random mix of feedstock. I wrote about the company last August—A Better Way to Control Process Quality.

They sent a new press release, and I must admit that I understood almost nothing in it:

Fero Labs, the only Profitable Sustainability Platform for industrial optimization, announced the release of their ground-breaking feature ‘ExplainIt for Live Predictions’ which expands a factory’s production knowledge in real-time. This advanced feature for cross-functional teams increases trust in AI predictions by disclosing real-time text explanations about abnormal factors influencing their live production.

There were way too many marketing-type phrases in there. Worst of all was the concept of “trust in AI predictions.” So, I asked the very patient publicist. She suggested that I talk with Berk Birand, Fero Labs Co-founder and CEO. And, I did. He was most helpful.

We caught up from my last article about their ability to use the huge data sets manufacturers have accumulated over the past decade, applying advanced statistical methods and “white box machine learning (ML)” to help engineers optimize their plants: make them more profitable and reduce waste (sustainability). Hence the “Profitable Sustainability” company.

Birand took me through an example that I could understand, since I had a customer in the 90s who did this sort of process.

Imagine a plant with piles of scrap steel in a yard. They have an electric arc furnace that melts all that disparate steel that will be poured out eventually to make their final product. Given that the feedstock has high variability as to the composition of the steel, the typical plant overdesigns the process to allow for variations. This, of course, is wasteful on the surface. But if the final chemical analysis shows that the output will not make the desired tensile strength or other spec, then the waste is even higher.

What if you accumulated the data (feedstock, process, finished steel) over time and built a modern AI model? Its predictions could be used to drive profits, reduce waste, and save time. But would anyone trust yet another advanced process control system? We all know that models eventually go out of whack and sometimes get the wrong answer.

Here comes the “trust” part of trust in AI. They built an explainable model from the beginning. It can predict characteristics of the mix, say tensile strength, from chromium or carbon levels and so forth. Since we know that every model is wrong sometimes, they built confidence levels into the prediction engine. Their AI looks at the material composition and suggests adding chemicals to the mix, but it gives an explanation and a confidence level. The engineer looks at the confidence report (“I am confident in this prediction” or “I am not confident in this prediction”) and can decide whether to go with the AI or with gut feel based on years of experience.
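Here is a minimal sketch of that idea, using ordinary least squares on fabricated melt-chemistry data. It is not Fero Labs’ software; it just shows how a white-box model can hand the engineer both an explanation (per-element contributions) and a confidence band (from residual error):

```python
# Sketch: white-box prediction with an explanation and a confidence band.
# All data and coefficients are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(1)
# Fabricated training data: columns are %C, %Cr, %Mn in the melt.
X = rng.uniform([0.1, 0.5, 0.3], [0.5, 2.0, 1.5], size=(200, 3))
y = 400 + 600 * X[:, 0] + 120 * X[:, 1] + 80 * X[:, 2] + rng.normal(0, 15, 200)

# Ordinary least squares: the coefficients double as the "explanation".
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
residual_std = float(np.std(y - A @ coef))

def predict(carbon: float, chromium: float, manganese: float):
    x = np.array([1.0, carbon, chromium, manganese])
    return x @ coef, 2 * residual_std  # estimate and a ~95% band

est, band = predict(0.3, 1.2, 0.9)
print(f"Predicted tensile strength: {est:.0f} +/- {band:.0f} MPa")
print("Per-element contributions (MPa):",
      dict(zip(["C", "Cr", "Mn"], np.round(coef[1:] * [0.3, 1.2, 0.9], 1))))
```

A wide band is the model saying “I’m not confident in this prediction,” which is exactly the cue the engineer needs to fall back on experience.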

He convinced me. Fero Labs has developed an AI engine that gives the engineer a level of trust in the prediction.

More explanation from the press release:

Expanding on Fero Labs’ white-box ML, which provides full transparency of Fero’s powerful machine learning models, the new ExplainIt feature provides a contextual explanation of anomalous factors involved in each live production optimization.

This type of analysis is typically addressed through linear Root Cause Analysis (RCA) tools. Unlike traditional methods, Fero Labs’ solution is non-linear, much like process operations, and delivers results in seconds rather than the hours or days typically needed. Traditional methods generally require the engineer to preselect a small sample of factors to investigate, which can introduce potentially misleading biases. Fero Labs’ software has the power to evaluate all relevant factors which improves insight and prediction accuracy.

Blockchain Rises Again

Kevin Rose recently interviewed Chris Dixon. Dixon provides a good overview of the current status of blockchain. I really haven’t heard much about that technology for years. A speaker at a Siemens event maybe five years ago extolled the future of pharmaceutical supply chain data through blockchain. That may have been the last I heard. Check out the podcast for an update.

Meanwhile, according to research from GlobalData, “The blockchain industry, although volatile and nascent, has made significant progress in a short span of time, driven by remarkable innovation. Global blockchain platform and services revenue is set to grow from $12 billion in 2023 to $291 billion in 2030. This growth trajectory reflects a more delineated and specialized expenditure pattern, with specific areas such as asset tokenization, blockchain development, and infrastructure services serving as primary drivers of market expansion.”

GlobalData’s latest report, “Thematic Research: Blockchain,” reveals a pivotal shift from the technology’s broad, indiscriminate application to more focused, strategic uses. The industry is witnessing a quiet but steady increase in blockchain adoption, concentrating on its practical benefits. This trend is supported by a growing understanding that blockchain’s applicability is not universal and that a robust digital infrastructure is crucial for its successful deployment.

Emerson Jumps Into The Software-Defined Automation Architecture Fray

  • Sees Boundless Automation as Industry Inflection Point to Address Data Barriers & Modernize Operations
  • Advanced software-defined automation architecture to integrate intelligent field, edge and cloud, unlocking a new era of productivity
  • Global automation leaders convene to learn about Boundless Automation at Emerson Exchange in Düsseldorf

I seem to have become sort of persona non grata with the new marketing regime at the Emerson Automation group. However, I picked up this news from its meeting last month in Düsseldorf, Germany. I found this statement by President and CEO Lal Karsanbhai interesting. It reflects the underlying philosophy I wanted to address when Dave and Jane and I started Automation World back in 2003: the world requires suppliers to go beyond proprietary control and leverage all the data for higher-level decision making.

“After decades of implementing evolving automation strategies, manufacturers recognize the need to extract greater value from data that is locked in a rigid and now outdated automation architecture,” said Emerson President and CEO Lal Karsanbhai. “The proliferation of data and the development of advanced software are moving us to an era of unprecedented productivity. Rich data and advanced software are converging to form the next major inflection point in the industry.”

Acknowledging the foundational problems we’ve identified for years, Emerson says it is “poised to transform industrial manufacturing with the next-generation automation architecture designed to break down data silos, liberate data and unleash the power of software with Boundless Automation.”

I applaud Emerson’s strategy, although I do wish it had been done along with the standards efforts of OPAF. But only a couple of competitors seem to be serious about that one. Further, I continue to find companies in my research still trying to break down the silos. I thought we had accomplished that 10 years ago. I guess not. We still have complex networks of Microsoft Excel spreadsheets and every department for itself on data definition and retention.

To address this challenge and help customers achieve their operational improvements, Emerson is introducing a vision and actionable strategy to push more computing power closer to where it’s needed and establish the blueprint for a modern industrial computing environment. This environment includes flexibility to deploy software across the intelligent field; a modern, software-defined edge; and the cloud. All three domains will be connected through a unifying data fabric, helping to maintain data context, improve its usability and increase security.

Emerson’s modern, software-defined automation architecture will break down hierarchical networks, securely democratizing and contextualizing data for both people and the artificial intelligence (AI) engines that depend on a continuous flow of information.

Here are the components within Boundless Automation:

  • Intelligent Field: An intelligent field will simplify access to more data from more sources and a greater diversity of applications. With smarter devices and new connection technologies like 5G and APL, customers can streamline both connectivity from anywhere in the world and integration across the new architecture.
  • Edge: The new OT edge creates a modern, secure, low-latency computing environment, putting new software tools and actionable data closest to its user. This enhanced edge environment establishes a platform for IT and OT colleagues to innovate and collaborate more than ever before.
  • Cloud: The cloud will power complex operations and engineering capabilities on-premise and across the enterprise by providing infinite analytical computing power, enterprise collaboration, attractive lifecycle costs and on-demand support and service.
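To make the “unifying data fabric” point concrete, here is a sketch under my own assumptions, not Emerson’s implementation. The property it illustrates is that a measurement’s context (asset, units, provenance) travels with the data from field to edge to cloud instead of being stripped at each hop:

```python
# Sketch of context traveling with data across field -> edge -> cloud.
# The class and domain names are invented for illustration.
from dataclasses import dataclass, field
import datetime

@dataclass
class ContextualizedTag:
    value: float
    unit: str
    asset: str                 # which piece of equipment produced it
    domain: str = "field"      # current domain: field, edge, or cloud
    history: list = field(default_factory=list)

    def promote(self, new_domain: str) -> "ContextualizedTag":
        # Record provenance instead of discarding it at each hop.
        self.history.append((self.domain, datetime.datetime.now().isoformat()))
        self.domain = new_domain
        return self

reading = ContextualizedTag(value=87.4, unit="degC", asset="Pump-12/bearing")
reading.promote("edge").promote("cloud")
print(reading.domain, reading.history)
```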

MX Workmate OT-Compliant Generative AI Solution for Connected Workers

It had to happen sooner or later—generative AI large language model (LLM) technology for human-machine interface applications. Funny that nowhere in the press release do they mention HMI, opting instead for more awkward workaround phrasing. Maybe that is a Finnish translation?

  • Generative AI Large Language Model (LLM) technology for operational environments, bridging knowledge and language barriers between industrial workers and OT systems
  • On-premise, edge-based MX Workmate solution enables connected workers to get contextually relevant real-time information and query OT systems in a secure and reliable way using natural language
  • OT-compliant MX Workmate automates IT/OT knowledge retrieval, easing interaction between workers and systems to drive efficiency, productivity and worker safety

MX Workmate leverages generative AI (GenAI) and large language model (LLM) technologies to generate contextual, human-like language content based on real-time OT data. It enables workers to understand complex machines and get real-time status information, and it enables industries to achieve greater flexibility, productivity and sustainability, as well as improve worker safety.
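As a rough sketch of how such a system might be wired (my own assumptions, not MX Workmate’s actual design), the core move is folding live OT readings into a prompt so a worker can ask questions in natural language. The `ask_llm` function below is a hypothetical stand-in for whatever on-premise model endpoint a deployment would use:

```python
# Sketch: natural-language queries over live OT data via an LLM.
# The snapshot fields and ask_llm are invented for illustration.
ot_snapshot = {
    "line": "Packaging-3",
    "machine_state": "FAULT",
    "fault_code": "E-217 (gripper torque high)",
    "throughput_last_hour": 412,
}

def build_prompt(question: str, snapshot: dict) -> str:
    context = "\n".join(f"{k}: {v}" for k, v in snapshot.items())
    return (
        "You are an assistant for factory operators. Using only the live "
        f"OT data below, answer the question.\n\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

def ask_llm(prompt: str) -> str:
    # Placeholder: a real deployment would call the on-premise model here.
    return "(model response would appear here)"

print(ask_llm(build_prompt("Why did Packaging-3 stop?", ot_snapshot)))
```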
