30 kg Collaborative Robot Unveiled and Ready for Sale

Centers of excellence for certain types of technology seem to grow organically. Dayton, Ohio once had several manufacturers of special coil-winding machinery. Central western Ohio had several companies that manufactured special tube-bending machines. Odense, Denmark? An entire ecosystem built around collaborative robots. And today, November 29, I sat in on a press conference announcing Universal Robots' newest product—the UR30 collaborative robot (cobot) with a 30 kg payload capacity.

Why develop a new product? CEO and President Kim Povlsen says the market is good and growing—especially in Asia. The automotive and electronics markets are hot, and logistics has become an increasingly good market for cobots. Plus, expanding the line perpetuates and extends UR's mission of creating better workplaces.

The UR30 is a bit of a paradox. It can handle a heavier payload than its sibling, the UR20, yet it has a smaller footprint, which lets it adapt to more application areas. The smaller size also yields a more rigid structure, allowing the arm to hold steady in high-torque screw driving applications.

The UR30 is the second in Universal Robots' new series of innovative, next-generation cobots and is built on the same architecture as the award-winning UR20. Despite its compact size, the UR30 offers extraordinary lift, and its superior motion control ensures precise placement of large payloads, allowing it to work at higher speeds and lift heavier loads.

This makes the UR30 ideal for several applications, including machine tending, material handling, and high-torque screw driving. For machine tending, the high payload brings new possibilities, allowing the cobot to use multiple grippers at the same time. This means it can remove finished parts and load more material in a single pass, shortening changeover times and maximizing productivity.

The UR30 will also effectively support high-torque screw driving, as it can handle larger and higher-output torque tools, and thanks to a steady-mode feature it delivers straight and consistent screw driving. This will be beneficial in, for example, the automotive industry.

In addition, the 30 kg payload makes the UR30 a great match for material handling and palletizing of heavy products across all industries, with the small footprint enabling it to fit into almost any workspace, relieving humans of the heavy lifting. Weighing only 63.5 kg, it can also be easily moved between work cells.

Povlsen adds, “The higher payload and greater flexibility underpin a new era in automation. Industries around the world are embracing more agile manufacturing and modularity in production – part of achieving that modularity and agility is about mobility and this cobot delivers that despite its payload.

“As industries evolve, the UR30 not only meets but anticipates shifting demands, enabling businesses to adapt and respond to changing needs effectively. As we continue to innovate, the UR30 is another step in UR’s journey in pushing the boundaries of what is possible in the world of automation.”

Kubernetes-as-a-Service for the Distributed Edge

Containers, orchestrated most often with Kubernetes, are a powerful tool in the modern edge-to-cloud architecture. ZEDEDA has developed a service model for the technology.

In brief:

  • ZEDEDA Edge Kubernetes Service is a fully managed service including a Kubernetes runtime curated, managed and supported by ZEDEDA.
  • Organizations can instantly deploy Kubernetes infrastructure at the distributed edge, securely and cost-efficiently.
  • ZEDEDA’s partnerships and integrations with industry-leading orchestrators, such as Avassa, Rafay, Red Hat OpenShift, SUSE Rancher and VMware Tanzu, provide a robust solution for the modern edge landscape.

ZEDEDA has announced ZEDEDA Edge Kubernetes Service, a fully managed Kubernetes service for the distributed edge. The new service includes a Kubernetes runtime that is curated, managed and supported by ZEDEDA, as well as integrations with industry-leading orchestrators.

Deploying Kubernetes at the edge is challenging because it was built for centralized data centers and scale-out clouds and, therefore, not for inherently constrained and distributed edge environments. ZEDEDA Edge Kubernetes Service is a fully managed service that simplifies Kubernetes deployments at the edge, allowing customers to focus on their applications instead of managing and maintaining the underlying infrastructure. The new service eliminates the struggles typically associated with Kubernetes deployments at the edge, such as highly remote or distributed locations, constrained devices, unreliable security, lack of skilled IT personnel in the field and undependable network connectivity. ZEDEDA Edge Kubernetes Service enables organizations to deploy and run Kubernetes infrastructure at the distributed edge remotely, securely and cost-efficiently.
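
ZEDEDA has not published the service's API details here, so the sketch below is illustrative rather than ZEDEDA-specific. It shows, using the official Kubernetes Python client, what deploying a small containerized workload to a remote edge cluster can look like once a managed runtime is in place; the kubeconfig path, image name, and resource limits are all hypothetical.

```python
# A minimal sketch: deploying a workload to an edge Kubernetes cluster
# once a managed runtime is reachable. Illustrative only; the kubeconfig
# path, namespace, image, and limits below are hypothetical.
from kubernetes import client, config


def deploy_edge_workload(kubeconfig: str = "~/.kube/edge-site-01.yaml") -> None:
    # Authenticate against the remote edge cluster's API server.
    config.load_kube_config(config_file=kubeconfig)

    container = client.V1Container(
        name="telemetry-agent",
        image="registry.example.com/telemetry-agent:1.0",
        # Edge nodes are resource-constrained, so cap usage explicitly.
        resources=client.V1ResourceRequirements(
            limits={"cpu": "250m", "memory": "128Mi"},
        ),
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="telemetry-agent"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "telemetry-agent"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "telemetry-agent"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)


if __name__ == "__main__":
    deploy_edge_workload()
```

The point of a managed service such as ZEDEDA's is that the runtime behind that kubeconfig is provisioned, patched, and monitored for you; the application-level workflow above stays the same.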

“Our customers are industry leaders who are pushing the boundaries of innovation at the distributed edge, and working with them, we realized the need for an edge service that would remove the obstacles of deploying Kubernetes in these environments,” said Said Ouissal, ZEDEDA’s CEO and founder. “ZEDEDA Edge Kubernetes Service is a first-of-its-kind fully managed edge solution that enables our customers to use any Kubernetes tools that fit their needs and provides a clear path to modernize edge infrastructure while leveraging existing IT investments.”

ZEDEDA Edge Kubernetes Service provides full lifecycle-managed Kubernetes.

Programming Model Enables Application Development for both Cloud and Edge

Edge compute continues to be the most talked-about part of the network these days. This news concerns an application development platform for Edge and Cloud. I wish I could try out all this software the way I did many years ago, but it's all too complex and expensive today. As with everything else, I don't know whether it works, but it sounds good.

Lightbend Inc., the company providing cloud-native microservices frameworks for some of the world's largest brands, has announced the latest version of Akka, one of the industry's most powerful platforms for distributed computing. The release incorporates a new programming model that enables developers to build an application once and have it work across both Cloud and Edge environments.

“Today, applications developed for cloud native environments are generally not well-suited to the Edge and vice versa,” said Jonas Bonér, Lightbend’s founder and CEO. “This has always struck me as counter-productive, as both architectures lean heavily on one another to be successful. As the line between Cloud and Edge environments continues to blur, Akka Edge brings industry-first capabilities to enable developers to build once for the Cloud and, when ready, deploy seamlessly to the Edge.”

“Akka has been a powerful enabling technology for us to build high-performance Cloud systems for our clients,” said Jean-Philippe Le Roux, CEO of Reflek.io, an innovative company delivering Digital Twin technologies to geo-distributed companies. “We have been able to dramatically speed our time-to-production by building a single solution for both Cloud and Edge with Akka.”

Akka provides a single programming model that eliminates the high-latency, large-footprint, and complexity barriers the Edge has posed for development teams wanting to bridge the Edge and Cloud. Developers focus on business logic, not complicated, time-consuming tool integrations. As a result, businesses can harness, distribute, and fully utilize vast amounts of intelligent data to improve their operations, regardless of where that data is generated. Some specific capabilities of the latest version of Akka include (a sketch of the underlying pattern follows the list):

  • Adaptive Data Availability
      ◦ Projections over gRPC for the Edge – asynchronous, brokerless service-to-service communication
      ◦ Scalability and efficiency improvements to handle the large scale of many Edge services
      ◦ Programmatically defined low-footprint active entity migration
      ◦ Temporal, geographic, and use-based migration
  • Run Efficiently in Resource-Constrained Environments
      ◦ Support for more constrained environments, such as running with GraalVM native image and lightweight Kubernetes distributions
      ◦ Support for multidimensional autoscaling and scale to near zero
      ◦ Lightweight storage for running durable actors at the far edge
  • A Single Programming Model for the Cloud-to-Edge Continuum
      ◦ Akka's single programming model keeps the code, the tools, the patterns, and the communication the same, whether running in the Cloud, at the Edge, or in between
      ◦ Seamless integration – works at the Edge or in the cluster automatically
  • Empowering New Innovation
      ◦ Active/active digital twins, among many other new use cases
      ◦ No complicated logic to handle network segregation
      ◦ Focus on business logic and flow, not on tool integrations
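
Akka's APIs are Scala and Java, so the following is not Akka code. It is a minimal Python/asyncio sketch of the pattern the list above describes: an event-sourced entity whose event stream is projected to another service without a message broker in between. Every name in it is illustrative.

```python
# Not Akka (Akka is JVM-based): an asyncio sketch of an event-sourced
# entity plus a brokerless "projection" consuming its event stream.
import asyncio
from dataclasses import dataclass, field


@dataclass
class SensorEntity:
    """Durable entity: state is derived from an append-only event log."""
    entity_id: str
    events: list = field(default_factory=list)
    temperature: float = 0.0

    def handle_command(self, new_temperature: float) -> dict:
        # Persist an event, then update in-memory state from it.
        event = {"entity": self.entity_id, "temp": new_temperature}
        self.events.append(event)
        self.temperature = new_temperature
        return event


async def projection_consumer(stream: asyncio.Queue) -> None:
    # Stand-in for a projection over gRPC: consumes the entity's events
    # service-to-service, with no broker in between.
    while True:
        event = await stream.get()
        if event is None:  # shutdown sentinel
            break
        print(f"projected to edge read-model: {event}")


async def main() -> None:
    stream: asyncio.Queue = asyncio.Queue()
    consumer = asyncio.create_task(projection_consumer(stream))

    # The entity code is identical wherever it runs (Cloud or Edge);
    # only deployment configuration would differ.
    entity = SensorEntity("sensor-42")
    for reading in (20.5, 21.0, 22.3):
        await stream.put(entity.handle_command(reading))

    await stream.put(None)
    await consumer


asyncio.run(main())
```

In Akka itself, the entity would be an event-sourced actor and the projection would run over gRPC between services; the sketch only conveys the shape of the "build once, run Cloud or Edge" model.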

Responsible Computing Aligns with UN Sustainable Development Goals

The big internal debate at OpenAI last week that spilled into the general news highlights the struggle between those who want to develop technology as quickly as possible (usually to be first with a hugely profitable product) and those who seek some responsibility among developers. The latter are trying to avoid the social discord and personal angst caused by Facebook/Instagram/TikTok algorithmic feeds.

The Responsible Computing Consortium attempts to step into that void for general computing.

Responsible Computing (RC), a consortium comprised of technology innovators working together to address sustainable development goals, published a new whitepaper: Aligning Responsible Computing Domains with the United Nations Sustainable Development Goals.

“Responsible Computing must align with an existing, globally adopted framework such as the United Nations Sustainable Development Goals (SDGs) to ensure our work is credible and reliable,” said Page Motes, Strategic Advisor. “Leveraging the SDGs helps ensure that each Responsible Computing domain is rooted in legitimate and practical concepts to implement and drive beneficial progress.”

The RC framework focuses on six domains of responsible computing: data centers, infrastructure, code, data usage, systems, and impact. RC's Self-Assessment survey helps organizations evaluate their sustainability practices for information and communications technologies (ICT) and other business areas.

The UN initially established SDGs to guide nation-states; however, thousands of public and private organizations across the globe have decided to align broad programs, as well as discrete projects and initiatives, with specific SDGs.

“Organizations need a way to measure their progress in meeting SDGs against a baseline as they implement new strategies and as domains evolve and mature,” said Oriette Nayel, Co-Chair for Data Usage, Responsible Computing Consortium. “The Responsible Computing Self-Assessment provides a clear opportunity for organizations to cross-reference the various sub-elements addressed per RC domain with the SDGs.”

Responsible Computing domain alignment to SDGs falls into three different categories:

  • Foundational SDGs ensure domains are structurally sound and rooted in the law or essential standards.
  • SDGs that benefit from proper scoping, planning, or execution of a domain's elements.
  • SDGs that reap the positive impact of a purposeful, responsible computing use case based on its intended output/outcome.

Organizations can drive meaningful and lasting change by:

  • Adopting the RC model
  • Understanding the potential impacts (positive and negative) related to each RC domain
  • Identifying and progressing against a small number of UN SDGs
  • Providing transparent reporting of outcomes

“IBM continues to be proud of our leadership position related to responsible computing,” said Guncha Malik, Executive Architect, IBM Cloud CISO. “In alignment with our ESG framework of Environmental, Equitable and Ethical Impact, IBM co-founded the Responsible Computing Consortium to further highlight the importance across industry to act as good corporate citizens and address key areas of synergy in alignment with widely accepted frameworks such as the United Nations Sustainable Development Goals.”

“Whichever SDGs your organization chooses to advance, root your work in measurement and transparency, and be cautious with the use of the terms green, sustainable, or similar language that could lead to accusations of greenwashing and reputational damage,” said Bill Hoffman, Chairman & CEO, Object Management Group.

Industry IoT Consortium Publishes the Digital Twin Core Conceptual Models and Services Technical Report

Digital twins form the core technology of Industry 4.0, the Industrial Metaverse, and Digital Transformation. (Did I hit all the hype hot buttons there?) All joking aside, digital twins—making digital designs available across platforms—are important. This month, the Industry IoT Consortium (IIC) published the Digital Twin Core Conceptual Models and Services Technical Report.

The report guides technical decision-makers, system engineers, software architects, and modelers on connecting foundational IT infrastructure with industry-specific business applications powered by digital twins in industrial settings.

The report describes digital twin fundamental concepts and basic requirements, core conceptual models and services, enabling architectures and technologies, and supported business applications. It provides high-level technical considerations in implementing the digital twin core layer, aligned to the Virtual Representation section of the Digital Twin Consortium (DTC) Platform Stack Architectural Framework: An Introductory Guide. The IIC report also includes a survey of relevant standards and can be used as input for standards development for digital twins.
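
The IIC report defines conceptual models and services, not code, but a toy example may help ground the idea. Below is a minimal, hypothetical Python sketch of the core pattern such reports describe: a virtual representation kept in sync with telemetry from a physical asset, with a business application layered on top. All names and thresholds are invented.

```python
# Illustrative digital-twin core: a virtual representation synchronized
# from field telemetry. Asset names and thresholds are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class PumpTwin:
    """Virtual representation of a physical pump."""
    asset_id: str
    rpm: float = 0.0
    bearing_temp_c: float = 0.0
    last_sync: Optional[datetime] = None

    def apply_telemetry(self, rpm: float, bearing_temp_c: float) -> None:
        # Synchronization service: update the twin from field data.
        self.rpm = rpm
        self.bearing_temp_c = bearing_temp_c
        self.last_sync = datetime.now(timezone.utc)

    def needs_maintenance(self) -> bool:
        # A business application layered on the twin (threshold invented).
        return self.bearing_temp_c > 80.0


twin = PumpTwin("pump-07")
twin.apply_telemetry(rpm=1750.0, bearing_temp_c=84.2)
print(twin.asset_id, "maintenance due:", twin.needs_maintenance())
```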

The Industry IoT Consortium is a program of the Object Management Group (OMG). 
