Three Dangerous Ideas From Ray Kurzweil

Peter Diamandis, entrepreneur and founder of Singularity University and XPRIZE, among many other ventures, interviewed his friend Ray Kurzweil at the Googleplex for a 90-minute live webinar on disruptive and dangerous ideas.

Diamandis promotes what he calls Abundance Thinking. He says, “By consuming and considering a steady diet of ‘crazy ideas,’ you train yourself to think bigger and bolder… a critical requirement for making impact. As humans, we are linear and scarcity-minded. As entrepreneurs, we must think exponentially and abundantly. At the end of the day, the formula for a true breakthrough is equal to ‘having a crazy idea’ you believe in, plus the passion to pursue that idea against all naysayers and obstacles.”

Kurzweil is Co-founder and Chancellor of Singularity University. He is also an XPRIZE Trustee, a Director of Engineering at Google, and “one of the best predictors of our exponential future.”

Diamandis and Kurzweil’s 90-minute conversation is available in the YouTube video linked above. Here are three compelling ideas from the conversation, as reported by Diamandis in his newsletter. If you haven’t run across him, I recommend subscribing and having your mind blown.

The Nation-State Will Soon Be Irrelevant

Historically, we humans don’t like change. We like waking up in the morning and knowing that the world is the same as it was the night before.

That’s one reason why government institutions exist: to stabilize society.

But how will this change in 20 or 30 years? What role will stabilizing institutions play in a world of continuous, accelerating change?

“Institutions stick around, but they change their role in our lives,” Ray explained. “They already have. The nation-state is not as profound as it was. Religion used to direct every aspect of your life, minute to minute. It’s still important in some ways, but it’s much less important, much less pervasive. [It] plays a much smaller role in most people’s lives than it did, and the same is true for governments.”

Ray continues: “We are fantastically interconnected already. Nation-states are not islands anymore. So we’re already much more of a global community. The generation growing up today really feels like world citizens much more than ever before, because they’re talking to people all over the world and it’s not a novelty.”

Diamandis has previously shared his belief that national borders have become extremely porous, with ideas, people, capital, and technology rapidly flowing between nations. In decades past, your cultural identity was tied to your birthplace. In the decades ahead, your identity will be more a function of many other external factors. If you love space, you’ll be connected with fellow space cadets around the globe more than you’ll be tied to someone born next door.

We’ll hit longevity escape velocity before we realize we’ve hit it

Ray and I share a passion for extending the healthy human lifespan.

I frequently discuss Ray’s concept of “longevity escape velocity” — the point at which, for every year that you’re alive, science is able to extend your life for more than a year.
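
To make that definition concrete, here is a toy calculation of my own (not Ray’s model): suppose medicine currently adds about 0.2 years of life expectancy per calendar year, and that this gain compounds 10% annually as technology accelerates. Escape velocity is the year the annual gain first exceeds one year per year lived. All the numbers are illustrative assumptions.

```python
# Toy model of longevity escape velocity. The starting gain (0.2 years of
# life expectancy added per calendar year) and the 10% compounding rate are
# illustrative assumptions, not Kurzweil's figures.
gain, year = 0.2, 2024
while gain <= 1.0:          # LEV: more than one year gained per year lived
    gain *= 1.10
    year += 1
print(f"Under these assumptions, escape velocity arrives around {year}")
```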

Scientists are continually extending the human lifespan, helping us cure heart disease, cancer, and eventually neurodegenerative disease. This will keep accelerating as technology improves.

During my discussion with Ray, I asked him when he expects we’ll reach “escape velocity…”

His answer? “I predict it’s likely just another 10 to 12 years before the general public will hit longevity escape velocity.”

“At that point, biotechnology is going to have taken over medicine,” Ray added. “The next decade is going to be a profound revolution.”

From there, Ray predicts that nanorobots will “basically finish the job of the immune system,” with the ability to seek and destroy cancerous cells and repair damaged organs.

As we head into this sci-fi-like future, your most important job for the next 15 years is to stay alive. “Wear your seatbelt until we get the self-driving cars going,” Ray jokes.

The implications for society will be profound. While the scarcity-minded in government will react by saying, “Social Security will be destroyed,” the more abundance-minded will realize that extending a person’s productive earning lifespan from 65 to 75 or 85 years old would be a massive boon to the GDP.

Technology will help us define and actualize human freedoms

The third dangerous idea from my conversation with Ray is about how technology will enhance our humanity, not detract from it.

You may have heard critics complain that technology is making us less human, and increasingly disconnected.

Ray and I share a slightly different viewpoint: that technology enables us to tap into the very essence of what it means to be human.

“I don’t think humans even have to be biological,” explained Ray. “I think humans are the species that changes who we are.”

Ray argues that this began when humans developed the earliest technologies — fire and stone tools. These tools gave people new capabilities, and became extensions of our physical bodies.

At its base level, technology is the means by which we change our environment, and change ourselves. This will continue, even as the technologies themselves evolve.

“People say, ‘Well, do I really want to become part machine?’ You’re not even going to notice it,” says Ray, “because it’s going to be a sensible thing to do at each point.”

Today, we take medicine to fight disease and maintain good health, and would likely consider it irresponsible if someone refused to take a proven, life-saving medicine.

In the future, this will still happen, except the medicine might contain nanobots that target disease, or that also improve your memory so you can recall things more easily.

And because this new medicine works so well for so many, public perception will change. Eventually, it will become the norm… as ubiquitous as penicillin and ibuprofen are today.

In this way, ingesting nanorobots, uploading your brain to the cloud, and using devices like smart contact lenses can help humans become, well, better at being human.

Ray sums it up: “We are the species that changes who we are to become smarter and more profound, more beautiful, more creative, more musical, funnier, sexier.”

My Take

I began studying international relations 50 years ago under an interesting professor. He was well up the chain at the CIA, a Colonel in US Army Intelligence, with a PhD from Georgetown. He was also something of a rebel, and he took a liking to a somewhat rebellious kid from the farmlands.

It’s evident that the nation-state is in its death-throes. Trump and Xi and Putin are all trying to find ways to reassert power over a society and businesses that are increasingly global. Yes, there are emotional loyalties. But take a big step back and look at the sweep of history of the past 150 years. Think about what you see.

Technology throughout the entire history of humans has been both good and bad. But overall, it has benefitted humans. We eat better (well within our power of choice—don’t choose Doritos), live longer, have better housing and clothing, travel faster. We also have machines to help with backbreaking and dangerous labor.

As Diamandis says, think abundance rather than scarcity.

Looking At Technology 2030 Compliments of Dell Technologies and IFTF

Living with technology a decade from now. Dell Technologies and the Institute for the Future conducted an in-depth discussion with 20 experts to explore how various social and technological drivers will influence the next decade and, specifically, how emerging technologies will recast our society and the way we conduct business by the year 2030.

There is no universally agreed upon determination of which technologies are considered emerging. For the purpose of this study, IFTF explored the impact that Robotics, Artificial Intelligence (AI) and Machine Learning, Virtual Reality (VR) and Augmented Reality (AR), and Cloud Computing, will have on society by 2030. These technologies, enabled by significant advances in software, will underpin the formation of new human-machine partnerships, according to the IFTF.

Talk of digital transformation is virtually everywhere in Information Technology circles and Operations Technology circles. My long and varied experiences have often placed me at the boundaries where the two meet—and are now increasingly overlapping.

The take on robotics is right on target. And forget about all the SciFi scare stories that mainstream media loves to promote. The future is definitely all about human-machine partnership or collaboration. For example, I often talk with EMTs about life in the rescue squad. These people are always in the gym. Our population in the US has gotten so large and obese that they often have to lift 300+ lb. patients who haven’t the strength to help themselves up. Think about a robot assistant helping the EMT.

The AI discussion is also fraught with prominent people like Ray Kurzweil or Elon Musk giving dystopian SciFi views of the future. We are a long way from “intelligence.” Where we are today is really machine learning and neural networks that help machines (and us) learn by deciphering recurring patterns.

Back to the study, the authors state, “If we start to approach the next decade as one in which partnerships between humans and machines transcend our limitations and build on our strengths, we can begin to create a more favorable future for everyone.”

Jordan Howard, Social Good Strategist and Executive Director of GenYNot, sees tremendous promise for the future of human-machine partnerships: “Many of the complex issues facing society today are rooted in waste, inefficiency, and simply not knowing stuff, like how to stop certain genes from mutating. What if we could solve these problems by pairing up more closely with machines and using the mass of data they provide to make breakthroughs at speed? As a team, we can aim higher, dream bigger, and accomplish more.”

Liam Quinn, Dell Chief Technology Officer, likens the emerging technologies of today to the roll-out of electricity 100 years ago. Quinn argues that we no longer fixate on the “mechanics” or the “wonders” of electricity, yet it underpins almost everything we do in our lives. Similarly, Quinn argues, in the 2030s, today’s emerging technologies will underpin our daily lives. As Quinn provokes, “Imagine the creativity and outlook that’s possible from the vantage point these tools will provide: In 2030, it will be less about the wonderment of the tool itself and more about what that tool can do.”

By 2030, we will no longer revere the technologies that are emerging today. They will have long disappeared into the background conditions of everyday life. If we engage in the hard work of empowering human-machine partnerships to succeed, their impact on society will enrich us all.

Robots

While offshoring manufacturing jobs to low-cost economies can save up to 65% on labor costs, replacing human workers with robots can save up to 90% of these costs.
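
To put those percentages in perspective, here is a quick back-of-the-envelope calculation using a hypothetical $1 million annual labor budget (my numbers, for illustration only):

```python
# Back-of-the-envelope comparison of the labor-cost savings quoted above,
# applied to a hypothetical $1M annual labor budget.
baseline = 1_000_000
offshored = baseline * (1 - 0.65)  # up to 65% savings -> $350,000 remaining
robotic = baseline * (1 - 0.90)    # up to 90% savings -> $100,000 remaining
print(f"Offshored labor: ${offshored:,.0f}; robotic labor: ${robotic:,.0f}")
```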

China is currently embarking upon an effort to fill its factories with advanced manufacturing robots, as workers’ wages rise and technology allows the industry to become more efficient. The province of Guangdong, the heartland of Chinese manufacturing, has promised to invest $154 billion in installing robots.

Buoyed by their commercial success, the adoption of robots will extend beyond manufacturing plants and the workplace. Family robots, caregiving robots, and civic robots will all become commonplace as deep learning improves robots’ abilities to empathize and reason. Google recently won a patent to build worker robots with personalities.

Artificial Intelligence and Machine Learning

Approximately 1,500 companies in North America alone are doing something related to AI today, which equates to less than 1% of all medium-to-large companies. We’re seeing this in the financial services industry already, with data recognition, pattern recognition, and predictive analytics being applied to huge data sets on a broad scale. In a 2015 report, Bank of America Merrill Lynch estimated that the AI market will expand to $153 billion over the next five years—$83 billion for robots, and $70 billion for artificial intelligence-based systems.

In addition to their ability to make decisions with imperfect information, machines are now able to learn from their experiences and share that learning with other AI programs and robots. But AI progress also brings new challenges. Discussions surrounding who or what has moral and ethical responsibility for decisions made by machines will only increase in importance over the next decade.

Virtual Reality and Augmented Reality

Although both Virtual and Augmented Reality are changing the form factor of computing, there is a simple distinction between the two. VR blocks out the physical world and transports the user to a simulated world, whereas AR creates a digital layer over the physical world.

Despite the difference, both technologies represent a fundamental shift in information presentation because they allow people to engage in what Toshi Hoo, Director of IFTF’s Emerging Media Lab, calls “experiential media” as opposed to representative media. No longer will people depend on one or two senses to process data; immersive technologies like AR and VR will enable them to apply multiple senses—sight, touch, hearing, and soon, taste and smell—to experience media through embodied cognition.

Over the next decade, Hoo forecasts that VR, combined with vast sensor networks and connected technologies, will be one of many tools that enable distributed presence and embodied cognition, allowing people to experience media with all their senses.

Cloud Computing

It’s important to recognize that Cloud Computing isn’t a place, it’s a way of doing IT. Whether public, private, or hybrid (a combination of private and public), the technology is now used by 70% of U.S. organizations. This figure is expected to grow further, with 56% of businesses surveyed saying they are working on transferring more IT operations to the cloud, according to IDG Enterprise’s 2016 Cloud Computing Executive Summary.

While the cloud is not a recent technological advancement, cloud technology only really gathered momentum in recent years, as enterprise grade applications hit the market, virtualization technologies matured, and businesses became increasingly aware of its benefits in terms of efficiency and profitability. Increasing innovation in cloud-native apps and their propensity to be built and deployed in quick cadence to offer greater agility, resilience, and portability across clouds will drive further uptake. Start-ups are starting to use cloud-native approaches to disrupt traditional industries; and by 2030, cloud technologies will be so embedded, memories from the pre-cloud era will feel positively archaic by comparison.

Human Machine Partnership

Recent conversations, reports, and articles about the intersection of emerging technologies and society have tended to promote one of two extreme perspectives about the future: the anxiety-driven issue of technological unemployment or the optimistic view of tech-enabled panaceas for all social and environmental ills.

Perhaps a more useful conversation would focus on what the new relationship between technology and society could look like, and what needs to be considered to prepare accordingly.

By framing the relationship between humans and machines as a partnership, we can begin to build capacity in machines to improve their understanding of humans, and in society and organizations, so that more of us are prepared to engage meaningfully with emerging technologies.

Digital (Orchestra) Conductors

Digital natives will lead the charge. By 2030, many will be savvy digital orchestra conductors, relying on a suite of personal technologies (voice-enabled connected devices, wearables, and implantables) to infer intent from their patterns and relationships, and to activate and deactivate resources accordingly.

Yet, as is often the case with any shift in society, there is a risk that some segments of the population will get left behind. Individuals will need to strengthen their ability to team up with machines to arrange the elements of their daily lives to produce optimal outcomes. Without empowering more people to hone their digital conducting skills, the benefits that will come from offloading ‘life admin’ to machine partners will be limited to the digitally literate.

Work Chasing People

Human-machine partnerships will not only help automate and coordinate lives, they will also transform how organizations find talent, manage teams, deliver products and services, and support professional development. Human-machine partnerships won’t spell the end of human jobs, but work will be vastly different.

By 2030, expectations of work will reset and the landscape for organizations will be redrawn, as the process of finding work gets flipped on its head. As an extension of what is often referred to as the ‘gig economy’ today, organizations will begin to automate how they source work and teams, breaking up work into tasks, and seeking out the best talent for a task.

Instead of expecting workers to bear the brunt of finding work, work will compete for the best resource to complete the job. Reputation engines, data visualization, and smart analytics will make individuals’ skills and competencies searchable, and organizations will pursue the best talent for discrete work tasks.

Modernizing Manufacturing Operations With AI

Artificial Intelligence, usually known simply as AI, along with its sometime companion, robots, leads the mainstream media hype cycle. It’s going to put everyone out of jobs, destroy civilization as we know it, and probably destroy the planet.

I lived through the Japanese robotic revolution-that-wasn’t in the 80s. Media loved stories about robots taking over and how Japan was going to rule the industrialized world because they had so many. Probing the details told an entirely different story. Japan and the US counted robots differently. What we called simple pick-and-place mechanisms they called robots.

What set Japanese industrial companies apart in those days was not technology. It was management. The Toyota Production System (aka Lean Manufacturing) turned the manufacturing world on its head.

My take for years, based on living in manufacturing and selling and installing automation, has been, and still is, that much of this technology actually assisted humans—it performed the dangerous work, removing humans from harm’s way, taking over repetitive tasks that lead to long-term stress-related injuries, and performing work humans realistically couldn’t do.

Now for AI. This press release went out the other day, “With AI, humans and machines work smarter and better, together.” So, I was intrigued. How do they define AI and what does it do?

Sensai, an augmented productivity platform for manufacturing operations, recently announced the launch of its pilot program in the United States. Sensai increases throughput and decreases downtime with an AI technology that enables manufacturing operations teams to effectively monitor machinery, accurately diagnose problems before they happen and quickly implement solutions.

The company says it empowers both people and digital transformation using a cloud-based collaboration hub.

“The possibility for momentous change within manufacturing operations through digital transformation is here and now,” said Porfirio Lima, CEO of Sensai. “As an augmented productivity platform, Sensai integrates seamlessly into old or new machinery and instantly maximizes uptime and productivity by harnessing the power of real time data, analytics and predictive AI. Armed with this information, every person involved – from the shop floor to the top floor – has the power to make better and faster decisions to increase productivity. Sensai is a true digital partner for the operations and maintenance team as the manufacturing industry takes the next step in digital transformation.”

By installing a set of non-invasive wireless sensors that interconnect through a smart mesh network of gateways, Sensai collects data through its IIoT Hub, gateways and sensors, and sends it to the cloud or an on-premise location to be processed and secured. Data visualization and collaboration are fostered through user-friendly dashboards, mobile applications and cloud-based connectivity to machinery.
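
Sensai hasn’t published its wire protocol, but sensor-to-gateway pipelines of this kind are commonly built on MQTT. Here is a generic sketch of what one sensor node’s side of that data flow might look like; the broker address, topic scheme, and payload fields are hypothetical, not Sensai’s actual API.

```python
# Generic sketch of a sensor publishing readings to an IIoT gateway over
# MQTT. Broker address, topic names, and payload fields are hypothetical;
# Sensai's actual protocol and schema are not public.
import json, time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("gateway.local", 1883)  # hypothetical gateway endpoint

def publish_reading(machine_id: str, vibration_mm_s: float, temp_c: float):
    payload = {
        "machine": machine_id,
        "vibration_mm_s": vibration_mm_s,
        "temp_c": temp_c,
        "ts": time.time(),
    }
    # One topic per machine; the gateway forwards to the cloud or on-prem hub.
    client.publish(f"plant/line1/{machine_id}/telemetry", json.dumps(payload))

publish_reading("press-04", vibration_mm_s=2.7, temp_c=71.5)
```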

The AI part

Sensai’s differentiator is that it provides a full state of awareness, not only of the current status, but also of the future conditions of the people, assets, and processes on the manufacturing floor. Sensai learns a business’s processes and systems with coaching from machine operators and from process and maintenance engineers. It then makes recommendations based on repeating patterns that were not previously detected. Sensai does this by assessing the team’s experiences and historical data from the knowledge base and cross-checking patterns of previous failures against a real-time feed. With this information, Sensai provides recommendations to avoid costly downtime and production shutdowns. Sensai is a true digital peer, connecting variables in ways that are not humanly possible to process at the speed required on today’s modern plant floor.
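
The “digital fingerprint” idea maps naturally onto comparing a live window of sensor readings against stored pre-failure patterns. The sketch below is my own toy nearest-neighbor illustration of that concept, not Sensai’s proprietary method.

```python
# Minimal illustration of matching a live sensor window against stored
# "fingerprints" recorded shortly before past failures. This is a toy
# nearest-neighbor approach, not Sensai's proprietary algorithm.
import numpy as np

# Each row: a window of readings captured before a known failure (hypothetical).
failure_fingerprints = np.array([
    [2.1, 2.4, 2.9, 3.6],   # bearing wear signature
    [1.0, 1.1, 2.8, 4.0],   # belt slip signature
])
labels = ["bearing wear", "belt slip"]

def nearest_failure(live_window, threshold=1.0):
    """Return the closest known failure pattern, or None if nothing is near."""
    dists = np.linalg.norm(failure_fingerprints - live_window, axis=1)
    i = int(np.argmin(dists))
    return (labels[i], dists[i]) if dists[i] < threshold else None

print(nearest_failure(np.array([2.0, 2.5, 3.0, 3.5])))  # flags "bearing wear"
```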

About the Pilot Program

Participation in Sensai’s pilot program is possible now for interested manufacturers. Already incorporated throughout Metalsa, a leading global manufacturer of automotive structural components, Sensai is set to digitally disrupt manufacturers through AI, including those in automotive, heavy metal and stamping, construction materials, consumer goods, and more.

Porfirio Lima, Sensai CEO, answered a number of follow up questions I had. (I hate when I receive press releases with lots of vague benefits and buzz words.)

1. You mention AI. What specifically is meant by AI and how is it used?

Sensai uses many different aspects of Artificial Intelligence. We are specifically focused on machine learning (ML), natural language processing (NLP), deep learning, data science, and predictive analytics. When used together correctly, these tools serve a specific use case allowing us to generate knowledge from the resulting data. We use NLP to enable human and computer interaction helping us derive meaning from human input. We use ML and deep learning to learn from data and create predictive and statistical models. Finally, we use data science and predictive analytics to extract insights from the unstructured data deriving from multiple sources. All of these tools and techniques allow us to cultivate an environment of meaningful data that is coming from people, sensors, programmable logic controllers (PLCs) and business systems.

2. “Learn processes through operators”—How do you get the input, how do you log it, and how do you feed it back?

Our primary sources of data (inputs) are people, sensors, PLCs, and business systems. In the case of people on the shop floor or operators, we created a very intuitive and easy to use interface that they can use on their cellphones or in the Human Machine Interfaces (HMIs) that are installed on their machines, so they can give us feedback about the root causes of failures and machine stoppages. We acquire this data in real-time and utilize complex machine learning algorithms to generate knowledge that people can use in their day-to-day operations. Currently, we offer web and mobile interfaces so that users can quickly consume this knowledge to make decisions. We then store their decisions in our system and correlate them with the existing data, allowing us to optimize their decision-making process through time. The more a set of decisions and conditions repeats, the easier it is for our system to determine the expected outcome of a given set of data.
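
That feedback loop (operators tagging root causes, the system correlating the tags with sensor data over time) can be approximated with an incrementally trained classifier. The following is a generic scikit-learn sketch with hypothetical features and labels, not Sensai’s implementation.

```python
# Toy version of the operator-feedback loop: each stoppage yields a sensor
# feature vector plus the root cause the operator tagged; the model is
# updated incrementally so predictions improve as labels accumulate.
# Generic scikit-learn sketch; Sensai's actual pipeline is not public.
import numpy as np
from sklearn.linear_model import SGDClassifier

causes = ["tool wear", "material jam"]   # hypothetical root-cause labels
model = SGDClassifier(loss="log_loss")

def record_stoppage(features, operator_label, first=False):
    # partial_fit needs the full class list on the first call only.
    model.partial_fit(
        np.array([features]),
        [causes.index(operator_label)],
        classes=np.arange(len(causes)) if first else None,
    )

record_stoppage([3.1, 0.2, 77.0], "tool wear", first=True)
record_stoppage([0.4, 2.9, 65.0], "material jam")
# Predict a likely root cause for a new stoppage's sensor window.
print(causes[int(model.predict([[3.0, 0.3, 76.0]])[0])])
```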

3. Patterns? What patterns? How are they derived? Where does the data come from? How are they displayed to managers/engineers?

We create “digital fingerprints” (patterns) with ALL the data we are collecting. These “patterns” allow us to see how indicators look before a failure occurs, enabling us to then predict when another failure will happen. Data comes from the machine operators, the machines or equipment, our sensors, and other systems that have been integrated into Sensai’s IIoT Hub.

We trigger alerts to let managers and engineers know that a specific situation is happening. They can then review it on their cellphones as a push notification that takes them to a detailed description of the condition in their web browser, where they can review more information in depth.

4. What specifically are you looking for from the pilots?

We are not a cumbersome solution; for us it is all about staying true to agility and value creation. We look for pilots that can give us four main outcomes:

– Learn more about our customer needs and how to better serve them.

– A clear business case that can deliver ROI in less than 6 months after implementation and can begin demonstrating value in less than 3 months.

– A pilot that is easy to scale up and replicate across the organization, so we can take the findings from the pilot and capitalize on them in a short period of time.

– A pilot that can help Sensai and its customers create a state of suspended disbelief that technology can truly deliver the value that is intended and that can be quickly deployed across the entire organization.

Podcast Developing the Soul of a Leader

I’ve published another podcast. Despite the many dystopian views of technology, automation, and robots in the future, it is human decision making and leadership that determines what will happen. Gaurav Bhalla’s “Awakening a Leader’s Soul” teaches a different perspective on leadership.

You can subscribe on Apple Podcasts, Overcast, or any other podcatcher. You can also subscribe to my YouTube channel and get the video edition. If you like my thoughts, please give me a good rating on your source.

Software Development Kit Boosts Usage Of Robotic Platform

Whenever people hear about automation or manufacturing technology, they always respond with “robots?” Reading mainstream media, where writers discuss manufacturing without a clue, they also seem fixated on robots. And almost all of this ranges from partial information to misinformation. I seldom write about robotics because the typical SCARA or six-axis robots are still doing the same things they’ve always done—pick-and-place, welding, painting, material handling. They are better, faster, more connected, and in different industries, but in the end it’s still the same thing.

That is why I’m a fan of Rethink Robotics. These engineers are out there trying new paradigms and applications. Here is a recent release that I think bears watching. This news is especially relevant in the context of the visit I made last week to Oakland University and conversations with some students.

Rethink Robotics unveiled the Sawyer Software Development Kit (SDK), a software upgrade designed for researchers and students to build and test programs on the Sawyer robot. With a wide range of uses for university research teams and corporate R&D laboratories around the world, Sawyer SDK offers further compatibility with ROS and state-of-art Open Source robotics tools, as well as an affordable solution to increase access to advanced robotics in the classroom.

Sawyer SDK includes several advanced features that allow users to visualize and control how the robot interacts with its environment. Sawyer SDK now integrates with the popular Gazebo Simulator, which creates a simulated world that will visualize the robot and its contact with the environment, allowing researchers to run and test code in the simulation before running it on the robot. Sawyer’s Gazebo integration is completely open source, allowing students to run simulations from their individual laptops without a robot until they’re ready to test the code in real time. This approach allows professors to provide students with access to the industry-leading collaborative robots.

In addition to the Gazebo integration, Sawyer SDK includes a new motion interface that allows researchers to program the robot in Cartesian space. This development lowers the barriers for motion planning for programmers without a full robotics background. The new release also allows researchers to leverage new impedance and force control. Sawyer SDK also includes support for ClickSmart, the family of gripper kits that Rethink announced in 2017 to create a fully integrated robotic solution.
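
For a flavor of what programming Sawyer through the SDK looks like, here is a minimal joint-motion sketch using Rethink’s Intera Python interface. The module and method names follow the intera_interface package as documented, but treat the details as indicative and check them against your SDK version; the new Cartesian motion interface mentioned above has its own API that isn’t shown here.

```python
# Minimal Sawyer motion sketch using Rethink's Intera Python interface.
# It runs against a real robot or the open-source Gazebo simulation;
# verify module and method names against your SDK version.
import rospy
import intera_interface

rospy.init_node("sawyer_sdk_demo")
limb = intera_interface.Limb("right")  # Sawyer's single 7-DOF arm

# Read the current joint angles, then command a small wrist rotation.
angles = limb.joint_angles()           # dict keyed right_j0 .. right_j6
angles["right_j6"] += 0.5              # radians
limb.move_to_joint_positions(angles)
```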

“Rethink’s robots are used in the world’s leading research institutions, which provides us with a wealth of feedback on what our research customers really want,” said Scott Eckert, president and CEO, Rethink Robotics. “As we have with all of our SDK releases, we’re continuing to set the standard in research with industry-leading features that allow universities and corporate labs to push the field of robotics forward and publish their research faster.”

Sawyer SDK is being piloted in robotics programs at multiple universities, including Stanford University, University of California at Berkeley, Georgia Institute of Technology and Northwestern University. Stanford’s Vision and Learning Lab works on endowing robots with diverse skills for both industrial and day-to-day personal robotics applications.

“Robotics is a field that combines technological and engineering skills with creativity, and the inventiveness our students have shown so far with the robots has been astounding,” said Dr. Animesh Garg, postdoctoral researcher in the Stanford University department of computer science. Garg and his team of researchers have put Sawyer to use executing tasks directly from virtual reality (VR) input using automatic decomposition into simpler activities. Sawyer is also used for ongoing work in learning to use simple tools, such as hammers and screwdrivers.

Stanford University’s Experimental Robotics class allows students to think beyond day-to-day industrial tasks. They’ve trained Sawyer to draw and to track moving targets and hovering drones. Rethink’s Sawyer has enabled faster learning curves for researchers and students alike, and the Sawyer SDK release makes that easier than ever.

The SDK will be available on all Sawyer robots, allowing access to both the Intera manufacturing software and the SDK software, starting in March 2018.
