Understanding and debating the fears about artificial intelligence

In recent years, more has been written about artificial intelligence in technology and business publications than ever before: the current wave of artificial intelligence innovations has caught the attention of virtually everyone, not least because of fears about artificial intelligence.

Artificial intelligence (AI) isn’t new, but this time it’s different. Cognitive systems and AI are innovation accelerators of the nascent digital transformation economy.

The evolution of AI-powered innovations and solutions in a myriad of areas has led to numerous articles and reports on the value of AI and its application across a wide range of domains, as well as on the necessity and possibilities of artificial intelligence in a hyperconnected reality of people, information, processes, devices, technologies and transformations. Artificial intelligence in business is a reality.


Artificial intelligence: a threat to the future of life and human existence?

Even mainstream media have reported numerous times on artificial intelligence, albeit often with a different twist: is artificial intelligence something we should fear or not? The question and the concerns are real. So let’s take a look.

Several media and specialized publications have mentioned the stark warnings regarding the impact of artificial intelligence on the future of humanity, warnings which have been coming from a wide range of globally recognized and respected science, technology and business leaders with very influential voices.

Several of them have mentioned artificial intelligence as a potential threat to humanity. Think of people like Elon Musk, Bill Gates and Stephen Hawking, for instance. Influential indeed.

Stephen Hawking called artificial intelligence the greatest but also the last breakthrough in human history. Elon Musk, among other things CEO of Tesla Motors and SpaceX, was quoted at the end of 2014 in The Guardian in an article with a title that leaves little room for interpretation: “Elon Musk: artificial intelligence is our biggest existential threat”. And in early 2015 he decided to donate $10 million to the Future of Life Institute (FLI), which runs a global program aimed at keeping AI “beneficial to humanity” (the Future of Life Institute also looks at other topics such as biotech, nuclear and climate).

Superintelligence and the fear of artificial intelligence: beyond intelligence we can humanly understand

It’s important to remember that Musk, Gates, Hawking and many others are not “against” artificial intelligence.

What they are warning about are the potential dangers of superintelligence (as we start seeing in some neural networks), maybe even intelligence we don’t understand. And is there anything humans fear more than what they cannot possibly understand?

Still, to quote Tom Koulopoulos: “The real shift will be when computers think in ways we can’t even begin to understand”.

When Bill Gates expressed his concerns about AI, this is what he said, according to an article on Quartz: “I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

In that sense we fully agree. But then again, isn’t looking ahead and being “concerned”, or at least “vigilant”, regarding technological and societal evolutions overall a matter of common sense and of caring about “values”? Admittedly, with artificial intelligence (not as we know it today) and robotics, it’s a different ball game. Or is it?

If a superintelligent AI system is not purposefully built to respect our values, then its actions could lead to global catastrophe or even human extinction, as it neglects our needs in pursuit of its task. The superintelligence control problem is the problem of understanding and managing these risks. Though superintelligent systems are quite unlikely to be possible in the next few decades, further study of the superintelligence control problem seems worthwhile.

Discussions about the risks of AI are about the future – there is also a now

Maybe “The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies” by Erik Brynjolfsson and Andrew McAfee has added to the increasing number of alarm bells.

We haven’t read the book but here is a quote from someone who has, Roland Simonis: “As thinking machines are taking our jobs, what do we humans do? Brynjolfsson and McAfee have an answer to that. As more work gets done by machines, people can spend more time on other activities like leisure and amusement but also invention and exploration”.

If you think about it, all these opinions are not about the next decade or so but about the decades after that (if not further ahead). We’re sure that people like Bill Gates, AI optimist Ray Kurzweil, Andrew McAfee and Stephen Hawking know a lot more about the (future) potential and risks of AI than we do, but we’re also sure they don’t have a crystal ball, and there is a big difference between what we think now and what we will think in, let’s say, 20 years from now.

In more than one way it’s a pity that AI is associated with what it could become (and with what it was in previous waves, when it failed to deliver on its promises) instead of with what it is today.

Artificial intelligence is far from a thing of the future. It exists today in business applications, clearly offering multiple benefits to the organizations using these solutions. It exists in so many platforms we use on a daily basis. Admittedly, it’s not here in the sense of superintelligence.

Promises versus realities and sudden shifts in behavior

We see a strong link between the debates regarding artificial intelligence and those about data, ethics and privacy. Let’s explain. Many businesses claim they will improve the lives and experiences of consumers when those consumers share their data. In marketing, where “data-driven” is the talk of the day, providing data enables personalization and a “better and more relevant experience”, for instance by receiving personalized promotions and seeing “personalized content”.

But do consumers buy this official explanation? Do they trust it, let alone care about it? We’ve mentioned before how people have given up on believing they have control over their data. They have lost confidence, increasingly distrust the message that sharing their data will make their lives better, and are more vigilant; there is even a potential backlash and countermovement. Look at changing legislation regarding data, for instance. Is this a temporary phenomenon, and will people in the end simply feel that life is indeed better when they share their data? We guess it depends on whom they share it with and on whether that promise of a better life is really true. Until now we can’t exactly say that marketing automation and data-driven marketing have made our lives more enjoyable. But that’s us.

Sudden shifts in human behavior are essentially the main cause of what we call digital disruption and of the digital transformations that occur as a consequence. Knowing their power, people’s trust, values, beliefs and, most of all, actions are of extreme importance.

The promise businesses make regarding marketing based upon personal data, namely “to make the experiences of consumers better and more relevant”, sounds a bit like the previously mentioned promise of AI in a distant future in the words of Brynjolfsson and McAfee. To quote again: “as more work gets done by machines, people can spend more time on other activities like leisure and amusement but also invention and exploration”.

Really, are we sure? Will we have time to amuse ourselves if we’re perhaps out of a job? Do we all want to spend more time on such activities? Do we want to give up very old human activities and “tasks” just because machines CAN take them over? Again, people’s reactions and preferences/desires/values are not to be taken lightly, and they can change over time in any given direction.

The value question and the lack of universal answers in artificial intelligence fears

The Future of Life Institute – watching over the future of AI – and other topics

That value dimension is hugely important. Also in 2015, Wired published an article about the views on AI of pioneer Stuart Russell. Russell wrote an open letter (which you can sign) urging researchers not just to make artificial intelligence more powerful but to also make it more “provably aligned” with human values.

And herein lies a conundrum. What human values are we talking about and how do we prove alignment?

Whose human values are we focusing on? By now it should at least be clear that, although there is quite some commonality in essentially human values (we all need love, social interaction, food, water and a sense of self-esteem), human values also differ a lot on other levels, depending on the where and the who. Notions such as good and bad, or valuable and less (or not) valuable, are very individual and highly related to (unconscious) belief systems, philosophical-cultural backgrounds, personal experiences, political convictions; the list goes on and on.

So, the question remains who defines these human values and how you prove alignment with such a human and personal/cultural given (Russell tackles the question in the article). The example of the various reactions to the “data privacy debate” indicates that values differ. For many people privacy is a thing of the past; for many others, and for several governments, it clearly seems not to be.

Among the people who signed Stuart Russell’s letter, posted on the website of the earlier mentioned Future of Life Institute (FLI): Erik Brynjolfsson, Elon Musk, Apple co-founder Steve Wozniak, Stephen Hawking and a growing list of many others. Note: several of the people mentioned in this article are on the scientific advisory board of the FLI.

Thinking and acting in the now for the tomorrow: let debates and value diversity remain human

Here is the thing: as long as people are able to debate and think about human values, stand up for their beliefs (as both those who are fed up with digital, data-driven privacy issues and the many people warning about the potential future impact of AI seem to do) and keep having different views, the future remains “open” and we can look at the benefits of artificial intelligence as it is today. For instance: improving the customer experience by giving context and meaning to unstructured data, leading to actions, without even needing to bring private data into the equation.
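
To make that last example concrete, here is a minimal, hypothetical sketch (in Python) of what “giving context and meaning to unstructured data” can look like in practice: a small scikit-learn text classifier that routes raw customer feedback to a topic, and from there to an action, using nothing but the feedback text itself. The labels, feedback snippets and pipeline choices below are our own illustration, not a reference implementation.

  # Hypothetical sketch: classify unstructured customer feedback into topics
  # so it can drive follow-up actions. Only the text of the feedback is used;
  # no personal data enters the equation. Labels and examples are invented.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.naive_bayes import MultinomialNB
  from sklearn.pipeline import make_pipeline

  # Tiny invented training set: feedback snippets and the topic each belongs to.
  train_texts = [
      "The app crashes whenever I open the settings page",
      "Checkout keeps failing with a payment error",
      "Love the new dashboard, very easy to use",
      "Shipping took three weeks, far too long",
  ]
  train_topics = ["bug", "payments", "praise", "delivery"]

  # TF-IDF features plus a simple Naive Bayes classifier, chained in a pipeline.
  model = make_pipeline(TfidfVectorizer(), MultinomialNB())
  model.fit(train_texts, train_topics)

  # New, unseen feedback is routed to a topic, and from there to an action
  # (open a bug ticket, escalate to payments, and so on).
  print(model.predict(["My card was charged twice at checkout"]))

The design choice matters more than the model here: the classifier only ever sees the content of the message, so acting on customer feedback does not require any personally identifiable information.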

As humans we value our being human. And one of the “holy grails” we have cherished for ages is our “intelligence”. We emphasize it as a way to distinguish ourselves from other beings. We fear superintelligence because we see it as a risk to what we believe sets us apart. We fear it because we don’t know what it will or might be and become.

But we are more than intelligence, and we can’t define intelligence in any other way than as we can grasp it, as Tom Koulopoulos reminds us. We haven’t even started to understand the subconscious, and we pay less attention to it in an age of technology, science and rationality. But it exists, just as our intelligence exists as it is and as we know it.

Overlooking these aspects, and not seeing how changes in human behavior and in definitions of value play out, makes it impossible to predict what artificial intelligence will become. It also makes it impossible to predict whether in the end we will even have the “collective will” to accept that our intelligence is just what it is and to “allow” something that is more than that: superintelligence, as in not mimicking but going beyond and maybe “above” human intelligence. It’s our belief that this fear and human dilemma contribute to an – as far as we remember – unseen collective effort of leading scientists and entrepreneurs to warn about and think about the dangers of AI. Maybe some are indeed led by fears regarding the unknown: the intelligence that might “beat” the human intelligence that makes us…human.

At the same time, however, while we try to protect for the future what many believe defines our being human, we risk not understanding the benefits and challenges of what exists today, whether it concerns the use of artificial intelligence, the use of personal data or anything else for that matter. And looking at the privacy issues and the AI debates, it’s clear that people are acting today, not in the future, debating values and risks. Let these debates and the rich diversity of human values remain human.

The future of AI: 3 goals?

At a 2016 symposium organized by the Future of Life Institute, Alphabet Executive Chairman Eric Schmidt (and others) advised the AI community to “rally around three goals”:

  1. AI should benefit the many, not the few.
  2. AI R&D should be open, responsible and socially engaged.
  3. Developers of AI should establish best practices to minimize risks and maximize the beneficial impact.

Top image: Shutterstock – Copyright: agsandrew