“AI is the most profound technology created by mankind”

What is AI? How will it impact our jobs and businesses? Should we regulate it? How can we control it? Martin Ebers (RAILS) and Sandra von Lingen (GRUR) sat down with Sharad Gandhi to delve into these and other fascinating questions. The physicist, IT engineer, technology philosopher and strategy consultant has worked for Siemens in Germany, IBM in New York and Intel in Silicon Valley. He recently published the book AI&U – Translating Artificial Intelligence into Business.

RAILS & GRUR in Conversation with Sharad Gandhi

 

GRUR: Sharad, what led you to write a book on AI?

SG: I believe AI is the most profound technology created by mankind and I am intrigued by how it is about to change most aspects of our society and lifestyle. I want to be able to help others understand AI in its totality because it will eventually touch everyone, everywhere.

RAILS: How can the human species keep pace with its own creation, AI?

SG: Well, we humans have come to believe that we are unique on the planet in terms of our intelligence and our intellectual capacities… and suddenly, we see another species – I call it a species for lack of a better word – evolving: Artificial Intelligence. But actually, it is our own creation, and it is getting smarter. Eventually, it might become far smarter than any of us. How do we live and deal with that? It’s a challenge. But I am an optimist. Humans have always learned to adapt to a variety of different situations.

The Next Big Thing? 

GRUR: Currently, AI is considered ‘The Next Big Thing’. Everybody seems to talk about it, but very few people really know or profoundly understand what AI is, let alone how it works. Is there such a thing as a concise definition of “Artificial Intelligence”?

SG: Simply stated: AI is the ability of machines to mimic elements of human intelligence – to exhibit behaviour that we normally associate with intelligent humans.

RAILS: The impression in the public and the media is that AI differs from past technologies. Could you explain why?

SG: In general, past technologies, like computers and the Internet, were positioned and seen as helping people do things better or do what was previously impossible. AI has been tainted with a negative image by science fiction films like The Matrix and Terminator. The media have also contributed by painting AI as a job killer.

GRUR: And what about the hype – why did it take AI so long to enter the consciousness of politics and the wider public?

SG: AI has gone through several hype and disillusionment cycles over the last 70 years. The success of making “Deep Learning” work and produce impressive results, starting in 2012, surprised even many technologists.

Societal concerns and business opportunities

RAILS: Much of the criticism is based on fundamental social concerns and fears of new dangers that AI brings. What is your position on this?

SG: I have written a long article on the social concerns about AI. It addresses the prime concerns most people have: job loss, ethical standards of AI, and dangers that AI may bring for humanity.

GRUR: How about the issue of wealth distribution: Who will benefit from the AI revolution?

SG: Generally speaking, those who see the opportunity first and quickly decide to act can benefit a lot from capitalizing on AI. Big early adopters like Google, Amazon, Facebook and Apple have enjoyed – and still enjoy – the first-mover advantage. They have leveraged their immense resources to attract and acquire the best AI talent and start-ups in the world. They are now being rewarded for being the biggest risk takers. China, too, has committed immense resources to develop powerful AI applications and become the leading AI country in the world. China also has very capable companies like Alibaba, Tencent and Baidu developing excellent AI applications.

Foto: Vivien Hertz, Brüssel

RAILS: Many entrepreneurs are enthusiastic about AI and venturing into the use of AI for business purposes. For which sectors, companies or types of businesses do you see the most economic benefits of AI?

SG: The most promising (vertical) industries for early adoption of AI technology are healthcare, retail, finance, manufacturing and defence. Many horizontal functions, like hiring, sales and customer service, will also benefit from AI and bring benefits to all industries.

RAILS: In your book “AI&U”, you have put a focus on how to relate AI and business. Can you tell us a little bit about this aspect, and why you chose this particular approach?

SG: I see myself as a translator of technology. When I worked for Intel and Siemens, my objective was always to help businesses understand how technology really helps them achieve business goals. Those goals are about making higher revenues and profits by becoming more competitive, addressing bigger or new markets, and reducing costs. The key value of AI comes from automating decision-making, and this is what business people need to understand.

“The way we do our jobs will change”

GRUR: You acknowledged that AI will probably also transform our society with regard to the future of jobs and work. Do you think the job losses could be compensated by new jobs, and would these jobs be on the same intellectual and qualification level?

SG: I believe most people who are willing to invest in continued learning will not be affected adversely by AI – they will benefit from it. Just as earlier technologies like computing and the Internet displaced specific human activities, AI will replace humans in certain jobs. That is inevitable. Re-training and the willingness to learn new skills are going to become essential.

However, different jobs will require different skills. A McKinsey report – which I quote in my article – states that 40% of jobs will not change, and in the other 60% of jobs, only about 30% of the activities will be transformed. So people will still do marketing or still work as lawyers, but the way we do our jobs will change: technology will simply do certain things better and faster. That does not mean jobs must go away. As McKinsey says, “The future of jobs is more jobs.”

RAILS: In some fields, AI can do repetitive decision-making much better than humans can (you mention granting loans, assessing insurance claims, etc.). Would this include the legal professions, too? The logic of algorithms seems to be quite different from that of the legal system.

SG: AI can be very helpful for legal applications, because it is not about formal logic. AI is really about learning from a history of past decisions where the law has been applied and transferring that experience to new cases. You would not want AI (or a robot) to be a judge, but a human judge can use AI as an expert assistant that is always consistent and accurate in applying the wisdom of all past judgments.

The AI Arms Race

GRUR: To address regulatory issues: Do you see a problem in the fact that some Internet companies already have a large market lead in the development of AI, because they have a lot of data to train their algorithms? How can start-ups and SMEs catch up and succeed in the future?

SG: I agree – data has become the critical resource: the quality of AI decisions improves with the amount of data you have. That is why companies with a lot of data, like Google, Amazon and Facebook, have a significant advantage. And of course, they do not just have more data, they also have far more economic power to acquire companies that can help them further grow their business.

Foto: Vivien Hertz, Brüssel

So life is going to be difficult for start-ups that try to do exactly the same thing these big companies do. However, I think many other areas are opening up for start-ups to innovate – to use the tools created by these big companies and invent something the large companies are unable to do.

RAILS: Viktor Mayer-Schönberger has published a book with the title “Reinventing Capitalism in the Age of Big Data”, in which he argues that big companies, like Facebook etc., should be legally required to share their data. Do you think that would be a first step to break the monopoly and allow market forces to return to normal?

SG: I think these companies have shown little motivation to regulate themselves. It’s a self-feeding mechanism: the big players have become very successful because their business model was about acquiring more and more data. Today, they are unwilling to share that data with others. Sharing can only happen through regulation, by forcing them to do it.

On regulation – or: “All humans are a black box – and so is AI”

GRUR: Many talk about setting up a state agency that can regulate and control algorithms, AI and robotics. But speaking of state supervision: is self-learning AI interpretable at all? Do you think this aspect should be decisive for letting this kind of AI onto the market?

SG: It is very important to understand that self-learning AI is not interpretable. In deep learning AI systems, the algorithm is developed within the neural network; there is no way to extract it out of the machine. Take a simple example like face recognition. Basically, you train the machine to recognise faces. But the machine will not be able to tell you how or why it was able to recognise them. It can do an extremely good job, but its decision-making process cannot be reconstructed or ‘reverse-engineered’. It is simply not possible to extract such information from the machine! Therefore, you will never be able to check whether the algorithm is OK or not. The only way is to subject it to tests. This is what we do with human beings too, by the way.
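
To make this concrete, here is a minimal sketch of the behaviour-based testing Gandhi describes: a small neural network whose learned parameters reveal nothing about its reasoning, but whose behaviour can be checked on held-out cases. The library (scikit-learn) and the digits dataset are illustrative assumptions, not anything from the interview:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Train a small neural network; its learned "algorithm" lives entirely
# in numeric weight matrices.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Inspecting the parameters yields only shapes and numbers, not reasons
# why any individual input was classified the way it was.
print([w.shape for w in model.coefs_])

# What we *can* do is certify its behaviour on cases it has never seen.
print("accuracy on held-out cases:", accuracy_score(y_test, model.predict(X_test)))
```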

RAILS: Should we only allow supervised training of AI systems and prohibit unsupervised training? Is it necessary to prohibit certain AI learning systems (for instance, deep neural networks) if they are too opaque and not even transparent to their coders?

Foto: Vivien Hertz, Brüssel

SG: Many new AI systems are a combination of mostly supervised and some unsupervised training. Prohibition never works, and deep learning AI systems will remain opaque. However, human beings are also a black box, just like AI machines. You can only roughly assess how they think by asking them various questions or putting them in various situations (in a test). If they consistently come up with a satisfying answer or reaction, we believe they are OK. The same can be done for an AI system.

GRUR: So who should be doing these checks – consortiums, groups or platforms?

SG: Someone neutral. The same way it happens with today’s certification tests and labels, such as “Microsoft certified engineer”… There has to be a body that constructs tests based on the basic situations this particular AI technology encounters, and the AI should pass the test every year. If it works well, it could be considered qualified.

GRUR: In your book, you suggest that it is necessary to have a human in the loop in many cases, for example in health care. But how can a human doctor verify the result of a system that was fed millions of data points the doctor can never read (a quantity problem) and that has made correlations between various factors humans cannot understand (a quality problem)?

SG: Humans cannot compete with AI in decision-making accuracy or consistency. However, a person may be able to sense or feel something else and has the ability to overrule the AI decision. Take Google Maps for navigation: it is very reliable, and in most cases you follow it. However, you have the ability to overrule it and take a different road than the one recommended, because it is more scenic. What I am trying to say is that humans must see AI as an expert helper tool that they trust – but one that cannot have emotions and empathy.
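
As an illustration, here is a minimal sketch of that human-in-the-loop pattern: the AI's recommendation stands unless a person explicitly overrules it. All names and the navigation example are illustrative assumptions, not code from the book:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    value: str
    source: str  # "ai" or "human"

def decide(ai_suggestion: str, human_override: Optional[str] = None) -> Decision:
    """Return the AI's suggestion unless a human explicitly overrules it."""
    if human_override is not None:
        return Decision(value=human_override, source="human")
    return Decision(value=ai_suggestion, source="ai")

# The AI recommends the fastest route; the driver prefers the scenic one.
print(decide("take highway A9"))
print(decide("take highway A9", human_override="take the scenic country road"))
```

The point of the design is that the final decision always carries a record of who made it, while the default path still benefits from the AI's consistency.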

“We have to feed AI a wide diversity of examples to prevent discrimination”    

RAILS: What about biased decision-making?

SG: A deep learning AI develops its algorithm from the examples used to teach it. All examples reflect how human beings have made decisions with the supplied data. That means the bias of those human beings is implicit in what the AI learns. It is just like children who grow up in Saudi Arabia and learn from what they experience: their ethics – their sense of right and wrong – are based on the Saudi lifestyle and culture. The same happens to all of us. We are all biased by our upbringing.

RAILS: But how can we make the decision of an AI more balanced?

Slide: Sharad Gandhi

SG: One way is to use a wide diversity of examples – involving decisions taken by people of varied backgrounds. I think the level of ethics and bias neutrality will become a quality differentiator between AI systems. The less biased and the more ethical a system is rated, the more market acceptance it will have. That is the way the market normally resolves such matters.

RAILS: You seem to suggest that market forces can solve this problem by themselves. However, there still needs to be competition, and the biased decision has to be detected in the first place. In many cases, individuals subject to biased decisions do not even know which parameters have been taken into account.

SG: Yes, it might be that we need a combination of regulation and some standardisation of the level of diversity, ensuring that the examples used in an AI system have a diverse basis. Maybe there is going to be a way to measure diversity, in the sense that the training data must include examples with certain diversity levels.
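
To give a sense of what such a measure could look like, here is a minimal sketch that scores how evenly a hypothetical background attribute is represented in a set of training examples, using normalised entropy. The attribute values and the threshold are illustrative assumptions; no concrete standard is proposed in the interview:

```python
from collections import Counter
from math import log

def diversity_score(labels):
    """Normalised Shannon entropy of group representation:
    1.0 = perfectly even, 0.0 = every example from a single group."""
    counts = Counter(labels)
    if len(counts) < 2:
        return 0.0
    total = sum(counts.values())
    entropy = -sum((c / total) * log(c / total) for c in counts.values())
    return entropy / log(len(counts))

# Hypothetical attribute: the background of the person whose past
# decisions each training example was derived from.
backgrounds = ["urban", "urban", "rural", "urban", "rural", "expat"]
score = diversity_score(backgrounds)
print(f"diversity score: {score:.2f}")

# An (assumed) regulatory threshold the training set would have to meet.
assert score > 0.5, "training data falls below the required diversity level"
```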

The effects of data protection and privacy rules on competition

GRUR: In Europe, we have adopted the GDPR, with its strict data protection rules. Since data is so important for training algorithms, do you think this will become a problem for European companies?

SG: The USA has a different approach to privacy, and it also has these mega companies like Google with excellent access to personal data, so it has an advantage from that point of view. Europe has neither of those two – neither the data nor the companies. If European businesses have to compete against US-American giants in AI, how can they be successful?

GRUR: It has become clear that AI needs data, tons of it. Do you think the existing data protection laws, including, inter alia, the principle of data minimisation and the principle of informed consent, are still up to date?

Foto: Vivien Hertz, Brüssel

SG: We are trying to restrict the collection of data, which is exactly the resource an AI needs. This is a complicated dilemma: better AI or better privacy? The point I’d like to make here is that Europe is the regulation king, which can actually also hurt European companies in competing against the US companies.

RAILS: How will the big players react to the new legal framework?

SG: Yesterday I was at a conference that revealed something very interesting to me. One speaker said that big companies, the likes of Amazon and Google, do a financial calculation: “If we violate the law, what is it going to cost us? On the other hand, what is the price we have to pay to comply with the law?” So there will be companies that ignore regulations like the GDPR and still operate. They will be willing to pay a fine, but this intentional breach of law will give them a significant long-term competitive advantage.

“Liability – do we need an ePerson?”

GRUR: In legal science, there is a discussion about whether an AI system should be recognised as a legal person (“ePerson”), similar to legal entities like certain companies. I have read that you do not consider this a useful approach to regulating AI decision-making. Could you tell us why?

SG: I don’t support the concept of AI as an ePerson. AI is a dead tool, not a person. It does not take responsibility for its decisions. It has no understanding of the context and consequences of its decisions. The ownership of, and responsibility for, using it in a given situation must lie with a natural person who understands the consequences of a wrong decision and the method and timing for putting that decision into action.

Will regulation always be too late to catch up with innovations?

RAILS: One last question: Where could Europe stand in 5 or 10 years as regards the use, acceptance and regulation of AI?

SG: EU product manufacturers can make good use of AI technologies developed elsewhere to enhance the value of their products to customers. However, competing with US and Chinese products will be a challenge. They have an advantage because privacy is not as big a concern there, and data access helps to make better products.

Foto: Vivien Hertz, Brüssel

As to acceptance: Europe has always been slow in accepting new technologies. For AI, the acceptance will be even slower than normal because of data privacy concerns.

And finally – yes, we will develop regulations for AI. However, they will always be too late to catch up with innovations.

GRUR/RAILS: Sharad, many thanks for taking the time for this conversation!

A modified version of the interview will appear in the GRUR Newsletter 02-2018, which will be released on 18 December 2018.

This interview was made possible thanks to the collaboration with GRUR!

Written by: Rails-Redaktion