TU Wien Informatics

Demon AI? Interview with Oren Etzioni

  • By Claudia Vitt
  • 2019-08-03
  • People
  • Interview

Oren Etzioni, CEO of the Seattle-based Allen Institute for Artificial Intelligence, gave insights about AI and its challenges.

Oren Etzioni giving the 2019 Vienna Gödel Lecture
Picture: Amélie Chapalain / TU Wien Informatics

What is your view concerning the regulation of AI?

This is a very good question, and I think my view is somewhere in the middle. We have to be very careful with regulation, because regulation is slow and subject to political distortion, while technology is moving very fast. An example of bad regulation is web tracking: it does not really seem to increase my privacy, but it does increase the number of times I have to click “okay”. All over the world, people are confronted with these dialogs and have to click “okay”, but it does not make their privacy any better. At the other extreme, there are areas where regulation is very important. People are building toys that have AI in them and talk to kids, and these toys can elicit information from children.

So I wrote an article a couple of years ago for the NY Times where I tried to outline some principles for regulation. One is that an AI system should have an off-switch, just as my electricity has a circuit breaker: I should be able to turn the AI system off. Another: if I get a phone call, an e-mail, a tweet, or a Facebook post, I should be able to identify whether it comes from a person or from a bot. It is becoming increasingly difficult to tell which one it is. There are programs like Google Duplex that now make phone calls that sound human. I think we have the right to know whether we are talking to a person or to a machine. There is now some legislation on this, which is an example of good regulation.

The last point I want to make is that, if you think about the bot situation, we are regulating the application of AI, the use of AI in a particular context, like in a toy or in making a phone call. I think that is much better than trying to regulate AI in general.

Many people are afraid that they will be replaced by robots, or at least by people who understand robots. Looking ahead over the next twenty years, how would you say AI will change everyday life and working life? And how can we deal with it?

There is a general trend of digital technology creating displacement in jobs. This is already happening: we don’t have toll operators or elevator operators any more. Secretaries used to type; now executives just send e-mails. We have Amazon, so there are fewer shops. This is a big trend, and AI is the next generation of computer technologies that is going to continue and even accelerate it. This is a very realistic concern.

Addressing it is not simple, but here are some things people can do. Everybody should learn how to write a basic computer programme, so people are not scared of computer programmes. It is not nearly as hard as some people think; it is just part of literacy in the modern age. I used to teach a one-week college course in which people learned to write simple computer programmes. If you can do multiplication, then you can write a simple computer programme, and I think that will help many people not to be afraid. I also think that simple skills like those of the elevator operator are going to be displaced, but skills like imagination, communication, and collaboration are going to remain, since they are more human and more nuanced. So obviously, if people continue to develop their skills and work together with the machines, that will also help. I think of artificial intelligence as augmented intelligence, in the same way that you use a calculator, a smartphone, or Google: these systems have AI capabilities, and they make you more effective at what you do.
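To give a sense of how small the “simple computer programme” Etzioni describes can be, here is a minimal sketch in Python. The function name and the numbers are illustrative choices of ours, not anything from the interview; the point is only that the whole programme is a few lines of arithmetic:

```python
# A first programme: if you can do multiplication, you can write this.
# (Illustrative example; not taken from the interview.)

def multiply(a, b):
    """Multiply two numbers, the way a first programme might."""
    return a * b

print(multiply(6, 7))  # prints 42
```

Running it prints the product, which is the entire job of the programme; everything beyond this, from toys to Go engines, is built up from steps no more mysterious than this one.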

Do you think there is a special responsibility of computer science towards AI and society?

Yeah, I love that question! Because one thing that is changing is the power dynamic in society. It used to be that doctors and lawyers were the most powerful people. Of course they still have power, but now there is a new class of powerful people: the people who work in informatics. They are designing algorithms and information systems, and so these people with power have an ethical responsibility, both in how they shape the technology and in thinking about the impact it has. Not all of the impact, and not all of the questions, can be answered technologically. There is a real economic impact, and people in informatics need to think about it. They need to advocate for the right impact on society, an impact that is consistent with our values. I do think we have a special responsibility. I wrote an article that appeared in TechCrunch, where I suggested that people working on AI and informatics take a Hippocratic oath for AI. It is more a symbol that we have a responsibility towards society.

From your personal experience: What would you say is the biggest challenge in dealing with AI?

People have to remember that computers are very literal-minded. We are often trying to get a computer programme to do something, and you might think that if it can play chess, play Go, and beat the world champion, then AI can do anything. But that is very far from the truth. Often it is very difficult to get the computer programme to exhibit the right behaviour. That is something I call the Murphy’s law of AI: if the computer can find a way to make a mistake, it will find it, and it will do the wrong thing. So for me, when working on AI systems, the biggest challenge is to get the AI system to do the right thing. This is because it is lacking common sense, it is lacking understanding of language, and it is lacking understanding of the real world. It is not an accident that where AI has been successful, very artificial tasks like playing Go are often involved.

AI systems are not nearly as sophisticated and powerful as the ones you see in Hollywood movies. And they are not generalists: one programme can play Go, while another one drives a car. People are generalists; they can do all these different things. We are still very, very far from a programme like the Terminator. Most robots cannot even cross the street very well.

Interview: Claudia Vitt, 2019

