
Voices in AI – Episode 97: A Conversation with Alexandra Levit

Source: https://gigaom.com/2019/10/03/voices-in-ai-episode-97-a-conversation-with-alexandra-levit/
October 03, 2019 at 03:00PM


About this Episode

On this episode of Voices in AI, Byron speaks with futurist and author Alexandra Levit about the nature of intelligence and her new book, ‘Humanity Works’.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today my guest is Alexandra Levit. She is a futurist, a managing partner at People Results, and the author of the new book Humanity Works. She holds a degree in psychology and communications from Northwestern University. Welcome to the show, Alexandra.

Alexandra Levit: Thanks for having me, Byron. It’s great to be here.

I always like to start with the same question: What exactly is intelligence, and by extension, what is artificial intelligence?

I don’t have the scientific definition on hand, but I believe intelligence is understanding how to solve problems. It is possessing a degree of knowledge in an area. So we can have what we consider to be traditional intelligence, which is more in the realm of knowing facts and having a particular area of expertise. We can also have emotional intelligence, which reflects your ability to understand people’s motivations and emotions, and to interact with them in a way that is going to form productive relationships. So there are different kinds of human intelligence, but at the end of the day, I think it’s really about having the ability to figure things out, to respond accordingly, and to take hold of a body of knowledge or sensations or feelings and use them to your advantage.

Now when we look at artificial intelligence, we are referring not to the human ability to do this, but to a machine’s ability to do this, and the reason it’s called artificial is because it’s not human. It’s something that we’ve built. It’s something that we’ve created, and the machine then has the ability to at least somewhat act autonomously, meaning that we might program it initially, but then it can make decisions and complete tasks based on what we have given as general guidelines or parameters.

So I hope one of the reasons this discussion is interesting to your listeners or readers is that probably everyone who is listening or reading has their own opinion about what artificial intelligence is. In fact, we could probably spend the whole hour talking about that, because obviously there are different books on the subject, but that’s my take on it.

Absolutely. And to be clear, there is no consensus definition of what intelligence is to begin with, even before all this came along. I wholeheartedly agree; I don’t think of the question so much as an ‘angels dancing on the head of a pin’ type of thing, because in the end, as we get closer to talking about your book, I think we’re really interested in how humans and machines are different. So when you were talking about human intelligence, you used words like ‘understand.’ Can a computer ever understand anything?

I think so. You look at some of the early experiments that were done around chess, with IBM programming Deep Blue to play chess, and you see that the algorithm was able to understand the game of chess and was able to determine what moves to make in what sequence. That required a grasp of the game and the ability to really work within the rules. So it wasn’t just that the machine was completing a set task, or a set series of tasks, in fact. And that, I think, is what makes artificial intelligence different from just ‘we’re going to program a machine to do X, Y and Z.’

Artificial intelligence requires a degree of autonomy, and that’s something we’re seeing today when we program algorithms that we have not seen in the past. And Byron, of course you know this: we’ve been working with computers for a really long time, and they’ve been helping us out with things for a really long time. But it has required direct human intermediation at every single stage. If a machine is going to do one thing, you’ve got to tell it what to do next. And that’s where the whole field of programming arose. It’s just, ‘OK, we’re going to write this code and the machine is going to do exactly that.’

Artificial intelligence is different because the machines almost start to think for themselves and build upon their primary knowledge. My favorite example of this lately, and I love to use this example because I think most people know it, is Gmail. Most people have Gmail, and most people have noticed in the last couple of months that the artificial intelligence behind Gmail is really learning from your experiences writing e-mails, archiving things, adding attachments and putting things on your calendar. It starts to anticipate what you’re going to do based on what you’ve done in the past. So I’ve got my Gmail finishing sentences for me in e-mail in the correct way. I’ve got it asking me where I want to put something, and which folder is correct. I’ve got it saying, ‘You mentioned the word attachment, but there isn’t one. Do you want me to attach something?’ And it’s not attaching just anything; the algorithm has guessed the correct attachment. And every time I use it, it gets smarter.

And to me that’s one of the best, most salient examples of artificial intelligence: it’s learning from its experience working with me. It’s not just doing exactly what I told Gmail to do. I find it fascinating, and I love sharing it because I feel like virtually everyone has Gmail now, has had these experiences over the last couple of months, and has thought, “Wow, how did it know that?” And this is AI.
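The Gmail example Alexandra describes is, at bottom, a system that updates a statistical model from your past messages and uses it to predict what you will type next. As a rough illustration of that idea only, here is a toy bigram-frequency suggester in Python. It is a minimal sketch under simplifying assumptions; the class and method names are hypothetical, and Gmail’s actual system uses large neural language models, not word counts.

```python
from collections import Counter, defaultdict
from typing import Optional

class BigramSuggester:
    """Toy next-word suggester that 'learns' from previously written e-mails."""

    def __init__(self):
        # Maps each word to a frequency count of the words that followed it.
        self.counts = defaultdict(Counter)

    def learn(self, text: str) -> None:
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def suggest(self, prev_word: str) -> Optional[str]:
        # Suggest the word that most often followed prev_word in past messages.
        following = self.counts.get(prev_word.lower())
        if not following:
            return None
        return following.most_common(1)[0][0]

suggester = BigramSuggester()
suggester.learn("Please find the report attached")
suggester.learn("Please find my notes attached")
print(suggester.suggest("Please"))  # "find", learned from past messages
```

The key property is the one Alexandra points to: the suggestions change as the model sees more of your writing, rather than being fixed in advance.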

But still, the computer is just a giant clockwork. I mean, you wind it up and the AI just does its thing: if the e-mail copy contains ‘attachment,’ then suggest an attachment; scan the text; check the file type. I mean, there’s nothing particularly… the computer doesn’t know anything, any more than a compass knows which way is north. I guess if you think the compass knows how to find north, then maybe the computer knows something. But it’s as dead and lifeless as a wind-up clock, isn’t it?
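For contrast, the hard-coded ‘clockwork’ rule Byron is gesturing at might look something like the following sketch. This is purely illustrative; the keyword pattern and function name are assumptions, not Gmail’s actual logic.

```python
import re

# Hypothetical, hard-coded "clockwork" rule: if the message body mentions an
# attachment but none is present, prompt the sender. No learning involved.
ATTACHMENT_WORDS = re.compile(r"\b(attach(ed|ment)?|enclosed)\b", re.IGNORECASE)

def should_prompt_for_attachment(body, attachments):
    """Return True if the e-mail text mentions an attachment but has none."""
    return bool(ATTACHMENT_WORDS.search(body)) and not attachments

# The rule fires mechanically, the same way every time it is wound up:
print(should_prompt_for_attachment("I've attached the report.", []))  # True
print(should_prompt_for_attachment("See you Thursday.", []))          # False
```

Unlike the learned suggester sketched earlier, this rule never changes no matter how many e-mails pass through it, which is the crux of the disagreement in this exchange.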

Well, I think you bring up a really good point with it being dead and lifeless. I think that’s a different thing than knowing. I think it can ‘know’ things just based on other things that have happened. This is what I call, and again this is not official, the ‘assimilation of information.’ It has the ability to determine what happened in the past and to anticipate what might happen in the future, given that. So that is a form of knowing. And it is a form of being able to do something different from what you’ve been specifically programmed to do.

I think specificity is a really important part of this. And the other thing I would take you back to, since we’re going there, is that where I talk about the difference between humans and machines in my book, it has a lot to do with the human traits that machines, as of now and probably for a great deal of the future, do not have, and these are things like empathy, judgment, creativity and interpersonal sensitivity. These are the things that make us different. And it’s because, until machines develop consciousness, they’re not going to have anything like a moral compass or ethics, or really even the ability to determine whether something subjective is appropriate or any good. Even if they’re intelligent, they’re going to be focused on things like the bottom line.

I’ve been using this example a ton lately because, again, it’s one of those that everybody’s familiar with: when United Airlines pulled that guy off the plane at O’Hare because the algorithm that was governing the flight attendants’ schedule told the staff, in no uncertain terms, that these flight attendants had to get to their destination, or else it was going to cost United Airlines a lot of money. And so what we saw happen is that the United staff just sat passively by and said, “Well, the computer tells us we’ve got to move these people,” and nobody stopped to pay attention to the nuances: how is it going to feel to that passenger if we pull him off this plane against his will? What might happen from a reputational standpoint if it gets caught on YouTube that we forcefully removed someone from a plane, given that people have a sort of repugnance toward this type of behavior?

And this is, I think, really where it’s important that we keep in mind the difference. I referred to this, and I didn’t make up this term, but it’s ‘human in the loop’: wherever there is a machine that is inserted into a process, there has to be a human at every step of the way checking the process, just like the government, though maybe that’s not the best example these days. But our Constitution is supposed to be written so the different branches of government check each other.

And that’s where I think we need humans not only to build and program a machine, but to figure out how it’s going to be deployed; to oversee and manage it to make sure it’s serving its intended function; to fix it when it’s broken; and then to figure out what it’s going to do next. Even in the simplest applications of automation, you still need that human in the loop to be overseeing things. And it’s because humans have that moral compass, that judgment, that you don’t see machines have, because they’re programmed to focus on the bottom line. And even if they’re smart, that’s what they’re going to do. They’re not going to ask, ‘Well, what’s going to happen? How are people going to react? Is this going to motivate people?’ They don’t know the answers to that.
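As a sketch of what the ‘human in the loop’ checkpoint Alexandra advocates could look like in code, here is a minimal, hypothetical example: an automated decision that cannot proceed without explicit human sign-off. The decision and review functions are invented for illustration and correspond to no real system.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    reason: str

def algorithm_decides() -> Decision:
    # Stand-in for an optimizer focused purely on the bottom line.
    return Decision(
        action="remove passenger from flight",
        reason="crew must reach destination; rebooking cost exceeds threshold",
    )

def human_review(decision: Decision) -> bool:
    # The human checkpoint: a person weighs empathy, ethics and reputation,
    # factors the optimizer does not model, before anything happens.
    answer = input(f"Approve '{decision.action}' ({decision.reason})? [y/N] ")
    return answer.strip().lower() == "y"

decision = algorithm_decides()
if human_review(decision):
    print("Proceeding with:", decision.action)
else:
    print("Overridden by human judgment; the machine must find another option.")
```

The design point is simply that the machine’s output is a proposal, not a final action; a person stays between the optimizer and the world.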

That, I think, is really important. We have to be careful not to automate large swaths of our employee population, because without the ‘humans in the loop,’ bad things are going to happen, and we’re going to see a lot more incidents like that United Airlines example. For people who aren’t sure how that ended up: it ended up very badly. United Airlines took a huge hit to its reputation and ended up having to pay that dude a lot more money than it would have paid if those flight attendants had just taken another flight. So lesson learned.

I hope other companies don’t go through the same thing, but I suspect it’s going to be a painful learning process for companies to realize that machines aren’t perfect and they’re never going to fully replace human beings. You’ve got to have your humans checking the power of those machines.

Listen to this episode or read the full transcript at www.VoicesinAI.com


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
