Voices in AI – Episode 94: A Conversation with Amy Webb

Source: https://gigaom.com/2019/08/22/voices-in-ai-episode-94-a-conversation-with-amy-webb/
August 22, 2019 at 03:00PM

About this Episode

Episode 94 of Voices in AI features Byron speaking with fellow futurist and author Amy Webb on the nature of artificial intelligence and the morality and ethics tied to its study.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by Gigaom, and I’m Byron Reese. Today, I’m so excited. My guest is Amy Webb. She is a quantitative futurist. She is the founder and CEO of the Future Today Institute, and she’s a professor of strategic foresight at NYU’s Stern School of Business.

She’s a co-founder of Spark Camp. She holds a BS in game theory and economics and an MS in journalism from Columbia, and she was a Nieman Fellow at Harvard University. And, as if all of that weren’t enough, she’s the author of a new book called The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity. Welcome to the show, Amy.

Amy Webb: Hello. Thank you for having me.

So, I always like to start off with definitions. Maybe that can be tedious, but start me off: how do you define intelligence?

Well, I think intelligence in different contexts means different things. As it relates to artificial intelligence, I think, generally, what we’re trying to describe is the ability of a system to make decisions and choices that mirror, or come close to, the way that we make decisions and choices. So, when the term artificial intelligence was originally coined at Dartmouth College in the 1950s by Marvin Minsky and John McCarthy, the thinking back then was to try and build a machine that came close to human cognition and whose intelligence could be measured against our own. More than six decades later, I think it’s clear that, while we’re able to build incredibly powerful machines, there’s a lot we don’t understand about our own intelligence and our own cognition, and I think that if Minsky and McCarthy were around today, still working very intently, they would say that artificial intelligence was probably the wrong term to use.

Actually, McCarthy did say that pretty quickly after he coined the term. He said he thought it set the bar too high. But I guess the question I really want to ask is, in what sense is it artificial? Do you think AI is actually intelligent, or do you think it’s something that mimics intelligence the way that, say, artificial turf mimics grass?

Well, it’s certainly been built to mimic what we think we know about human intelligence, and that extends down to the hardware architecture. Current deep neural net systems take a layered approach to machine learning, built to physically mimic what we think the synapses in our own brains are doing as they transfer, parse, and make sense of information. That being said, I also think that intelligence is probably the wrong term, because it’s fairly loaded; I think we talk about intelligence because we’re trying to quantify our own cognitive power in some way. I don’t know if that’s the right term for AI, because I think AI inherently is different. Just because we’ve built it to mimic what we think we do doesn’t mean that from here on out it continues to behave in that manner.
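
To make that layered structure concrete, here is a minimal sketch of a tiny feedforward network in Python with NumPy. The layer sizes, random weights, and ReLU activation are illustrative assumptions, not anything specified in the conversation.

    # A tiny feedforward network in plain NumPy, illustrating the
    # "layered approach" described above. The layer sizes, random
    # weights, and ReLU activation are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def relu(x):
        # Loose analogue of a neuron firing only above a threshold.
        return np.maximum(0.0, x)

    # Two layers of weights: 4 inputs -> 8 hidden units -> 2 outputs.
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

    def forward(x):
        # Each layer transforms the previous layer's output, the
        # stacked structure that loosely mirrors signals passing
        # between synapses.
        h = relu(x @ W1 + b1)
        return h @ W2 + b2

    print(forward(np.array([0.5, -1.0, 2.0, 0.1])))

The "mimicry" here is loose at best, which is Webb's point: the layered math is inspired by what we think synapses do, not a model of how the brain actually works.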

Yeah, it’s funny. I’ve always thought all of the analogies between how AIs function and how the brain functions are really just kind of techno-marketing. The brain is still largely a black box, right? We don’t even know how a thought is encoded or anything like that, and so I’ve often suspected that machine intelligence really doesn’t have anything in common with human intelligence. Would you agree, disagree, defer?

Yeah, I mean, we’re talking in broad strokes and generalities. I think there are certainly some areas of the AI ecosystem that do a fairly good job of mimicking what we do; reinforcement learning and hierarchical reinforcement learning, for example, are present in dogs and children and adults. So basically, you’ve got, say, a toddler, and you’re trying to teach the toddler correct and incorrect, right? The correct word for your mom is “mommy.” The correct term for the color of the sky is “blue,” stuff like that. And through praise or correction, the child learns over time. So we are reinforcing the correct answer, and we are, hopefully in a very gentle, loving way, guiding the child away from the wrong answer. The same is true in how some AI systems learn. And in fact, for anybody who’s been following AlphaGo and all of the work that the DeepMind team has been doing: a lot of what they’re doing is self-improving. It was reinforcement learning and then self-improvement.
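
To see the reinforcement loop she describes in miniature, here is a toy sketch in Python: the learner is praised (+1) for the correct label and corrected (-1) otherwise, and its value estimates gradually settle on the right answers. The prompts, labels, and learning rate are hypothetical illustrations, not DeepMind’s actual setup.

    # A toy reinforcement loop: the learner is rewarded (+1) for the
    # correct label and corrected (-1) otherwise, and its value
    # estimates drift toward the right answers. The prompts, labels,
    # and learning rate are hypothetical illustrations.
    import random

    prompts = {"color of the sky": "blue", "word for your mom": "mommy"}
    labels = ["blue", "mommy", "green"]

    # Value table: how good each label currently looks for each prompt.
    Q = {p: {a: 0.0 for a in labels} for p in prompts}
    alpha = 0.5  # learning rate

    random.seed(0)
    for _ in range(200):
        prompt, correct = random.choice(list(prompts.items()))
        if random.random() < 0.2:
            answer = random.choice(labels)              # explore
        else:
            answer = max(Q[prompt], key=Q[prompt].get)  # exploit best guess
        reward = 1.0 if answer == correct else -1.0     # praise or correction
        Q[prompt][answer] += alpha * (reward - Q[prompt][answer])

    for prompt, correct in prompts.items():
        learned = max(Q[prompt], key=Q[prompt].get)
        print(prompt, "->", learned, "(correct:", correct + ")")

AlphaGo combined a loop like this with self-play at vastly larger scale, which is the self-improvement Webb refers to.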

Listen to this episode or read the full transcript at www.VoicesinAI.com

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
