Voices in AI – Episode 105: A Conversation with Andrew Busey

January 27, 2020


About this Episode

On Episode 105 of Voices in AI, Byron speaks with Andrew Busey about the nature of intelligence and how we differentiate between artificial intelligence and ‘real’ intelligence.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today my guest is Andrew Busey. He is a serial entrepreneur with a focus on building products. He created the first web-based chat systems and the first ‘chat with a customer service rep’ option, and many other early e-commerce, social, and gaming platforms. He most recently founded Conversable to make it easy for big brands to build experiences on Facebook Messenger, Twitter, Alexa, Google Assistant, and other next-generation conversational platforms. He has 26 patents and one novel, Accidental Gods. He has a computer science degree from Duke University and an MBA from the Wharton School at the University of Pennsylvania. Welcome to the show, Andrew.

Andrew Busey: Thanks. I’m excited to chat.

Well, I’ve read your book, Accidental Gods. Of course we know each other in real life, but I did read the book. I’d love to start talking [about it] before we get into AI. Tell me the whole premise of the book and why you wrote it, because I think it’s fascinating.

The book is about my views on AI in some ways. I started thinking a lot about two things: from a philosophical point of view, there’s all sorts of things to think about in religion and where we come from and why; and then there’s also the converging question of ‘What is intelligence and how does it exist?’ So the book is really about what would happen if there were things that we created that were like us, that had intelligence and sentience and awareness but weren’t aware of us. How would that play out? That was the premise of the book, which conveys a lot of my views on certain areas of artificial intelligence as well, but also on where we came from. Since writing that book, there’s been a lot more broad conversation about simulation theory and whether we’re living in a simulation, and those types of things dovetail with a lot of the book as well.

Because you wrote that a while ago.

Yeah, I self-published it in 2014. I think I wrote it mostly in like 2009.

So where do you think we came from?

I think, statistically speaking, there’s a high probability we’re living in a simulation of some sort, mostly on the theory that at some point in the future we’ll be able to build a simulation that’s roughly as complex as we are, so…

Well, that’s always the argument. But just to flip it around: that all begins with the assumption that consciousness is a programmable property, that the fact that we experience the world is programmable, and that we are basically machines ourselves. Doesn’t that assume a certain view? Because a character in a video game right now doesn’t experience the game, correct?

Certainly not as we would understand it.

And yet we experience the world. So why is that not proof that we’re not in the simulation?

Well, I think we’re still early in the process. I mean, we’ve only really been thinking about hard problems for, best case, 6,000 years, worst case a few thousand less, depending on when you date a lot of this stuff starting in Egypt. I think that’s not very much time in the grand scale of our understanding of the universe, which is also somewhat constrained still.

And so I think our advancement of technology at the level we’re talking about right now, with computers and games and simulations that have that type of complexity, or any remote semblance of it, has really only been happening for less than 100 years, and more like 30 or 40, maybe 50 at the most. So to think that we’ve even touched the beginnings of what can be done with technology…

But the basic assumption there is that an iPhone and a person are the same type of thing: maybe with an orders-of-magnitude difference in scale, but the same basic computational device in a way. Is that true?

At a very simplified level I might agree to that. I think that humans are computational and brains are effectively a type of computer. I think they’re much more complex, and we obviously don’t really understand how the brain works either. So it could turn out that there are things we just don’t understand yet that exist in the brain that are a big part of what gives us consciousness and self-awareness, which I think are sort of the defining traits, at least as you describe them: seeing and understanding the world and having a sense of place in it. I think that’s a pretty interesting way of viewing the world, and I think it’s going to be a while before we really understand how the brain is creating that and what that really means.

It could turn out that there’s some type of quantum computation in the brain, for example. I don’t think that’s necessarily what it’ll be, but there may be things at the neuron level that we just don’t really understand. It could be that because neurons are not as binary as a neuron is represented in a neural network, they’re more adaptable in different ways than we really understand. Those could all be different types of computational machinery that we just haven’t figured out yet.

Just take neural networks, for example. The original neural network designs were created in, I guess, the late ‘50s, and they were discarded because they didn’t really do anything, mostly because they couldn’t perform at an efficiency level that delivered any value. Then in the beginning of the 2000s, people started trying to run neural networks again, first on new CPUs and then on new GPUs, and they were like, ‘holy crap,’ these things do some pretty amazing things if you put enough computation behind them. GPUs are computational systems designed to do exactly that type of processing, [and] they are much better at linear algebra than CPUs.
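
[To make the linear-algebra point concrete, here is a minimal illustrative sketch, not code from the episode; the layer sizes and values are arbitrary. A dense neural-network layer is essentially one matrix-vector product plus a nonlinearity, which is exactly the workload GPUs parallelize well.]

```python
import numpy as np

# One dense layer: h = ReLU(W @ x + b). Sizes are arbitrary; the point
# is that the heavy lifting is a single matrix-vector product, the kind
# of linear algebra GPUs (and TPUs) are built to parallelize.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 784))  # weights: 784 inputs -> 256 units
b = np.zeros(256)                    # biases
x = rng.standard_normal(784)         # one input vector, e.g. a flattened image

h = np.maximum(0.0, W @ x + b)       # forward pass of one layer
print(h.shape)                       # (256,)
```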

It turns out that you can build even better, more specialized hardware for processing tensors, which is a lot of what Google is doing with TPUs and what Nvidia is doing with GPUs. Those things make these neural networks orders and orders of magnitude faster. They allow more complex forms of neural networks to be created. They allow things like backpropagation to work at scale, which really helps make neural network training much better. Those things just weren’t even possible when the neural network idea was conceived, and now, because computation has advanced enough that these mathematical functions can run orders and orders of magnitude faster, we’re seeing all sorts of new ways to use them. And that’s what’s really causing the machine learning explosion we’re seeing right now. And I think that’s just the tip of the iceberg.
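
[Again a minimal sketch rather than anything from the episode: backpropagation is the chain rule applied to a loss, and a training step nudges the weights against the resulting gradient. The numbers below are toy values chosen for illustration.]

```python
import numpy as np

# Backpropagation in miniature: one linear unit with squared loss.
#   y_hat = w @ x,  loss = (y_hat - y)**2
# The chain rule gives dloss/dw = 2 * (y_hat - y) * x, and each
# training step moves w a small amount against that gradient.
x = np.array([0.5, -1.0, 2.0])  # toy input
y = 1.5                         # target output
w = np.zeros(3)                 # weights to learn
lr = 0.05                       # learning rate

for _ in range(100):
    y_hat = w @ x                    # forward pass
    grad = 2.0 * (y_hat - y) * x     # backward pass (chain rule)
    w -= lr * grad                   # gradient-descent update

print(abs(w @ x - y) < 1e-6)         # True: the unit has fit the target
```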

Well, I’ll just ask one more question about consciousness and then let’s move on. Would you agree that when it comes to the idea that matter can experience the universe, we don’t really have a way to think about that scientifically? My hammer doesn’t experience the nail. And yet the idea that inanimate matter can have a first-person experience just seems so implausible, and it sounds like what you’re saying is: somewhere in the brain we’re going to figure out how that happens, even though we don’t really have any way to understand it right now. Isn’t that kind of a punt on the question, just like the article that [says] ‘we shall know when we understand the brain; all will be made clear’?

Yes, I’m happy to deviate from punting, so… To unpack what you said, I would argue that computers are not inanimate matter in the same sense that the hammer is, right? Things are passing through a computer, [and] it understands time. Well, maybe it doesn’t understand it, but it uses time as a core function of its mechanics, just like the human mind.

So there are a lot of things, for example, that we don’t understand about time. And I think time is probably pretty critical to consciousness, because you can’t really understand yourself. You can’t build predictive models of what you’re going to do and how you’re going to act if you don’t have an understanding that something’s going to happen in the future. You can’t learn things if you don’t understand things that happened in the past.

And so things like time are happening in the physical world, and there are things in our brains (chemicals and electrical signals and all sorts of other stuff) that are analyzing the data that our sensory organs, like our eyes and skin and nose and ears, are detecting, and they’re accumulating lots of data. Computers can do big parts of that. They take that data, they process it (what your eyes see, for instance), it gets sent to your brain and your visual cortex, and then you do something with it. I think we lose our deep understanding when we start to get at why we’re doing certain things, and you can diverge onto a lot of conversational paths there, like: “Do we have free will? Are we stuck in kind of a…?” “Is the universe just ticking along and we’re just riding it, or do we just process things in a different way that makes us feel like we have free will and choice?”

I think those are unknown questions and we just don’t have the data. But I do think that comparing the brain to a hammer and calling it an inanimate object is not a fair comparison, and neither is your iPhone and human brain example. Your iPhone is not an inanimate object. It’s doing things like… it’s not necessarily smart and self-aware and analyzing the universe around it, but if you applied kindergartner-level observation to it, it might look that way. I might get a notification at some point that says, “Hey, you know, Duke is about to play a basketball game. You might want to watch that on ESPN.” Well, that seems like it’s aware of things around it. Right?

Even though we understand that that’s just programming and it’s just pulling data from all sorts of data sources, it does look like it absorbed some data feed somewhere that said Duke’s game is coming. It knows that I want to watch Duke games, and it notified me of that fact. So in some ways, that’s not that different from a lot of things that humans do.

Fair enough. You mentioned Duke. When did you graduate from college?

Way to make me feel old: 1993

So you and I are basically sitting on 50. You must be about 48?

47

Back in 1993 when you were at Duke and you were studying computer science, what was the skinny on artificial intelligence?

I almost went to U Penn undergrad in an AI curriculum that was computer science and psychology. I think that at the time, AI was much different. People thought you could program things to make decisions using expert systems: more complex, but dichotomous-tree kinds of things. So there was stuff like LISP and things like that, and it never really got there. It wasn’t, I think, the same as the situation we have now, which has changed a lot because of basically just computational speed, data, and networks. None of those things were really that amazing when I graduated. So there was the Internet, but there was no Google. When I graduated there was no Yahoo. There was no internet browser…
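
[For contrast with the machine learning discussed above, here is a toy sketch of the rule-based, dichotomous-tree style of that era, written in Python rather than the LISP of the time; the rules and domain are invented for illustration.]

```python
# A toy "expert system": hand-written if/then rules walked like a
# decision tree. Nothing is learned from data; all the "intelligence"
# is whatever rules a human encoded.
def diagnose(symptoms: set) -> str:
    if "fever" in symptoms:
        if "rash" in symptoms:
            return "suspect measles; refer to a specialist"
        return "suspect flu; recommend rest and fluids"
    if "cough" in symptoms:
        return "suspect a cold; monitor symptoms"
    return "no rule matched; the system has no answer"

print(diagnose({"fever", "rash"}))  # suspect measles; refer to a specialist
print(diagnose({"cough"}))          # suspect a cold; monitor symptoms
```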

Mosaic came out in March of ‘93. So you know you’re right.

I was in fact the product manager for Mosaic at Spyglass. That was my first real job. So I was in Champaign, Illinois. Why did the commercialization of the Internet happen there? Mosaic was developed by Marc Andreessen and a bunch of other people at the National Center for Supercomputing Applications at the University of Illinois, as a way of creating a more interesting and amazing client to access the World Wide Web, which had been created at CERN. A lot of it was about adding graphics and imagery, which made it much more compelling to people. So that was a pretty big leap forward for getting people to use computers and networks.

Yeah and if you think about it, what’s always interesting to me about the Internet is you’ve just implied it’s kind of big and dumb. All it is is computers communicating on a common protocol. And yet think about what it did to society in 25 years.

It created $25 trillion in wealth and a million businesses; it transformed media, politics, so many things. How do you compare the impact of narrow AI, just what we know how to do now, just machine learning: is it going to have an effect equal to that, [or] massively more? How big of a deal do you think [it will be], again [given] no big breakthroughs, just plain old what we know how to do now?

Just machine learning-based computer vision will change the world.

Listen to this episode or read the full transcript at www.VoicesinAI.com


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
