About this Episode
On Episode 111 of Voices in AI, Byron discusses the nature of intelligence and Artificial Intelligence within the industrial internet with Robert Brooker of WIN-911.
Listen to this episode or read the full transcript at www.VoicesinAI.com
Transcript Excerpt
Byron: This is Voices in AI brought to you by GigaOm, and I’m Byron Reese. Today my guest is Robert Brooker. He is the Chairman of WIN-911, a technology company in the ‘industrial internet’ space, with offices in Austin, Texas as well as in Mexico, Asia, and Europe. He holds an undergraduate degree in economics from Harvard as well as an MBA from the same institution.
He is a person with an amazing entrepreneurial past. He is said to have brought the bagel to Eastern Europe. Although he denies it, some people say he brought the hookah to the United States. I’m excited to have him on the show. Welcome to the show, Robert.
Robert: It’s nice to be here, Byron.
You’ve heard the show enough times to know I usually start off by asking “What is artificial intelligence?” How do you think of it? How do you define it? I’ll just leave it at that.
Artificial intelligence is semantically ambiguous. It could be that it’s artificial in the sense that it’s not real intelligence or it could be intelligence achieved artificially; in other words, without the use of the human brain. I think most people in this space adopt the latter because that’s really the more useful interpretation of artificial intelligence, that it’s something that is real intelligence and can be useful to the world and to our lives.
Sometimes I think of that as the difference between the instantiation of something and the simulation of something. Case in point: a computer can simulate a hurricane, but there isn’t really a hurricane there. It’s not an instantiation of a hurricane. I guess the same question is, is it simulating intelligence or is it actually intelligent? Do you think that distinction matters?
When I say ‘artificial’ in the former sense, something seems on the surface to be intelligent, but when you look further down, you determine it’s not. It may be helpful here to say how I define intelligence. I like the standard dictionary definition of intelligence, and that is: the ability to acquire and apply knowledge and skills.
You could argue that a nematode worm is intelligent. It’s hard to argue that, for example, a mechanical clock is intelligent. Ultimately, different people define intelligence in different ways. I think it comes down to what people in the field are doing. They’re trying to make it useful; how it’s defined is almost beside the point.
The most singular thing about AI and the way we do it now is that it isn’t general. I don’t mean that even in the science fiction artificial general intelligence [sense]. We have to take one very specific thing and spend a lot of time and energy and effort teaching the computer to do that one thing. To teach it to do something else, you largely have to start over. That doesn’t seem like intelligence.
At some level it feels like a bunch of simulations of solving one particular kind of problem. If you’re using the ‘acquire new skills’ definition, in a way it’s almost like none of it does that right now. No matter what, it’s limited to what it’s been programmed to do. Additional data alters that, but it doesn’t itself acquire new skills, does it?
I think the skills part is hard. The ‘acquire and apply knowledge’ part is a little easier. A nematode worm, with its 302 neurons, can detect a smell and move toward it. If there’s food there, it says, “a-ha, this smell indicates food. When I smell it in the future, I’m going to go towards that smell and get the food.”
If the world later changes so that the smell is no longer associated with food, the nematode worm will stop going towards the smell, learning that it no longer indicates food. Maybe some other smell indicates food. That, in my mind, shows the nematode worm acquiring and applying knowledge. The skill part is harder, and I think that’s the same with AI. The skill part is very difficult. It’s not difficult for a chimpanzee or a human or some other animals, but I think it’s difficult for machines.
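To make that learning loop concrete, here is a minimal sketch in Python of the kind of associative learning Brooker describes: an association between a smell and food strengthens when the prediction pays off and extinguishes when it stops paying off. The learning rate, threshold, and function names are all illustrative assumptions, not a model of the worm’s actual biology.

```python
# A minimal sketch of smell-food associative learning. All constants and
# names here are illustrative assumptions, not from the transcript.

LEARNING_RATE = 0.2   # how fast the association updates (assumed value)
THRESHOLD = 0.5       # association strength needed to approach the smell

def update_association(strength: float, food_present: bool) -> float:
    """Nudge the smell-food association toward 1 if food was found, else toward 0."""
    target = 1.0 if food_present else 0.0
    return strength + LEARNING_RATE * (target - strength)

def approaches_smell(strength: float) -> bool:
    """The 'worm' moves toward the smell only if the association is strong enough."""
    return strength > THRESHOLD

strength = 0.0

# Phase 1: the smell reliably indicates food, so the association strengthens.
for _ in range(10):
    strength = update_association(strength, food_present=True)
print(approaches_smell(strength))  # True: it has learned to follow the smell

# Phase 2: the world changes and the smell no longer indicates food.
for _ in range(10):
    strength = update_association(strength, food_present=False)
print(approaches_smell(strength))  # False: the association has extinguished
```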
The nematode worm, like you said, has 302 neurons, two of which don’t appear to be connected to anything. It functionally has 300. Don’t you think that amount of sophisticated behavior… do we even have a model for how 300 neurons [work]? Even if we don’t know the mechanics of it, a neuron can fire. It can fire on an analog basis. It’s not binary. The interplay of 300 of those can create that complex behavior of finding a mate and moving away from things that poke it and all of the rest. Does it seem odd that that can be achieved with so little when it takes us so much more time, hassle, and energy to get a computer to do the simplest, most rudimentary thing?
I think it’s amazing. The exponential complexity of the nematode worm, and of real neural networks generally, is incredible. For anyone who hasn’t spent time at openworm.org, the crowdsourced effort to understand the nematode worm, I encourage you to spend at least an hour there. It’s fascinating. You think: ‘302 neurons, that’s simple. I should be able to figure it out.’
Then it’s all mapped out. Each neuron connects to anywhere from a couple to a couple dozen other neurons, and suddenly you have 7,000 synapses. Wait, that’s not all. Each synapse is different, so figuring out how each one works becomes even more complicated.
Then, on top of that, there are the inner workings of the neuron itself. Change is going on within each neuron. I don’t know if this is the case with the nematode worm, but certainly in the human brain, and probably many other brains, there is communication among neurons that takes place not in the synapses but by exchanging chemicals. It’s incredible how just 300 neurons can suddenly become who knows how many bits. We almost can’t even call them bits of information. It’s more of an analog concept, which has magnitudes more possibilities.
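A rough back-of-the-envelope calculation shows why the analog view changes the scale so dramatically. The neuron and synapse counts below come from the conversation; the eight bits of effective precision per synapse is purely an assumed figure for illustration.

```python
# Compare a naive "one bit per neuron" picture of C. elegans with an
# analog-synapse picture. Neuron/synapse counts are from the conversation;
# the per-synapse precision is an assumption for illustration only.

neurons = 302
synapses = 7_000
bits_per_synapse = 8  # assumed effective precision of each synapse's strength

binary_view_bits = neurons                    # each neuron just on/off
analog_view_bits = synapses * bits_per_synapse  # 56,000 bits of parameters

print(f"One bit per neuron: {binary_view_bits} bits "
      f"-> {2**binary_view_bits:.3e} possible states")
print(f"8-bit synapses:     {analog_view_bits} bits "
      f"-> 2^{analog_view_bits} possible states")
```

Even under these toy assumptions, the configuration space jumps from roughly 10^91 states to a number with nearly 17,000 digits, which is the “magnitudes more possibilities” point in arithmetic form.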
Viewed that way, the nematode worm is sort of an underachiever. It doesn’t seem to get a lot done with all that machinery, although nematodes are 70% of all animals on the planet by one count. Would you agree that progress in artificial intelligence is moving quickly… or slowly?
It seems very slow. It’s interesting that most of your guests, at least on the podcasts I listen to, predict artificial general intelligence being 100 years or hundreds of years away. It does seem very slow. To your point a moment ago about how hard it is to transfer one thing to another: we get visited by companies all the time in the industrial space. The industrial space is, in theory, really good for artificial intelligence because there’s little or no human language involved.
All the complexities of human language are gone because essentially it’s a machine. In the industrial setting it’s about: ‘how can you save a million dollars by using less energy? How can you make the defect rate of your product lower?’ These are all readily quantifiable outcomes. Companies come to us that have created some sort of artificial intelligence to revolutionize industry or make it much more efficient.
Typically they come to us either because they’re looking for funding or because they’re looking for customers. We have a lot of customers, so they think we can somehow work together. Oftentimes they say, “We have our first customer, and we save them a million dollars a year by making their process so much more efficient. If we could only apply that artificial intelligence to a thousand other companies, that’s a billion dollars’ worth of value. Therefore, we’re going to be great.”
Then you dig into it, and for that one customer the amount of human services involved was enormous. This speaks a little to the question of whether artificial intelligence will put all these people out of work: there’s so much human interaction in just figuring out one project. All the normalization of the data, and then the AI not quite figuring things out, so a human intercedes and inserts another type of model based on a human mental model. It’s almost like the notion that when humans and machines work together, you get a better outcome than machines alone. The nirvana, what people are trying to get at, is one AI that looks at all the industrial data. You don’t have any human language.
There are a lot of things you could call very simple even though there are a lot of complexities. The thing you want is something that will just look at all the data and figure everything out. No one’s been able to do that. It’s always been very specific to the context. Even in areas that should be simpler, like industrial, which is more akin to playing chess or Go because it’s a game with fixed rules and easily quantifiable objectives, it’s still very difficult.
Listen to this episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
Source: Voices in AI – Episode 111: A Conversation with Robert Brooker