About this Episode
On Episode 104 of Voices in AI, Byron Reese discusses the nature of intelligence and how artificial intelligence evolves and becomes viable in today’s world.
Listen to this episode or read the full transcript at www.VoicesinAI.com
Transcript Excerpt
Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese. Today, my guest is Anirudh Koul. He is the head of Artificial Intelligence and Research at Aira and the founder of Seeing AI. Before that he was a data scientist at Microsoft for six years. He has a Master's in Computational Data Science from Carnegie Mellon, and some of his work was just named by Time magazine one of the best inventions of 2018, which I'm sure we will come to in a minute. Welcome to the show, Anirudh.
Anirudh Koul: It’s a pleasure being here. Hi to everyone.
So I always like to start off with—I don't wanna call it a philosophical question—but it's sort of a definitional question, which is: what is artificial intelligence, and more specifically, what is intelligence?
Technology has always been here to fill the gaps between our abilities and our tasks, and we are now noticing this transformational technology—artificial intelligence—which can try to predict based on previous observations and, hopefully, mimic human intelligence, which is the long-term goal, one that might take 100 years to happen. Just noticing its evolution over the last few decades, where we are and where the future is going based on how much we have achieved so far, it is exciting to be in the field and to be playing a part in it.
It's interesting you use the word 'mimic' human intelligence as opposed to 'achieve' human intelligence. So do you think artificial intelligence isn't really intelligence? All it can do is kind of look like intelligence, but it is not really intelligence?
From the outside, when you see something happen for the first time, it feels magical. Take the demo of an image being described by a computer in an English sentence: if you saw one of those demos in 2015, it knocked your socks off the first time. But then, if you asked a researcher, they'd say, "Well, it has sort of learned the data, the patterns behind the scenes, and it does make mistakes. It's like a three-year-old: it knows a little bit, but the more of the world you show it, the smarter it gets." So from the outside, from the press's point of view, the reason there's a lot of hype is that magical effect of seeing something happen for the first time. But the more you play with it, the more you start to learn how far it still has to go. So right now, 'mimicking' is probably the better word for it, and hopefully in the future it will get closer to real intelligence. Maybe in a few centuries.
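For a concrete sense of the kind of captioning demo Koul describes, here is a minimal sketch using an off-the-shelf pretrained model. The transformers library and the nlpconnect/vit-gpt2-image-captioning checkpoint are our choices for illustration; the episode does not name a specific toolkit, and "photo.jpg" is a placeholder path.

```python
# A minimal image-captioning sketch. Assumes: pip install transformers pillow torch
from transformers import pipeline

# Load a pretrained encoder-decoder model that maps images to sentences.
captioner = pipeline("image-to-text",
                     model="nlpconnect/vit-gpt2-image-captioning")

# Describe a local image in an English sentence.
result = captioner("photo.jpg")  # placeholder image path
print(result[0]["generated_text"])
```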
I notice the closer people are to actually coding, the further off they think general intelligence is. Have you observed that?
Yeah. If you look at the industry trend, and especially if you talk to people who are actively working on it and ask them when artificial general intelligence (the field you're talking about) is going to come, most people on average will give you a year at the end of this century. That's when they think artificial general intelligence will be achieved, and the reason is how far we still have to go to achieve it.
At the same time, around 2017 and 2018, you start to learn that AI is really often an optimization problem trying to achieve a goal, and that many times these goals can be misaligned, so the system will achieve the goal no matter how. One of the fun examples, a famous failure case, was a robot that was trying to minimize the time a pancake spends on the surface of the pancake maker. What it would do is flip the pancake up in the air, but because the optimization objective was just to minimize that time, it would flip the pancake so high that, in simulation, it would basically go to space, and the time was indeed minimized.
A lot of those failure cases are now being studied to establish best practices and to drive home the point that, "Hey, we need to keep a realistic view of how to achieve that." It's useful to look at both sides: what you can realistically achieve, and the failure cases that keep us appreciating how far we still have to go.
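The pancake story is a classic instance of a misspecified objective. Here is a toy Python sketch of the same failure mode (our illustration, not the actual robot code): an optimizer told only to minimize time-on-surface happily launches the pancake.

```python
# Toy illustration of a misaligned objective: the reward only counts
# time-on-surface, so the optimizer "wins" by launching the pancake.
def time_on_surface(flip_velocity, episode_length=10.0, g=9.81):
    """Seconds the pancake spends on the griddle during one episode."""
    airtime = 2 * flip_velocity / g          # simple ballistic flight time
    return max(episode_length - airtime, 0)  # rest of the episode it sits there

# Naive "optimization": search over flip velocities for minimal surface time.
best = min(range(1, 200), key=time_on_surface)
print(best, time_on_surface(best))
# The search picks a huge velocity: the pancake is airborne for the whole
# episode, technically minimizing the misspecified objective.
```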
Who do you think is actually working on general intelligence? Because 99% of all the money put into AI is, like you said, to solve problems like getting that pancake cooked as fast as you can. When I start to think about who's working on general intelligence, it's an incredibly short list. You could say OpenAI, the Human Brain Project in Europe, maybe your alma mater Carnegie Mellon. Who is working on it? Or will we just get it eventually by getting so good at narrow AI, or is narrow AI really a whole different thing?
So when you try to achieve any task, you break it down into subtasks that you can achieve well, right? So if you're building a self-driving car, you would divide it into different teams. One team would just be working on the single problem of lane finding (see the sketch below). Another team would just be working on the single problem of how to back up a car or park it. If you want to achieve a long-term vision, you have to divide it into smaller sub-pieces that are achievable, that are bite-sized, and then in those smaller near-term goals you can get some wins.
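To make "lane finding" concrete: one classical way to attack that subtask is edge detection plus a Hough transform, sketched below with OpenCV. This is an illustrative choice on our part; production self-driving stacks are far more sophisticated, and "road.jpg" is a hypothetical dashcam frame.

```python
# Classical lane finding: Canny edges + probabilistic Hough transform.
# Assumes: pip install opencv-python numpy
import cv2
import numpy as np

frame = cv2.imread("road.jpg")                 # hypothetical dashcam frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)            # detect strong edges

# Keep only the lower, road-facing region of the image.
h, w = edges.shape
mask = np.zeros_like(edges)
roi_poly = np.array([(0, h), (w, h), (w // 2, h // 2)], dtype=np.int32)
cv2.fillPoly(mask, [roi_poly], 255)
roi = cv2.bitwise_and(edges, mask)

# Fit line segments to the remaining edges; these approximate lane markings.
lines = cv2.HoughLinesP(roi, 1, np.pi / 180, threshold=50,
                        minLineLength=40, maxLineGap=100)
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)
cv2.imwrite("lanes.jpg", frame)
```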
In a very similar way, when you try to build a complex thing, you break it down into pieces. Some obvious players are Google, Microsoft Research, and OpenAI, especially OpenAI, which is probably the biggest one betting on this particular field and making investments in it. Universities are getting into it too, but interestingly, there are other actors even from the point of view of funding. For example, DARPA is getting into this field and putting funding behind AI: they put in something like a $2 billion investment on a program called 'AI Next.' What they're really trying to achieve is to overcome the limitations of the current state of AI.
To give an example: right now, if you're creating an image recognition system, it typically takes somewhere around a million images to train, which is the scale of ImageNet, the standard benchmark. What DARPA is saying is, "Look, this is great, but could you do it with one tenth of the data, or could you do it with one hundredth of the data? And we'll give you the real money if you can do it with one thousandth of the data." They literally want to cut the scale, on a logarithmic axis, in half, which is amazing.
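One common route to that kind of data efficiency is transfer learning: reuse features pretrained on ImageNet's roughly one million images, and fine-tune only a small new head on the target task. Below is a hedged PyTorch sketch of that idea; it is our illustration, not how DARPA's AI Next participants necessarily do it, and the 10-class task and random tensors stand in for real data.

```python
# Transfer learning sketch: fine-tune a pretrained model on little data.
# Assumes: pip install torch torchvision
import torch
import torch.nn as nn
from torchvision import models

num_classes = 10  # hypothetical small target task
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained feature extractor; only the new head will train,
# so a few hundred labeled images can suffice instead of a million.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One hypothetical training step on a small labeled batch.
images = torch.randn(8, 3, 224, 224)           # stand-in for real images
labels = torch.randint(0, num_classes, (8,))   # stand-in for real labels
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```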
Listen to this episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
Source: Voices in AI – Episode 104: A Conversation with Anirudh Koul