About this Episode
Episode 80 of Voices in AI features host Byron Reese and Charlie Burgoyne discussing the difficulty of defining AI and how computer intelligence and human intelligence intersect and differ.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Transcript Excerpt
Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today my guest is Charlie Burgoyne. He is the founder and CEO of Valkyrie Intelligence, a consulting firm with domain expertise in applied science and strategy. He’s also a general partner for Valkyrie Signals, an AI-driven hedge fund based in Austin, as well as the managing partner for Valkyrie Labs, an AI credit company. Charlie holds a master’s degree in theoretical physics from Georgetown University and a bachelor’s in nuclear physics from George Washington University.
I had the occasion to meet Charlie when we shared a stage talking about AI, and about 30 seconds into my conversation with him I said, “We’ve got to get this guy on the show.” So strap in; it should be a fun episode. Welcome to the show, Charlie.
Charlie Burgoyne: Thanks so much Byron for having me, excited to talk to you today.
Let’s start with this: maybe re-enact a little bit of our conversation from when we first met. Tell me how you think of artificial intelligence. What is it? What is artificial about it, and what is intelligent about it?
Sure. The further I get into this field, the more I think about AI with two different definitions; it’s a servant with two masters. There are the private-sector, applied, narrow-band applications, where AI is really all about understanding patterns that we perform and capitalize on every day, and automating them: things like approving time cards and making selections within a retail environment. That’s where the real value of AI is in the market right now, and there are a lot of people in that space developing really cool algorithms that capitalize on the patterns that exist, and largely lie dormant, in data. In that definition, intelligence is really about the cycles we use within a cognitive capability to instrument our life, and it’s artificial in that we don’t need an organic brain to do it.
Now, the AI that I’m obsessed with from a research standpoint (as a lot of academics are, and I know you are as well, Byron) is defined much more around the nature of intelligence itself, because in order to artificially create something, we must first understand it in its primitive, unadulterated state. And I think that’s where the bulk of the really fascinating research in this domain is going: just understanding what intelligence is, in and of itself.
Now I’ll come straight to the interesting part of this conversation, which is this: I’ve had not quite a hundred guests on the show, and I can count on one hand the number who think it may not be possible to build a general intelligence. Judging from our conversation, you’re convinced we may never be able to do it. Is that true? And if so, why?
Yes… The short answer is I am not convinced we can create a generalized intelligence, and that’s become more and more solidified the deeper and deeper I go into research and familiarity with the field. If you really unpack intelligent decision making, it’s actually much more complicated than a simple collection of gates, a simple collection of empirically driven singular decisions, right? A lot of the neural network scientists would have us believe that all decisions are really the right permutation of weighted neurons interacting with other layers of weighted neurons.
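[Editor’s note: for readers unfamiliar with the “weighted neurons” framing Charlie mentions, here is a minimal, illustrative sketch of that view in Python with NumPy. It is a toy, not anyone’s actual model; all names, sizes, and weights are made up for illustration.]

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One layer of 'weighted neurons': a weighted sum plus bias,
    passed through a nonlinearity (here, tanh)."""
    return np.tanh(x @ w + b)

# A toy two-layer network: in this view, every 'decision' is just a
# particular configuration of weights acting on the previous layer's outputs.
x = rng.normal(size=(1, 4))           # hypothetical input features
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

hidden = layer(x, w1, b1)             # first layer of weighted neurons
decision = layer(hidden, w2, b2)      # scalar output in (-1, 1)
print(decision)
```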
From what I’ve been able to tell so far with our research, either that approach is not getting us towards the goal of creating a truly intelligent entity, or it’s doing the best it can within the confines of the mechanics we have at our disposal now. In other words, I’m not sure whether the lack of progress towards a true generalized intelligence is because (a) the digital environment in which we have tried to create said artificial intelligence is unamenable to that objective, or (b) the nuances inherent to intelligence are things we don’t yet understand how to model, and may never be able to model.
I’ll give you a quick example. Think of any science fiction movie that encapsulates the nature of what AI will eventually be, whether it’s Her, or Ex Machina, or Skynet, you name it. There are a couple of big leaps that get glossed over in all science fiction literature and film, and those leaps are really around things like motivation. What motivates an AI? What, truly at its core, motivates an AI like the one in Ex Machina to leave her creator, enter the world, and explore? How is that intelligence derived from innate creativity? How is it designing things, thinking about drawings, identifying the clothing it needs to put on? All of these nuances are intelligently derived behaviors. We really don’t have a good understanding of that, and we’re not really making progress towards one, because we’ve been distracted for the last 20 years by research in fields of computer science that aren’t closely related to understanding those core drivers.
So when you say a sentence like ‘I don’t know if we’ll ever be able to make a general intelligence,’ ever is a long time. Do you mean that literally? Describe a scenario in which it is literally impossible, where it can’t be done even if you came across a genie that could grant your wish, the way travel back in time may simply not be possible. Do you mean it ‘may not’ be possible in that sense? Or do you just mean on a time horizon that is meaningful to humans?
I think it’s on the spectrum between the two, but it leans closer towards ‘not ever possible under any condition.’ I was at a conference recently and I made this claim, which admittedly, like any claim on this particular question, is based on intuition and experience, which are totally fungible assets. But I made the claim that I didn’t think it was ever possible, and somebody in the audience asked me, “Well, have you considered meditating to create a synthetic AI?” The audience laughed, and I stopped and said: “You know, that’s actually not the worst idea I’ve been exposed to.” Trying to reverse engineer my own brain, with as few distractions from its normal working mechanics as possible, is not the worst potential approach to understanding intelligence. That may very easily be a credible aid to understanding how the brain works.
If we think about gravity, gravity is not a bad analog. Gravity is this force that everybody and their mother who’s past fifth grade understands: you drop an apple, you know which direction it’s going to go. Not only that, but with experience you can predict how fast it will fall, right? If you were to watch a simulation drop an apple and it took twelve seconds to hit the ground, you’d know that was wrong; even if the direction of the vector was correct, the scalar is off a little bit. Right?
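[Editor’s note: a quick sanity check on that intuition, as a minimal sketch. Assuming a drop height of about 2 metres and ignoring air resistance, basic kinematics gives the fall time; the height here is an assumption chosen for illustration.]

```python
import math

G = 9.81  # gravitational acceleration near Earth's surface, m/s^2

def fall_time(height_m: float) -> float:
    """Time for an object to fall height_m metres from rest,
    ignoring air resistance: t = sqrt(2h / g)."""
    return math.sqrt(2 * height_m / G)

# An apple dropped from about 2 m hits the ground in roughly 0.64 s,
# so a simulation showing a 12-second fall is clearly off.
print(f"{fall_time(2.0):.2f} s")
```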
The reality is that we can’t create an artificial gravity environment, right? We can create forces that simulate gravity; centrifugal force is not a bad way of replicating it. But we don’t actually know enough about the underlying mechanics that guide gravity to create an artificial gravity using relatively the same mechanics at work in natural gravity. In fact, it was only about two years ago that the Nobel Prize in Physics was awarded to the individuals who detected the gravitational waves through which gravity propagates, putting to rest an argument that had been going on since Einstein, truly.
So I guess my point is that we haven’t really made progress in understanding those underlying mechanics. Every step we’ve taken has proven extremely valuable in the industrial sector, but it has actually opened up more and more unknowns about the actual inner workings of intelligence. If I had to bet today: not only is the time horizon on a true artificial intelligence extremely long-tailed, but I actually think it’s not impossible that it’s impossible altogether.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.
Source: Voices in AI – Episode 80: A Conversation with Charlie Burgoyne