This Much I Know: Byron Reese on Conscious Computers and the Future of Humanity

October 11, 2018

Recently, GigaOm publisher and CEO Byron Reese sat down for a chat with Seedcamp’s Carlos Espinal on their podcast ‘This Much I Know.’ It’s an illuminating 80-minute conversation about the future of technology, the future of humanity, Star Trek, and much, much more.

You can listen to the podcast at Seedcamp or SoundCloud, or read the full transcript here.


Carlos Espinal: Hi everyone, welcome to ‘This Much I Know,’ the Seedcamp podcast with me, your host Carlos Espinal, bringing you the inside story from founders, investors, and leading tech voices. Tune in to hear from the people who built businesses and products, scaled globally, failed fantastically, and learned massively. Welcome everyone! On today’s podcast we have Byron Reese, the author of a new book called The Fourth Age: Smart Robots, Conscious Computers and the Future of Humanity. Not only is Byron an author, he’s also the CEO of publisher GigaOm, and he’s been a founder of several high-tech companies, but I won’t steal his thunder by listing every great thing he’s done. I want to hear from the man himself. So welcome, Byron.

Byron Reese: Thank you so much for having me. I’m so glad to be here.

Excellent. Well, I think I mentioned this before: one of the key things that we like to do in this podcast is get to the origins of the person; in this case, the origins of the author. Where did you start your career and what did you study in college?

I grew up on a farm in east Texas, a small farm. And when I left high school I went to Rice University, which is in Houston. And I studied Economics and Business, a pretty standard general thing to study. When I graduated, I realized that it seemed to me that like every generation had… something that was ‘it’ at that time, the Zeitgeist of that time, and I knew I wanted to get into technology. I’d always been a tinkerer, I built my first computer, blah, blah, blah, all of that normal kind of nerdy stuff that I did.

But I knew I wanted to get into technology. So, I ended up moving out to the Bay Area in the early 90s, and I worked for a technology company and that one was successful, and we sold it and it was good. And I worked for another technology company, got an idea and spun out a company and raised the financing for that. And we sold that company. And then I started another one, and after 7 hard years we sold that one to a company and it went public and so forth. So, from my mother’s perspective, I can’t seem to hold a job; but from another view, it’s kind of the thing of our time. We’re in an industry that changes so rapidly that there are always more opportunities coming along, and I find that whole feeling intoxicating.

That’s great. That’s a very illustrious career with that many companies having been built and sold. And now you’re running GigaOm.  Do you want to share a little bit for people who may not be as familiar with GigaOm and what it is and what you do?

Certainly. And I hasten to add that I’ve been fortunate that I’ve never had a failure in any of my companies, but they’ve all had hard times. They’ve all had these periods of, ‘Boy, I don’t know how we’re going to pull this through,’ and they always end up [okay]. I think tenacity is a great trait in the startup world, because they’re all very hard. And I don’t feel like I’ve figured it all out or anything. Every one is a struggle.

GigaOm is a technology research company. So, if you’re familiar with companies like Forrester or Gartner or those kinds of companies, what we are is a company that tries to help enterprises, help businesses deal with all of the rapidly changing technology that happens. You can imagine if you’re a CIO of a large company: there are so many technologies, and it all moves so quickly. How does anybody keep up with all of that? And so, what we have are a bunch of analysts who are each subject matter experts in some area, and we produce reports that try to orient somebody in this world we’re in, and say ‘These kinds of solutions work here, and these work there’ and so forth.

And that’s GigaOm’s mission. It’s a big, big challenge, because you can never rest. Almost every day I find big new companies that I’ve never even heard of, and I think, ‘How did I miss this?’ and you have to dive into that. So it’s a relentless, nonstop effort to stay current on these technologies.

On that note, one of the things that describes you on your LinkedIn page is the word ‘futurist.’ Do you want to walk us through what that means in the context of a label and how does the futurist really look at industries and how they change?

Well, it’s a lower case ‘f’ futurist, so anybody who seriously thinks about how the future might unfold is, to one degree or another, a futurist. I think what makes it into a discipline is to try to understand how change itself happens, how technology drives change, and to do that you almost by definition have to be a historian as well. And so, I think to be a futurist is to be deliberate and reflective on how it is that we came from where we were, in savagery and low tech and all of that, to this world we are in today, and whether you can in fact look forward.

The interesting thing about the future is it always progresses very neatly and linearly until it doesn’t, until something comes along so profound that it changes it. And that’s why you hear all of these things like the 19th-century prediction that, by some year in the future, London would be unnavigable because of all the horse manure from the number of horses that would be needed to support the population. And that maybe would have happened, except then came the car. So, everything’s a straight line, until one day it isn’t. And I think the challenge of the futurist is to figure out ‘When does it [move in] a line and when is it a hockey stick?’

So, on that definition of line versus hockey stick, your background as having been CEO of various companies, a couple of which were media centric, what is it that drew you to artificial intelligence specifically to futurize on?

Well, that is a fantastic question. Artificial intelligence is, first of all, a technology whose impact people widely differ on, and that’s usually a marker that something may be going on there. There are people who think it’s just oversold hype. It’s just data mining, big data renamed. It’s just a tool for raising money better. Then there are people who say this is going to be the end of humanity as we know it. And philosophically the idea that a machine can think, maybe, is a fantastically interesting one, because we know that when you can teach a machine to do something, you can usually double and double and double and double and double its ability to do that over time. And if you could ever get it to reason, and then it could double and double and double and double, well that could potentially be very interesting.

Computers are able to evolve kind of at the speed of light; humans evolve at the speed of life. It takes generations. And so, if a machine can think, a question famously posed by Alan Turing, then that could potentially be a game changer. Likewise, I have a similar fascination for robots, because a robot is a machine that can act, that can move and can interact physically in the world. And I got to thinking: what is a human in a world where machines can think better and act better? What are we? What is uniquely human at that point?

And so, when you start asking those kinds of questions about a technology, that gets very interesting. You can take something like air conditioning and you can say, wow, air conditioning. Think of the impact that had. It meant that in the evenings people wouldn’t… in warm areas, people don’t go out on their front porch anymore. They close the house up and air condition it, and therefore they have less interaction with their neighbors. And you can take some technology as simple as that and say that had all these ripples throughout the world.

The discovery of the New World effectively ended the Italian Renaissance, because it changed the focus of Europe to a whole different direction. So, when those sorts of things had those kinds of ripples through history, you can only imagine what happens if the machine can think; that’s a big deal. Twenty-five years ago, we made the first widely used browser, the Mosaic browser, and if you had an enormous amount of foresight and somebody said to you, ‘In 25 years, 2 billion people are going to be using this,’ what do you think’s going to happen?

If you had an enormous amount of foresight, you might’ve said, well, the Yellow Pages are going to have it rough and the newspapers are, and travel agents are, and stock brokers are going to have a hard time, and you would have been right about everything, but nobody would have guessed there would be Google, or eBay, or Etsy, or Airbnb, or Amazon, or $25 trillion worth of a million new companies.  And all that was, was computers being able to talk to each other. Imagine if they could think. That is a big question.

You’re right, and I think that there is… I was joking and I said ‘Tinder’ in the background just because that’s a social transformation. Not even a utility, but rather the social expectation of where certain things happen that was brought about by it. So, you’re right… and we’re going to get into some of those [new platforms] as we review your book. In order to do that, let’s go through the table of contents. So, for those of you that don’t have the book yet, because hopefully you will after this chat, the book is broken up into five parts, and in some ways these parts are arguably chronological in their stage of development.

The first one I would label as the historical, and it’s broken out into the four ages that we’ve had as humans: the first age being language and fire, the second one being agriculture and cities, the third one being writing and wheels, and the fourth one being the one that we’re currently in, which is robots and AI. And we’re left with three questions, which are: what is the composition of the universe, what are we, and what is your ‘self’? And those are big, deep philosophical ones that will manifest themselves in the book a little bit later as we get into consciousness.

Part two of the book is about narrow AI and robots. Arguably I would say this is where we are today, and Seedcamp as an investor in AI companies has broadly invested in narrow AI through different companies. And this is, I think, the cutting edge of AI as far as we understand it. Part three in the book covers artificial general intelligence, which is everything we’ve always wanted to see and which science fiction represents quite well, everything from the movie AI, with the little robot boy, to Bicentennial Man with Robin Williams, and the ethical implications of that.

Then part four of the book is computer consciousness, which is a huge debate, because as Byron articulates in the book, there’s a whole debate on what consciousness is, and there’s a distinction between a monist and a dualist and how they experience consciousness and how they define it. And hopefully Byron will walk us through that in more detail. And lastly, ‘the road from here’ is the future, as far as we can see it. I mean, parts three, four and five are all futurist portions of the book, but this one is where I think, Byron, you go to the nth degree possible, with a few exceptions. So maybe we can kick off with your commentary on why you have broken up the book into these five parts.

Well you’re right that they’re chronological, and you may have noticed each one opens with what you could call a parable, and the parables themselves are chronological as well. The first one is about Prometheus and it’s about technology, and about how the technology changed and all the rest. And like you said, that’s where you want to kind of lay the groundwork of the last 100,000 years and that’s why it’s named something like ‘the road to here,’ it’s like how we got to where we are today.

And then I think there are three big questions, and everywhere I go I hear one variant of them or another. The first one is around narrow AI, and like you said, it’s a real technology that’s going to impact us: what’s it going to do with jobs, what’s this going to do in warfare, what will it do with income? All of these things we are certainly going to deal with. And then we’re unfortunate with the term ‘artificial intelligence,’ because it can mean many different things. It can be narrow AI, a Nest thermostat that can adjust the temperature, but it can also be Commander Data of Star Trek. It can be C-3PO out of Star Wars. It can be something as versatile as a human, and unfortunately those two things share the same name, but they’re different technologies, so it has to kind of be drawn out on its own, to say, “Is this very different thing that shares the same name likely? Possible? What are its implications?” and whatnot.

Interestingly, the people who believe we’re going to build [an AGI] vary immensely on when: some say as soon as five years, and some say as far away as five hundred. And it’s very telling that these people have such wide viewpoints on when we’ll get it. And then for people who believe we’re going to build one, the question becomes, ‘Well, is it alive? Can it feel pain? Does it experience the world? And therefore, on that basis, does it have rights?’ And if it does, does that mean you can no longer order it to plunge your toilet when it gets stopped up, because then all you’ve made is a sentient being that you can control, and is that possible?

And why is it that we don’t even know this? The only real thing any of us knows is our own consciousness, and we don’t even know where that comes about. And then finally, since the book starts 100,000 years ago, I wanted to look 100,000 years out or something like that. I wanted to start thinking about, no matter how these other issues shake out, what is the long trajectory of the human race? How did we get here and what does that tell us about where we’re going? Is human history a story of things getting better or things getting worse, and how do they get better or worse, and all of the rest. So that was a structure that I made for the book before I wrote a single word.

Yeah, and it makes sense. Maybe for the sake of not stealing the thunder of those that want to read it, we’ll skip a few of those, but before we go straight into questions about the book itself, maybe you can explain who you want this book to be read by. Who is the customer?

There are two customers for the book. The first is people who are in the orbit of technology one way or the other, like it’s their job, or their day to day, and these questions are things they deal with and think about constantly. The value of the book, the value prop of the book is that it never actually tells you what I think on any of these issues. Now, let me clarify that ever so slightly because the book isn’t just another guy with another opinion telling you what I think is going to happen. That isn’t what I was writing it for at all.

What I was really intrigued by is how people have so many different views on what’s going to happen. Like with the jobs question, which I’m sure we’ll come to. Are we going to have universal unemployment or are we going to have too few humans? These are very different outcomes, all predicted by very technically minded, informed people. So, what I’ve written, or tried to write, is a guidebook that says: I will help you get to the bottom of all the assumptions underlying these opinions, and do so in a way that you can take your own values, your own beliefs, and project them onto these issues and have a lot of clarity. So, it’s a book about how to get organized and understand why the debate exists about these things.

And then the second group are people who, they just see headlines every now and then where Elon Musk says, “Hey, I hope we’re not just the boot loaders for the AI, but it seems to be the case,” or “There’s very little chance we’re going to survive this.” And Stephen Hawking would say, “This may be the last invention we’re permitted to make.” Bill Gates says he’s worried about AI as well. And the people who see these headlines, they’re bound to think, “Wow, if Bill Gates and Elon Musk and Stephen Hawking are worried about this, then I guess I should be worried as well.” Just on the basis of that, there’s a lot of fear and angst about these technologies.

The book actually isn’t about technology. It’s about what you believe and what that means for your beliefs about technology. And so, I think after reading the book, you may still be afraid of AI, you may not, but you will be able to say, ‘I know why Elon Musk, or whoever, thinks what they think. It isn’t that they know something I don’t know; they don’t have some special knowledge I don’t have. It’s that they believe something. They believe something very specific about what people are, what the brain is. They have a certain view of the world as completely mechanistic, and all these other things.’ You may agree with them, you may not, but I tried to get at all of the assumptions that live underneath those headlines you see. And so why would Stephen Hawking say that, why would he? Well, there are certain assumptions that you would have to believe to come to that same conclusion.

Do you believe that’s the main reason that very intelligent people disagree with respect to how optimistic they are about what artificial intelligence will do? You mentioned Elon Musk, who is pretty pessimistic about what AI might do, whereas there are others, like Mark Zuckerberg from Facebook, who are pretty optimistic, comparatively speaking. Do you think it’s this different account of what we are that’s explaining the difference?

Absolutely. The basic rules that govern the universe, what we are, and what our self is: what is that voice you hear in your head?

The three big questions.

Exactly. I think the answers to all these questions boil down to those three questions, which as I pointed out are very old questions. They go back as far as we have writing, and presumably therefore they go back before that, way beyond that.

So we’ll try to answer some of those questions, and maybe I can prod you. I know that you’ve mentioned in the past that you’re not necessarily expressing your specific views, you’re just laying out the groundwork for people to have a debate, but maybe we can tease out some of your opinions.

I make no effort to hide them. I have beliefs about all those questions as well, and I’m happy to share them, but the reason they don’t have a place in the book is: it doesn’t matter whether I think I’m a machine or not. Who cares whether I think I’m a machine? The reader already has an opinion of whether a human being is a machine. The fact that I’m just one more person who says ‘yay’ or ‘nay,’ that doesn’t have any bearing on the book.

True. Although, in all fairness, you are a highly qualified person to give an opinion.

I know, but to your point, if Elon Musk says one thing and Mark Zuckerberg says another, and they’re diametrically opposed, they are both eminently qualified to have an opinion and so these people who are eminently qualified to have opinions have no consensus, and that means something.

That does mean something. So, one thing I would like to comment on about the general spirit of your book is that I generally felt like the book was built from a position of optimism. Even towards the very end of the book, towards the 100,000 years in the future, there was always this underlying tone of: we will be better off because of this entire revolution, no matter how it plays out. And I think that maybe I can tease out of you the fact that you are telegraphing your view on ‘what are we?’ Effectively, are we a benevolent race in a benevolent existence, or are we something that’s more destructive in nature? So, I don’t know if you would agree with that statement about the spirit of the book or whether…

Absolutely. I am unequivocally, undeniably optimistic about the future, for a very simple reason, which is: there was a time in the past, maybe 70,000 years ago, when humans were down to something like maybe a thousand breeding pairs. We were an endangered species, one epidemic or one famine away from total annihilation, and somehow we got past that. And then 10,000 years ago, we got agriculture and we learned to regularly produce food, but it took 90 percent of our people for 10,000 years to make our food.

But then we learned a trick and the trick is technology, because what technology does is it multiplies what you are able to do. And what we saw is that all of a sudden, it didn’t take 90 percent, 80 percent, 70, 60, all the way down, in the West to 2 percent. And furthermore, we learned all of these other tricks we could do with technology. It’s almost magic that what it does is it multiplies human ability. And we know of no upward limit of what technology can do and therefore, there is no end to how it can multiply what we can do.

And so, one has to ask the question, “Are we on balance going to use that for good or ill?” And the answer obviously is for good. I know maybe it doesn’t seem obvious if you caught the news this morning, but the simple fact of the matter is by any standard you choose today, life is better than it was in the past, by that same standard anywhere in the world. And so, we have an unending story of 10,000 years of human progress.

And what has marred humanity for the longest time is the concept of scarcity. There was never enough good stuff for everybody: not enough food, not enough medicine, not enough education, not enough leisure, and technology lets us overcome scarcity. And so, I think you have to keep that at the core: that on balance, there have been more people who wanted to build than destroy. We know that, because we have been building for 10,000 years. That on balance, on net, we use technology for good, always, without fail.

I’d be interested to know the limits to your optimism there. Is your optimism probabilistic? Do you assign, say, a 90 percent chance to the idea that technology and AI will be, on balance, good for humans? Or do you think it’s pretty precarious: maybe a 10 or 20 percent chance that, if we fail to institute the right sort of arrangements, it might be bad? How would you describe your optimism in that sense?

I find it hard to find historic cases where technology came along that magnified what people were able to do and that was bad for us. If in fact artificial intelligence makes everybody effectively smarter, it’s really hard to spin that to a bad thing. If you think that’s a bad thing, then one would advocate that maybe it would be great if tomorrow everybody woke up with 10 fewer IQ points. I can’t construct that in my mind.

And what artificial intelligence is, is it’s a collective memory of the planet. We take data from all these people’s life experiences and we learn from that data, and so to somehow say that’s going to end up badly, is to say ignorance is better than knowledge. It’s to say that, yeah, now that we have a collective memory of the planet, things are going to get worse. If you believe that, then it would be great if everybody forgot everything they know tomorrow. And so, to me, the antithetical position that somehow making everybody smarter, remembering our mistakes better, all of these other things can somehow lead to a bad result…I think is…I shall politely say, unproven in the extreme.

You see, I believe that people are inherently… we have evolved to be, by default, extremely cautious. Somebody said it’s much better to mistake a rock for a bear and run away from it, than it is to mistake a bear for a rock and just stand there. So, we are a skittish people, and our skittishness has served us well. But anytime you’re born with some bias, some cognitive bias, and I think we’re born with one of fear, it does one well to be aware of that and to say, “I know I’m born this way. I know that for 10,000 years things have gotten better, but tomorrow they might just be worse.” We come by that honestly; it served us well in the past, but that doesn’t mean it’s not wrong now.

All right, well if we take that and use that as a sort of a veneer for the rest of the conversation, let’s move into the narrow AI portion of your book. We can go into the whole variance of whether robots are going to take all of our jobs, some of our jobs, or none of our jobs and we can kind of explore that.

I know that you’ve covered that in other interviews, and one of the things that maybe we also should cover is how we train our AI systems in this narrow era. How we can inadvertently create issues for ourselves by having old data sets that represent social norms that have changed, and therefore skew things in the wrong way and inherently create momentum for machines to believe and draw wrong conclusions about us, even though we as humans might recognize that something was contextually relevant at some point but is no longer. Maybe you can just kick off that whole section with commentary on that.

So, that is certainly a real problem. You see, when you take a data set, and let’s say the data is 100 percent accurate, and you come up with some conclusion about it, it takes on a halo of, ‘well, that’s just the facts, that’s just how things are, that’s just the truth.’ And in a sense, it is just the truth, and AI is only going to come to conclusions based on, like you said, the data that it’s trained on. You see, the interesting thing about artificial intelligence is it has a philosophical assumption behind it, which is that the future is like the past, and for many things that is true. A cat tomorrow looks like a cat today, and so you can take a bunch of cats from yesterday, or a week ago, or a month, or a year, and you can train it and it’s going to be correct. A cell phone tomorrow doesn’t look like a cell phone ten years ago, though, and so if you took a bunch of photos of cell phones from 10 years ago and trained an AI, it’s going to be fabulously wrong. And so, you hit the nail on the head.

The onus is on us to make sure that whatever we are teaching it is a truth that will be true tomorrow, and that is a real concern. There is no machine that can ‘sanity check’ that for you, where you tell the machine, “This is the truth, now tell me about tomorrow”; people have to get very good at that. Luckily there’s a lot of awareness around this issue: people who assemble large datasets are aware that data has a ‘best-by’ date that varies widely. For how to play a game of chess, it’s hundreds of years. That hasn’t changed. For what a cell phone looks like, it’s a year. So the trick is just to be very cognizant of the data you’re using.
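To make that ‘best-by date’ point concrete, here is a minimal sketch (my illustration, not from the book; all the feature names and values are invented) of a classifier trained on stale examples confidently getting the present wrong:

```python
# Toy nearest-centroid classifier illustrating the "future is like the
# past" assumption. Features: [has physical keyboard, screen inches].
# The 2008 training data below is invented for illustration.

def centroid(rows):
    """Average each feature across a list of feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def classify(x, centroids):
    """Return the label whose centroid is closest to x."""
    return min(centroids, key=lambda label: dist(x, centroids[label]))

phones_2008 = [[1, 2.0], [1, 2.4], [1, 2.2]]   # keyboard-era handsets
tablets_2008 = [[0, 7.0], [0, 8.0]]

centroids = {"phone": centroid(phones_2008), "tablet": centroid(tablets_2008)}

# A 2018 phone: no keyboard, 6-inch screen. The stale model calls it a tablet.
print(classify([0, 6.0], centroids))  # -> tablet (the world moved on)
```

The model’s arithmetic is flawless; what fails is the assumption that tomorrow’s phones look like 2008’s, which is exactly the ‘best-by date’ problem described above.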

I find the people who are in this industry are very reflective about these kinds of things, and this gives me a lot of encouragement. There have been times in the past where people associated with a new technology had a keen sense that it was something very serious, like the Manhattan Project in the United States in World War II, or the computers that were built in the United Kingdom in that same period. They realized they were doing something of import, and they were very reflective about it, even in that time. And I find that to be the case with people in AI today.

I think that, generally speaking, a lot of the companies that we’ve invested in, in this sector and at this stage of effectively narrow AI, as you said, are going through and thinking through it. But what’s interesting is that I’ve noticed that there is a limit to what we can teach as metadata to data for machine learning algorithms to learn and evolve by themselves. So, the age-old argument is that you can’t build an artificial general intelligence. You have to grow it. You have to nurture it. And it’s done over time. And part of the challenge of nurturing or growing something is knowing what pieces of input to give it.

Now, if you use children as the best approximation of what we do, there are a lot of built-in features, including curiosity and a desire to self-preserve and all these things, that then enable the acquisition of metadata, which then justifies and rewrites existing data as either valid or invalid, to use your cell phone example. How do you see us being able to tackle that when we’re inherently flawed in our ability to add metadata to existing data? Are we effectively never going to make it to artificial general intelligence because of our inability to add that additional color to data, so that it isn’t effectively of very tarnished and limited utility?

Well, yes, it could very easily be the case, and by the way, that’s an extremely minority view among people in AI. I will just say that up front. I’m not representing a majority of people in AI, but I think that could very well be the case. Let me just dive into that a little bit, about how people know what we know. How is it that we are generally intelligent, have general intelligence? If I asked, “Does it hurt your thumb when you hit it with a hammer?” you would say “yes,” and then I would say, “Have you ever done it?” “Yes.” And then I would say, “Well, when?” And you likely can’t remember. And so you’re right, we have data that we somehow take learning from, and we store it, and we don’t know how we store it. There’s no place in your brain which is ‘hitting your thumb with a hammer hurts,’ such that if I somehow could cut that out, you would no longer know that. It doesn’t exist. We don’t know how we do that.

Then we do something really clever. We know how to take data we know in one area and apply it to another area.  I could draw a picture of a completely made up alien that is weird beyond imagination. And I could show that picture to you and then I could give you a bunch of photographs and say find that alien in these. And if the alien is upside down or underwater or covered in peanut butter, or half behind a tree or whatever, you’re like, “There it is. There it is. There it is. There it is.” We don’t know how we do that. So, we don’t know how to make computers do it.

And then if you think about it, if I were to ask you to imagine a trout swimming in a river, and then imagine the same trout in a jar of formaldehyde in a laboratory: “Do they weigh the same?” You would say, “yeah.” “Do they smell the same?” “Uh, no.” “Are they the same color?” “Probably not.” “Are they the same temperature?” “Definitely not.” And even though you have no experience with any of that, you instinctively know how to apply it. These are things that people do very naturally, and we don’t know how to make machines do them.

If you were to think of a question to ask a computer, like, “Dr. Smith is having lunch at his favorite restaurant when he receives a phone call. Looking worried, he runs out the door, neglecting to pay his bill. Are the owners likely to call the police?” A human would say no. Clearly, he’s a doctor. It’s his favorite restaurant, he must eat there a lot, he must’ve gotten an emergency call. He ran out the door forgetting to pay. We’ll just ask him to pay the next night he comes in. The amount of knowledge you had to have just to answer that question is complexity in the extreme.

I can’t even find a chatbot that can answer [the question:] “What’s bigger, a nickel or the sun?” And so, to answer a question that requires this nuance and all of this inference and understanding, I do not believe we know how to build that now. That would be, I believe, a statement within the consensus. I don’t believe we know how to build it, and even if you were to say, “Well, if you had enough data and enough computers, you could figure that out,” it may just literally be impossible to enumerate every instantiation of every possibility. We don’t know how we do it. It’s a great mystery, and it’s even hotly debated [around] even if we knew how we do it, could we build a machine to do it? I don’t even know that that’s the case.

I think that’s part of the thing that baffles me in your book. I’m jumping around a little bit in your book now. You do talk about consciousness, and you talk about sentience and how we know what we know, who we are, what we are. You talk about the dot test on animals and how they identify themselves as themselves. And with any engineering problem, sometimes you can conceive of a solution before the method by which to get there is worked out. You can conceive the idea of flying; you just don’t know what combination of things that you are copying from birds, or copying from leaves, or whatever, will function in getting to that goal: flying.

The problem with this one is that, from an engineering point of view, this idea of having another human or another human-like entity that not only has consciousness, but has free will and sentience as far as we can perceive it, [doesn’t recognize that] there are a lot of things that you described in your chapter on consciousness that we don’t even know how to qualify, which is a huge catalyst in being able to create the metadata that structures data in a way that then gives the illusion and perception of consciousness. Maybe this is where you give me your personal opinion… do you think we’ll ever be able to create an answer to that engineering question, such that technology can be built around it? Because otherwise we might just be stuck on the formulation of the problem.

The logic that says we can build it is very straightforward and seemingly ironclad. The logic goes like this: if we figure out how a neuron works, we can build one, either physically build one or model it in a computer. And if you can model that neuron in a computer, then you learn how it talks to other neurons, and then you model 100 billion of them in the computer, and all of a sudden you have a human mind. So that says: we don’t have to understand it, we just have to understand the physics. The position just says whatever a neuron does, it obeys the laws of physics, and if we can understand how those laws are interacting, then we will be able to build it. Case closed. There’s no question at all that it can be done.

So I would say that’s the majority viewpoint. The other viewpoint says, “Well wait a minute, we have this brain that we don’t understand how it works. And then we have this mind, and a mind is a concept everybody uses and if you want a definition, it’s kind of everything your brain can do that an organ doesn’t seem like it would be able to. You have a sense of humor; your liver may not have a sense of humor.  You have emotions, your stomach may not have emotions, and so forth.” So somehow, we have a mind that we don’t know how it comes about. And then to your point, we are conscious and what that means is we experience the world. I feel warmth, [whereas] a computer measures temperature. Those are very different things and we not only don’t know how it is that we are conscious, we don’t even know how to ask the question in a scientific method, nor what the answer looks like.

And so, I would say my position, to be perfectly clear, is: we have brains we don’t understand, minds we don’t understand, and consciousness we don’t understand. And therefore, I am unconvinced that we can ever build something like this. I see no evidence that we can build it, because the only example that we have is something that we don’t understand. I don’t think you have to appeal to spiritualism or anything like that to come to that conclusion, although many people would disagree with me.
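As an aside, the mechanist ‘just model the physics’ position described above can be made concrete. Here is a toy sketch (my illustration, not anything from the book) of the simplest textbook neuron model, a leaky integrate-and-fire unit; the majority view Byron describes holds that, in principle, nothing more than something like this, scaled to 100 billion units with the right wiring, is what a brain is:

```python
# A crude leaky integrate-and-fire neuron: charge accumulates, leaks,
# and the neuron "fires" when it crosses a threshold, passing charge
# to downstream neurons. All parameter values are arbitrary toy choices.

class Neuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold   # firing threshold
        self.leak = leak             # fraction of charge kept per step
        self.targets = []            # (downstream neuron, weight) pairs

    def connect(self, other, weight):
        self.targets.append((other, weight))

    def step(self, external_input=0.0):
        """Advance one time step; return True if this neuron fires."""
        self.potential = self.potential * self.leak + external_input
        if self.potential >= self.threshold:
            self.potential = 0.0     # reset after the spike
            for neuron, weight in self.targets:
                neuron.potential += weight
            return True
        return False

# Two neurons in a chain: drive A with input; A's spikes excite B.
a, b = Neuron(), Neuron()
a.connect(b, weight=0.8)
for t in range(10):
    print(t, a.step(external_input=0.4), b.step())
```

Whether enough of these, wired together, ever yields a mind rather than just a simulation of spiking is precisely the open question here.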

Yeah, it’s interesting. I think one thing underlying the pessimistic view is this belief that while we may not have the technology now, or an idea of how we’re going to get there, the kind of AI ‘intelligence explosion’ (that’s what I think Nick Bostrom, the philosopher, has called it) may be pretty rapid, in the sense that once there is material success in developing these AI models, that will encourage researchers to pile on and bring in more people to produce those models; and then secondly, there may be advancements in self-improving AI models. So there’s a belief that we may get super intelligence pretty quickly, and that underlies this pessimism and the belief that we have to act now. What would be your thoughts on that?

Oh, well I don’t agree. I think that’s the “Is that a bear or a rock?” kind of thing. The only evidence we really have for that scenario is movies, and they’re very compelling, and I’m not conspiratorial, and they’re entertaining. But what happens is you see that enough, and you do something that has a name; it’s called ‘reasoning from fictional evidence,’ and that’s what we do. Where you say, “Well, that could happen,” and then you see it again, and “yeah, that could happen. That really could again.” Again, and again and again.

To put it in perspective, when I say we don’t understand how the brain works, let me be really clear about that. Your brain has 100 billion neurons, roughly the same number as there are stars in the Milky Way. You might say, “Well, we don’t understand it because there are so many.” This is not true. There’s a worm called the nematode worm. It’s about as long as a hair is thick, and its brain has 302 neurons. These are the most successful creatures on the planet, by the way. Seventy percent of all animals are nematode worms, and 302 neurons. That’s it. [This is about] the number of pieces of cereal in a bowl of cereal. So, for 20 years a group of people in something called the OpenWorm project have been trying to model those 302 neurons in a computer, to get it to display some of the complex behavior that a nematode worm does. And not only have they not done it, there’s even a debate among them whether it is even possible to do that. So that’s the reality of the situation. We haven’t even gotten to the mind.

Again, how is it that we’re creative? And we haven’t even gotten to, how is it that we experience the world? We’re just talking about how a brain works, and if it only has 302 neurons, a bunch of smart people, 20 years working on it, may not even be able to model it. So somehow to spin a narrative that, well, yeah, that all may be true, but what if there was a breakthrough and then it sped up on itself and sped up, and then it got smarter, and then it got so smart it had an IQ of 100, then a thousand, then a million, then 100 million, and then it doesn’t even see us anymore. That’s as speculative as any other kind of scenario you want to come up with. It’s so removed from the facts on the ground that you can’t rebut it, because it is not based on any evidence that you can refute.

You know, the fun thing about chatting with you, Byron, is that the temptation is to sort of jump into all these theories and which ones are your favorites. So because I have the microphone, I will.  Let me just jump into one.  Best science fiction theory that you like. I think we’ve touched on a few of these things, but what is the best unified theory of everything, from science fiction that you feel like, ‘you know what, this might just explain it all’?

Star Trek.

Okay. Which variant of it?  Because there’s not…

Oh, I would take either… I’ll take ‘The Next Generation.’ So, what is that narrative? We use technology to overcome scarcity. We have bumps all along the way. We are insatiably curious, and we go out to explore the stars. As Captain Picard told the guy they thawed out from the 20th Century, the challenge in our time is to better yourself, to discover who you are. And what we found, interestingly, with the Internet, and sure, you can list all the nefarious uses you want. What we found is the minute you make blogs, 100 million people want to tell you what they think. The minute you make YouTube, millions of people want to upload video; the minute you make iTunes, music flourishes.

I think in my father’s generation, they didn’t write anything after they left college. We wake up in the morning, and we write all day long. You send emails constantly. And so what we have found is that it isn’t that there were just a few people, like in the Italian Renaissance, where only a few people wanted to paint or cared to paint. Probably everybody did. Only there wasn’t enough of the good stuff, and so only if you had extreme talent or extreme wealth did you get to paint.

Well, in the future, in the Star Trek variant of it, we’ve eliminated scarcity through technology, and everybody is empowered: every Dante to write their Inferno, every Marie Curie to discover radium, and all of the rest. And so that vision of the future, you know, Gene Roddenberry said in the future there will be no hunger and there will be no greed and all the children will know how to read. That variant of the future is the one that’s most consistent with the past. That’s the one where you can say, “Yeah, somebody in the 1400s looking at our life today, that would look like Star Trek to them. These people push a button and the temperature in the room gets cooler, and they have leisure time. They have hobbies.” That would’ve seemed like science fiction.

I think there’s a couple of things that I want to tackle with the Star Trek analogy to get us sort of warmed up on this and I think Kyran’s waiting here at the top to ask some of them, but I think the most obvious one to ask, if we use that as a parable of the future, is about Lieutenant Commander Data. Lieutenant Commander Data is one of the characters starring in The Next Generation and is the closest attempt to artificial general intelligence, and yet he’s crippled from fully comprehending the human condition because he’s got an emotion chip that has to be turned off because when it’s turned on, he goes nuts; and his brother is also nuts because he was overly emotional.  And then he ends up representing every negative quality of humanity. So to some extent, not only have I just shown off about my knowledge of the Star Trek era…

Lore wasn’t overly emotional. He got the chip that was meant for Data, and it wasn’t designed for him. That was his backstory.

Oh, that’s right. I stand corrected, but maybe you can explore that.  In that future, walk us through why you think Gene had that level of limitation for Data, and whether or not that’s an implication of ultimately the limits of what we can expect from robots.

Well, obviously that story is about… that whole setup is just not hard science, right? That whole setup is, like you said, embodying us, and it’s the Pinocchio story of Data wanting to be a boy and all of the rest. So, it’s just storytelling as far as I’m concerned. You know, it’s convenient that he has a positronic brain, and having removed part of his scalp, you just see all this light coursing through, but that’s not something that science is behind, any more than Warp 10 or the tricorder. You know Uhura in the original series, she had a Bluetooth device in her ear all the time, right?

Yeah, but I guess with the Data metaphor, I guess what I’m asking is: the limitations that prevented Data from being able to do some of the things that humans do, and therefore ultimately come around full circle into being a fully independent, conscious, free-willed, sentient being, were entirely because of some human elements he was lacking. I guess the question and you brought it up in your book is, whether or not we need those human elements to really drive that final conversion of a machine to some sort of entity that we can respect as an equivalent peer to us.

Yeah. Data is a tricky one, because he could not feel pain, so you would say he’s not sentient. And to be clear, ‘sentient’ is often misused to mean ‘smart’; that’s ‘sapient.’ Sentient means you can experience pain. He didn’t, but as you said, at some point in the show he experienced emotional pain through that chip, and therefore he is sentient. They had a whole episode about, “Does Data have a soul?” And you’re right, I think there are things that humans do that are hard to explain, unless you start with the assumption that everything in a human being is mechanistic, is physics, and that you’re a bag of chemicals with electrical impulses going through you.

If you start with that, then everything has to be mechanical, but most people don’t see themselves that way, I have found, and so if there is something else, something emergent or something else that’s going on, then yeah, I believe that has to be wrapped up in our intelligence. That being said, everybody I think has had this experience of when you’re driving along and you kind of space [out] and then you kind of ‘come to’ and you’re like, “Holy cow, I’m three miles along. I don’t remember driving there.” Yet you behaved very intelligently. You navigated traffic and did all of that, but you weren’t really conscious. You weren’t experiencing the world, at least that much. That may be the limit of what we can build: a person during those three minutes when you’re kind of spaced out, because that person also didn’t write a new poem or do anything creative. They just merely mechanically went through the motions of driving. That may be the limit. That may be that last little bit that makes us human.

The Star Trek view has two pieces to it. It has a technological optimism, which I don’t contest. I think I’m aligned with you in agreeing with that. There’s also an economic or a social optimism there, and that’s about how that technology is owned: who owns the means of production, who owns the replicators. When it comes to that, how precarious do you think the Star Trek universe is, in the sense that if the replicators are only in the hands of a certain group of people, if they’re so expensive that only a few people own them, or only a few people own the robots, then it’s no longer such an optimistic scenario that we have. I’d just be interested in hearing your views there.

You’re right, that the replicator is a little bit of a convenient…I don’t want to say it’s a cheat, but it’s a convenient way to get around scarcity and they never really go into, well, how is it that anybody could go to the library and replicate whatever they wanted.  Like how did they get that?  I understand those arguments. We have [a world where] the ability of a person using technology to affect a lot of lives goes up and that’s why we have more billionaires. We have more self-made billionaires now; a higher percentage of billionaires are self-made now than ever before. You know, Google and Facebook together made 12 billionaires. The ability to make a billion dollars gets easier and easier, at least for some people (not me) because technology allows them to multiply and affect more lives and you’re right. So that does tend to make more super, super, super rich people. But, I think the income inequality debate is a little…maybe needs a slight bit of focus.

To my mind it doesn’t matter all that much how many super rich people there are. The question is: how many poor people are there? How many people have a good life? How many people can have medical care? If I could get everybody to that state, but I had to make a bunch of super rich people to do it, it’s like, absolutely, we’ll take that. So I think income inequality by itself is a distraction.

I think the question is how do you raise the lot of everybody else and what we know about technology is that it gets better over time and the prices fall over time. And that goes on ad infinitum. Who could have afforded an iPhone 20 years ago?  Nobody. Who could have afforded the cell phone 30 years ago? Rich people. Who could have afforded any of this stuff all these years ago?  Nobody but the very rich, and yet now because they get rich, all the prices of all that continue to fall and everybody else benefits from it.

I don’t deny there are all kinds of issues. You have your Hepatitis C treatment that costs $100,000, and there are a lot of people who need it and only a few people are going to [get it]. There are all kinds of things like that, but I would just take some degree of comfort that if history has taught us anything, it is that the price of anything related to technology falls over time. You probably have 100 computers in your house. You certainly have dozens of them, and who from 1960 would have ever thought that? Yet here they are. Here we are in that future.

So, I think you almost have to be conspiratorial to say, yeah, we’re going to get these great new technologies, and only a few people are going to control them and they’re just going to use them to increase their wealth ad infinitum. And everybody else is just going to get the short end of the stick. Again, I think that’s playing on fear. I think that’s playing on all of that, because if you just say, “What are the facts on the ground? Are we better off than we were 50 years ago, 100 years ago, 200 years ago?” I think you can only say “yes.”

Those are all very good points and I’m actually tempted to jump around a little bit in your book and maybe revisit a couple of ideas from the narrow AI section, but maybe what we can do is we can merge the question about robot proofing jobs with some of the stuff that you’ve talked about in the last part, which is the road from here.

One of the things that you mentioned before is this general idea that the world is getting better, no matter what. These things that we just discussed about iPhones and computers being more and more accessible are an example of it. You talked about the section on ‘murderous meerkats,’ where, you know, even things like crime are improving over time, and therefore there is no real reason for us to fear the future. But at the same time, I’m curious as to whether or not you think that there is a decline in certain elements of society which we aren’t factoring into the dataset of positivity.

For example, do we feel that there is a decline in the social values that have developed, that in the current era things like helping each other out and looking out for the collective versus the individual have come and gone, and we’re now starting to see the manifestations of that through some of the social media and how it represents itself? I just wanted to get your ideas on the road from here, and whether or not you would revisit them if somebody were to show you some sociologists’ research regarding the decline of social values, and how that might affect the kinds of jobs humans will have in the future versus robots.

So, I’m an optimist about the future. I’m clear about that. Everything is hard. It’s like me talking about my companies. Everything’s a struggle to get from here to there. I’m not going to try to spin every single thing. I think these technologies have real implications for people’s privacy, and they’re going to affect warfare, and there are all these things that are real problems that we’re really going to have to think about. The idea that somehow these technologies make us less empathetic, I don’t agree with. And you can just run through a list of examples: everybody kind of has a cause now. Everybody has some charity or thing that they support. Volunteerism and GoFundMes are up… People can do something as simple as post a problem they have online, and some stranger who will get nothing in return is going to give them a big, long answer.

People toil on a free encyclopedia, and they toil in anonymity. They get no credit whatsoever. We had the ‘open source’ movement. Nobody saw that coming. Nobody said, “Yeah, programmers are going to work really hard and write really good stuff and give it away.” Nobody said we’re going to have Creative Commons, where people are going to create things that are digital and give them away. Nobody said, “Oh yeah, people are going to upload videos on YouTube and just let other people watch them for free.” Everywhere you look, technology empowers us and our benevolence.

To take the other view is like a “Kids these days!” shaking-your-cane, “Get off my grass!” kind of view that things are bad now and getting worse. Which is what people have said for as long as people have been reflecting on their age. And so, I don’t buy any of that. In terms of jobs specifically, I’ve tried hard to figure out what the half-life of a job is. And I think every 40 years, every 50 years, half of all the jobs vanish. Because what does technology do? It makes great new high-paying jobs, like a geneticist, and it destroys low-paying tedious jobs, like an order taker at a fast food restaurant.

And what people sometimes say is, “You really think that order taker is going to become a geneticist? They’re not trained for these new jobs.” And the answer is, “Well, no.” What’ll happen is a college professor will become a geneticist, and a high school biology teacher gets the college job, and the substitute teacher gets hired into the high school job, all the way down. The question isn’t, “Can that person who lost their job to automation get one of these great new jobs?” The question is, “Can everybody on the planet do a job a little harder than the job they have today?” And if the answer to that is yes, then what happens is, every time technology creates great new jobs, everybody down the line gets a promotion. And that is why we have had full employment in the West for 250 years: because unemployment, other than during the Depression, has always been 5 to 10 percent… for 250 years.

Why have we had full employment for 250 years and rising wages? Even when something like the assembly line came out, or when we replaced all the animal power with steam, you never had bumps in unemployment, because people just used those technologies to do more. So yes, in 40 or 50 years, half the jobs are going to be gone; that’s just how the economy works. The good news is, though, when I think back to my K-12 education, I ask: if I had known the whole future, what would I have taken then that would help me today? And I can only think of one thing that I really just missed out on. Can you guess, by the way?

Computer education?

No, because anything they taught me then would no longer be useful. Typing. I should’ve taken typing. Who would have thought that would be the skill I need every day the most? But I didn’t know that. So you have to say, “Wow, everything that I do in my job today is not stuff I learned in school.” What we all do now is: you hear a new term or concept and you google it, and you click on that and you go to Wikipedia and you follow the link, and then it’s 3:00 in the morning, and you wake up the next day knowing something about it. And that’s what every single one of us does, what every single one of us has always done, what every single one of us will continue to do. And that’s how the workforce morphs. It isn’t that we’re facing some kind of cataclysmic disconnect between our education system and our job market. It’s that people are going to learn to do the new things, as they learned to be web designers, and as they learned every other thing that they didn’t learn in school.

Yeah, we’d love to dive into the economic arguments in a second, but just to bring it back to your point that technology is always empowering, I’m going to play devil’s advocate here and mention someone we had on the podcast about a year ago: Tristan Harris, who’s the leader of an initiative called ‘Time Well Spent,’ and his argument was that the effects of technology can be nefarious. Two days ago, there was a New York Times article referring to a research paper, a statistical analysis of anti-refugee violence in Germany, and one of the biggest correlating factors was time spent on social media, suggesting that it isn’t always beneficial or benign for humans. Just to play devil’s advocate here, what is your take on that?

So, is your point that social media causes people to be violent, or is the interpretation that people prone to violence are also prone to using social media?

Maybe one variant of that, and Kyran can provide his own, is that the good is getting better with technology and the bad is getting badder with technology. You just hope that one doesn’t detonate something that is irreversible.

Well, I won’t uniformly defend every application of technology in every single situation. I could rattle off all the nefarious uses of the Internet, right? Bilking people and so on; you know them all, you don’t need me to list them. The question isn’t, “Do any of those things happen?” The question is, “On balance, are more people using the Internet for good than for evil?” And we know the answer is ‘good.’

It has to be, because if we were more evil than good as a species, we never would have survived this long. We’re highly communal. We’ve only survived because we support each other. Granted, there have been wars, problems, social strife, all of that. But in the end you’re left with the question, “How did we make progress to begin with?” And we made progress because there are more people working for progress than there are people carrying torches and doing all the rest. It’s just that simple.

I guess I’m not qualified to make this statement, but I’m going to go ahead and make it anyway. Humans have those attributes because we’re inherently social animals, and as a consequence we’re driven to survive and to forgo being right at times, because we value the social structure, and its success, more than we value ourselves. There will always be deviations from that, but on average it shows itself in just the way you’ve articulated.

That’s a theory I have, and you can tell me whether you accept it, but for the sake of the question let’s just assume it’s correct. How do you impart that onto a collection of artificial intelligences so that they mirror it? And as we start delegating more and more to those collective artificial intelligences, can we rely on them to have that same drive when they’re no longer socially dependent on each other the way humans are for reproduction, defense, and emotional validation?

That could well be the case, yes. We have to make sure we program them to reflect an ethical code, and that’s an inherently hard thing to do, because people aren’t great at articulating ethical codes; even when they do articulate them, they’re full of provisos and exceptions, and everybody’s is different. But luckily there are certain broad concepts that almost everybody agrees with: that life is better than death, that building is better than destroying. There are these very high-level concepts, and we will need to take great pains over how we build them into our AIs. This is an old debate, even within AI.
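To see why even the broad concepts get hard fast, here is a deliberately naive sketch, purely illustrative rather than anything from the book or this conversation: the moment you encode “life is better than death” and “building is better than destroying” as rules, the provisos and exceptions start piling up. The `Action` type and both rules are invented assumptions.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical action an AI might evaluate (illustrative only)."""
    lives_saved: int
    lives_ended: int
    builds: bool
    destroys: bool

def permitted(action: Action) -> bool:
    """Naively encode 'life is better than death' and
    'building is better than destroying'."""
    # Rule 1: life beats death... except what about triage, self-defense?
    if action.lives_ended > 0 and action.lives_saved <= action.lives_ended:
        return False
    # Rule 2: building beats destroying... except demolishing a condemned
    # building is pure 'destruction' and yet perfectly fine.
    if action.destroys and not action.builds:
        return False
    return True

# Triage that saves five at the cost of one passes the naive rules...
print(permitted(Action(lives_saved=5, lives_ended=1, builds=False, destroys=False)))  # True
# ...but tearing down a derelict, dangerous structure is rejected,
# exposing a missing proviso immediately.
print(permitted(Action(lives_saved=0, lives_ended=0, builds=False, destroys=True)))   # False
```

Every comment in the sketch marks an exception a real system would need, which is exactly the point: the rules look universal until you try to write them down.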

There was a man named Joseph Weizenbaum who made a chatbot, ELIZA, in the sixties. It was simple. You would say, “I’m having a bad day today,” and it would say, “Why are you having a bad day?” “I’m having a bad day because of my mother.” “Why are you having a bad day because of your mother?” Back and forth. Super simple. Everybody knew it was a chatbot, and yet he saw people getting emotionally attached to it, and he turned on the whole enterprise.

He said that when the computer says ‘I understand,’ it’s just a lie: there is no ‘I,’ and there is no understanding. And he came to believe we should never let computers do those kinds of things. They should never be the recipients of our emotions; we should never make them caregivers and all these other things, because in the end they don’t have any moral capacity at all. They have no empathy, only faked empathy, simulated empathy. So I think there is something to that: there will simply be jobs we’re not going to want them to do, because in the end those jobs are going to require a person, I think.

You see, any job a computer could do, a robot could do. And if you make a person do a job that a machine could, in theory, do, there’s a word for that: dehumanizing. You’re not using anything about them that makes them a human being; you’re using them as a stand-in for a machine. Those are the jobs machines should do.

But then there are all the other jobs that only people can do, and those are what I think people should do. I think there are going to be a lot of things like that, things we’re going to be uncomfortable with and that we still haven’t figured out. Like: when you’re talking to a chatbot, should you have to be told it’s a chatbot? Should robotic voices on the phone actually sound somewhat robotic, so you know it’s not a person? Think about R2-D2 or C-3PO, and just imagine if their names were Jack and Larry. That’s a subtle difference in how we regard them, and we don’t know yet how we’re going to handle it. But you’re entirely right: machines don’t have any empathy, they can only fake it, and there are real questions about whether that’s good or not.

Well, that’s a great way of looking at it, and one of the things that’s been really great during this chat is understanding the origin of some of these views and how, on average, you end up at a positive outcome at the end of the day. The book does a really good job of leaving the reader with that thought in mind while arming them to have these kinds of engaging conversations. So thanks for sharing the book with us, and thanks for giving your opinion on its different elements.

It would also be great to get some thoughts about things that inspired you or that you left out of the book. For example, which movies have most affected you in the vein of this book? What are your thoughts on a TV show like Westworld and how it illustrates the development of an artificial mind? Maybe just share a little about how your thinking has evolved.

Certainly, and I would also like to add that I do think there’s one way it can all go south. I think there is one pessimistic future, and it will come about if people stop believing in a better tomorrow. I think pessimism is what will get us all killed. The reason optimism has been so successful is that there have been enough people who get up and say, “Somebody needs to invent the blank. Somebody needs to find a cure for this. Somebody needs to do it. I will do it.” You have enough people who believe, in one form or another, in a better tomorrow.

The opposite is the mentality of “don’t polish brass on a sinking ship,” where you just say, “Well, what’s the point? Why bother?” And if enough people say “Why bother?” that better world never gets built. We’re going to have to build it, and just like I said earlier about my companies, it’s going to be hard; everybody’s got to work at it. It’s not a gift, it’s not free. We’ve clawed our way from savagery to civilization, and we’ve got to keep clawing. But the interesting thing is, finally, I think there is enough of the good stuff for everybody. And you’re right, there are big distribution problems, and there are a lot of people who aren’t getting any of the good stuff; those are all real things we’re going to have to deal with.

When it comes to movies and TV, I have to see them all, because everybody asks me about them on shows. And I used to loathe going to all the pessimistic movies, which have far and away dominated the genre. In fact, thinking of Black Mirror, I started writing out story ideas in my head for a show I call ‘White Mirror.’ Who’s telling the stories about how everything can be good in the future? That doesn’t mean they’d be bereft of drama; it just means a different setting in which to explore these issues.

I used to be so annoyed at having to go to all of these movies. I would go see some movie like Elysium and think: yeah, there are the 99 percent, poor and beaten down, covered in dirt. And the 1 percent? I bet they live someplace high up in the sky, pretty and clean. Yep, there it is. And then you see Metropolis, the most expensive movie ever made adjusted for inflation, from almost a century ago. And yeah, there are the 99 percent, dirty, covered in dirt; everybody forgets to bathe in the future. I wonder where the…oh yeah, the one percent, they live in that tower up there, where everything is white and clean. Wow, isn’t that something. And I have to sit through these things.

And then I read a quote by Frank Herbert, who said that sometimes the purpose of science fiction is to keep the future from happening. And I said, okay, these are cautionary tales. These are warnings, and now I view them all like that. So I think there are a lot of cautionary tales out there and very few things that we can…like Star Trek. You heard me answer that so quickly because there aren’t a lot of positive views of the future in science fiction. It just doesn’t seem to be as rich a ground for telling stories, and even in that world you had to have the Ferengi, the Klingons, the Romulans, and so forth.

So I’ve watched them all, and I enjoy Westworld like the next person. But I also realize those are people playing those androids, and nobody can build a machine that does any of that. So it’s fiction. It’s not even speculative in my mind; it’s pure fiction. That’s what they are, and it doesn’t make them any less enjoyable… When I ask people on my AI podcast what science fiction influenced them, almost all of them say Star Trek. That was a show that inspired people, and so I really gravitate toward things that inspire me with a vision of a better tomorrow.

For me, if I had to answer that question, I would say The Matrix. It brings up a lot of philosophical questions, even questions about reality. It’s dystopian in some ways, I guess, but it also illustrates how we got there and how we can get out of it. And it has a utopian conclusion of sorts, because it ultimately takes the form of liberation. But it is an interesting point you make.

It actually makes me reflect on all the movies I’ve seen, and it brings up another question: whether this is just representative of the times. If you look at art and literature over the years, in many ways they’re inspired by what’s going on in that era. You can see bouts of optimism after the resolution of some conflict, and then the brewing of social upheaval, which ends in some sort of conflict; you see that across the decades, and it is interesting. And I guess that raises a moral responsibility for us not to generate the most intense set of innovations around artificial intelligence at a point where society is quite split. We might inject unfortunate conclusions into AI systems just because of where we are in our geopolitical evolution.

Yeah. I call my airline of choice once a week to do something, and it asks me to state my member number, which unfortunately has an A, an H, and an 8 in it. It never gets it right. That’s what people are actually trying to do with AI today: make a lot of really tedious stuff less tedious. (And use caller ID, by the way. I always call from the same number. But that’s a different subject.)

Most of the problems we try to solve with it are relatively mundane, and many of them, like how do we stop disease, and how do we… all of these, are very worthwhile things. It’s not a scary technology. It’s: study the past, look for patterns in the data, project into the future. That’s it. Anything around it that tries to make it terrifying is, I think, sensationalism. I think the responsibility is to tell the story of AI like that, without the fear, emphasizing all the good that can come out of this technology.
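If it helps to make “study the past, look for patterns, project into the future” concrete, here is a minimal sketch of that loop; the monthly sales numbers and the choice of a simple linear trend are invented for illustration, not anything discussed here.

```python
import numpy as np

# "Study the past": twelve months of hypothetical sales figures.
months = np.arange(12)
rng = np.random.default_rng(seed=0)
sales = 100 + 5 * months + rng.normal(0, 3, size=12)

# "Look for patterns in the data": fit a simple linear trend.
slope, intercept = np.polyfit(months, sales, deg=1)

# "Project into the future": extrapolate the learned pattern forward.
future_months = np.arange(12, 18)
forecast = slope * future_months + intercept
print(forecast)  # six months of projected sales
```

In this framing, most applied machine learning is an elaboration of those same three steps, just with more data and richer pattern-finders.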

What do you think we’ll look back on 50 years from now and think, “Wow, why were we doing that? How did we get away with that?” the way we look back today on slavery and think, “Why the hell did that happen?”

Well, I will give an answer to that, and it’s not my own personal axe to grind. To be clear, I live in Austin, Texas; we have barbecue joints here in abundance. But I believe that we will learn to grow meat in a laboratory, and it will not only be massively better environmentally, it will taste better and be cheaper and healthier and everything. So I think we’re going to grow all of our meat, and maybe even all of our vegetables, by the way. Why do you need sunlight and rain and all of that? But put that aside for a minute. I think we’re going to grow all of our meat in the future, and I don’t know, if you grow it from a cell, whether it’s still vegan to eat it. Maybe it is, strictly speaking, I don’t know. But I think once the best steak you’ve ever had in your life costs 99 cents, everybody’s just going to have that.

And then we’ll look back on how we treated animals with a sense of collective shame, because the question is, “Can they feel?” In the United States, up until the mid-90s, veterinarians were taught that animals couldn’t feel pain, and so they didn’t anesthetize them. Doctors in the same era also operated on babies without anesthesia, on the belief that they couldn’t feel pain. Now I think people care whether the chicken they’re eating was raised humanely. So that empathy is expanding to animals: most people now believe they do feel pain, that they experience sadness or something that must feel like it, and yet we essentially keep them in abhorrent conditions.

And again, I’m not grinding my own axe here. I don’t think it’s going to come about through people changing overnight. I think what’s going to happen is that there’ll be an alternative, the alternative will be so much better that everybody will use it, and then we’ll look back and think: how in the world did we ever do that?

No, I agree with that. As a matter of fact, we’ve invested in a company that’s trying to solve that problem. I can’t name them in the show notes just yet because they’re in stealth right now, but by the time this interview goes to print, hopefully we’ll be able to talk about them. But yes, I agree with you entirely, and we’ve put our money behind it, so I’m looking forward to that being one of the problems that gets solved. Now another question: what’s something that you used to strongly believe that you now think you were fundamentally misguided about?

Oh, that happens all the time. I didn’t set out by saying, “I will write a book that doesn’t really say what I think; it’ll just be this framework.” I wrote the book to try to figure out what I think, because I kept hearing all of these proclamations about these technologies and what they could do. I think I used to be much more in the AGI camp, believing this is something we’re going to build and we’re going to have those things, like on Westworld (this was before Westworld, though). I was much more in that camp until I wrote the book, which changed me. I can’t say I disbelieve it, that would be the wrong way to put it, but I see no evidence for it. I used to buy that narrative a lot more, and I didn’t realize it was less a technological opinion and more a metaphysical one. Working through all of that, understanding all the biases and all the debate, is very humbling, because these are big issues, and what I wanted to do, like I said, is make a book that helps other people work through them.

Well, it is a great book. I’ve really enjoyed reading it. Thank you very much for writing it, and congratulations! This is also the longest podcast we’ve ever recorded, but it’s a subject that is very dear to me, and one that is endlessly fascinating. We could continue on, but we’re going to be respectful of your time, so thank you for joining us and for your thoughts.

Well, thank you. Anytime you want me back, I would love to continue the conversation.

Well, until next time, guys. Bye. Thanks for listening. If you enjoyed the podcast, don’t forget to subscribe on iTunes and SoundCloud, and leave us a review with your thoughts on our show.
