We’ve spent the last few weeks exploring some of the foundational technologies that keep the internet up and running. However, now it’s time to dig deeper into meatier topics. This week, our tech history series continues with artificial intelligence.
Everyone have a seat and open your textbooks to chapter 7; tech history class is in session!
Polanyi’s paradox and Moravec’s paradox
People had been trying to create artificial life since long before computers entered the scene. Early automata sang and moved, powered by steam and water, as early as the 9th century BCE in China. However, these machines were hardly independent thinkers. They could neither reason nor move past their rudimentary “programming”.
As time went on and processors improved, science fiction moved into the realm of reality. With all these processors at our disposal, we could finally create artificial life!
Unfortunately, our dreams were far higher than what was available at the time. Early attempts at artificial intelligence inevitably ran into Polanyi’s paradox, which boils down to the fact that we know more than we can tell.
In other words, it’s far harder to precisely explain how to achieve a task than to simply do it ourselves. Automation does well for strictly repeatable tasks in clearly defined areas. Robots might be useful on the assembly line, but no one is really asking them to manage more difficult tasks.

XKCD #1425. In the 60s, Marvin Minsky assigned a couple of undergrads to spend the summer programming a computer to use a camera to identify objects in a scene. He figured they’d have the problem solved by the end of the summer. Half a century later, we’re still working on it.
It sounds flippant, but this is actually a crucial point in developing any kind of AI. Think about it: we can teach the average 16-year-old to drive in less than six months. Meanwhile, thousands of scientists have spent decades and millions, if not billions, of dollars trying to create a self-driving car, with mixed results.
As Moravec pointed out in the 1980s,
“It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”
Despite the wide range of applications for computers, it turns out they can be really bad at some things. AI trips over this low bar all the time.
SEE ALSO: Artificial intelligence is rewriting the book on IT operations
How do you solve a problem like AI?
Preferably without running into these kinds of problems? After all, early attempts at artificial intelligence couldn’t create enough directives to truly guide an AI through everyday problems. (Sorry, Asimov!)

GIPHY. Someone didn’t pay attention to the Three Laws.
It turned out that all that amazing processing power just wasn’t enough to deliver. AI research runs into the limits of its era’s computing power on a fairly regular basis.
The worst AI winter took place in the 1990s, as hype and inflationary promises ran hard into the wall that was early-90s memory limits. Investors and consumers alike were burned by products that failed to live up to their excited promises. Funding and interest dropped. Over time, different names emerged, like informatics, cognitive systems, and even machine learning (which we’ll cover in next week’s lesson).
Currently, the dominant field of research within AI is machine learning, particularly deep learning. These models represent a paradigm shift in how artificial intelligence is approached. Instead of handing down rules from on high, Ten Commandments style, computer scientists give the burgeoning neural nets large datasets to evaluate.
By sifting through the information on its own, the new AI is able to make its own value judgments. Unfortunately, those value judgments are really, really weird.
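To make the rules-versus-data contrast concrete, here is a minimal sketch of one of the oldest learn-from-data models, the perceptron. Nobody hand-codes a “spam if it contains the word ‘prize’” rule; the model finds the pattern itself from labeled examples. The toy dataset, feature encoding, and `train_perceptron` helper are all illustrative inventions for this sketch, not any real system’s API.

```python
# Instead of writing an explicit rule, we learn weights from labeled examples.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights and a bias from (features, label) pairs."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            # Predict 1 if the weighted sum crosses the threshold.
            pred = 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0
            # Nudge the weights toward the correct answer on a mistake.
            error = label - pred
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(model, features):
    weights, bias = model
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Toy data: features = [contains "prize", contains "meeting"], label = spam?
data = [([1, 0], 1), ([1, 1], 1), ([0, 1], 0), ([0, 0], 0)]
model = train_perceptron(data)
print([predict(model, f) for f, _ in data])  # → [1, 1, 0, 0]
```

This toy model converges because the toy data is simple; the “really, really weird” value judgments show up when the data is messy and the learned weights encode patterns no human would have chosen.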
SEE ALSO: An interdisciplinary approach to artificial intelligence testing
I for one welcome our new robotic overlords
Okay, so AI doesn’t automatically mean that we’re going to face some kind of robotic uprising. (Although, we need to point out that our cultural narratives about artificial intelligence have some weird undertones.)
More specifically, we aren’t likely to see the kind of sci-fi future any time soon. In fact, some in the field are predicting that another AI winter is coming soon. Hyperbolic pronouncements from big names like Elon Musk and Andrew Ng harm the field in the long run.

GIPHY. Boston Dynamics is probably our best bet for a real life Cyberdyne. Also, who’s a good boy?!
This summer, Google Duplex wowed the entire tech industry with the assistant’s lifelike ability to make appointments and schedule a haircut. In particular, we were impressed and terrified by the very human “um” and “mm-hmm” of the Google Assistant.
However, as critics like Gary Marcus and Ernest Davis pointed out, it also showed the deep boundaries of the field:
“The limitations of Google Duplex are not just a result of its being announced prematurely and with too much fanfare; they are also a vivid reminder that genuine A.I. is far beyond the field’s current capabilities, even at a company with perhaps the largest collection of A.I. researchers in the world, vast amounts of computing power and enormous quantities of data.”
A real intelligence, it ain’t.
This is the real kicker for artificial intelligence. Right now, we can dream more than we can develop. Until we figure out a way around these limitations, artificial intelligence will stay within well-policed, closed domains.
So, in conclusion, you’re not getting a JARVIS, but you’re also not getting a SKYNET. Swings and roundabouts, mate.
Miss a week of class? We’ve got your make-up work right here. Check out other chapters in our Know Your History series!
The post Know your history — Artificial intelligence appeared first on JAXenter.