In an age when you can speak to your smartphone, tell your TV to record a program and run daily tasks through "digital personal assistants" like Amazon's Echo, we're hearing a lot about artificial intelligence.
To hear futurologists tell it, we'll see AI that can pass the Turing test within a few years. Apps like Waze can guide us through traffic in real time, shaving minutes or hours off trips short and long. And now ride-hailing service Uber has deployed a fleet of self-driving cars in Pittsburgh, where its engineers are testing the technology in preparation for a wider rollout.
But to bring all the hype back down to Earth, it's worth heeding a warning from Michio Kaku, the theoretical physicist, futurist and science educator.
"Our most advanced robots have the intelligence of a cockroach," Kaku said. "A retarded, lobotomized cockroach."
As companies like Google and Uber champion research into self-driving cars, it's crucial to draw a distinction: the AI we have today, integrated into our phones, cars and apps, is not really artificial intelligence.
It is, as Kaku explains, akin to a sophisticated tape recorder. These "artificial intelligences" have been preprogrammed to accept certain input and respond with appropriate output. Often, the result is devilishly clever -- some machines can almost fool people into thinking they're holding a conversation with a conscious entity.
But it's all smoke and mirrors.
When you tell Siri to Google the phone number of your local pizza place, Siri doesn't understand what you're asking her. She doesn't understand what she's saying in response to you. She doesn't know what a pizza is. She doesn't know who you are, even though she speaks your name, and she doesn't know who she is.
Siri is just a program that mimics human language and can perform a limited number of actions within set constraints. "She" may sound like she understands what you're saying, but really all she's doing is accepting input, running it through a series of "if, then" algorithms, and spitting out an appropriate preprogrammed response.
This is crucial to understand, because no one, not even the most brilliant scientists in the most advanced robotics labs, has been able to create a machine that can think or understand objects and ideas in the real world. Our most advanced machines "recognize" objects as abstract collections of ones and zeros thanks to meticulous code. They cannot learn or innovate.
This is why every self-driving car program is built on a foundation of road surveys and maps, as well as programmed routes and algorithms dictating responses to an almost limitless number of scenarios.
It's also why self-driving cars have difficulty navigating roads in rain and snow, reacting to the more "human" actions of other drivers, and improvising solutions in ambiguous situations.
"You don’t notice how many unexpected incidents occur during a routine drive until you ask a robot to take the wheel," Tech Crunch's Signe Brewster writes in a first-person account as a passenger in one of Uber's Pittsburgh test vehicles.
Brewster -- and other passengers who were given rides in Uber's test fleet -- had the safety net of two human engineers sitting in the driver and passenger seats. The passenger engineer was there to take notes, but the engineer in the driver's seat was there to take over when faced with situations the self-driving car couldn't handle, such as negotiating around other vehicles or obstacles in the road.
"The engineer in the driver’s seat spent the entire ride watching the road," Brewster wrote. "He hovered his hands over the wheel and foot over the pedal. Whenever a stopped vehicle blocked an entire lane, he toggled back into manual mode to switch lanes and drive around -- an action Uber’s self driving cars will not yet take."
Brewster wrote that she "came away from my ride trusting the technology," and said that after the initial butterflies and trepidation about trusting her life to a computer, the ride didn't feel much different from one with a human driver at the wheel.
But she also noted machines can't read the social cues humans use to interact when they're driving -- eye contact, waving another driver or pedestrian forward, easing back on the accelerator when we realize another driver isn't waiting their turn at a stop sign.
Autonomous car sensors are thrown off by inclement weather, a New York Times story notes, and in some cases they can lose track of lines and dividers. They have trouble telling puddles from potholes and they don't do well on routes that haven't been thoroughly mapped.
And then there are ethical issues and privacy concerns -- for autonomous cars to truly replace humans, they'll need to be networked with the ability to communicate with every other car on the road, which means some server somewhere will know where a person is going and everywhere that person has been.
That doesn't mean innovators should give up on the technology. On the contrary, it has the potential to make roads much safer by reducing human error, and could be a massive boon to the economy if autonomous cars become the norm and machines work to prevent traffic congestion. Pilot programs, like Uber's efforts in Pittsburgh, are necessary steps along the way, and the information they yield will inform the next generation of improved self-driving cars.
But for now, despite all the promise the future holds, autonomous cars just aren't there yet.