In 1933, the inventor and futurist Nikola Tesla told a newspaper that he was working on mind-reading technology.
"I expect to photograph thoughts," Tesla told the now-defunct Kansas City Journal-Post.
At the time, the idea seemed like pure fantasy. In 2016, most people probably still feel that way. But as Vox's Brian Resnick writes, scientists are making impressive strides in the realm of mind-reading, to the point where machines can now reconstruct crude images by "reading" the minds of test subjects.
“Some people use different definitions of mind reading, but certainly, that’s getting close,” the University of Oregon's Brice Kuhl told Vox.
Kuhl's method involves putting test subjects in a functional MRI machine and using a primitive artificial intelligence to read blood flow in the brain. The AI also draws on a database that assigns mathematical values to differences in facial features.
Over time, the AI learns to match blood flow -- a proxy for brain activity -- with the facial features the subjects are looking at inside the scanner, spitting out rough images of what they are seeing.
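The core idea -- learning a mapping from patterns of brain activity to numerical facial-feature values -- can be illustrated with a toy sketch. This is not the research team's actual pipeline; the data here is simulated, and the sizes and the ridge-regression approach are illustrative assumptions.

```python
import numpy as np

# Toy illustration (simulated data, hypothetical sizes -- not the study's code):
# learn a linear map from "voxel activity" patterns to facial-feature values,
# then decode the features for an unseen activity pattern.

rng = np.random.default_rng(0)
n_trials, n_voxels, n_features = 200, 50, 4  # hypothetical dimensions

# Ground-truth mapping from features to voxel responses (unknown to the decoder)
true_map = rng.normal(size=(n_features, n_voxels))

# Simulated training data: feature values (e.g. nose shape, brow angle)
# and the noisy voxel activity they evoke
features = rng.normal(size=(n_trials, n_features))
voxels = features @ true_map + 0.1 * rng.normal(size=(n_trials, n_voxels))

# "Training": ridge regression from voxel activity back to feature values
lam = 1.0
W = np.linalg.solve(voxels.T @ voxels + lam * np.eye(n_voxels), voxels.T @ features)

# "Decoding" a new, unseen trial
new_features = rng.normal(size=(1, n_features))
new_voxels = new_features @ true_map
decoded = new_voxels @ W
print(np.round(decoded, 2), np.round(new_features, 2))
```

With more training trials, the estimated map gets closer to the true one -- the same intuition behind Kuhl's point below that more sessions improve image fidelity.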
The resulting images have a dream-like quality, with blurred edges and shifts in skin tone. They look like the kind of imprecise rendering a person might conjure in their own mind. But the machine has a knack for getting facial expressions right, and other features -- an upturned nose, an arched eyebrow -- seem to translate faithfully from mind to machine.
Kuhl told Resnick that the image fidelity can be improved with more sessions, as the AI collects more data and gains more experience matching brain activity to the details of images seen by test subjects.
“I don’t want to put a cap on it,” Kuhl said. “We can do better.”
He also made it clear that his machines can't read people's minds without their cooperation.
“You need someone to play ball,” Kuhl said. “You can’t extract someone’s memory if they are not remembering it, and people most of the time are in control of their memories.”
Kuhl's team isn't the only one looking at ways to decode brain signals. At the University of Washington, researchers took a different approach -- they studied patients who already had electrodes implanted in their heads for an unrelated epilepsy study.
The electrodes allowed the scientists to analyze brain signals by sampling and digitizing them, according to LiveScience. Given enough data, the researchers' specialized software learned that certain brain signals corresponded to certain images the test subjects were seeing. The images in that study were simple and distinct, including photographs of people and houses.
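The learning step described above amounts to associating signal patterns with image categories. A minimal, entirely simulated sketch of that idea, using a simple nearest-centroid rule (an illustrative stand-in, not the study's actual method):

```python
import numpy as np

# Toy illustration (simulated signals -- not the study's code): classify
# digitized "brain signal" epochs as face trials or house trials by
# comparing each epoch to the average signal learned for each category.

rng = np.random.default_rng(1)
n_samples = 64  # samples per digitized epoch (hypothetical)

# Simulate distinct average responses for the two image categories
t = np.linspace(0, 4 * np.pi, n_samples)
face_template, house_template = np.sin(t), np.cos(t)

def make_trials(template, n):
    # Each trial is the category's response plus random noise
    return template + 0.3 * rng.normal(size=(n, n_samples))

train_face = make_trials(face_template, 40)
train_house = make_trials(house_template, 40)

# "Learning": store the mean signal per category
centroids = {"face": train_face.mean(axis=0), "house": train_house.mean(axis=0)}

def classify(signal):
    # Pick the category whose learned mean signal is closest to this trial
    return min(centroids, key=lambda label: np.linalg.norm(signal - centroids[label]))

print(classify(make_trials(face_template, 1)[0]))
print(classify(make_trials(house_template, 1)[0]))
```

Because the two categories evoke distinct average signals, even this crude rule separates them -- one reason the study used simple, visually distinct images like faces and houses.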
That research could help scientists unlock differences in how people perceive things, and could one day help people with locked-in syndrome communicate with the outside world, according to LiveScience.
"Clinically, you could think of our result as a proof of concept toward building a communication mechanism for patients who are paralyzed or have had a stroke and are completely locked in," said Rajesh Rao, a neuroscientist at the University of Washington.