The Industrial Revolution replaced human labor with machines that could perform tasks more efficiently, increasing manufacturing output and ultimately forming the basis of the modern world. In today's digital revolution, developers are creating machines that can outperform humans at certain cognitive tasks and exert significant control over various aspects of human life. Some developers are beginning to worry about what this could mean for the future.
The impact of constant exposure to digital technology and the computer-driven consumption of media is already beginning to change the way humans function and obtain information. According to The Guardian, one study by an app-based data sourcing company found that average users touch their phones 2,617 times per day.
Such heavy usage has led to a condition dubbed continuous partial attention -- the incapacity to fully detach from digital devices, even when they're turned off.
"Everyone is distracted," ex-Google executive Justin Rosenstein told The Guardian. "All of the time."
A 2016 study reported by The Washington Post found that married couples who engaged in "phubbing" -- the practice of snubbing one's partner in favor of looking at a phone -- had lower levels of marital satisfaction than those who did not. The results were corroborated by a follow-up study in China.
The irony of humans' attachment to digital technology is that people often use it as a means of connecting with the world. Social media sites help people communicate with others, share what they like and catch up on the news of the day.
The increasing use of these platforms creates both economic revenue and a new type of capital: data. Much like money in the industrial era could be used to invest in new infrastructure and industry, data can be used by companies and governments to gather information about citizens' personal lives.
A 2016 study caused controversy after it used public data about people's addresses and movements in an attempt to identify the elusive artist Banksy, Nature reports. In the U.S., public data -- which includes information taken from social media and cell phones -- can be used without the approval of an institutional review board because the information is not considered private or identifiable, meaning such a study does not count as human-subjects research. Researchers are encouraged, but not required, to seek consent from individuals covered in public-data studies.
Rudimentary artificial intelligence units can learn human nature based on gathering data from internet activity, but some such attempts have spectacularly backfired. In 2016, Microsoft's AI chatbot Tay was corrupted by Twitter users into spewing anti-Semitic, sexist and other inflammatory remarks within a single day of being online, The Verge reports. In August 2017, a similar chatbot in China was taken offline for what were considered unpatriotic comments.
One of the biggest controversies in tech is the spread of fake news stories through platforms like Facebook and Google.
Following the Las Vegas mass shooting on Oct. 1, Google's algorithms mistakenly promoted a fake news story from the notorious website 4chan, serving it up to internet users who were searching for the identity of the shooter. The Atlantic reports that stories from non-journalistic sites can now appear in the search engine's "In the News" section due to a 2014 policy change.
On Facebook, a group titled "Las Vegas Shooting/Massacre" was maintained by a man who had previously been arrested for identity theft. The man falsely claimed to be an investigative journalist at Infowars and a male model. To Facebook's algorithms, however, the group was as legitimate as any other.
TechCrunch reports that AI research company DeepMind, which is owned by Google, announced in October that it had created a unit specifically dedicated to ethics research. The effectiveness of its work, and whether other companies follow suit, remains to be seen.
Sources: The Guardian, The Washington Post, Nature, The Verge (2), The Atlantic, TechCrunch / Featured Image: Pixabay / Embedded Images: Hindustanilanguage/Wikimedia Commons, Noah Loverbear/Wikimedia Commons