Pantheism News

Is Artificial Intelligence (AI) a Threat to Humanity?

Zuckerberg recently called Musk's warnings about AI “irresponsible.” Musk responded that Mark’s “understanding of the subject is limited.” Are we in danger?
By Charles Beebe

This past week, Mark Zuckerberg said warnings about artificial intelligence, such as those made recently by Elon Musk, were “pretty irresponsible.” In response, Musk tweeted that Zuckerberg’s “understanding of the subject is limited.” The debate raises the question: are we really in danger of being eclipsed by AI machines?

Stephen Hawking has called Artificial Intelligence (AI) “the best, or the worst thing, to ever happen to humanity.” AI could potentially help humanity eradicate disease and poverty and repair the damage industrialization has done to the natural world. On the other hand, Hawking famously told the BBC in 2014 that “artificial intelligence could spell the end of the human race.” So which is it? To get at the answer, we first have to define what makes us unique. By identifying the things we do better than other species, we can better understand how another intelligence might surpass us.

The most common reason cited for our evolutionary ascendancy is our ability to imagine alternative futures and make deliberate choices accordingly. This foresight distinguishes us from other animals. Indeed, the three primary cognitive processes of ideation (imagination, creativity and innovation) are frequently held up as the core of our uniqueness. But what if computers begin to show these characteristics? And if these cognitive abilities can be superseded, are we destined to become future pets, or worse, to be eliminated as pests? While such scenarios sound fantastical to many, we may find out whether they are plausible sooner rather than later. Some futurists, including Google’s resident seer, Ray Kurzweil, predict that the moment when AI eclipses our own abilities is near, potentially around 2045.

Paul Allen, the co-founder of Microsoft, notably disputed Kurzweil’s timeline for this moment of supersession, known in futurist circles as “The Singularity.” In his 2011 MIT Technology Review article, Allen argues that smart machines have a much longer way to go before they outrun human capabilities. As Allen says, “Building the complex software that would allow The Singularity to happen requires us to first have a detailed scientific understanding of how the human brain works that we can use as an architectural guide, or else create it all de novo. This means not just knowing the physical structure of the brain, but also how the brain reacts and changes, and how billions of parallel neuron interactions can result in human consciousness and original thought.” Acquiring this knowledge, he goes on to say, is not impossible, but it runs into what he calls the “complexity brake”: understanding the detailed mechanisms of human cognition means understanding natural systems, which typically demand more and more specialized knowledge to characterize, forcing researchers to expand their theories in ever more complex ways. It is this complexity brake, and the arrival of powerful new theories, rather than the Law of Accelerating Returns that Kurzweil cites as the basis for his 2045 prediction, that will govern the pace of scientific progress required to reach The Singularity.

The “complexity brake” notwithstanding, computers have made startling advances in the six years since Allen’s article. One area that until very recently was deemed a “safe haven” from AI intrusion is human creativity. In a fascinating June 2017 study conducted by Rutgers University’s Art and Artificial Intelligence Laboratory, 18 art judges at the prestigious Art Basel in Switzerland ended up preferring machine-created artworks to those made by humans. Although they had no idea which pieces were machine-made, the judges deemed the AI-generated pieces “more communicative and inspiring” than those made by humans. Some of the judges even thought that the majority of works at Art Basel had been generated by the programmed system. This represents a significant achievement for AI. The Rutgers computer scientists had previously developed algorithms to study artistic influence and to measure creativity in art history, but with this project the lab’s team used those algorithms to generate entirely new artworks with a computational system that plays the roles of both artist and critic, attempting to demonstrate creativity without any need for a human mind.
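To make the artist-and-critic idea concrete, the sketch below is a minimal, hypothetical illustration of the adversarial training loop that this family of systems builds on; it is not the Rutgers lab’s actual code, and all names, sizes and hyperparameters are illustrative. A generator network (the “artist”) proposes candidates, a discriminator network (the “critic”) learns to score them, and each improves against the other. Here the “artworks” are just numbers drawn toward a target distribution, to keep the example self-contained.

```python
# Hypothetical sketch of an adversarial "artist vs. critic" loop (PyTorch).
# Not the Rutgers system: the "artworks" here are 1-D numbers, and the artist
# simply learns to produce samples the critic cannot tell from real data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Artist: maps random noise to a candidate "artwork" (a single number here).
artist = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Critic: scores how "real" a candidate looks (probability between 0 and 1).
critic = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_artist = torch.optim.Adam(artist.parameters(), lr=1e-3)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 4.0          # "real artworks": samples near 4
    fake = artist(torch.randn(64, 8))        # artist's candidates

    # 1) Train the critic to separate real samples from generated ones.
    opt_critic.zero_grad()
    critic_loss = (loss_fn(critic(real), torch.ones(64, 1)) +
                   loss_fn(critic(fake.detach()), torch.zeros(64, 1)))
    critic_loss.backward()
    opt_critic.step()

    # 2) Train the artist to fool the critic.
    opt_artist.zero_grad()
    artist_loss = loss_fn(critic(fake), torch.ones(64, 1))
    artist_loss.backward()
    opt_artist.step()

print(artist(torch.randn(5, 8)).detach().squeeze())  # samples drift toward ~4
```

The Rutgers system reportedly goes a step further, adding an objective that pushes the artist away from established artistic styles, so that it aims at novelty rather than imitation.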

Just a month earlier, in May 2017, Google’s AlphaGo algorithm defeated Ke Jie, the world’s top player of Go, an ancient Chinese board game. While AI has been beating human players at various games for many years (Deep Blue over Kasparov in a 1997 chess match, and Watson over top Jeopardy player Ken Jennings in 2011), AlphaGo’s success is considered the most significant yet for AI because of the complexity of Go, which has an astronomically large number of possible move sequences and puts a premium on human-like “intuition,” instinct and the ability to learn.
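To give a rough sense of that scale, the snippet below uses the commonly cited ballpark figures of about 35 legal moves per turn over roughly 80 turns for chess, versus about 250 moves per turn over roughly 150 turns for Go; these are back-of-the-envelope estimates, not exact counts.

```python
# Back-of-the-envelope comparison of game-tree sizes: branching factor raised
# to a typical game length, expressed as a power of ten. Rough, commonly
# cited figures only.
from math import log10

chess_digits = 80 * log10(35)    # ~35 legal moves per turn, ~80 turns per game
go_digits = 150 * log10(250)     # ~250 legal moves per turn, ~150 turns per game

print(f"Chess: roughly 10^{chess_digits:.0f} possible games")
print(f"Go:    roughly 10^{go_digits:.0f} possible games")
# For comparison, the observable universe contains roughly 10^80 atoms,
# which is why exhaustive search alone cannot crack Go.
```

That gulf is why AlphaGo pairs deep neural networks, which supply something like the “intuition” about which moves look promising, with a guided tree search rather than brute force.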

In yet another development, in June 2017, Facebook’s chatbots developed their own language to communicate with one another. Researchers at Facebook’s Artificial Intelligence Research lab were using machine learning to train their “dialog agents” to negotiate when the bot-to-bot conversation “led to divergence from human language as the agents developed their own language for negotiating.” The bots were promptly shut down, but the episode points to the speed at which computers are becoming autonomous. As The Atlantic’s article on the event states, “The larger point…is that bots can be pretty decent negotiators—they even use strategies like feigning interest in something valueless, so that it can later appear to ‘compromise’ by conceding it.” In other words, AI is learning to strategize in very human ways.
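As a purely hypothetical illustration of that feigned-interest tactic (a hand-written toy, not Facebook’s learned agents), a negotiator can overstate its interest in the item it values least, then “concede” it later to look cooperative:

```python
# Toy, rule-based illustration of the "feigned interest" tactic described
# above. The item values and function names are invented for this example.
ITEMS = {"book": 1, "hat": 4, "ball": 5}   # the bot's true (private) values

def opening_demand(items):
    """Claim to want everything, loudly emphasizing the least-valued item."""
    decoy = min(items, key=items.get)       # the item the bot cares least about
    return {"demand": list(items), "emphasize": decoy}

def concession(items, decoy):
    """'Give up' the decoy to appear cooperative while keeping real value."""
    kept = [item for item in items if item != decoy]
    return {"offer": kept, "concede": decoy}

move1 = opening_demand(ITEMS)
move2 = concession(ITEMS, move1["emphasize"])
print(move1)  # {'demand': ['book', 'hat', 'ball'], 'emphasize': 'book'}
print(move2)  # {'offer': ['hat', 'ball'], 'concede': 'book'}
```

The actual Facebook agents, by contrast, arrived at tactics like this through training on human negotiation data and self-play rather than hand-written rules, which is what made the result notable.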

It’s clear that the field of artificial intelligence is advancing dramatically. But on the question of whether robots will eventually take over, the future is less clear. Theoretical physicist and futurist Michio Kaku quoted Rodney A. Brooks (former director of the Artificial Intelligence Lab at the Massachusetts Institute of Technology and co-founder of iRobot) on the possibility: “(It) will probably not happen, for a variety of reasons. First, no one is going to accidentally build a robot that wants to rule the world. Creating a robot that can suddenly take over is like someone accidentally building a 747 jetliner. Plus, there will be plenty of time to stop this from happening. Before someone builds a ‘super-bad robot,’ someone has to build a ‘mildly bad robot,’ and before that a ‘not-so-bad robot.’”

Of course, this assumes a globally conscientious and concordant development protocol. Google attempted to create just such a standard with its 2016 “Five Rules for AI Safety,” modeled on Isaac Asimov’s famous “Three Laws of Robotics.” But the stakes are too high to leave the standards to the AI developers alone. As Elon Musk recently clarified during a Tesla investor conference call, “The concern is more with how people use AI. I do think there are many great benefits to AI, we just need to make sure that they are indeed benefits and we don’t do something really dumb.” Given that our science fiction leans toward the dystopian where AI is concerned (Blade Runner, A.I., The Matrix, Terminator and I, Robot, among others), we owe it to ourselves to use that hallowed ability to imagine alternative futures to envision one in which we live better as a result of AI.
