The Human Brain Project, chosen in January as the EU's flagship program, is an initiative whose ultimate goal is to build a computer that functions the same way as a human brain.
In addition to advancing neuroscience, the treatment of brain disease, and progress in medicine, the HBP "can provide the key not only to a completely new category of hardware (Neuromorphic Computing Systems) but to a paradigm shift for computing as a whole," the project says on its site.
Fast-forward a few decades. Robots will most likely be doing housekeeping and repetitive tasks, says Danica Kragic, a robotics researcher and computer science professor at KTH Royal Institute of Technology. She tells ScienceDaily that she expects robots in our everyday lives to have an overall positive effect.
"For robots to be integrated in unstructured or changing environments, such as a typical human household, they must develop the ability to learn from human experts and to even teach themselves," Pieter Abbeel, an assistant professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley, said in a press release. He developed the concept of "apprenticeship learning," which allows machines to learn by first observing human demonstrations. He has since turned his research to medical and personal tasks, such as tying surgical knots and folding towels.
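Abbeel's apprenticeship-learning research is far more sophisticated than this, but the core idea, a machine inferring behavior from recorded human demonstrations, can be sketched in miniature. The scenario, data, and function names below are invented for illustration:

```python
# Toy sketch of learning from demonstration (all data invented).
# A "demonstration" is a list of (state, action) pairs recorded while a
# human performs a task; the learner imitates the closest observed state.

demonstrations = [
    # (distance_to_object, action) pairs from a pretend human demo
    (0.9, "approach"),
    (0.5, "approach"),
    (0.2, "grasp"),
    (0.1, "grasp"),
]

def imitate(state):
    """Pick the action whose demonstrated state is most similar."""
    closest = min(demonstrations, key=lambda pair: abs(pair[0] - state))
    return closest[1]

print(imitate(0.8))   # approach
print(imitate(0.15))  # grasp
```

Real systems replace the nearest-neighbor lookup with learned models (and, in Abbeel's inverse-reinforcement-learning work, infer the *goal* behind the demonstration rather than copying it), but the input is the same: a human showing the robot what to do.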
But what would protocol regarding social interactions with robots entail?
For one, they would need restrictions on behavior. Isaac Asimov's Three Laws of Robotics sound like a great jumping-off point to us, and would ensure a level of security for humans involved in dealings with intelligent machinery.
The laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov later added a fourth law, the Zeroth Law, which is essentially the First Law with "human being" replaced by "humanity."
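The laws form a strict precedence hierarchy: a lower-ranked law yields whenever it conflicts with a higher-ranked one. That ordering can be sketched as a toy priority check. Nothing here resembles what a real robot would run; the predicates are hypothetical stand-ins, and deciding whether an action actually "harms a human" is the genuinely hard part:

```python
# Toy sketch: Asimov's Three Laws as a strict priority ordering.
# Each proposed action is vetoed by the highest-ranked law it violates.
# The dictionary keys are hypothetical predicates, not real robot APIs.

def permitted(action):
    # First Law: never harm a human (or allow harm through inaction).
    if action.get("harms_human"):
        return False
    # Second Law: obey orders, unless the order conflicts with the First Law.
    if action.get("disobeys_order") and not action.get("order_harms_human"):
        return False
    # Third Law: protect own existence, unless that conflicts with
    # the First or Second Law.
    if action.get("endangers_self") and not (
        action.get("prevents_harm_to_human") or action.get("obeys_order")
    ):
        return False
    return True

print(permitted({"harms_human": True}))                                # False
print(permitted({"disobeys_order": True}))                             # False
print(permitted({"disobeys_order": True, "order_harms_human": True}))  # True
print(permitted({"endangers_self": True, "obeys_order": True}))        # True
```

The third call is the interesting one: refusing an order is normally forbidden by the Second Law, but becomes permitted the moment the order would violate the First.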
Okay, so they're not allowed to hurt us or to let us be hurt, but couldn't some of these laws be taken to pretty scary extremes? Take the Zeroth Law: couldn't an advanced computer theoretically calculate that humanity as a whole is hurting itself through the constant industrialization of the past 150 years? It isn't allowed to eradicate us, so the paradox might simply cause it to destroy itself, and billions of dollars would go to waste. Even that is preferable to the alternative, wherein a robot learns to reprogram itself.
One method of keeping rogue robots under control comes straight out of Marvel comics, and it worked fine on Victor Mancha: ask the robot to solve a paradox, and hope that AI hasn't evolved far enough to overcome the logical trap.
Huw Price, a philosophy professor at Cambridge, acknowledged that some of the fears are far-fetched, but the potential for disaster is also too high to ignore. He told the Associated Press that when machines gain intelligence, "we're no longer the smartest thing around."
Price also mentioned the risk of putting humanity in the hands of "machines that are not malicious, but machines whose interests don't include us."
Rules will probably be effective for the most part, but humans break laws, and programming a robot based on the human brain carries the risk that robots could end up just like people. "No one is 100 percent safe," Kragic says, "and the same can happen with machines."
So what about this classic sci-fi scenario? Computers and robots gain autonomy and decide to revolt.
First, two survival tips:
1. Treat your machines well (don't break them, neglect them, or hit them excessively), and don't wait until they gain sentience to start. Robots may be able to identify with toasters; we don't know yet. Giving them empathy and emotional responses could turn out to be a double-edged sword.
2. Consume lots of robot-related media to prepare yourself. Reasonably advanced societies with a robust interest in this field (looking at you, USA and Japan) have explored the possibility of a robot uprising, so aside from movies such as The Matrix or WALL-E, check out some of Asimov's work; much of what we know and are moving toward in robotics is based on his books. Comic-book readers can pick up Vertigo's Ex Machina or Marvel's Runaways, and fans of anime can always give Ghost in the Shell a watch.
Most movies and shows end in favor of people or with robots living in comfortable harmony with humans, but how likely is a revolt?
A human uprising against robots is more likely, says Kragic.
Humans already have the ability to kill a robot when told to do so, even as it begs for its life (of course, people can do a lot of things when ordered to, as the Milgram experiment demonstrated). And we have a hard enough time empathizing with each other over the most insignificant differences and disagreements.
We didn't reach the top of the food chain by responding favorably to unknown and potentially dangerous factors, so what does that say about a robot's chances?
Unless that robot were shaped like a cat, nothing good.