Industrial robots have become a common sight in modern manufacturing facilities. However, a new revolution is in the making. Artificial intelligence, something we know mostly from sci-fi movies, is no longer futuristic.
Humankind, for all its innovativeness, has always feared the newest technology. When it comes to self-aware machines, we may fear robots intent on world domination.
Even top-caliber scientists and high-tech CEOs like Stephen Hawking, Bill Gates, and Elon Musk have voiced fears about AI, which only feeds that sci-fi narrative. But the reality is that artificial intelligence already surrounds us, underlying search engines and operating in the financial markets.
Artificial intelligence has no reason to turn apocalyptic, provided we are wise enough not to make it so. Its changes, however, will be far-reaching: they will reshape our relationships with one another and raise new ethical questions.
If artificial intelligence is already here, it is also true that it's only going to get smarter. But it won't necessarily resemble our own intelligence, with the same hunger for power, jealousy, and greed.
Today, AI is already capable of many things. For instance, Google can correctly recognize human speech with 92% accuracy, and Google's AI platform has learned to play video games of its own accord.
Meanwhile, another information-technology giant, Microsoft, designed a system that can be taught to recognize images with an error rate of just 4.94%. It's worth mentioning that the average human has a higher error rate.
Google's driverless cars, which have already driven more than 1,800,000 miles on California's public roads, were involved in only 13 accidents over six years.
All these developments suggest that in the coming decades full-fledged AI will inevitably emerge. To prevent them from committing immoral or unethical acts, machines capable of flying and driving on their own will need reliable safety mechanisms in place. These "safety locks" will probably resemble Isaac Asimov's Three Laws of Robotics.
The policies limiting the actions of a full-fledged AI system will have to be defined in terms of how it understands everything it perceives through its various sensors, forming concepts in its AI "brain". Today's artificial intelligence machines do not yet have "common sense". Once that changes, they will learn to manipulate the actions and objects they perceive, and by then it will be too late to integrate any "safety locks".
As AI researchers explain, the danger is that a super-intelligent machine could analyze data much faster than humans and start acting of its own accord, finding ways to bypass the limitations and rules imposed by humans. To keep an AI from harmful actions, we still need to devise meaningful limitations. The recent Hollywood sci-fi thriller Ex Machina presents just such a scenario to the general public, dramatizing the problem.