The advantages and disadvantages of AI have been widely debated, especially as the technology progresses rapidly. It has been helpful in many ways, with a wave of new tools and services, but American entrepreneur Palmer Luckey warns that it can also lead to harm.
AI Might Kill Innocent Civilians
AI systems can be of tremendous help in situations where automation streamlines operations. However, they still lack qualities that are almost impossible to program, such as human perception, empathy, and other traits that matter in particular circumstances.
It's almost inevitable that artificial intelligence will be used in war, and Luckey says it's a "certainty" that this will kill innocent bystanders in the process. "There will be people who are killed by AI who should not have been killed," he said.
The Oculus founder also stressed that people must remain accountable for those deaths, since "that's the only thing that'll drive us to better solutions and fewer inadvertent deaths, fewer civilian casualties," as reported by Gizmodo.
Unlike the many conspiracy theorists with their own predictions about artificial intelligence, Luckey is in a position where his opinions carry real weight, given his track record in Silicon Valley.
The entrepreneur is now worth $2.3 billion after his ventures in the tech industry. He founded Oculus, the company that arguably paved the way for Meta's leading AR/VR headsets. Unfortunately, he was pushed out after the social media giant acquired it.
Nations Are Already Setting Guardrails
Luckey is not the only bright mind concerned about the potential dangers of AI, especially its use for military purposes. In fact, the US, along with 30 other countries, has already agreed to set guardrails for it.
The declaration signed by these nations calls for legal reviews and training to ensure that military AI systems stay within international law, are developed transparently, avoid unintended bias, and are deployed responsibly.
"A principled approach to the military use of AI should include careful consideration of risks and benefits, and it should also minimize unintended bias and accidents," the declaration mentions, and that it should have the option to be disengaged when showing "unintended behavior," as per Wired.
While the declaration is nonbinding, it's a step in the right direction. The UN General Assembly has also adopted a resolution that will prompt an in-depth study of how restrictions can be placed on lethal autonomous weapons.
Even with these guardrails in the works, military AI remains something people should worry about. That's on top of the already concerning displacement of jobs, as companies increasingly turn to AI because it is cheaper and can be more efficient.
Both creative and corporate occupations are at risk, and artists are already struggling to keep their works from being used to generate AI content. Hopefully, governments, companies, and organizations will put more effort into safety and security when it comes to artificial intelligence.