AI Firms Agree to Build an AI 'Kill Switch' for Unsafe Techs

AI firms have vowed to improve the safety of their learning machines, agreeing to build an AI "kill switch" as a last resort to mitigate the technology's risks to humanity.

The so-called "kill switch" will supposedly halt development for their most advanced AIs "if mitigations cannot be applied to keep risks below the thresholds."


To gauge the severity of the risks posed by the technology, the companies have established "thresholds" meant to trigger an appropriate response to each threat.

These thresholds cover the misuse of AI as a tool for cyberattacks, for building bioweapons, and for aiding foreign adversaries.

Multiple tech giants from various countries made the commitment during the 2024 AI Safety Summit in Seoul, South Korea, as concerns about the technology continue to mount.

Industry leaders like Amazon, Anthropic, Microsoft, and OpenAI were among the 16 AI firms that signed the commitment.

The promise came just days after OpenAI, one of the leading AI firms in the industry today, dissolved its "Superalignment" safety team in favor of building more "shiny products."

OpenAI Safety Culture Scrutinized by Former Safety Team Leaders

Jan Leike, one of the "Superalignment" team leaders who recently left the company, highlighted concerns in a recent post on X (formerly Twitter) on how the company's "safety culture and processes have taken a backseat to shiny products."

Leike's fellow team leader, company co-founder Ilya Sutskever, also announced his departure from the company following the release of GPT-4o.

Sutskever was among the board members who voted to oust CEO Sam Altman amid growing concerns about the company's safety processes in AI development and deployment.

AI experts have long criticized OpenAI for its safety lapses, as severe "AI hallucinations" and image generations of political figures continue to plague its products despite the company's promises.

The "Superalignment" team, which OpenAI formed last year, was meant to be the company's answer to safety concerns about its technologies.

AI Safety Branded as 'Worthy but Toothless'

Despite the companies' vows, safety experts and advocates have criticized the limited steps the firms are willing to take as the AI revolution unfolds.

AI Now Institute, one of the forum participants, criticized the tech giants for the lack of diverse perspectives on how the rapid rollout of AI technology affects the public, stating that "AI systems are already having significant harmful impacts on people's rights and daily lives."

The statement echoes earlier criticism from last year's Bletchley Park Summit, where the Safety Summit's commitments were branded "worthy but toothless."

Microsoft, one of the commitment signatories, is facing scrutiny from the US government following multiple data breaches in the past 24 months.

The tech giant had also previously reported that its chatbots, along with OpenAI's, were being used by state-sponsored threat actors in cyberattack campaigns.

© 2024 iTech Post All rights reserved. Do not reproduce without permission.
