OpenAI, Google, and other AI firms will soon be required to alert the US government to new AI breakthroughs as the country ramps up regulation of the technology.
US President Joe Biden is reportedly leveraging the Defense Production Act to compel Silicon Valley to disclose key information about its latest projects and safety testing, according to Wired.
The requirement is expected to take effect in the coming days, with additional details on how it will be implemented to follow.
US Commerce Department Clamps Down on AI Firms
US Secretary of Commerce Gina Raimondo said the DPA will allow the government to conduct a "survey requiring companies to share with us every time" they train a new AI model.
The department will also start requiring companies to disclose information whenever a "non-US entity" uses their cloud services to train large language models.
The application of the DPA to the AI market follows the Commerce Department's response to Biden's executive order issued last October.
The executive order, which calls for the "development of safe, secure, and trustworthy AI systems," set requirements to take effect 270 days after it was published.
Various government agencies, including the Federal Trade Commission, were also given roles in carrying out the order.
AI Development, Testing Remains in Secrecy
The secrecy surrounding the development and testing of new AI models has long been a headache for regulators trying to keep track of the issues the technology poses.
The most notable example is OpenAI, which has kept its doors closed regarding information on the successor to its GPT-4 model.
Many of the concerns arise from companies' lack of transparency about the data used to train their "open source" AI models.
Experts have long expressed worries about the potential dangers of AI development conducted with "effectively zero outside oversight or regulation."
Raimondo said that the department is now looking at new guidelines to prevent AI from being misused, including by those who may use it to commit human rights abuses.
Related Article: Big Techs Disregarding Human Rights in AI Pursuit, Says UN Chief