Biden Administration Asks Public for AI Regulation Suggestions

As AI grows more advanced, people are beginning to grasp both what it is capable of and how dangerous it can be when misused. US officials have now turned to the public for input on the limits that need to be set.

U.S. President Joe Biden holds a meeting with his science and technology advisors at the White House on April 4, 2023, in Washington, DC, to discuss the advancement of American science, technology, and innovation, including artificial intelligence. (Kevin Dietsch/Getty Images)

Creating Regulations for AI

The Biden administration aims to create rules and regulations for AI systems such as ChatGPT and to hold their creators accountable. The public is welcome to provide input on the matter to help the administration with its efforts.

The National Telecommunications and Information Administration (NTIA) says these measures will ensure that AI models work as intended by their creators without "causing harm" to their users, as reported by Engadget.

Anyone can suggest whatever comes to mind, but the NTIA outlined the kinds of ideas it is looking for, such as incentives for trustworthy AI, safe testing methods, and limits on the data access needed to assess these systems.

There is also the question of how AI is used in specific fields, since different applications carry different risks. For instance, healthcare may need its own set of rules because AI systems there require access to private data such as patient records.

Suggestions are open until June 10th. The agency believes that creating rules for AI is "potentially vital." There have already been incidents in which AI tools led to data leaks and copyright violations, which the rules are meant to prevent.

The Biden administration discussed AI with its advisors a week earlier, but it has not yet decided whether AI poses a major problem, and the public's suggestions may contribute to the final decision.

Dangers of AI

We now know that AI can create content derived from what users provide or from what it can source on the Internet. While realistic results can be impressive, they can also be used to spread misinformation at scale.

For instance, a photo of the Pope clad in a white Balenciaga puffer jacket went viral on social media. Many were fooled by the photo, not knowing it was created using AI. Even television personality Chrissy Teigen said she believed it and didn't give it a second thought.

This might just be the biggest example of misinformation generated by AI yet, and it marks the beginning of potential misuse on a larger scale, as pointed out by Time. It's not just photos that people should be looking out for.

AI technology has progressed to the point that it can generate text, audio, and video as well. AI tools can create audio in the voice of a known person, which has experts concerned that it could be used against candidates in the upcoming US elections.

AI tools have also been used by countless people to create written content and pass it off as their own. Since AI draws on what it can scan from the Internet, that content is often based on someone else's writing, which sometimes results in unintended plagiarism.
