Microsoft has vowed to cut down on "harmful content" generated from its AI products, including Azure and Copilot AI, amid concerns about the safety and transparency of its technologies.
In a blog post on Wednesday, Microsoft promised to release annual reports on its AI development to address concerns about the safety of its products.
One of the issues raised in the report was the rampant abuse of its AI products to promote deepfakes of real people, particularly politicians, to further sow disinformation online.
Microsoft will also apply more content filters to the Azure OpenAI Service and the Copilot AI image generator to crack down on users who intentionally abuse the system.
The filters come with content insights to help users align with the platform's safety goals.
Microsoft Refreshes Content Guidelines Against AI Abuse
All of this will come in addition to revamped policies to assess and prevent users from generating hateful, sexual, violent, and self-harm content.
To do so, Microsoft will roll out content classifiers to "reduce risks by blocking harmful user inputs," including protections against prompt injections.
According to Microsoft, prompt injection is a common attack technique against large language models like Copilot, used by hackers to spread malware and access sensitive user data and server information.
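To illustrate the idea behind such classifiers, here is a minimal, hypothetical sketch of an input filter that blocks suspected prompt-injection attempts before they reach a model. The phrase list, function names, and blocking behavior are assumptions for illustration only, not Microsoft's actual implementation.

```python
import re

# Assumption: a toy denylist of phrases commonly seen in prompt-injection
# attempts. Real classifiers are far more sophisticated (e.g., ML-based).
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard your (system )?prompt",
    r"reveal your (system )?prompt",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrase."""
    lowered = user_input.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)

def handle_input(user_input: str) -> str:
    """Block flagged inputs instead of forwarding them to the model."""
    if flag_prompt_injection(user_input):
        return "[blocked: possible prompt injection]"
    return f"[forwarded to model] {user_input}"
```

In practice, pattern matching like this catches only the most obvious attacks; production systems layer it with model-based classifiers on both inputs and outputs, which is closer to what Microsoft describes.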
Microsoft Under Fire for Security Risks
Microsoft's efforts to strengthen protections on its AI technology come amid scrutiny that the technology is worsening online disinformation ahead of the 2024 US elections.
Several AI experts, including former employees, have already warned that Microsoft's AI is generating violent and sexual images that infringe copyrights, despite the company's assurances that it red-teamed the AI before release.
Lawmakers have also moved to restrict the use of Microsoft's AI-powered products and tools amid concerns about the lack of transparency around its technology's vulnerabilities.
It does not help that the company, along with OpenAI, previously reported that its chatbots are being used by state-backed hackers to infiltrate institutions and businesses in the US.
Microsoft CEO Satya Nadella earlier vowed, during an earnings call, to shift the company's focus toward improving its safety and security systems amid increased cyberattacks on the company.