After the algorithms were shown to exhibit serious bias and inconsistencies, Microsoft decided to stop selling artificial intelligence (AI)-based facial-analysis software that infers a subject's emotional state, gender, age, and other personal attributes.
Microsoft's Limited Access Policy
Concerns over privacy and civil rights in relation to facial recognition technology are on the rise. Studies have revealed that the technology consistently misidentifies women and people with darker skin tones, according to CNET, which can have major repercussions when it is used to identify criminal suspects or in surveillance settings.
"AI is becoming more and more a part of our lives, and yet, our laws are lagging behind. They have not caught up with AI's unique risks or society's needs," Microsoft said in a blog post.
Microsoft said it acknowledges its "responsibility to act" as it sees indicators that government action on AI is growing. The company feels that it must make efforts to ensure that AI systems are responsible by design.
Because of this, the tech giant is limiting access to facial recognition tools in Azure Face API, Computer Vision, and Video Indexer.
Issues With AI
According to Bloomberg, the changes come two years after Microsoft and Amazon.com, whose cloud unit competes with Azure, halted sales of facial-recognition technology to US police agencies, after studies revealed that the technology performed poorly on people with darker skin.
Some states, notably Washington, have enacted legislation limiting the use of such products.
People Can Still Use Microsoft's Facial Recognition Tools
In another blog post, Microsoft said that new users must apply for access in order to use facial recognition features in Azure Face API, Computer Vision, and Video Indexer.
Existing clients have a year to apply for and receive approval to continue using the facial recognition services, based on the use cases they present. If an existing customer's application is not approved, they will lose access to facial recognition features on June 30, 2023.
Other Measures Taken By Microsoft
CNET mentioned that facial recognition is just the beginning of Microsoft's standards for fair and responsible AI technology. They also apply to voice technologies such as Custom Neural Voice for Azure AI. After a March 2020 study revealed high error rates in speech-to-text technology when used with African American and Black speakers, Microsoft said it took action to improve the software.
On June 21, the tech company published a 27-page "Responsible AI Standard" that outlines its objectives for fair and reliable AI. In fact, the company's decision to limit access to facial recognition features was made to align Microsoft's products with this new Standard.
Microsoft Will Not Abandon Its AI Projects
Bloomberg noted that Microsoft isn't fully abandoning the use of AI to assist in reading human reactions. The company continues to introduce features that infer people's emotions or feelings.
In a new initiative for salespeople unveiled last week, AI will be used to run sentiment analysis on customer interactions in Microsoft Teams teleconferences, in order to gauge how potential customers may be responding.
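Microsoft has not published implementation details for the Teams feature, and its internals are surely far more sophisticated than this. As a rough illustration of what "sentiment analysis" means at its simplest, the following sketch scores transcript utterances against small positive and negative word lists; every name and word list here is hypothetical, not drawn from any Microsoft API.

```python
# Minimal lexicon-based sentiment scorer -- an illustrative sketch only.
# Word lists and function names are invented for this example.

POSITIVE = {"great", "interested", "yes", "perfect", "helpful", "love"}
NEGATIVE = {"no", "expensive", "concerned", "problem", "unhappy", "difficult"}

def sentiment_score(utterance: str) -> float:
    """Return a score in [-1, 1]; positive values suggest favorable wording."""
    words = [w.strip(".,!?").lower() for w in utterance.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

# Score each utterance in a (hypothetical) call transcript.
transcript = [
    "That demo was great, we are very interested.",
    "The pricing seems expensive and we are concerned about support.",
]
for line in transcript:
    print(f"{sentiment_score(line):+.2f}  {line}")
```

Production systems replace the word lists with trained language models and weigh context, negation, and tone, but the basic output, a per-utterance score that can be aggregated over a call, is the same idea.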