Is This Photo of a 'White Obama' A Product of AI's Racial Bias?

Did you know that artificial intelligence software exists that can create realistic human faces by filling in the missing data in a pixelated face?

The software is called PULSE, and it was developed by researchers from Duke University using StyleGAN, an algorithm created by NVIDIA computer scientists, as noted by Screen Rant.

The research team, based in Durham, North Carolina, managed to create an algorithm capable of "imagining" realistic-looking faces from blurry, barely recognizable photos of people.

The researchers used StyleGAN to upscale visual data: the software fills in the missing detail in a pixelated input face and imagines a new high-resolution face that, when pixelated again, looks similar to the input image.
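The core consistency check described above, preferring a high-resolution face whose downscaled version matches the pixelated input, can be illustrated with a toy sketch. This is not PULSE's actual implementation (which optimizes over StyleGAN's latent space); the function names and the random "candidate faces" are hypothetical stand-ins for generator outputs:

```python
import numpy as np

def downscale(img, factor):
    """Average-pool a square image by an integer factor (simple box downsampling)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def pulse_style_search(low_res, candidates, factor):
    """Pick the candidate high-res image whose downscaled version best matches
    the low-res input -- the consistency criterion PULSE enforces while
    searching the generator's latent space (here reduced to a brute-force scan)."""
    best, best_err = None, np.inf
    for cand in candidates:
        err = np.mean((downscale(cand, factor) - low_res) ** 2)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

# Toy demo with random 32x32 "faces" standing in for generator outputs.
rng = np.random.default_rng(0)
candidates = [rng.random((32, 32)) for _ in range(50)]
target = candidates[7]                 # pretend this is the true face
low_res = downscale(target, 4)         # 8x8 pixelated input
match, err = pulse_style_search(low_res, candidates, 4)
print(np.allclose(match, target))      # the true face downscales exactly, so it wins
```

The key point the sketch captures is that many different high-resolution faces can downscale to the same pixelated image, so the search can land on a plausible but wrong face, which is exactly where bias in the generator shows up.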

But as impressive as it seems, this artificial intelligence also makes mistakes.

A 'White' Barack Obama

Although the A.I. in the tweet below didn't use PULSE, Twitter user @HotepJesus posted a picture of a pixelated Barack Obama, processed with a tool called Face Depixelizer, that somehow came out white.

AI facial recognition tools have already sparked discussion, with similar examples previously found in other systems; as we know, technology can influence society in more ways than one. You can check out the tweet below.

You may think that this is an isolated case, but it is not. There have already been multiple cases of A.I. being racially biased. Here are some examples posted on Twitter in which PULSE was used to de-pixelate images:

How Racial Bias in AI Originates

Responding to the issue, the original creators of the software pointed out that the flaw was inherited from the datasets used to train StyleGAN.

"It does appear that PULSE is producing white faces much more frequently than faces of people of color," wrote the algorithm's creators on Github. "This bias is likely inherited from the dataset StyleGAN was trained on [...] though there could be other factors that we are unaware of." according to the creators of the A.I. as reported by The Verge.

In other words, the data used to train AI is often skewed toward a single demographic, white men, and when a program encounters data outside that demographic, it performs poorly.
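Why a skewed dataset produces a model that looks accurate overall while failing a minority group can be shown with a minimal, hypothetical example (the 95/5 split and the trivial "majority predictor" are illustrative assumptions, not real training data):

```python
import numpy as np

# Hypothetical skewed training set: 95% of samples from group A, 5% from group B.
labels = np.array(["A"] * 95 + ["B"] * 5)

# A naive "model" that simply predicts the majority group it saw in training.
predictions = np.full(labels.shape, "A")

overall_acc = (predictions == labels).mean()               # high overall accuracy
group_b_acc = (predictions[labels == "B"] == "B").mean()   # zero accuracy on group B
print(overall_acc, group_b_acc)  # prints 0.95 0.0
```

A model optimized only for overall accuracy on such data has little incentive to get the under-represented group right, which is the same dynamic the PULSE authors describe for StyleGAN's training set.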

Racial bias in AI technologies employed by police departments can manifest, for example, in the identification of suspects and in the prediction of 'at-risk' neighborhoods using previously collected data.

© 2024 iTech Post All rights reserved. Do not reproduce without permission.
