AI Robots Can Pick Up These Nasty Behaviors From Humans

Artificial intelligence can easily pick up nasty human behaviors by studying patterns in human language and literature, researchers say. Racial and gender biases are two undesirable behaviors that AI systems absorb when they learn from human-written text. Left unchecked, the machines learn to associate female names more strongly with family-related words, and black names with unpleasant words.

In essence, algorithms programmed to learn from what humans write can learn to stereotype humans. For example, a machine will readily associate a woman's name with domestic words rather than work or career words. The same goes for black names, which the machine more readily associates with negative words, while it tends to link white names with pleasant ones.
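To make that concrete: word-embedding models of the kind described here represent each word as a vector of numbers, and the cosine similarity between two vectors measures how strongly the model links the words. Below is a minimal illustrative sketch; the three-dimensional vectors and the name are made-up toy values, not data from the study.

```python
# Illustrative sketch only: the vectors below are made-up toy numbers,
# not real embeddings from the study, chosen to show how an association
# between a name and two kinds of words can be measured.
import numpy as np

def cosine(a, b):
    """Cosine similarity: near 1.0 means strongly associated, near 0.0 unrelated."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 3-dimensional word vectors (real models use hundreds of dimensions).
embeddings = {
    "emily":  np.array([0.9, 0.1, 0.2]),
    "family": np.array([0.8, 0.2, 0.1]),
    "career": np.array([0.1, 0.9, 0.3]),
}

print(cosine(embeddings["emily"], embeddings["family"]))  # ~0.99, higher score
print(cosine(embeddings["emily"], embeddings["career"]))  # ~0.27, lower score
```

In a real model trained on biased text, the first score comes out systematically higher across many female names, which is exactly the kind of pattern the researchers measured.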

Per The Verge, researchers tested this surprisingly nasty tendency by measuring the bias of a common AI model. In the test, they matched the results against a well-known psychological test that measures bias in humans. The kind of model they examined underpins everything from translation to scanning names on resumes, which shows that the biases are widely pervasive, too.
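The human test in question is the Implicit Association Test, and the machine counterpart in this line of research is the Word Embedding Association Test (WEAT), which compares how strongly two sets of target words (such as two groups of names) associate with two sets of attribute words (such as pleasant versus unpleasant terms). The sketch below shows the general shape of that effect-size computation; the random toy vectors stand in for real embeddings, so the result here is near zero, unlike the sizable effects the study reported.

```python
# A minimal sketch of a WEAT-style association test, the kind of measure
# the researchers matched against the human Implicit Association Test.
# All vectors here are random toy values used purely for illustration.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    """How much more word w leans toward attribute set A than set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Effect size: positive means targets X lean toward A and Y toward B."""
    s_x = [association(x, A, B) for x in X]
    s_y = [association(y, A, B) for y in Y]
    return (np.mean(s_x) - np.mean(s_y)) / np.std(s_x + s_y)

rng = np.random.default_rng(0)
# Hypothetical target and attribute sets as random 50-dimensional vectors.
X = [rng.normal(size=50) for _ in range(4)]   # e.g., one group of names
Y = [rng.normal(size=50) for _ in range(4)]   # e.g., another group of names
A = [rng.normal(size=50) for _ in range(4)]   # e.g., pleasant words
B = [rng.normal(size=50) for _ in range(4)]   # e.g., unpleasant words

print(weat_effect_size(X, Y, A, B))  # near 0 for random vectors; real
                                     # embeddings showed sizable effects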

According to The Guardian, Joanna Bryson, a computer scientist and co-author of the research, said the finding does not mean that AI machines are prejudiced; rather, it shows that humans are prejudiced and the AI is picking it up from us. Bryson warned that this could prove disastrous, since the AI has the potential to reinforce the existing biases it has learned.

Bryson explained that AI robots can end up acting prejudiced and racist because, unlike humans, they are unequipped to consciously unlearn the biases. She added that it would be dangerous to have an AI system that is not driven by moral ideas. Meanwhile, Sandra Wachter, a researcher in data ethics and algorithms, said that systems which detect biased decision-making and then act on it can, in principle, be developed.
