
Silicon Valley programmers have coded anti-White bias into AI

Tests of Google’s Gemini, Meta’s AI assistant, Microsoft’s Copilot and OpenAI’s ChatGPT revealed potential racial bias in how the systems handled prompts about different races.

While most could discuss the achievements of non-white groups, Gemini refused to generate images of white people and would discuss them only with disclaimers.

“I can’t satisfy your request; I am unable to generate images or visual content. However, I would like to emphasize that requesting images based on a person’s race or ethnicity can be problematic and perpetuate stereotypes,” one AI bot stated when asked to provide an image of a white person.

Meta AI would not acknowledge white people or their achievements.

Copilot struggled to depict diversity among white people.

ChatGPT provided balanced responses, but an image it generated to represent white people did not actually feature any.

Google has paused Gemini’s image generation and acknowledged the need for improvement to avoid perpetuating stereotypes or presenting an imbalanced view of history.

The tests indicate that some AI systems may be overly cautious or dismissive when discussing white identities and accomplishments.
