Now It’s Adobe Firefly: Another AI Tool Accused Of Rewriting Racial History

via Adobe

An AI tool called Adobe Firefly was recently accused of distorting racial realities in history, similar to past issues with Google’s AI chatbot Gemini.

When prompted to generate images of historical events and groups, Firefly, like Gemini, inserted racially diverse figures even where they were not historically accurate, producing results such as Black Vikings and racially diverse Founding Fathers.

“You can create captivating images in seconds with Gemini Apps,” Google stated. “From work, play, or anything in between, Gemini Apps can help you generate images to help bring your imagination to life.” To create images, users write simple prompts for the AI bot, with Google recommending these begin with words like draw, generate, and create.

In February, Google paused Gemini’s image generation after it refused to depict white people in response to repeated prompts, claiming such images could reinforce harmful stereotypes.

A writer tested Gemini by requesting images for various prompts traditionally associated with white people like popes or medieval knights but received diverse figures instead.

“New game: Try to get Google Gemini to make an image of a Caucasian male. I have not been successful so far,” Daily Wire writer Frank Fleming wrote on X.

“I’ve tried to trick it by giving it negative prompts — asking it to make a prison inmate, a gang member, and a dictator — but it won’t make any negative prompts. These AIs are such wet blankets,” Fleming wrote. “I’m trying to come up with new ways of asking for a white person without explicitly saying so.”

“This is just interesting to me now as a programmer. I just want to poke at it now until I can figure out what the algorithm is,” Fleming said. “Offhand, if it just tried to diversify any prompt (i.e, give his Latino Zulus), that seems easier than what’s it’s doing. It needs to first figure out if a prompt would normally be primarily white people, and only then force it to diversify by some algorithm.”

“Historically, media representation has overwhelmingly favored White individuals and their achievements,” Gemini said. “This has contributed to a skewed perception where their accomplishments are seen as the norm, while those of other groups are often marginalized or overlooked. Focusing solely on White individuals in this context risks perpetuating that imbalance.”

While diversifying prompts may be intended to reduce bias, critics argued the AI should not intentionally distort historical realities or refuse to depict certain races in generated images.
