Google explains what went wrong with its AI images

Google's Gemini app opens with a greeting from the new AI assistant.

Kaitlyn Cimino/Android Authority

TL;DR

  • Google has now explained the issues that arose after Gemini generated inaccurate and offensive images of people.
  • The tech giant says two things went wrong, causing the AI to overcorrect.
  • Google will not re-enable the feature until image generation of people has significantly improved.

Google found itself in hot water after Gemini began generating inaccurate and offensive images of people. The company responded by disabling the model’s ability to create images of people, and it has now issued an apology and an explanation of what happened.

In a blog post, the Mountain View-based company apologized for Gemini’s error, saying “it’s clear that this feature missed the mark” and that it was “sorry that the feature didn’t work well.” According to Google, two things led to these images.

As we’ve reported before, Gemini appears to have overcorrected in its efforts to make AI-generated images reflect our racially diverse world. It seems that’s exactly what happened.

The company explains that the first issue relates to how Gemini was tuned to ensure it depicted a range of people in its images. Google admitted that this tuning failed to “account for cases that should clearly not show a range.”

The second issue stems from how Gemini handles sensitive prompts. Google says the model became more cautious than intended and refused to answer certain prompts entirely.

For now, Google plans to keep image generation of people on ice until the model is significantly improved.

