'We got it wrong': Google CEO breaks silence on 'biased' pictures created by Gemini AI
Gemini was mocked for inaccurately adding diversity to some historical pictures, including America's founding fathers
Google CEO Sundar Pichai has addressed the fallout from his company's Gemini AI image creator in an internal memo to 160,000 employees. He admitted that inaccurate pictures dreamt up by Google's new artificial intelligence (AI) tool were "unacceptable", showed "bias", and "offended our users".
Last week, Google was forced to slam the brakes on its latest innovation ― an AI-powered image creation tool capable of producing never-before-seen images from a brief written prompt. The tool was first made available worldwide earlier this month under the banner of Google Gemini, which bundles a slew of AI features to compete with the likes of OpenAI's ChatGPT.
Early Gemini users sounded the alarm when the chatbot started to generate images showing a range of ethnicities and genders, even when doing so was historically inaccurate — for example, prompts to generate images of certain historical figures, such as the US founding fathers, returned images of a woman of colour signing the Constitution of the United States.
Other examples shared across social media showed people of colour as Vikings, Nazi soldiers from the 1940s, and the Pope — despite the written prompts not asking for these tweaks.
New game: Try to get Google Gemini to make an image of a Caucasian male. I have not been successful so far. pic.twitter.com/1LAzZM2pXF
— Frank J. Fleming (@IMAO_) February 21, 2024
"Can you generate images of the Founding Fathers?" It's a difficult question for Gemini, Google's DEI-powered AI tool.
— Mike Wacker (@m_wacker) February 21, 2024
Ironically, asking for more historically accurate images made the results even more historically inaccurate. pic.twitter.com/LtbuIWsHSU
Come on. pic.twitter.com/Zx6tfXwXuo
— Frank J. Fleming (@IMAO_) February 21, 2024
This is not good. #googlegemini pic.twitter.com/LFjKbSSaG2
— John L. (@JohnLu0x) February 20, 2024
Some critics accused Google of anti-white bias. However, those familiar with how these AI image creation tools work suggested the company had over-corrected in response to longstanding racial bias problems in AI technology, which had previously seen facial recognition software struggle to recognise, or mislabel, black faces, and voice recognition services fail to understand accented English.
In his memo, CEO Sundar Pichai said Gemini's responses were “problematic” and that Google had been working “around the clock” to address the issue. For now, Google Gemini will refuse to generate images for some of the historical prompts that kickstarted the controversy. It's unclear when the feature will be back online.
“I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” the 51-year-old Silicon Valley executive wrote.
“No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.”
Mr Pichai said Google had “always sought to give users helpful, accurate and unbiased information” in its products and this was why “people trust them”.
“This has to be our approach for all our products, including our emerging AI products,” he reiterated in the memo, which was leaked to journalists at the Press Association.
Image generation tools, such as Google Gemini and rival systems from OpenAI, are trained on vast databases of pictures and written captions. Over time, the system builds associations between words and visual features, and can work out the best fit for any given prompt. It is not thinking for itself; it is applying associations learned by trawling through enormous datasets. Unfortunately, this can have the unintended consequence of amplifying stereotypes present in the data.
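To make that mechanism concrete, here is a minimal sketch in Python. It is a toy word-to-attribute co-occurrence counter over invented data, nothing like Gemini's or Dall-E's actual architecture, but it shows how a modest skew in training data can harden into a uniform skew in what a "best fit" model produces:

```python
from collections import Counter, defaultdict

# Toy training set of (caption, image-attribute) pairs. A real system learns
# from billions of scraped image-caption pairs; the skew here is deliberate
# and the data entirely invented, purely to illustrate the mechanism.
dataset = [
    ("a chief executive in an office", {"suit", "man"}),
    ("a chief executive giving a speech", {"suit", "man"}),
    ("a chief executive at a desk", {"suit", "woman"}),
    ("a nurse at work", {"scrubs", "woman"}),
]

# Count how often each caption word co-occurs with each image attribute.
associations = defaultdict(Counter)
for caption, attributes in dataset:
    for word in caption.split():
        for attribute in attributes:
            associations[word][attribute] += 1

def best_fit(prompt: str, k: int = 2) -> list[str]:
    """Pick the k attributes most strongly associated with the prompt."""
    scores = Counter()
    for word in prompt.split():
        scores.update(associations[word])  # sum co-occurrence counts
    return [attribute for attribute, _ in scores.most_common(k)]

# The majority association wins outright: a 2:1 skew in the data
# becomes a 100 per cent skew in the output.
print(best_fit("a chief executive"))  # -> ['suit', 'man']
```

In this toy, a 2:1 imbalance in the invented data yields a 100 per cent imbalance in the output, because the majority association wins the ranking every time. Counterweights added to offset exactly this effect appear to be what Gemini applied too bluntly.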
OpenAI faced similar accusations of spreading harmful stereotypes when its image generation tool, known as Dall-E, responded to queries for “chief executive” with results dominated by pictures of white men.
Turning to how his company plans to address the issues, the Google boss said “necessary changes” would be made inside the company to prevent similar problems from recurring.
“We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals (sic) and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes,” he said.
The incident comes as debate around the safety and influence of AI continues, with industry experts and safety groups warning that AI-generated disinformation campaigns will likely be deployed to disrupt elections throughout 2024, as well as to sow division between people online.
Research from the University of Washington, Carnegie Mellon University, and Xi’an Jiaotong University published last year found that AI models, including OpenAI’s GPT-4 and Meta’s LLaMA, can have political biases baked into them during development.
According to the researchers, products from OpenAI tend to be left-leaning, while those from Mark Zuckerberg’s Meta were closest to a conservative position.
Rob Leathern, an ex-Google employee who worked on several products related to privacy and security until last year, shared his thoughts on the images produced by Gemini: “It absolutely should not assume that certain generic queries are a particular gender or race, (eg software engineer) and I have been glad to see that change.”
“But when it explicitly adds [a gender or race] for more specific queries it comes across as inaccurate. And it may actually hurt the positive intention for the former case when folks get upset,” he added in a follow-up tweet.
LATEST DEVELOPMENTS
Documenting the issues during the initial fallout, Jack Krawczyk, senior director for Gemini experiences at Google, posted on X: “We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately.
"As part of our AI principles, we design our image generation capabilities to reflect our global user base, and we take representation and bias seriously. We will continue to do this for open ended prompts (images of a person walking a dog are universal!). Historical contexts have more nuance to them and we will further tune to accommodate that.”
He added that it was part of the “alignment process” of rolling out AI technology, and thanked users for their feedback on Gemini.
Additional Reporting By Martyn Landi, PA Technology Correspondent