Google has announced its decision to restrict the types of election-related questions users can ask its Artificial Intelligence (AI) chatbot Gemini, in a move aimed at avoiding controversies over AI technology. The tech giant disclosed the policy update in a recent blog post, noting that it is rolling out in India ahead of the country's upcoming elections, which are slated to begin in April.
Gemini, Google’s equivalent of the popular chatbot ChatGPT, serves as an interactive tool capable of responding to queries in text format and generating images. A spokesperson for Google confirmed to the BBC that this restriction aligns with the company’s previously announced plans concerning its approach to elections, emphasizing a proactive stance taken since December.
“As we shared last December, in preparation for the many elections happening around the world in 2024 and out of an abundance of caution, we’re restricting the types of election-related queries for which Gemini will return responses,” the spokesperson stated.
In response to election-related queries, Gemini now offers a standard reply directing users to Google Search for information. Notably, the chatbot gave a more detailed response when asked about Indian politics, highlighting information about the country's major political parties.
The decision to limit election-related queries on the Gemini chatbot follows heightened concerns about misinformation facilitated by advances in generative AI. Governments worldwide have increasingly moved to regulate the technology to curb potential misuse.
India, in particular, has taken a proactive approach, mandating government approval before the release of AI tools deemed "unreliable" or still undergoing trials. Earlier this year, Google faced criticism and issued apologies after its AI image generator produced historically inaccurate depictions, including a portrayal of the US Founding Fathers that featured a black man and images of German soldiers from World War Two that inaccurately included people of different ethnicities.
In response, Google swiftly paused the tool, acknowledging in a blog post that it had "missed the mark" and needed immediate rectification.
With elections scheduled across various countries, including the United States, the United Kingdom, and South Africa, Google’s decision to restrict election-related queries on Gemini underscores its commitment to navigating the complex landscape of AI ethics and mitigating potential controversies surrounding its AI-powered platforms.