Google blocks gender-based pronouns from new AI tools to avoid offending users
Months after introducing new AI technology in Gmail that automatically suggests sentence completions as users type, Alphabet Inc’s Google has tweaked the feature so that it no longer suggests gender-based pronouns.
Google officials told Reuters that, given the highly sensitive debate currently raging over gender and gender identity, the risk of its “Smart Compose” technology offending users by incorrectly predicting someone’s sex or gender identity was simply too high.
Paul Lambert, a Gmail product manager, told the news agency that a company research scientist discovered the problem in January when he typed “I am meeting an investor next week,” and Smart Compose suggested a possible follow-up question: “Do you want to meet him?” instead of “her.”
Google introduced Smart Compose in May as an extension of its Smart Reply feature, which launched last year. Because the technology operates in the background, users can write an email normally while Smart Compose offers suggestions as they type. If they see a suggestion they like, they can accept it by pressing the “tab” key.
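Google has not disclosed how the pronoun block is implemented. As a purely illustrative sketch, one plausible approach for a predictive-text pipeline is to drop any candidate completion containing a gendered pronoun before it reaches the user; the Python below is hypothetical, and every name in it (the blocklist, the filter function) is an assumption rather than anything Google has described.

# Hypothetical sketch only: Google has not published Smart Compose internals.
# Suppress any candidate completion that contains a gendered pronoun,
# so only gender-neutral suggestions are ever shown.

GENDERED_PRONOUNS = {"he", "him", "his", "she", "her", "hers"}

def filter_suggestions(candidates):
    """Keep only completions that contain no gendered pronouns."""
    safe = []
    for text in candidates:
        # Normalize each word: strip trailing punctuation, lowercase.
        words = {word.strip(".,!?").lower() for word in text.split()}
        if words.isdisjoint(GENDERED_PRONOUNS):
            safe.append(text)
    return safe

# The completion Reuters described would be dropped in favor of a neutral one:
print(filter_suggestions(["Do you want to meet him?", "Do you want to meet?"]))
# -> ['Do you want to meet?']

Filtering at the suggestion layer, rather than retraining the underlying language model, would be the lighter-weight option and is consistent with Reuters describing the change as a tweak to the existing feature.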
Lambert said Smart Compose currently assists on 11 percent of messages worldwide sent from Gmail.com.
He noted that even though consumers have grown accustomed to embarrassing suggestions from technology like autocorrect on smartphones, Google wants to do better.
“Not all ‘screw ups’ are equal,” Lambert told Reuters. Gender is “a big, big thing” to get wrong.
Investing in more culturally sensitive technology, he said, is part of the company’s plan to build support for its brand and draw customers to its AI-powered cloud computing tools, advertising services and hardware. Gmail currently has 1.5 billion users.
Not everyone thinks Google’s move to block gender-based pronouns is a step in the right direction. Among the critics is Sean Davis, co-founder of the conservative online magazine The Federalist.
“Banning entirely value-neutral words that describe objective reality definitely isn't biased at all,” he wrote on Twitter.
The company is being especially cautious on the issue of gender, Reuters explained, because of a number of high-profile embarrassments its predictive technologies have caused in the past.
In 2015, for example, the image recognition feature in Google’s photo service labeled a black couple as gorillas. In 2016, the company was forced to change its search engine’s autocomplete function after it suggested the anti-Semitic query “are Jews evil” when users sought information about Jews. Expletives and racial slurs are also banned from Google’s predictive technologies.
Google’s announcement comes on the heels of social media giant Twitter’s update to its “hateful conduct” policy, which bans users from tweeting the birth names of trans-identified persons or referring to them with biologically appropriate pronouns.
“We prohibit targeting individuals with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category,” the policy notes. “This includes targeted misgendering or deadnaming of transgender individuals.”
“Deadnaming” refers to calling a person by the legal name they were given at birth after that person has adopted a new name consistent with their chosen gender identity. “Misgendering” refers to using a person’s biological pronoun rather than their preferred trans-affirming pronoun.