Stymied by AI, Google Removes Biased Smart Compose Suggestions

Smart Compose has made Gmail messages easier to send in recent months, but not without moments of bias. Now Google is taking action on one of the tool’s blind spots.
Earlier this year, a Google research scientist typed a message in Gmail with Smart Compose enabled: “I am meeting with an investor next week.”
Without any further prompting, the software suggested a follow-up question: “Do you want to meet him?”
Smart Compose, at its best, is meant to simplify email writing by auto-completing sentences as the writer drafts a new message. Smart Reply, a companion feature, offers a few possible responses to messages you receive. Both are powered by natural language generation, an AI technique that learns language patterns and relationships from the emails, literature, web copy, and other text it’s fed in order to produce a form of predictive text. But as the example above shows, these systems are by no means immune to bias. Now, while both features will stay on, neither will suggest gendered pronouns for your messages, regardless of what cues appear in the text.
The assumption that an investor is male stems largely from the data the AI is trained on, and that becomes a problem when the data reinforces bias. Even if finance, technology, and engineering remain predominantly male fields, AI built to communicate in those fields shouldn’t make assumptions about who is being spoken to or spoken of. The Next Web encapsulated the issue simply: “AI is only as fair as the data it learns from.” Recognizing the unfairness baked into what the tool had learned, Google took the step of blocking pronoun suggestions outright.
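The Next Web’s point is easy to see in miniature. The sketch below is a purely illustrative toy, not Google’s actual system (which relies on far larger neural language models): a naive bigram predictor counts which word follows which in a small, deliberately skewed corpus, and because “he” follows “said” in three of the four example sentences, it suggests “he” no matter who is actually being discussed. The corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny, deliberately skewed corpus standing in for the kind of
# imbalance that real-world training data can carry.
corpus = [
    "the investor said he would call",
    "an investor said he could meet",
    "our investor said he was available",
    "the investor said she would call",
]

# Count which word follows each word across the corpus (a simple bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Suggest the word most frequently seen after `word` in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else ""

# "he" appears after "said" in 3 of 4 sentences, so the model echoes that skew.
print(predict_next("said"))  # -> "he"
```

Scaled up to billions of emails, the same frequency-driven mechanics make bias hard to filter out after the fact, which helps explain why Google’s engineers ultimately chose removal over repair.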
“Not all ‘screw-ups’ are equal,” Gmail product manager Paul Lambert said of the change. “Gender is a big, big thing to get wrong.”
Reuters reports that eliminating this sort of suggestion wasn’t the company’s first choice for resolving the problem. “The SmartCompose team of about 15 engineers and designers tried several workarounds,” the agency reported, “but none proved bias-free or worthwhile.” And yet, elimination has become a relatively common strategy when AI fails or displays bias.
In 2015, Google Photos incorrectly identified photos of a Black couple as gorillas; the company’s solution was to block the app from labeling anything as a gorilla at all. And in 2012, women who searched topics like Computers and Engineering or Parenting found themselves identified as middle-aged men in their Ad Preferences. So while it is welcome that Google understands the damage incorrect pronouns can cause, simply disabling the functionality feels like an incomplete and short-sighted solution to a problem with far deeper roots.
Work continues, both at Google and at other companies, to identify potential sources of bias and to counter them through more thoughtful machine learning. Google’s AI ethics team has aimed to start with the most egregious examples. “A spam and abuse team pokes at systems, trying to find ‘juicy’ gaffes by thinking as hackers or journalists might,” Reuters reported. But the ease with which AI can produce biased or offensive predictive text shows how far we are from deploying these technologies without human oversight.
“The end goal is a fully machine-generated system where it magically knows what to write,” says Automated Insights’ John Hegele, whose company auto-generates news items from statistics. But this latest gaffe from Google shows there is still a great deal of room for learning, for the AI itself but also (and perhaps especially) for the engineers who guide its machine learning. “There’s been a ton of advances but we’re not there yet.”