Google today announced it will be rolling out improvements to its A.I. model to make Google Search a safer experience and one that’s better at handling sensitive queries, including those around topics like suicide, sexual assault, substance abuse, and domestic violence. It’s also using other A.I. technologies to improve its ability to remove unwanted explicit or suggestive content from Search results when people aren’t specifically seeking it out.
Currently, when people search for sensitive information — like suicide, abuse or other topics — Google will display the contact information for the relevant national hotlines above its search results. But the company explains that people who are in crisis situations may search in all kinds of ways, and it’s not always obvious to a search engine that they’re in need, even if it would raise flags if a human saw their search queries. With machine learning and the latest improvements to Google’s A.I. model called MUM (Multitask Unified Model), Google says it will be able to automatically and more accurately detect a wider range of personal crisis searches because of how MUM is able to better understand the intent behind people’s questions and queries.
The company introduced its plan to redesign Search using A.I. technologies at its Search On event last year, but it hadn’t addressed this specific use case. Instead, Google had focused at the time on how MUM’s better understanding of user intent could be leveraged to help web searchers unlock deeper insights into the topics they’re researching and lead them down new search paths. For example, if a user had searched for “acrylic painting,” Google could suggest “things to know” about acrylic painting, like different techniques and styles, tips on how to paint, cleaning tips, and more. It could also point users to other queries they may not have thought to search for, like “how to make acrylic paintings with household items.” In this one example, Google said it could identify over 350 different topics related to acrylic paintings.
In a somewhat similar way, MUM will now be used to help better understand the sort of topics that someone in crisis might search for, which aren’t always as obvious as typing in a direct cry for help.
“…if we can’t accurately recognize that, we can’t code our systems to show the most helpful search results. That’s why using machine learning to understand language is so important,” explained Google in a blog post.
For example, if a user searched for “Sydney suicide hot spots,” Google’s previous systems would understand the query as information-seeking, because that’s how the term “hot spots” is often used, including in travel queries. But MUM understands the query is related to someone trying to find a jumping spot for suicide in Sydney and would identify this search as potentially coming from someone in crisis, allowing it to show actionable information like suicide hotlines. Another suicide query that could see improvements from MUM is “most common ways suicide is completed,” which, again, Google would previously have understood only as an information-seeking search.
MUM also better understands longer search queries where the context is obvious to humans, but not necessarily to machines. For instance, a query like “why did he attack me when i said i dont love him” implies a domestic violence situation. But long, natural-language queries have been difficult for Google’s systems without the use of advanced A.I.
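To make the limitation concrete, here is a toy sketch, not Google’s actual system, of the naive keyword matching that intent models like MUM are meant to go beyond. The keyword list and function are hypothetical illustrations: a surface-level match catches queries that literally contain a crisis term but misses the domestic-violence query quoted above, which contains no such keyword at all.

```python
# Toy illustration (not Google's system): a naive keyword matcher
# for crisis queries, showing why semantic intent understanding matters.

# Hypothetical keyword list for illustration only.
CRISIS_KEYWORDS = {"suicide", "abuse", "overdose"}

def naive_crisis_check(query: str) -> bool:
    """Flag a query only if it literally contains a known crisis keyword."""
    words = set(query.lower().split())
    return bool(words & CRISIS_KEYWORDS)

# A surface match catches the obvious phrasing...
print(naive_crisis_check("suicide hotline number"))  # True

# ...but misses the domestic-violence query from the article,
# which contains no crisis keyword at all.
print(naive_crisis_check("why did he attack me when i said i dont love him"))  # False
```

A model that represents the meaning of the whole query, rather than matching individual words, is what lets a system flag that second query as a potential crisis search.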
In addition, Google notes that MUM can transfer its knowledge across the 75 languages it’s been trained on, which helps it to more quickly scale A.I. improvements like this to worldwide users. That means it will be able to display the actionable information from trusted partners, like local hotlines, for these types of personal crisis searches to a broader audience.
This isn’t the first time MUM has been put to work to help direct Google Searches. MUM was previously used to improve searches for Covid-19 vaccine information, the company said. In the coming months, Google says it will use MUM to improve its spam protection features and will expand those protections to languages where it has little training data. Other MUM improvements will roll out soon, as well.
Another area getting a boost from A.I. technology is Google’s ability to filter explicit content from search results. Even when Google’s SafeSearch filtering technology is turned off, Google still attempts to reduce unwanted explicit content in searches where finding racy content wasn’t the goal. And today, its algorithms continue to improve at this as users conduct hundreds of millions of searches globally.
But the A.I. technology known as BERT now works to help Google better understand whether people are seeking out explicit content. The company says that over the past year, BERT has reduced unwanted shocking search results by 30%, based on an analysis conducted by “Search Raters” who measured oversexualized results across random samples of queries for web and image search. The technology has been particularly effective in reducing explicit content for searches related to “ethnicity, sexual orientation and gender,” queries whose unwanted explicit results, the analysis found, disproportionately impact women and especially women of color.
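The demotion behavior described here can be sketched in simplified form. This is a hypothetical illustration, not Google’s BERT pipeline: assume each result carries an explicitness score from some classifier, and assume the query has already been judged for explicit intent. Results above a threshold are filtered only when the query itself isn’t seeking that content.

```python
# Toy sketch (hypothetical scores, not Google's actual pipeline):
# demote explicit results only when the query shows no explicit intent.

def filter_results(results, query_is_explicit, threshold=0.5):
    """results: list of (url, explicitness_score) pairs, score in [0, 1].

    If the query itself seeks explicit content, return everything;
    otherwise drop results whose score meets the threshold.
    """
    if query_is_explicit:
        return [url for url, _ in results]
    return [url for url, score in results if score < threshold]

results = [("a.example", 0.1), ("b.example", 0.9), ("c.example", 0.3)]
print(filter_results(results, query_is_explicit=False))  # ['a.example', 'c.example']
print(filter_results(results, query_is_explicit=True))   # all three results
```

The point of the sketch is the conditional: the same result set is handled differently depending on what the model believes the searcher wanted, which is exactly the intent judgment the article says BERT improved.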
Google says the MUM A.I. improvements will begin to roll out to Search in the coming weeks.