Google has launched a new safety feature that lets under-18s request the removal of images of themselves from the company’s search results. The feature was originally announced in August (along with new restrictions on ad targeting of minors) but is now widely available.
Anyone can start the removal process from this help page. Applicants will need to supply the URLs of the images they want removed from search results, the search terms that surface those images, the name and age of the minor, and the name and relationship of any individual acting on their behalf — a parent or guardian, for example.
Google will make exceptions for “cases of compelling public interest or newsworthiness”
As ever with these sorts of removal requests, it’s hard to say exactly what criteria Google will be applying in its judgements. The company notes it will remove images of any minors “with the exception of cases of compelling public interest or newsworthiness.” Interpreting how these terms apply in different situations is difficult territory, as we’ve seen with controversial cases involving the EU’s “right to be forgotten” law.
It also seems from Google’s language that it won’t comply with requests unless the person in the image is currently under 18. So, if you’re 30, you can’t apply to remove pictures of yourself at 15. That limits the tool’s usefulness in preventing abuse or harassment, but it presumably makes verification much easier: it’s far harder to prove how old you were in a given photo than to prove how old you are right now.
Google also stresses that removing an image from its search results does not, of course, remove it from the web. The company encourages those going through the application process to contact the site’s webmaster directly. But in cases where that proves unsuccessful, removing the image from Google’s index is certainly the next best thing.
In addition to these new removal options for images of minors, Google already offers other avenues for requesting the removal of specific types of harmful content. These include non-consensual explicit imagery, fake pornography, financial or medical information, and “doxxing” content such as home addresses and phone numbers.