Google pitches for user trust with expanded privacy controls

The company also announced a host of new security features at its I/O event

Illustration by Alex Castro / The Verge

For Google, a company that built its reputation on organizing the world’s information, the latest sales pitch to users is that it will try to do more with less of it.

At its I/O 2022 developer conference on May 11th, the tech giant announced a range of privacy measures that it says will help users retain more control over how their data is used by Google applications and displayed to the world through search.

One new change introduced at the conference is the My Ad Center interface: a hub that will let users customize the types of ads they see by selecting from a range of topics they are interested in or opting to see fewer ads on a given topic.

Screenshot of the My Ad Center interface, via Google.

Google says that My Ad Center will help to give users control not just over how their data is used but also over how this affects their experience of the web.

In another announcement unveiled at the conference, Google said that users would be able to request that personal information such as email or address details be removed from search results through a new tool that will be accessible from a user’s Google profile page.

Perhaps unsurprisingly for a conference geared toward developers, some of Google’s most significant privacy announcements involved changing approaches to software engineering. The safety and security segment of the event, led by Jen Fitzpatrick, Google’s SVP for core systems and experiences, emphasized the concept of “protected computing”: a set of technologies that Google says represent a transformed approach to where and how data is processed.

In summary, protected computing means that more data will be processed on devices (e.g., Android phones) at the network edge, without being sent to Google’s cloud servers. And when user information is sent to Google’s servers, more of it will be anonymized through techniques like differential privacy.
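Google didn’t detail the exact mechanisms at I/O, but differential privacy generally works by adding calibrated random noise to aggregate statistics so that no individual user’s data can be confidently inferred from the result. A minimal sketch of the idea (the function name and parameters here are illustrative, not Google’s implementation) using the classic Laplace mechanism:

```python
import random

def dp_count(flags, epsilon=1.0):
    """Return a noisy count of True values.

    Adds Laplace noise with scale 1/epsilon (a count has sensitivity 1:
    adding or removing one user changes it by at most 1), so the output
    is epsilon-differentially private. Smaller epsilon = more privacy,
    more noise.
    """
    true_count = sum(1 for f in flags if f)
    # Difference of two Exp(epsilon) draws is Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```

Individually, each noisy answer hides whether any one user is in the data, but averaged over a large population the aggregate statistic stays accurate, which is what makes the technique attractive for telemetry.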

Fitzpatrick said that the changes were about justifying the trust that users put in Google to keep them secure.

“Protecting your privacy requires us to be rigorous in building products that are private by design,” she said.

The safety and security presentation included an acknowledgment that users’ expectations of privacy are changing and that the company needs to recognize and adapt to them. It’s notable that Google is increasingly trying to prove to users that it can keep at least some of their data out of the hands of the advertisers that bring in the vast majority of the company’s revenue.

And under the guiding statement, “secure by default, private by design,” Google is also pushing to boost user safety across its products by implementing additional security measures out of the box.

Security announcements made at the I/O event included a number of measures meant to increase user protections across a range of Google products. For one, a new account safety status icon will show a warning on a user’s profile across all Google apps when any security issues are identified, directing the user toward recommended actions to resolve them.

And the company will expand two-step verification for accounts by sending an “Is this you?” notification to phones when a user tries to sign in to a Google account elsewhere on the web.

Phishing protection will also be coming to the Google Workspace suite, with the Docs, Sheets, and Slides applications soon to display warning notifications about malicious links in documents.

Overall, Google’s safety announcements suggest a company that wants to be seen as centering users’ security concerns. At an I/O event full of new and creative uses of user data, it’s heartening to see that, on the face of things, privacy was by no means forgotten.