
The Senate’s secret algorithms bill doesn’t actually fight secret algorithms


It’s a lot more targeted than it sounds


Photo by Tom Williams / CQ-Roll Call, Inc via Getty Images

Politicians sometimes exaggerate the laws they’re proposing. But when they start making up new sections of a bill from whole cloth, something has gone wrong. In the case of the Filter Bubble Transparency Act, it’s not just spin; it’s an example of how badly defined buzzwords can make it impossible to address the internet’s problems.

The Filter Bubble Transparency Act (FBTA) is sponsored by some of the Senate’s most prominent tech industry critics, including Sens. Mark Warner (D-VA) and John Thune (R-SD). Introduced last week, the bill is named after Eli Pariser’s 2011 book The Filter Bubble, which argues that companies like Facebook create digital echo chambers by optimizing content for what each person already engages with.

The FBTA aims to let people opt out of those echo chambers. Large companies would have to notify users if they’re delivering content — like search results or a news feed — based on personal information that the user didn’t explicitly provide. That could include a user’s search history, their location, or information about their device. Sites would also need to let users turn off this personalization, although the rules don’t apply to “user-supplied” data like search terms, saved preferences, or an explicitly entered geographical location.

Limiting personalization isn’t the same thing as ensuring transparency

The rules are supposed to offer users more options and make them more aware of how web platforms work. A spokesperson for Warner offered one example: if you look for “pizza delivery” on Google search, you’ll normally get results for nearby businesses based on your location data, a kind of personalization that the bill refers to as an “opaque algorithm.” Under the proposed rules, Google would need to provide a generic version that didn’t rely on that data, which the bill calls an “input-transparent algorithm.”

Limiting personalization sounds like a straightforward goal, but the FBTA’s sponsors have made it surprisingly hard to understand, starting with the term “opaque algorithm.” The phrase sort of makes sense in context. An algorithm (a word that broadly refers to flowchart-style rule sets) is considered opaque if it uses a certain kind of data that some people don’t realize they’re providing. It’s considered transparent if it doesn’t.
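To make the distinction concrete, here's a minimal, hypothetical Python sketch (my own illustration with made-up scores, not code from the bill or any real search engine). The only difference between the two functions is whether the ranking reads a signal the user never explicitly provided.

```python
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    relevance: float      # match against the user-supplied search terms
    distance_km: float    # distance from the user's *inferred* device location

def rank_input_transparent(results: list[Result]) -> list[Result]:
    # "Input-transparent" in the bill's framing: ranked only on data the user
    # explicitly supplied (here, how well each result matches the query).
    return sorted(results, key=lambda r: -r.relevance)

def rank_opaque(results: list[Result]) -> list[Result]:
    # "Opaque" in the bill's framing: the same ranking, blended with a signal
    # the user never typed in (location inferred from their device).
    return sorted(results, key=lambda r: -(r.relevance - 0.1 * r.distance_km))

results = [
    Result("Pizza place downtown", relevance=0.8, distance_km=1.2),
    Result("Famous pizza chain", relevance=0.9, distance_km=25.0),
]
print([r.title for r in rank_opaque(results)])             # nearby shop first
print([r.title for r in rank_input_transparent(results)])  # best text match first
```

Neither version is any more "transparent" to the person using it; the second one simply ignores inferred personal data.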

On a larger scale, though, these terms are so misleading that even the bill’s sponsors can’t keep things straight. The FBTA doesn’t make platforms explain exactly how their algorithms work. It doesn’t prevent them from using arcane and manipulative rules, as long as those rules aren’t built around certain kinds of personal data. And removing or disclosing a few factors in an algorithm doesn’t make the overall algorithm transparent. This bill isn’t aimed at systems like the “black box” algorithms used in criminal sentencing, for example, where transparency is a key issue.

Despite this, a press release repeatedly frames the bill as a fight against “secret algorithms,” rather than against micro-targeting or invasive data mining. Here’s its supposed summary of the FBTA’s rules:

Clearly notify [big web platform] users that their platform creates a filter bubble that uses secret algorithms (computer-generated filters) to determine the order or manner in which information is delivered to users; and

Provide [big web platform] users with the option of a filter bubble-free view of the information they provide. The bill would enable users to transition between a customized, filter bubble-generated version of information and a non-filter bubble version (for example, the “sparkle icon” option that is currently offered by Twitter that allows users to toggle between a personalized timeline and a purely chronological timeline).

If you’ve read the bill, this is baffling. For one thing, virtually all big recommendation and search systems are “secret algorithms” on some level, and the bill doesn’t ask companies to disclose their code or rule sets. For another, Twitter’s “sparkle icon” doesn’t just remove personalization; it removes algorithmic sorting in general. Sen. Marsha Blackburn (R-TN), another sponsor, explicitly claims this is part of the FBTA:

“When individuals log onto a website, they are not expecting the platform to have chosen for them what information is most important,” said Senator Blackburn. “Algorithms directly influence what content users see first, in turn shaping their worldview. This legislation would give consumers the choice to decide whether they want to use the algorithm or view content in the order it was posted.”

That’s just not true. Sen. Thune did float the idea of an “algorithm-free” Facebook and Twitter this summer. But this bill never mentions viewing content “in the order it was posted” — a fact I confirmed with Warner’s office. (Blackburn’s office didn’t return a request for clarification.)
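The gap between those two ideas is easy to see side by side. In another hypothetical sketch (again my own illustration, not Twitter’s actual code), the bill’s de-personalized feed is still an algorithmically ranked feed; a chronological timeline is something stricter.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    text: str
    posted_at: datetime
    engagement_score: float  # a non-personalized prediction of how the post performs

def depersonalized_feed(posts: list[Post]) -> list[Post]:
    # Roughly what the FBTA asks for: still ranked by an algorithm, just not
    # one fed by personal data the user didn't knowingly provide.
    return sorted(posts, key=lambda p: -p.engagement_score)

def chronological_feed(posts: list[Post]) -> list[Post]:
    # What a "reverse chronological" toggle does: no ranking model at all,
    # newest posts first.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)
```

The press release describes the second option; the bill only mandates something like the first.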

This confusion has carried over into press coverage of the bill. The Wall Street Journal says the FBTA would “require big online search engines and platforms to disclose that they are using algorithms to sort the information that users are requesting or are being encouraged to view.” Again, nothing in this bill requires companies to disclose the use of algorithms. They just have to disclose when those algorithms use personal information for customized results. And that makes sense because algorithms are a basic building block of web services. Search engines couldn’t exist without them.

The FBTA’s sponsors are using “algorithm” to mean “sorting program” and “bad, manipulative social media recommendation tool” and “social media personalization system.” In the process, they vastly overstate the bill’s goals.

Everyone seems confused about what this bill actually does

It’s not clear whether the lawmakers are intentionally exaggerating the bill’s scope or simply getting it wrong. The press release claims the bill will let consumers “control their own online experiences instead of being manipulated by Big Tech’s algorithms and analytics.” Co-sponsor Jerry Moran (R-KS) says it would make companies “offer certain products and services to consumers free of manipulation.”

But there’s lots of room for manipulation without hyper-personalized search or feed results. Even without targeting, nothing stops companies from delivering inflammatory content that encourages negative engagement, one of the biggest criticisms of Facebook and YouTube. The bill also allows personalization based on users’ friends lists, video channel subscriptions, or other knowingly provided preferences, which would allow for a pretty significant echo chamber. As for “analytics,” the bill doesn’t say anything about whether companies are allowed to mine personal data for purposes like secret consumer scores.

The proposal still raises interesting questions. If an “input-transparent” sorting system can’t incorporate a user’s search history, would platforms like YouTube have to turn off “watch next” recommendations, since your viewing history arguably includes the video you’re currently watching? Would Uber have to disclose if it charges higher fares when your phone battery is low? Companies use personalization in bizarre ways, and a bill requiring them to disclose those methods could be fascinating.

But those issues are hard to discuss when they’re cloaked in the blanket shorthand of “algorithms.” If Congress wants to help people understand the web better, members could start by actually explaining what they’re doing instead of scoring rhetorical points with buzzwords.