
A new bill would force companies to check their algorithms for bias


US lawmakers have introduced a bill that would require large companies to audit machine learning-powered systems — like facial recognition or ad targeting algorithms — for bias. The Algorithmic Accountability Act is sponsored by Senators Cory Booker (D-NJ) and Ron Wyden (D-OR), with a House equivalent sponsored by Rep. Yvette Clarke (D-NY). If passed, it would ask the Federal Trade Commission to create rules for evaluating “highly sensitive” automated systems. Companies would have to assess whether the algorithms powering these tools are biased or discriminatory, as well as whether they pose a privacy or security risk to consumers.

The Algorithmic Accountability Act is aimed at major companies with access to large amounts of information. It would apply to companies that make over $50 million per year, hold information on at least 1 million people or devices, or primarily act as data brokers that buy and sell consumer data.

These companies would have to evaluate a broad range of algorithms — including anything that affects consumers’ legal rights, attempts to predict and analyze their behavior, involves large amounts of sensitive data, or “systematically monitors a large, publicly accessible physical place.” That would theoretically cover a huge swath of the tech economy, and if a report turns up major risks of discrimination, privacy problems, or other issues, the company is supposed to address them in a timely manner.

The bill is being introduced just a few weeks after the Department of Housing and Urban Development sued Facebook, alleging that its ad targeting system unfairly limits who sees housing ads. The sponsors mention this lawsuit in a press release, along with an Amazon AI recruiting tool that allegedly discriminated against women.

And the bill seems designed to cover countless other controversial AI tools — as well as the training data that can produce biased outcomes in the first place. A facial recognition algorithm trained mostly on white subjects, for example, can misidentify people of other races. (A separate group of senators introduced legislation specifically targeting facial recognition last month.)

In a statement, Wyden noted that “computers are increasingly involved in the most important decisions affecting Americans’ lives — whether or not someone can buy a home, get a job or even go to jail. But instead of eliminating bias, too often these algorithms depend on biased assumptions or data that can actually reinforce discrimination against women and people of color.”

A couple of local governments have made their own attempts at regulating automated decision-making. The New York City Council became the first US legislature to pass an algorithmic transparency bill in 2017, and Washington state held hearings for a similar measure in February.