After accusations, Twitter will pay hackers to find biases in its automatic image crops


Twitter is holding a competition in hopes that hackers and researchers will be able to identify biases in its image cropping algorithm — and it’s going to be handing out cash prizes to winning teams (via Engadget). Twitter is hoping that giving teams access to its code and image cropping model will let them find ways that the algorithm could be harmful (such as cropping in a way that stereotypes or erases the image’s subject).

Those competing will have to submit a description of their findings and a dataset that can be run through the algorithm to demonstrate the issue. Twitter will then assign points based on what kinds of harms are found, how many people they could potentially affect, and more.

The winning team will be awarded $3,500, and there are separate $1,000 prizes for the most innovative and most generalizable findings. That amount has caused a bit of a stir on Twitter, with a few users saying it should have an extra zero. For context, Twitter’s normal bug bounty program would pay you $2,940 if you found a bug that let you perform actions for someone else (like retweeting a tweet or image) using cross-site scripting. Finding an OAuth issue that lets you take over someone’s Twitter account would net you $7,700.

Twitter has done its own research into its image-cropping algorithm before — in May, it published a paper investigating how the algorithm was biased, after accusations that its preview crops were racist. Twitter has mostly done away with algorithmically cropping previews since then, but the algorithm is still used on desktop, and a good cropping algorithm is a handy thing for a company like Twitter to have.

Opening up a competition lets Twitter get feedback from a much broader range of perspectives. For example, the Twitter team held a Space to discuss the competition, during which a team member mentioned getting questions about caste-based biases in the algorithm — something that may not be noticeable to software developers in California.

It’s also not just unintentional algorithmic bias Twitter is looking for. The rubric has point values for both intentional and unintentional harms. Twitter defines unintentional harms as crops that could result from a “well-intentioned” user posting a regular image on the platform, whereas intentional harms are problematic cropping behaviors that could be exploited by someone posting maliciously designed images.

Twitter says in its announcement blog post that the competition is separate from its bug bounty program — if you submit a report about algorithmic biases to Twitter outside of the competition, the company says your report will be closed and marked as not applicable. If you’re interested in joining, you can head over to the competition’s HackerOne page to see the rules, criteria, and more. Submissions are open until August 6th at 11:59PM PT, and the winners of the challenge will be announced at the Def Con AI Village on August 9th.

Comments

This "problem" is way overblown; like goddman this is a first world problem if I’ve ever seen one. This click on the image and see it with no crop! Problem solved!

BREAKING: Local man unaffected by issue claims issue absent

How could someone possibly be affected by a slightly off image crop?

By being the one cut off

So…? Click on the image and everyone will be shown!

Group A is correctly recognized 99% of the time and inconvenienced very slightly. Group B, through no fault of their own, is only recognized correctly 30% of the time; a full 70% of their uploads and tags require a manual edit. It doesn’t even matter whether or not you care about basic human rights (although that would really be lovely too): purely from the platform’s bottom line, it’s bad business to hamstring, even unintentionally, a potentially significant portion of your market and user base.

Just… no. Go read the paper they published themselves on the issue. The error margins are minuscule: all under 10%, and a lot under 5%. It’s just not possible to make a perfect algorithm when it comes to finding the focus point of an image. If you have multiple people in an image, there will always be one that the computer thinks is more "important" because they have more facial features or other patterns for the algorithm to recognize; just having a tattoo will skew the weights enough to change the crop.

Computers rely on contrast to find patterns and recognize objects. So, for example, when you deal with people of color, there is often less contrast between their bodies and the background (when the background is dark, which is the most common occurrence according to the paper) and less contrast in their facial features. For the same reason, white women actually seem to have an advantage in the algorithm: on top of being white (making it easier for the computer to find facial features), makeup (which, according to the paper, is worn by enough women in Twitter images to be statistically significant) adds to the facial features, increasing their "importance".

You can improve the algorithm by giving it more diversified images to train on, but it will never be perfect, and a 5% error rate is already incredibly good considering that it deals with people.
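
For anyone curious what contrast-driven "focus point" cropping actually looks like, here’s a minimal sketch in Python. To be clear, this is not Twitter’s model: it uses an off-the-shelf spectral-residual saliency detector from opencv-contrib-python, and the filename and crop size are made up for illustration. It crops around the single most salient pixel, and detectors like this tend to light up on high-contrast regions, which is exactly the mechanism described above.

    # Toy saliency-based crop; illustrative only, not Twitter's algorithm.
    import cv2

    def saliency_crop(image, crop_w=400, crop_h=400):
        # Spectral-residual saliency: highlights high-contrast regions.
        # Requires opencv-contrib-python for the cv2.saliency module.
        detector = cv2.saliency.StaticSaliencySpectralResidual_create()
        ok, saliency_map = detector.computeSaliency(image)
        if not ok:
            raise RuntimeError("saliency computation failed")
        # Take the single most salient pixel as the crop center. A real
        # cropper would aggregate over regions, detect faces, honor
        # aspect ratios, and so on.
        _, _, _, (cx, cy) = cv2.minMaxLoc(saliency_map)
        h, w = image.shape[:2]
        # Clamp the crop window to the image bounds.
        x0 = min(max(cx - crop_w // 2, 0), max(w - crop_w, 0))
        y0 = min(max(cy - crop_h // 2, 0), max(h - crop_h, 0))
        return image[y0:y0 + crop_h, x0:x0 + crop_w]

    img = cv2.imread("photo.jpg")  # hypothetical input image
    cv2.imwrite("cropped.jpg", saliency_crop(img))

A quick way to see the contrast argument for yourself: run images of subjects against dark backgrounds through something like this and compare where the crop lands.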

I, too, can randomly make up numbers!

So why don’t you go ahead and solve poverty, world peace, etc.? I can only imagine how you spend all your time solving the profound issues of our society.

Those dollar amounts seem like they should be missing a zero…

To the people screaming "Twitter cropping is racist!!!", go read the actual paper first.

[…] chose to crop to white women over black women 7 percent of the time, and white men over black men 2 percent of the time, with an overall 4 percent preference for white individuals.

The paper shows that the issue is down to how the Twitter algorithm (like every other machine vision algorithm) uses contrast to find features and recognize objects. Because there is a prevalence of dark backgrounds in Twitter images, black people tend to have on average less contrast between their bodies and the background, and dark skin makes it harder for the algorithm to find facial features, again because of the reduced contrast.

It found that the algorithm favored women 8 percent of the time, but didn’t seem to crop them in a perv-y manner: in the approximately 3 percent of cases where it didn’t crop to a woman’s face, it was focused on things like a sports jersey’s number.

The same contrast bias actually seems to favor women: because women are likelier to wear jewelry or makeup, the algorithm thinks they are of higher importance and thus gives them focus.

In both the racial and gender cases, we are still talking about an 8% error rate at worst. It’s not ideal, but when you remember that we are talking about an algorithm that needs to find the focus point of an image containing people, it’s actually incredible that it works so well.

$3,500 is a pretty lame amount for solving a problem for such a large company. Twitter can do better than that.
