Twitter is using machine learning to crop photos to the most interesting part

The lure of machine learning isn’t always about big new features; often, what it does best are small tweaks that subtly improve user experience. So it is with Twitter’s use of neural networks to automatically crop picture previews to their most interesting part.

The company’s been working on this tool for a while, but described its methods in detail in a blog post yesterday. It’s an interesting little read, with ML researcher Lucas Theis and ML lead Zehan Wang explaining how they started out using facial recognition to crop images to faces, but found that this method didn’t work for pictures of scenery, objects, and, most importantly, cats.

Their solution was “cropping using saliency” (saliency here meaning whatever’s most interesting in a picture — faces or not). To define this, they used data from academic eye-tracking studies, which record which areas of an image people look at first. “This data can be used to train neural networks and other algorithms to predict what people might want to look at,” write Theis and Wang.
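The blog post doesn’t share Twitter’s training pipeline, but the general idea of turning eye-tracking data into a training target is straightforward: recorded fixation points are blurred into a dense “saliency map,” which a network then learns to predict from the raw image. A minimal sketch of that target-building step, with made-up fixation coordinates (the function name and parameters are illustrative, not Twitter’s):

```python
import numpy as np

def fixation_saliency_map(fixations, height, width, sigma=10.0):
    """Turn eye-tracking fixation points into a dense saliency map.

    Each fixation (row, col) is spread out with a Gaussian so nearby
    pixels also count as salient; the result is normalized to [0, 1]
    and could serve as a regression target for a neural network.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    sal = np.zeros((height, width))
    for (fy, fx) in fixations:
        sal += np.exp(-((ys - fy) ** 2 + (xs - fx) ** 2) / (2 * sigma ** 2))
    if sal.max() > 0:
        sal /= sal.max()  # scale so the most-fixated pixel is 1.0
    return sal

# Hypothetical fixations clustered around a face at (30, 40) in a 100x100 image
fixations = [(28, 38), (30, 40), (32, 42)]
target = fixation_saliency_map(fixations, 100, 100)
print(target.shape)  # -> (100, 100)
```

In a real pipeline the map would be built per image from many viewers’ fixations, and the network trained to regress it.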

More examples of before and after using Twitter’s ML-powered auto-cropper.
Image: Twitter

Once they’d trained a neural network to identify these areas, they needed to optimize it to work in real time on the site. Luckily for them, the cropping needed for a photo preview is pretty coarse — you’re only narrowing an image down to maybe its most interesting third, not zeroing in on specifics. That meant Twitter could swap in a much smaller, simpler network using a technique called “knowledge distillation,” in which a compact model is trained to mimic the predictions of a larger one.
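Knowledge distillation in its simplest form looks like ordinary training, except the targets are the big model’s outputs rather than human labels. A toy sketch of that idea — the “teacher” here is just a random wide network standing in for Twitter’s full saliency model, and the “student” is a tiny linear model fit to its scores:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Teacher": stands in for a big, slow saliency network that is
# already trained; for distillation we only need its outputs.
W1 = rng.normal(size=(16, 4))
W2 = rng.normal(size=(1, 16))
def teacher(x):                      # x: (n, 4) features per image region
    return np.tanh(x @ W1.T) @ W2.T  # (n, 1) saliency scores

# "Student": a much smaller model trained to match the teacher's
# predictions -- the essence of knowledge distillation.
w = np.zeros((4, 1))
X = rng.normal(size=(512, 4))
y = teacher(X)                       # soft targets from the teacher
for _ in range(500):                 # plain gradient descent on MSE
    grad = 2 * X.T @ (X @ w - y) / len(X)
    w -= 0.05 * grad

mse = float(np.mean((X @ w - y) ** 2))
print("distillation MSE:", round(mse, 3))
```

The student won’t match the teacher perfectly, but for a coarse job like preview cropping a cheap approximation is good enough — which is exactly the trade Twitter describes.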

The end result was a neural network ten times faster than its original design. “This lets us perform saliency detection on all images as soon as they are uploaded and crop them in real-time,” write Theis and Wang.
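Once a fast saliency map is available, picking the crop itself is simple: score every candidate window by the saliency it contains and keep the best one. A brute-force illustration of that final step (Twitter’s actual crop-selection logic isn’t described in the post, so this is only a sketch):

```python
import numpy as np

def best_crop(saliency, crop_h, crop_w):
    """Slide a crop_h x crop_w window over the saliency map and return
    the (top, left) corner whose window captures the most saliency."""
    H, W = saliency.shape
    best_score, best_pos = -1.0, (0, 0)
    for top in range(H - crop_h + 1):
        for left in range(W - crop_w + 1):
            score = saliency[top:top + crop_h, left:left + crop_w].sum()
            if score > best_score:
                best_score, best_pos = score, (top, left)
    return best_pos

# Toy map: all the saliency (say, a face) sits in the lower-right corner
sal = np.zeros((6, 9))
sal[4:6, 6:9] = 1.0
print(best_crop(sal, 3, 3))  # -> (3, 6)
```

A production version would use an integral image (summed-area table) so each window score is O(1), but the principle is the same: the preview crop is the window the saliency model finds most interesting.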

This new feature is currently being rolled out to all users on desktop and in the iOS and Android apps, says the company. So the next time you see a photo preview on Twitter that invites you to click, remember to thank a neural network.


The left wing crop is better, IMO. The right crop has more wing, but the point of the photo isn’t to show a big ol’ wing.

Then she should have pointed her camera further up

Shame there can’t be "focus" metadata so that apps know what you tapped on/intended to be the focus of the photo.

True, but no one said the machine will have taste.

Good thing all those "open for a surprise" tweets came and went, this pretty much voids em.

Wonder if it’ll be a tits or ass bot.

Hard to get both in one shot though

I know the ‘before’ pictures are meant to showcase how good this feature could be, but what idiots would frame pictures that way in the first place? I guess if you took a vertical photo and it’s being cropped to a horizontal photo (why god?!) then this would be useful, but otherwise it should be up to the photographer to make these decisions. Even then, it should be up to the photographer to decide how best to crop. Maybe this could be a good starting point for the photographer to make that decision.

That’s just the auto-crop thumbnail you see when you’re scrolling through the feed. User can still click on it to see the full version.

This all seems well and good, but why not just let users crop them themselves? Or at least give the option.

Yeah but, surely you’re more likely to open an image if you see a crop that interests you… that’s the point of this whole thing anyway, it’s just giving the user an option.
