How the Pixel's software helped make Google's best camera yet

Why Google thinks computational photography is the future

The verdict is in on Google's impressive new Pixel and Pixel XL phones, and one of the bright spots is the camera. We have an in-depth comparison between the Pixel, the Galaxy S7 Edge, and the iPhone 7 available for your perusal here, and in our full review Dieter Bohn says "if you wanted to agree with Google and call this the best smartphone camera, I wouldn't argue with you."

"The results on the Pixel are very, very good," says Dieter. "I put it in the same ballpark as the iPhone 7 and the Galaxy S7 in most situations, which is not something I expected to say going in."

Clearly, this is by far the most competitive Google has ever been in mobile photography. But on paper, the Pixel phones don't have cutting-edge camera hardware, relying on an f/2.0 lens without optical image stabilization. Instead, in typical Google fashion, the company has turned to complex software smarts to power the Pixel camera. I spoke with Marc Levoy, a renowned computer graphics researcher who now leads a computational photography team at Google Research, about how software helps make the Pixel camera as good as it is.

"I can't think of any reason to switch HDR+ off."

Levoy's team has worked on projects as diverse as the 360-degree Jump camera rig for VR and burst mode photography for Google Glass. On the Pixel, the most prominent place you'll see its work is in the HDR+ mode that has been deployed on Nexus devices over the past few years. Apple popularized mobile HDR, or high dynamic range photography, back in 2010 with the iPhone 4, but Google's approach differs dramatically in both implementation and technique.

For one thing, you're supposed to leave it on all the time, and it's switched on by default. "I never switch it off," says Levoy. "I can't think of any reason to switch it off." You can, of course, and there's another slightly higher quality mode called HDR On that works similarly to previous Nexus phones, which is to say slowly. But for general photography, Google thinks you should be using HDR+ for each shot.

Read more: Google Pixel and Pixel XL review

This no-compromise approach to HDR photography has partly been made possible by new hardware. The Hexagon digital signal processor in Qualcomm's Snapdragon 821 chip gives Google the bandwidth to capture RAW imagery with zero shutter lag from a continuous stream that starts as soon as you open the app. "The moment you press the shutter it's not actually taking a shot — it already took the shot," says Levoy. "It took lots of shots! What happens when you press the shutter button is it just marks the time when you pressed it, uses the images it's already captured, and combines them together."
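To make the zero-shutter-lag idea concrete, here's a rough Python sketch of the principle: the camera streams frames into a fixed-size ring buffer the whole time the app is open, and pressing the shutter just timestamps the press and hands over frames that were already captured. The class, buffer depth, and timing here are illustrative assumptions, not Google's actual pipeline.

```python
import time
from collections import deque

class ZeroShutterLagBuffer:
    """Illustrative sketch of zero-shutter-lag capture, not Google's HDR+ code."""

    def __init__(self, depth=9):
        # Fixed-size buffer: the oldest frame falls off as each new one arrives.
        self.frames = deque(maxlen=depth)

    def on_new_frame(self, raw_frame):
        # Called for every frame the sensor streams while the camera app is open.
        self.frames.append((time.monotonic(), raw_frame))

    def on_shutter_press(self):
        # No new capture happens here: record when the button was pressed and
        # hand the frames that were already captured to the merge step.
        press_time = time.monotonic()
        burst = [frame for timestamp, frame in self.frames if timestamp <= press_time]
        return press_time, burst
```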

It's a major usability improvement on the HDR+ mode in last year's Nexus 6P and 5X. "What used to happen last year is you'd press the shutter button and you'd get this little circle going around while it captured the images you need for the burst; now it's already captured those," says Levoy. "And that's big, because that means that you can capture the moment you want."

Though Google has certainly made massive strides in speed, based on our testing of the Pixel we're not sure we'd agree that we'd never want to turn HDR+ off. It does generally produce great results, but there has been the odd image like this that reminds us a little too much of mid-2000s Flickr shots with overzealous HDR processing — check the unusual colors in the sky around the edge of the building. These examples are rare, however, and Google's atypical approach manages to avoid many of the pitfalls of conventional HDR imagery on phones.

The traditional way to produce an HDR image is to bracket: you take the same shot multiple times, exposing for different parts of the scene, which lets you merge the shots together to create a final photograph where nothing is too blown-out or noisy. Google's method is very different — HDR+ also takes multiple images in rapid succession, but they're all underexposed. This preserves highlights, but what about the noise in the shadows? Just leave it to math.

"Mathematically speaking, take a picture of a shadowed area — it's got the right color, it's just very noisy because not many photons landed in those pixels," says Levoy. "But the way the mathematics works, if I take nine shots, the noise will go down by a factor of three — by the square root of the number of shots that I take. And so just taking more shots will make that shot look fine. Maybe it's still dark, maybe I want to boost it with tone mapping, but it won't be noisy." Why take this approach? It makes it easier to align the shots without leaving artifacts of the merge, according to Levoy. "One of the design principles we wanted to adhere to was no. ghosts. ever." he says, pausing between each word for emphasis. "Every shot looks the same except for object motion. Nothing is blown out in one shot and not in the other, nothing is noisier in one shot and not in the other. That makes alignment really robust."

Underexposing each shot produces better low-light results, counterintuitively

Google also claims that, counterintuitively, underexposing each HDR shot actually frees the camera up to produce better low-light results. "Because we can denoise very well by taking multiple images and aligning them, we can afford to keep the colors saturated in low light," says Levoy. "Most other manufacturers don't trust their colors in low light, and so they desaturate, and you'll see that very clearly on a lot of phones — the colors will be muted in low light, and our colors will not be as muted." But the aim isn't to get rid of noise entirely at the expense of detail; Levoy says "we like preserving texture, and we're willing to accept a little bit of noise in order to preserve texture."

As Levoy alludes to, mobile image processing is a matter of taste. Some people will like the Pixel's results, others may not. But if you're the kind of person who follows phone announcements and scours spec sheets, you'll probably wonder whether the Pixel's lack of optical image stabilization sets it back. Not so, says Levoy. "HDR+ needs that less than other techniques because we don't have to take a single long exposure, we can take a number of shorter exposures and merge them... it's less important to have optical image stabilization if you're taking shorter exposures. We have had it in some years and not in other years. The decisions are complicated — they have to do with the build materials and other things that we're trying to optimize on the platform."
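The exposure trade-off Levoy describes can be put in rough numbers. This is a back-of-the-envelope sketch with made-up figures, not Pixel specifications: nine short handheld frames gather about as much light as one long exposure, while each frame is brief enough that handshake blur stays small even without optical stabilization.

```python
# Back-of-the-envelope comparison; exposure times and frame count are illustrative.
long_exposure_s = 1 / 4    # a single long exposure a stabilized camera might attempt
short_exposure_s = 1 / 36  # one frame of a handheld burst
num_frames = 9

total_burst_light_s = num_frames * short_exposure_s
print(f"light gathered by burst:  {total_burst_light_s:.3f} s")   # 0.250 s
print(f"light gathered by single: {long_exposure_s:.3f} s")       # 0.250 s

# Handshake blur grows with exposure time, so each short frame is about 9x less
# blurred than the single long exposure; alignment then absorbs the small shifts
# between frames before they're merged.
print(f"per-frame blur reduction: {long_exposure_s / short_exposure_s:.1f}x")
```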

"We definitely want to take over more of the camera stack."

The Pixel phones are the first to be fully designed by Google, which means such decisions can be made with a more holistic view toward the final product. "There's now this hardware org headed by Rick Osterloh, and one of the goals of that was to pivot to a more premium experience for our phones and also to have more vertical integration," says Levoy. "[Our team is] a part of that effort, so we definitely want to take over more of the camera stack."

What might that involve in the future? "The notion of a software-defined camera or computational photography camera is a very promising direction and I think we're just beginning to scratch the surface," says Levoy, citing experimental research he's conducted into extreme low-light photography. "I think the excitement is actually just starting in this area, as we move away from single-shot hardware-dominated photography to this new area of software-defined computational photography."

Read more: Google Pixel takes on the iPhone 7 and Galaxy S7 Edge in a smartphone camera shootout

