Today at its Pixel Fall Launch event, after many pre-announcements and leaks, Google finally announced all the details of the Pixel 6 and Pixel 6 Pro. Some of the biggest changes to these new flagship phones are the updated camera modules. Pixel phones were long the champions of smartphone photography, but Google has been resting on its laurels for a while, using the same 12.2-megapixel Sony IMX363 sensor from the Pixel 3 all the way through the Pixel 5 and 5A.
Now, the main cameras of the Pixel 6 and 6 Pro house a 50-megapixel sensor that is larger than its predecessors and bins images down to a 12.5-megapixel output. Google claims the new 1/1.31-inch sensor and f/1.85 aperture capture 150 percent more light than the Pixel 5's camera. Following the iPhone playbook, both Pixels feature ultrawide lenses with 12-megapixel sensors, while the 6 Pro adds a third telephoto lens with 4x optical zoom and a 48-megapixel sensor. On the front, the Pixel 6's selfie cam is 8 megapixels with an 84-degree field of view, while the 6 Pro's is 11.1 megapixels with a 94-degree field of view for easier group selfies.
The rear camera setup is quite the departure from prior Pixels, where the commodity hardware of Pixel cameras allowed Google to focus squarely on software optimization. Some Pixel iterations added or removed an extra lens or two, but each relied mostly on computational features.
Those software developments have brought new features to the Pixel 6 and 6 Pro as well. Google claims its reengineered Portrait Mode, a feature it calls Real Tone, can better render the skin tones of wider, more diverse groups of people. Google worked with photographers and cinematographers of color to build more diverse portraits into the image dataset behind its camera models. Google says it even corrected some aberrations that disproportionately affect how darker skin is rendered, such as implementing a new algorithm to decrease stray light, which washes out dark skin. Google says it is dedicated to building a more equitable experience across its camera and imaging products, which will hopefully be a welcome departure from past omissions in its algorithms.
What Google calls Face Unblur promises to use the multiple images recorded with each shutter press to keep faces sharp, even when photographing movement. It begins this process before a picture is taken by preparing the second camera to shoot with a faster shutter speed once it sees a blurry face in the frame. It then matches the pixels and combines the images for sharper faces.
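The fusion step described above — keeping whichever frame renders each pixel most sharply — can be loosely sketched in plain Python. Everything here is an illustrative assumption, not Google's actual pipeline: the function names are made up, the frames are tiny pre-aligned grayscale grids, and the sharpness proxy is a crude local gradient rather than the face-aware matching Google describes.

```python
# Toy multi-frame fusion sketch (illustrative only, NOT Google's pipeline).
# Frames are 2D lists of brightness values, assumed already aligned.

def local_sharpness(frame, x, y):
    """Crude sharpness proxy: absolute horizontal + vertical gradient."""
    h, w = len(frame), len(frame[0])
    gx = abs(frame[y][min(x + 1, w - 1)] - frame[y][x])
    gy = abs(frame[min(y + 1, h - 1)][x] - frame[y][x])
    return gx + gy

def fuse_sharpest(frame_a, frame_b):
    """Per pixel, keep the value from whichever frame has more local detail."""
    h, w = len(frame_a), len(frame_a[0])
    return [
        [
            frame_a[y][x]
            if local_sharpness(frame_a, x, y) >= local_sharpness(frame_b, x, y)
            else frame_b[y][x]
            for x in range(w)
        ]
        for y in range(h)
    ]
```

Given a motion-blurred frame (flat, low gradients) and a fast-shutter frame with a crisp edge, the fused result inherits the edge pixels from the sharper frame.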
Motion Mode is designed for capturing action shots and long exposures for aesthetics like car lights streaking across a nightscape, or panning with a moving subject to blur the background and convey a sense of speed. Google is using machine learning to detect the content of the frame and blur the background behind a subject in motion, or to keep the rest of the frame sharp while you hold still so that only key elements blur. It is once again using multiple shutter speeds from the different camera modules and combining the images. Google showed off examples of a waterfall blurring into silky wisps and a subway train blurring behind a static person on the platform. These are shots that conventionally require a tripod or other specialized photographic equipment, though Google promises you can do this handheld with the Pixel 6.
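Conceptually, the effect comes from compositing: averaging a burst of frames produces the streaky motion blur, while the subject is taken from a single sharp frame. A minimal sketch of that idea, under big simplifying assumptions — the frames are pre-aligned grayscale grids and the subject mask is already given (on the phone, machine learning derives it):

```python
# Toy Motion Mode composite (illustrative assumption, not Google's method).
# frames: list of aligned 2D brightness grids; subject_mask: 2D booleans.

def motion_mode(frames, subject_mask, sharp_index=0):
    """Average frames for motion blur, keeping subject pixels from one
    sharp frame so the subject stays crisp against the blurred scene."""
    h, w = len(frames[0]), len(frames[0][0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            if subject_mask[y][x]:
                row.append(frames[sharp_index][y][x])  # crisp subject pixel
            else:
                row.append(sum(f[y][x] for f in frames) / len(frames))  # blur
        out.append(row)
    return out
```

Pixels inside the mask come straight from the chosen frame; everything else is the burst average, which smears anything that moved between frames.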
Lastly, Magic Eraser will attempt to remove distracting objects and photo bombers in Google Photos. After a picture is taken, Google Photos may make suggestions to automatically remove undesirable subjects from the image. Also, users can review the shot and select distracting objects in the frame to be removed. Aside from photobombers, other examples given by Google were location scouting and removing obtrusive content from a background or foreground for a cleaned look. Since it is baked into Google Photos for the Pixel, Google says the eraser tool can be used on photos taken now or photos from years ago.
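Under the hood, object removal is an inpainting problem: the selected region becomes a mask, and the masked pixels are replaced with plausible content from the rest of the image. As a deliberately crude sketch (real inpainting uses neighborhood structure or learned models, not a global average; this is an assumption for illustration only), masked pixels could be filled like this:

```python
# Toy inpainting sketch (illustrative only): fill masked pixels with the
# average of all unmasked pixels. image: 2D brightness grid; mask: 2D bools.

def erase_region(image, mask):
    """Replace every masked pixel with the mean of the unmasked pixels."""
    h, w = len(image), len(image[0])
    visible = [image[y][x] for y in range(h) for x in range(w) if not mask[y][x]]
    fill = sum(visible) / len(visible)
    return [
        [fill if mask[y][x] else image[y][x] for x in range(w)]
        for y in range(h)
    ]
```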
Magic Eraser sounds a bit like the long-abandoned chain-link fence concept, now resurrected in a more achievable form. Google showed off automatic object removal at Google I/O 2017, six months before the Pixel 2 came out, but it never made it to a device. It was seemingly forgotten and quietly left behind by Google. Though it has since been a case study for Google’s software-first mentality, it looks like a version of it is alive now in the Pixel 6 and Google Photos.
Google also partnered with Snap to create Quick Tap to Snap on the Pixel 6: double-tap the back of the phone to go directly into Snapchat's camera. Google will also get exclusive AR filters and transitions from Snap. This partnership may indicate stronger attention to Android from Snapchat, which has historically offered a subpar experience for non-iOS users; Snapchat will now get native support for the Google camera app.