
The Pixel 8 Pro’s videos get a whole lot brighter with video boost — if you use it right


It’s not the ‘aha’ moment that Night Sight was for still photos, but Video Boost is a promising glimpse of things to come.

Only the Pixel 8 Pro gets Video Boost, one of the AI camera features announced at its launch.
Photo by Vjeran Pavic / The Verge

When Google introduced Night Sight on the Pixel 3, it was a revelation. 

It was as if someone had literally turned on the lights in your low-light photos. Previously impossible shots became possible — no tripod or deer-in-the-headlights flash needed.

Five years later, taking photos in the dark is old hat; just about every phone up and down the price spectrum comes with some kind of night mode. Video, though, is a different story. Night modes for still photos capture multiple frames to create one brighter image, and it’s just not possible to copy and paste those mechanics over to video, which, by its nature, is already a series of images. The answer, as it so often is lately, is to call on AI.

When the Pixel 8 Pro launched this fall, Google announced a feature called Video Boost with Night Sight, which would arrive in a future software update. It uses AI to process your videos — bringing out more detail and enhancing color, which is especially helpful for low-light clips. There’s just one catch: this processing takes place in the cloud on Google’s servers, not on your phone.

As promised, Video Boost started arriving on devices a couple of weeks ago with December’s Pixel update, including my Pixel 8 Pro review unit. And it’s good! But it’s not quite the watershed moment that the original Night Sight was. That speaks both to how impressive Night Sight was when it debuted and to the particular challenges that video presents to a smartphone camera system.

Original video on the left, boosted video on the right. There’s much more detail and color in the dark pergola post-boost.

Video Boost works like this: first, and crucially, you need to have a Pixel 8 Pro, not a regular Pixel 8 — Google hasn’t responded to my question about why that is. You turn it on in your camera settings when you want to use it and then start recording your video. Once you’re done, the video needs to be backed up to your Google Photos account, either automatically or manually. Then you wait. And wait. And in some cases, keep waiting — Video Boost works on videos up to ten minutes long, but even a clip that’s just a couple of minutes in length can take hours to process.

Depending on the type of video you’re recording, that wait may or may not be worth it. Google’s support documentation says that it’s designed to let you “make videos on your Pixel phone in higher quality and with better lighting, colors, and details,” in any lighting. But the main thing that Video Boost is in service of is better low-light video — that’s what group product manager Isaac Reynolds tells me. “Think about it as Night Sight Video, because all of the tweaks to the other algorithms are all in pursuit of Night Sight.” 

All of the processes that make our videos look better in good lighting, like stabilization and tone mapping, stop working when you try to record video in very low light. Reynolds explains that even the kind of blur you get in low-light video is different. “OIS [optical image stabilization] can stabilize a frame, but only of a certain length.” Low-light video requires longer frames, and that’s a big challenge for stabilization. “When you start walking in low light, with frames that are that long you can get a particular kind of intraframe blur which is just the residual that the OIS can compensate for.” In other words, it’s hella complicated. 

This all helps explain what I’m seeing in my own Video Boost clips. In good lighting, I don’t see much of a difference. Some colors pop a little more, but I don’t see anything that would compel me to use it regularly when available light is plentiful. In extremely low light Video Boost can retrieve some color and detail that’s totally lost in a standard video clip. But it’s not nearly as dramatic as the difference between a regular photo and a Night Sight photo in the same conditions.

There’s a real sweet spot between these extremes, though, where I can see Video Boost really coming in handy. In one clip where I’m walking down a path at dusk into a dark pergola housing the Kobe Bell, there’s a noticeable improvement to the shadow detail and stabilization post-Boost. The more I used Video Boost in regular, medium-low indoor lighting, the more I saw the case for it. You start to see how washed out standard videos look in these conditions — like my son playing with trucks on the dining room floor. Turning on Video Boost restored some of the vibrancy that I forgot I was missing. 

Video Boost is limited to the Pixel 8 Pro’s main rear camera, and it records at either 4K (the default) or 1080p at 30fps. Using Video Boost results in two clips — an initial “preview” file that hasn’t been boosted and is immediately available to share, and eventually, the second “boosted” file. Under the hood, though, there’s a lot more going on. 

Reynolds explained to me that Video Boost uses an entirely different processing pipeline that holds on to a lot more of the captured image data that’s typically discarded when you’re recording a standard video file — sort of like the relationship between RAW and JPEG files. A temporary file holds this information on your device until it’s been sent to the cloud; after that, it’s deleted. That’s a good thing, because the temporary files can be massive — several gigabytes for longer clips. The final boosted videos, however, are much more reasonably sized — 513MB for a three-minute clip I recorded versus 6GB for the temporary file. 
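To put those file sizes in perspective, here’s a quick back-of-the-envelope sketch using the figures from the clip above (a roughly 6GB temporary file and a 513MB boosted result for a three-minute video); the bitrate math is my own rough arithmetic, not anything Google has published:

```python
# Rough math on the Video Boost file sizes mentioned in the article:
# a three-minute clip produced a ~6GB temporary capture file on-device
# and a ~513MB final boosted clip after cloud processing.
temp_bytes = 6 * 1024**3       # ~6 GB temporary file
boosted_bytes = 513 * 1024**2  # ~513 MB boosted clip
clip_seconds = 3 * 60          # three-minute clip

reduction = temp_bytes / boosted_bytes
temp_mbps = temp_bytes * 8 / clip_seconds / 1e6     # rough average bitrate
boosted_mbps = boosted_bytes * 8 / clip_seconds / 1e6

print(f"size reduction: ~{reduction:.0f}x")
print(f"temp bitrate: ~{temp_mbps:.0f} Mbps, boosted: ~{boosted_mbps:.0f} Mbps")
```

In other words, the temporary file the phone holds until upload carries roughly twelve times the data of the video you actually end up with, which is why Google deletes it as soon as the clip reaches the cloud.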

My initial reaction to Video Boost was that it seemed like a stopgap — a feature demo of something that needs the cloud to function right now but would move on-device in the future. Qualcomm showed off an on-device version of something similar just this fall, so that must be the endgame, right? Reynolds says that’s not how he thinks about it. “The things you can do in the cloud are always going to be more impressive than the things you can do on a phone.” 

Case in point: he says that right now, Pixel phones run various smaller, optimized versions of Google’s HDR Plus model on-device. But the full “parent” HDR Plus model that Google has been developing over the past decade for its Pixel phones is too big to realistically run on any phone. And on-device AI capabilities will improve over time, so it’s likely that some things that could only be done in the cloud will move onto our devices. But equally, what’s possible in the cloud will change, too. Reynolds says he thinks of the cloud as just “another component” of Tensor’s capabilities.

In that sense, Video Boost is a glimpse of the future — it’s just a future where the AI on your phone works hand in hand with the AI in the cloud. More functions will be handled by a combination of on- and off-device AI, and the distinction between what your phone can do and what a cloud server can do will fade into the background. It’s hardly the “aha” moment that Night Sight was, but it’s going to be a significant shift in how we think about our phone’s capabilities all the same.