Samsung responds to fake Moon controversy

A blog post provides the most detailed English-language account of how Samsung’s Moon-photography process works, but won’t satisfy everyone.

A Samsung smartphone identified a blurry photo of the Moon and added detail to create the above image.
Image: u/ibreakphotos

Samsung has published an English-language blog post explaining the techniques used by its phones to photograph the Moon. The post’s content isn’t exactly new — it appears to be a lightly edited translation of an article posted in Korean last year — and doesn’t offer much new detail on the process. But, because it’s an official translation, we can more closely scrutinize its explanation of what Samsung’s image processing technology is doing.

The explanation is a response to a viral Reddit post that showed in stark terms just how much extra detail Samsung’s camera software adds when taking a photo of what appears to be the Moon. These criticisms aren’t new (Input published a lengthy piece about Samsung’s Moon photography in 2021), but the simplicity of the test brought the issue greater attention: Reddit user ibreakphotos simply snapped a photo of an artificially blurred image of the Moon using a Samsung phone, and the phone added extra detail that didn’t exist in the original. You can see the difference for yourself below:

Samsung’s blog post today explains that its “Scene Optimizer” feature (which has supported Moon photography since the Galaxy S21 series) combines several techniques to generate better photos of the Moon. To start with, the company’s Super Resolution feature kicks in at zoom levels of 25x and higher, and uses multi-frame processing to combine over 10 images to reduce noise and enhance clarity. It also optimizes its exposure so the Moon doesn’t appear blown-out in the dark sky, and uses a “Zoom Lock” feature that combines optical and digital image stabilization to reduce image blur.
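
Samsung hasn’t published any of this code, but multi-frame noise reduction itself is a well-understood technique: capture a burst, align the frames, and merge them so that random sensor noise averages out. Here is a minimal Python sketch of the merge step, assuming the frames are already aligned; the function name and the choice of a median merge are illustrative assumptions, not Samsung’s actual pipeline.

```python
import numpy as np

def multi_frame_merge(frames: list[np.ndarray]) -> np.ndarray:
    """Merge a burst of pre-aligned frames to suppress sensor noise.

    Assumes each frame is an HxWx3 float array in [0, 1] that has
    already been registered (aligned) to a common reference frame.
    """
    stack = np.stack(frames, axis=0)  # shape (N, H, W, 3)
    # A median is more robust than a mean to outlier frames,
    # e.g. ones ruined by handshake.
    return np.median(stack, axis=0)

# Toy usage: ten noisy captures of the same scene merge into a
# visibly cleaner image, which is the core idea of burst photography.
rng = np.random.default_rng(0)
scene = rng.random((64, 64, 3))
burst = [np.clip(scene + rng.normal(0, 0.1, scene.shape), 0, 1)
         for _ in range(10)]
clean = multi_frame_merge(burst)
```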

Actually identifying the Moon in the first place is done with an “AI deep learning model” that’s been “built based on a variety of moon shapes and details, from full through to crescent moons, and is based on images taken from our view from the Earth.”
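
Samsung doesn’t describe the model’s architecture, only its training data. For a sense of the kind of component it’s talking about, a classifier that decides whether a crop of the viewfinder contains the Moon, here is a deliberately tiny PyTorch sketch; every layer, size, and name below is a hypothetical stand-in for illustration, not Samsung’s network.

```python
import torch
import torch.nn as nn

class MoonDetector(nn.Module):
    """Hypothetical binary classifier: does this crop contain the Moon?

    Samsung describes its detector only as an "AI deep learning model"
    trained on moon shapes and phases, so this tiny CNN is a stand-in,
    not the production architecture.
    """
    def __init__(self) -> None:
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))  # P(crop contains the Moon)

detector = MoonDetector()
crop = torch.rand(1, 3, 128, 128)  # a 128x128 RGB crop of the preview
p_moon = detector(crop).item()     # would gate the Moon-specific processing
```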

But the key step, and the one that’s generated all the controversy, appears to be the use of an under-explained “AI detail enhancement engine.” Here’s how Samsung’s blog post describes the process:

“After Multi-frame Processing has taken place, Galaxy camera further harnesses Scene Optimizer’s deep-learning-based AI detail enhancement engine to effectively eliminate remaining noise and enhance the image details even further.”

And here’s Samsung’s flow chart of the process, which describes the Detail Enhancement Engine as a convolutional neural network (a type of machine learning model commonly used to process imagery) that ultimately compares its detail-enhanced output against a “Reference with high resolution.”

Samsung’s flow chart shows how the Moon is identified, and then its “Detail Enhancement Engine” gets to work.
Image: Samsung
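
The chart’s description, a CNN whose output is compared against a high-resolution reference, maps onto a standard supervised image-restoration setup: train a network to push degraded moon crops toward sharp reference images. The sketch below shows one such training step in PyTorch; the residual architecture, the L1 loss, and the random tensors standing in for real moon crops are all assumptions, not Samsung’s engine.

```python
import torch
import torch.nn as nn

class DetailEnhancer(nn.Module):
    """Illustrative residual CNN in the spirit of Samsung's flow chart."""
    def __init__(self) -> None:
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # predict a residual "detail" layer

model = DetailEnhancer()
loss_fn = nn.L1Loss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One hypothetical training step: push the enhanced output toward a
# "Reference with high resolution", as the flow chart labels it.
degraded = torch.rand(8, 3, 64, 64)   # stand-in for blurry moon crops
reference = torch.rand(8, 3, 64, 64)  # stand-in for sharp references
opt.zero_grad()
loss = loss_fn(model(degraded), reference)
loss.backward()
opt.step()
```

A network trained this way will reproduce plausible lunar texture wherever its input resembles the training data, which is exactly the behavior the Reddit tests surfaced.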

It seems to be this stage that adds detail that wasn’t present when the photo was originally taken, which could explain why ibreakphotos’ follow-up test — inserting a plain gray square onto a blurry photo of the Moon — resulted in the blank square being given a Moon-like texture by Samsung’s camera software.

While this new blog post offers more detail in English than Samsung has previously shared, it’s unlikely to satisfy those who see any software capable of generating a realistic image of the Moon from a blurry photo as essentially faking the whole thing. And when these AI-powered capabilities are used to advertise phones, Samsung risks misleading customers about what its phones’ zoom features are actually capable of.

But, as my colleague Allison wrote yesterday, Samsung’s camera software isn’t a million miles away from what smartphone computational photography has been doing for years to get increasingly crisp and vibrant photographs out of relatively small image sensors. “Year after year, smartphone cameras go a step further, trying to make smarter guesses about the scene you’re photographing and how you want it to look,” Allison wrote. “These things all happen in the background, and generally, we like them.”

Samsung’s blog post ends with a telling line: “Samsung continues to improve Scene Optimizer to reduce any potential confusion that may occur between the act of taking a picture of the real moon and *an image of the moon*.” (Our emphasis.)

On one level, Samsung is essentially saying: “We don’t want to get fooled by any more creative Redditors who take pictures of images of the Moon that our camera mistakes for the Moon itself.” But on another, the company is also highlighting just how much computational work goes into producing these pictures, and will continue to go into them in the future. In other words, we’re left asking the same question: what is a photograph, anyway?