
Apple is trying to turn the iPhone into a DSLR using artificial intelligence


Bokeh for the soul



When Apple unveiled the iPhone 7 yesterday, it made sure to play up the camera. After all, the company has a small army working on the iPhone's ability to take photos, and the camera is often touted as one of the device's most cherished features, one that has kept Apple's smartphone ahead of the competition. Yet in recent years, rivals like Samsung have caught up to Apple's imaging lead.

The newest Apple devices, the iPhone 7 and iPhone 7 Plus, are naturally more capable in the photo department than their predecessors. But the company is also stepping up its game with what it's calling a machine learning-enhanced image signal processor (ISP). Marketing chief Phil Schiller says this AI-powered ISP performs as many as 100 billion operations in just 25 milliseconds. That takes some unpacking and demystifying, and we should start with the image Apple used to promote the iPhone event. The invite said "See you on the 7th," accompanied by an artful arrangement of colorful dots blurred out using a popular camera technique.
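For a sense of scale, a quick back-of-envelope conversion of Schiller's figures (taken at face value) into sustained throughput:

```python
# Back-of-envelope math, taking Schiller's figures at face value:
# 100 billion operations in a 25-millisecond processing window.
operations = 100e9   # 100 billion operations
window_s = 25e-3     # 25 milliseconds, in seconds

print(f"{operations / window_s:.1e} operations per second")
# 4.0e+12 -- about 4 trillion operations per second
```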

The effect has a name: bokeh. The term comes from the Japanese word "boke," which means blur or haze; "boke-aji" refers to the quality of that blur. The anglicized spelling was popularized in 1997 by Photo Techniques magazine editor Mike Johnston, who suggested English speakers add the "h" and pronounce it "boh-kay." It's a fancy photographer's term for analyzing and weighing the artistic quality of a blurred background and, in particular, of out-of-focus sources of light behind the subject of a photo. It's how lights at night can turn into soft, fuzzy orbs, as seen on Apple's event invite.

Smartphones have been very bad at producing bokeh

This effect is best achieved with a shallow depth of field. A standard DSLR can manage that easily by way of a wide aperture, which increases the amount of light the lens lets in when you're snapping a shot. (You also typically need a lens capable of producing a wide aperture, expressed as a lower f-number.) It's easy to achieve on a professional camera, as well as on many mirrorless and lower-cost point-and-shoot models these days. Achieving the effect with a smartphone is harder. In the past, you could manage it with a ton of tinkering, some generous light, and very careful tap-to-focus. But because you can't control the size of the lens opening on your phone, it's hard to blur out the background of a mobile shot.
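To put numbers on that, here's a rough sketch using the standard thin-lens depth-of-field approximation. The 0.03 mm circle of confusion is a common full-frame assumption; the exact values matter less than the trend, which is that a wider aperture (lower f-number) shrinks the zone of sharpness:

```python
def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Near/far limits of acceptable sharpness, via the standard
    thin-lens approximation. coc_mm is the circle of confusion;
    0.03 mm is a common full-frame assumption."""
    hyperfocal = focal_mm**2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

# A 50 mm lens focused on a subject 2 meters away:
for f_number in (1.8, 8.0):
    near, far = depth_of_field(50, f_number, 2000)
    print(f"f/{f_number}: sharp from {near / 1000:.2f} m to {far / 1000:.2f} m"
          f" ({(far - near) / 10:.0f} cm deep)")
# f/1.8 leaves roughly a 17 cm slice in focus; f/8 closer to 78 cm.
```

At f/1.8, nearly everything but the subject falls outside that slice and melts into blur, which is exactly the look photographers chase.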

This is where AI comes in. Now, we're not talking about your standard voice AI like Siri or Cortana, or the kind of natural language software Google employs to autocomplete a search query or scan your email. This is computer vision AI, which aims to understand the contents of photos. It can be used for sophisticated tasks, like when Facebook auto-tags your friends' faces or when Google teaches an algorithm to identify cats on the internet. A simpler, but still challenging, problem is determining what the subject of a photo is, and where that subject begins blending into the background.

This is very difficult for machines. Software understands a photo only as a series of numeric values describing the colors of its pixels. Algorithms have no concept of a subject, a foreground, or a background, and they cannot distinguish between a dog, a cat, and a cloud in the sky. So AI researchers use machine learning to train these programs. Fed thousands upon thousands of examples, a program can begin to make sense of the contents of a photo. It can start to determine where the sky meets the treeline, and where two distinct objects overlap, like an owner and their dog.
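To make that concrete, here's a deliberately naive, hand-written heuristic. It treats the photo as nothing but numbers, guesses that the image border is background, and flags any pixel whose color strays far from the border's average. It works on a contrived frame and fails on almost anything real, which is exactly why researchers train programs on examples instead:

```python
import numpy as np

def naive_subject_mask(image, threshold=60.0):
    """Toy figure-ground separation: assume the border is background
    and call anything far from its average color the 'subject'.
    Hand-coded rules like this break down on real-world photos."""
    border = np.concatenate(
        [image[0], image[-1], image[:, 0], image[:, -1]])
    background = border.mean(axis=0)
    distance = np.linalg.norm(image - background, axis=-1)
    return distance > threshold  # True where we guess "subject"

# A gray frame with a bright square "subject" in the middle:
frame = np.full((100, 100, 3), 90.0)
frame[35:65, 35:65] = [220.0, 180.0, 60.0]
mask = naive_subject_mask(frame)
print(mask[50, 50], mask[5, 5])  # True False
```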

These programs are often referred to as neural nets, because they process examples in ways loosely inspired by the human brain, with a heavy emphasis on probability. Give such software enough photos of cats, for instance, and it will begin determining with high accuracy whether a photo contains a cat. Facebook is using this style of machine learning to turn the contents of a photo into a spoken description for blind users. Google does it within its Google Photos app, where you can search for "mountains" or "beach" and find matching photos without ever having tagged them or assigned them a location.
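A minimal sketch of that training loop, using a single-neuron logistic classifier and made-up feature vectors (real image networks are vastly deeper, but the core loop of feeding in labeled examples and getting back a probability is the same):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: two invented summary features per "photo"
# (say, fur-like texture and ear-like edges); label 1 = cat.
labels = rng.integers(0, 2, size=1000)
features = rng.normal(size=(1000, 2)) + labels[:, None] * 2.0

# One-neuron "network" trained by gradient descent on the examples.
weights, bias = np.zeros(2), 0.0
for _ in range(500):
    probs = 1.0 / (1.0 + np.exp(-(features @ weights + bias)))
    error = probs - labels                      # prediction vs. truth
    weights -= 0.1 * (features.T @ error) / len(labels)
    bias -= 0.1 * error.mean()

# The trained model answers with a probability, not a verdict:
test = np.array([2.1, 1.8])                     # strongly cat-like features
p = 1.0 / (1.0 + np.exp(-(test @ weights + bias)))
print(f"P(cat) = {p:.2f}")
```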

Using machine learning, tech companies can "train" software by feeding it examples

For Apple, it sounds a little more basic and a lot more practical. The camera on the new iPhone 7 and the dual cameras on the iPhone 7 Plus are powered by software that aims to understand the contents of an image. Once it identifies people, objects, and backgrounds, the phone can automatically perform a number of tasks, including setting exposure, focus, and white balance. (Notably, Apple purchased a startup last year called Perceptio that focused on doing this kind of advanced image recognition at higher speeds, without relying on huge stores of data.)
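Apple hasn't published how its ISP makes these calls, but a classic heuristic gives a flavor of one such task. The "gray-world" auto white balance sketched below assumes a scene should average out to neutral gray and scales each color channel to match; a scene-aware ISP can do much better, because it knows what it's looking at:

```python
import numpy as np

def gray_world_white_balance(image):
    """Classic gray-world auto white balance: assume the scene
    averages to neutral gray and scale each color channel so the
    channel means match. Illustrative only; Apple's actual
    pipeline is unpublished and far more sophisticated."""
    means = image.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means            # per-channel correction
    return np.clip(image * gains, 0, 255)

# A frame with a warm orange cast gets pulled back toward neutral:
warm = np.clip(np.random.default_rng(1).normal(
    loc=[160, 120, 80], scale=20, size=(64, 64, 3)), 0, 255)
balanced = gray_world_white_balance(warm)
print(warm.reshape(-1, 3).mean(axis=0).round())      # ~[160 120  80]
print(balanced.reshape(-1, 3).mean(axis=0).round())  # ~[120 120 120]
```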

A more advanced feature, specific to the iPhone 7 Plus, lets it blur out the background in real time with a new Portrait setting. This works because both lenses work together to capture nine layers of depth and build a so-called depth map, a process described by a number of patents Apple was granted in the last year. With Portrait mode, you can get crisp, tight, DSLR-style images with the kind of bokeh once reserved for pro-grade shots. This is all aided by the device's new f/1.8 aperture, which lets in far more light and helps amplify the shallow depth of field.
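Here's a rough sketch of that layered idea, assuming the depth map is already in hand: bucket the depth values into nine slices and blur each slice more the farther it sits from the in-focus subject. Apple's pipeline does this in real time with far better handling of subject edges, but the principle is similar:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def portrait_blur(image, depth, focus_depth, n_layers=9):
    """Toy synthetic bokeh from a depth map: quantize depth into
    n_layers slices, then blur each slice in proportion to its
    distance from the in-focus slice. Real pipelines blend layers
    far more carefully around subject edges."""
    edges = np.linspace(depth.min(), depth.max(), n_layers)
    layers = np.digitize(depth, edges)
    focus_layer = np.digitize([focus_depth], edges)[0]
    out = np.zeros_like(image, dtype=float)
    for layer in np.unique(layers):
        sigma = abs(int(layer) - int(focus_layer))  # farther -> blurrier
        blurred = (image.astype(float) if sigma == 0
                   else gaussian_filter(image.astype(float), sigma=(sigma, sigma, 0)))
        out[layers == layer] = blurred[layers == layer]
    return out

# Subject close to the camera stays crisp; the background melts away.
rng = np.random.default_rng(2)
image = rng.uniform(0, 255, size=(120, 120, 3))
depth = np.tile(np.linspace(1.0, 5.0, 120), (120, 1))  # deeper to the right
result = portrait_blur(image, depth, focus_depth=1.0)
```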

It's a neat trick, for sure, but it also helps Apple bolster its argument that the iPhone 7 is the best smartphone camera on the market. Regardless of whether that claim holds up, Schiller argued that for many consumers the iPhone 7 will "probably be the best camera they've ever owned," simply because smartphones are vastly more ubiquitous than DSLRs.

So Schiller may be throwing out overly grand statements, as per usual, but the iPhone 7 could well help a lot of smartphone owners take better pictures than ever. In true Apple fashion, you won't need to do much yourself: select your mode, frame the shot, and let the phone do the heavy lifting. This time around, smarter software behind the scenes will be pulling more of the weight.

