I've been reading about Gcam, the Google X project that was first sparked by the need for a tiny camera to fit inside Google Glass, before evolving to power the world-beating camera of the Google Pixel. Gcam embodies an atypical approach to photography, seeking software solutions for what have traditionally been hardware problems. Others have tried that, of course, but their attempts have always seemed like inchoate gimmicks; the unprecedented thing about Gcam is that it actually works. The most exciting thing, though, is what it portends.
I think we'll one day be able to capture images without any photographic equipment at all.
Now I know this sounds preposterous, but I don't think it's any more so than the internet or human flight might have once seemed. Let's consider what happens when we tap the shutter button on our cameraphones: light information is collected and focused by a lens onto a digital sensor, which converts the photons it receives into data that the phone can understand, and the phone then converts that into an image on its display. So we're really just feeding information into a computer.
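That photons-in, numbers-out pipeline can be sketched in miniature. The toy model below (every parameter — quantum efficiency, gain, bit depth — is illustrative, not the spec of any real sensor) converts per-pixel photon counts into the 8-bit values a phone hands to its display:

```python
# Toy model of a digital sensor readout: photons -> electrons -> digital value.
# All numbers here (quantum efficiency, gain, bit depth) are illustrative,
# not the specs of any real camera sensor.

def sensor_readout(photons, quantum_efficiency=0.6, gain=0.05, bit_depth=8):
    """Convert a photon count for one pixel into a quantized pixel value."""
    electrons = photons * quantum_efficiency  # not every photon is detected
    raw = electrons * gain                    # analog gain before the ADC
    max_value = 2 ** bit_depth - 1            # 255 for an 8-bit pixel
    return min(int(raw), max_value)           # the ADC clips at saturation

# A dim pixel, a mid-tone, and a blown-out highlight:
pixels = [sensor_readout(p) for p in (200, 4000, 20000)]
print(pixels)  # -> [6, 120, 255]: brighter light, bigger numbers, until the pixel saturates
```

Everything downstream of that list of numbers is already computation — which is the opening Gcam exploits.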
Photographic memory is just a myth among humans, but what about machines?
What I envision, and the direction in which Google says its computational photography efforts are headed, is the effect of applying machine learning to the task. What if your phone knew when, where, and what you're pointing it at; what if it had a library of trillions of images; and what if it could intelligently account for things like weather and time of day? Would it need to have eyes to see the scene you're trying to capture?
This may be distant futurism, but its foundations are already falling into place. Google already applies machine learning to its Google Photos service, which automatically labels and arranges users' pictures according to its constantly improving recognition of objects, faces, and scenes. Europe's recently launched Galileo satellite navigation system can provide real-time positioning accuracy "down to the metre range." Tell your phone how tall you are and, with the help of the same orientation sensors it uses to orient itself on a map, the device will be able to calculate both your subject and your point of view when you want to shoot a photo. And if it's something as ubiquitously photographed as, say, the Roman Forum, all a future Googlephone would need to know is the cloud cover and the Sun's position at the moment of the shot, and it'd have enough data to synthesize a photo.
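The geometry behind that calculation is simple enough to sketch. This hypothetical snippet (not Gcam or any Android API — just trigonometry) estimates where a phone's central camera ray meets the ground, given the user's eye height and the heading and tilt reported by orientation sensors:

```python
import math

# Hypothetical sketch: estimate the spot a phone is pointed at, given the
# user's eye height, the compass heading, and the downward tilt (pitch) from
# the phone's orientation sensors. Pure geometry; no real API is involved.

def subject_offset(eye_height_m, heading_deg, pitch_down_deg):
    """Return the (east, north) ground offset, in metres, of the point where
    the camera's central ray hits flat ground, relative to the photographer."""
    # Distance along the ground to where the ray meets the ground plane.
    distance = eye_height_m / math.tan(math.radians(pitch_down_deg))
    heading = math.radians(heading_deg)  # 0 degrees = north, 90 = east
    return (distance * math.sin(heading), distance * math.cos(heading))

# A 1.7 m-tall viewer tilting the phone 10 degrees down, facing due east:
east, north = subject_offset(1.7, 90.0, 10.0)
print(round(east, 1), round(north, 1))  # the subject sits roughly 9.6 m east
```

Combine that offset with a metre-accurate Galileo fix and the phone knows, to within a stride or two, exactly which patch of the world you want a picture of.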
If you insist on inserting yourself or family members into the happy tourist snap, that shouldn't be too much of an imposition, either: just take a bunch of selfies in advance and the software will stitch the two images together for you. Other people plugged into the same connected ecosystem, as well as connected cars and buses, can also be optionally added into an image by tapping into their location data.
Adobe is even more impressive with its Content-Aware Fill in Photoshop. This algorithmic miracle is probably the closest I've come to experiencing real artificial intelligence, as the tool uses information from the whole photo to reconstruct missing or obscured elements within it. Need to extend a sky, remove a road, or clean up unsightly dust from complex objects? Not a problem when your image editor has context awareness. It's guesswork taken to its logical extreme.
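Content-Aware Fill's real machinery, patch-based synthesis, is far more sophisticated, but the core idea — reconstructing missing pixels from their surroundings — can be illustrated with a naive diffusion fill. This is a toy sketch of the principle, emphatically not Adobe's algorithm:

```python
# Toy "context-aware" fill: repeatedly replace each missing pixel with the
# average of its neighbours, diffusing the surrounding context into the hole.
# Adobe's Content-Aware Fill uses much smarter patch-based synthesis; this
# only illustrates reconstructing pixels from the rest of the image.

def diffusion_fill(image, mask, iterations=50):
    """image: 2D list of floats; mask: 2D list where True marks missing pixels."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]
    for _ in range(iterations):
        nxt = [row[:] for row in img]
        for y in range(h):
            for x in range(w):
                if mask[y][x]:
                    neighbours = [img[ny][nx]
                                  for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                                  if 0 <= ny < h and 0 <= nx < w]
                    nxt[y][x] = sum(neighbours) / len(neighbours)
        img = nxt
    return img

# A flat grey sky (value 128) with a 2x2 hole punched in the middle:
sky = [[128.0] * 6 for _ in range(6)]
hole = [[False] * 6 for _ in range(6)]
for y in (2, 3):
    for x in (2, 3):
        sky[y][x] = 0.0
        hole[y][x] = True

filled = diffusion_fill(sky, hole)
print(round(filled[2][2]))  # -> 128: the hole converges back to the sky value
```

Extending a real sky is exactly this kind of problem, just with texture and structure layered on top of the smoothing.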
And honestly, that's all I'm proposing: really high-level guesswork, only without the photonic information we currently rely on. The world's most sophisticated image search, with a measure of context awareness thrown in. Sure, it'll depersonalize the experience of capturing an image, and I don't have an easy answer for how this would work in private spaces, should we still have any of those left in the future. But it's certainly a capability that we're building toward, whether we're conscious of it or not, and whether we choose to ever rely on it or not.
The trouble with light is that there's rarely ever enough of it, and capturing it requires battery power and complex physical components. It used to be the only source of information for photographs, but in the digital age we have countless others. They don't strictly need to replace cameras altogether, but they can certainly assist, improve, and in some circumstances supplant the traditional method of taking a photo. If cameras are to maintain their quality while shrinking down to the microscopic levels where they can fit inside a Google Glass or other wearable technology, they might need to not be cameras at all.