You might remember Google’s Quick, Draw! project from 2016 — a web-based Pictionary game that asked users to doodle familiar objects while Google’s AI guessed what they were. Well, a clever soul named Dan Macnish has turned data from this game into an AI-powered camera, which takes pictures and prints them out as crude doodles.
Macnish describes how he created the camera (which he calls “Draw This”) on his blog. It’s powered by a Raspberry Pi and uses an off-the-shelf object recognition system to identify objects within each picture. It then looks up these items in the dataset produced by Google’s Quick, Draw! project and prints them out using a thermal printer.
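That pipeline — recognize the objects in a photo, match them against the Quick, Draw! categories, then print the corresponding doodle — can be sketched in a few lines of Python. This is a toy illustration, not Macnish’s actual code: the recognizer stub, the category subset, and the function names here are all hypothetical stand-ins.

```python
# Toy sketch of the "Draw This" pipeline. All names and helpers below are
# hypothetical; a real build would use an actual object-recognition model,
# the full Quick, Draw! stroke dataset, and a thermal-printer driver.

# A small subset of the 345 Quick, Draw! categories, for illustration.
QUICKDRAW_CATEGORIES = {"hotdog", "goat", "zebra", "zigzag", "alarm clock"}

def recognize(photo_path):
    """Stand-in for an off-the-shelf object recognizer.

    A real camera would run a pretrained image classifier here and return
    its label guesses, best first. This stub fakes a food-selfie result.
    """
    return ["salad", "hotdog"]

def nearest_category(labels):
    """Return the first recognized label that exists in the doodle dataset."""
    for label in labels:
        if label in QUICKDRAW_CATEGORIES:
            return label
    return None  # nothing the camera saw maps onto the 345 categories

def snap(photo_path):
    """Take a 'photo' and return the doodle category that would be printed."""
    category = nearest_category(recognize(photo_path))
    if category is None:
        return None
    # A real build would fetch a random stroke drawing for `category` from
    # the Quick, Draw! dataset and send it to the thermal printer; here we
    # just return the chosen category name.
    return category
```

Note how the salad never survives: “salad” isn’t a Quick, Draw! category in this sketch, so the camera falls through to the nearest thing it does know, a hotdog — which is exactly the kind of surprise Macnish describes.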
It’s very cool, but has a big caveat: the camera’s ability to capture scenes is limited by what’s in the Quick, Draw! dataset. That means you can’t snap a picture of just anything and expect a drawing. Instead, the camera will interpret whatever it sees as one of 345 doodle categories, which cover a range of items, from aircraft carriers and alarm clocks to zebras and zigzags.
Macnish notes that this uncertainty is part of what makes the camera so fun. You never “get to see the original image,” he says, and instead just get the AI’s nearest doodle. “The result is always a surprise,” writes Macnish. “A food selfie of a healthy salad might turn into an enormous hotdog, or a photo with friends might be photobombed by a goat.”
That’s because the camera’s output is limited not only by the doodle data, but by the fact that the data is itself only a visual shorthand. When we draw the ocean, for example, we tend to draw a bunch of squiggly lines, not huge, crashing waves. And a neural network might find those squiggly lines in unexpected places — in someone’s haircut, for example, or a rumpled blanket. In other words, the camera is looking with the imprecise eye of a doodle.