Google’s machine learning software can now categorize ramen by shop


Looks like it’s humans: 0, computers: 1 again. If you’re a big enough fan of ramen, maybe you can look at a photo of a tonkotsu bowl on Instagram and immediately recognize which restaurant it’s from. But computers have us beat: they can now identify the exact shop a bowl came from, even among seemingly identical bowls of ramen from the 41 shops of a single restaurant franchise.

Data scientist Kenji Doi did the delicious research, using Google’s AutoML Vision to classify every menu item from Ramen Jiro, a Tokyo-based chain of ramen shops. He gathered about 1,170 photos from each of the 41 shops and fed the resulting dataset of roughly 48,000 ramen photos to the software. It took AutoML about 24 hours (18 minutes in a less accurate basic mode) to train the model, which was then able to predict which shop a bowl of ramen came from with 95 percent accuracy.
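For a sense of what feeding a labeled photo dataset to a tool like AutoML Vision involves, here is a minimal sketch of preparing one: a CSV pairing each image path with its shop label. The bucket name, file names, and labels below are invented for illustration; consult the Cloud AutoML documentation for the exact import format it expects.

```python
import csv
import io

# Hypothetical training photos as (storage path, shop label) pairs.
# All paths and labels here are made up for illustration.
photos = [
    ("gs://ramen-photos/shop01/0001.jpg", "shop01"),
    ("gs://ramen-photos/shop01/0002.jpg", "shop01"),
    ("gs://ramen-photos/shop41/0001.jpg", "shop41"),
]

# Write one "path,label" row per photo.
buf = io.StringIO()
writer = csv.writer(buf)
for path, label in photos:
    writer.writerow([path, label])

csv_text = buf.getvalue()
print(csv_text)
```

The labeled CSV is the whole interface: the service handles model architecture, training, and evaluation, which is the point of the drag-and-drop pitch.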

The rows show the actual shop, while the columns show the predicted shop. Boxes labeled 1 off the diagonal mark where AutoML identified the wrong shop.
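The figure is a confusion matrix, and it is easy to build one from scratch. The toy example below uses three hypothetical shop labels (invented for illustration): rows are the actual shop, columns the predicted shop, and any count off the diagonal is a misclassified photo. Overall accuracy is the diagonal sum divided by the total.

```python
from collections import Counter

# Three hypothetical shops; labels are made up for illustration.
labels = ["shop_a", "shop_b", "shop_c"]
actual    = ["shop_a", "shop_a", "shop_b", "shop_b", "shop_c", "shop_c"]
predicted = ["shop_a", "shop_a", "shop_b", "shop_c", "shop_c", "shop_c"]

# Count each (actual, predicted) pair, then lay the counts out as a grid:
# rows = actual shop, columns = predicted shop.
counts = Counter(zip(actual, predicted))
matrix = [[counts[(a, p)] for p in labels] for a in labels]

# Correct predictions sit on the diagonal.
correct = sum(matrix[i][i] for i in range(len(labels)))
accuracy = correct / len(actual)
print(matrix)    # [[2, 0, 0], [0, 1, 1], [0, 0, 2]]
print(accuracy)
```

Here one shop_b photo was predicted as shop_c, so the matrix has a single off-diagonal 1, exactly the kind of cell the figure highlights.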

Doi first hypothesized that the model was keying on the color and shape of the bowl or the table in each photo, but that idea was disproved: the model could identify specific shops even in photos with the same bowl and table design. Doi now believes the model is precise enough to distinguish between cuts of meat and the placement of the toppings.

Google introduced its Cloud AutoML software to developers earlier this year; it lets users create machine learning models through a simplified, drag-and-drop process. The goal is to take the pain out of AI coding by letting anyone train custom vision models with an image recognition tool. Brands like Urban Outfitters and Disney are already using Cloud AutoML to improve the e-commerce shopping experience, categorizing products by more detailed characteristics to help customers find exactly what they’re looking for.


So Google’s Cloud AutoML was no doubt created with companies in mind, and will probably be used to zero in on how best to sell products to customers. But Doi’s ramen experiment is a nice change from all that, and invites us to think more openly about creative use cases for training models on data. Hopefully one day, you’ll be able to use software like this to find out where that untagged bowl of ramen you saw on Instagram came from.

Comments

Now if only Google would add in the object removal they announced last I/O

If it’s really identifying the shop when they all have the same bowls and menus, it seems like it’d depend on who was in the kitchen on the particular day the photos were taken. Which probably says as much about human consistency as it does about machine visual recognition (which is still quite impressive).

I always thought the Japanese prided themselves on consistency…guess this says otherwise!

But in all seriousness, this is very cool and very scary at the same time.

Ramen API next?

welp at least the robots will appreciate ramen after they take over the world and extinguish human life
