Last November, Google opened up its in-house machine learning software TensorFlow, making the program that powers its translation services and photo analytics (among many other things) open-source and free to download. Now, the company is giving TensorFlow the machine learning equivalent of smart pills, releasing a distributed version of the software that allows it to run across multiple machines — up to hundreds at a time.
This sounds like an obvious way to improve TensorFlow, and, well, it is. Machine learning software only gets to be clever by analyzing large amounts of data, looking for common properties and trends, like facial features in photographs. Letting TensorFlow run these sorts of operations on networks of computers simultaneously, rather than on individual machines, means users can build smarter systems and improve them faster.
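Under the hood, the 0.8 release exposes this through a cluster API: a training job describes its machines with a tf.train.ClusterSpec, starts a tf.train.Server on each one, and pins variables and computation to specific jobs. Here's a minimal sketch, assuming a few hypothetical hosts (the addresses and tensor shapes are illustrative, not taken from Google's announcement):

```python
import tensorflow as tf

# Describe the cluster: one parameter server and two workers.
# These hostnames are hypothetical placeholders.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# Each machine runs one of these, identified by job name and task index.
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# Pin the model's variables to the parameter server...
with tf.device("/job:ps/task:0"):
    weights = tf.Variable(tf.zeros([784, 10]), name="weights")

# ...and the computation to a worker.
with tf.device("/job:worker/task:0"):
    x = tf.placeholder(tf.float32, [None, 784])
    logits = tf.matmul(x, weights)

# Connect a session to this machine's server to run the graph.
with tf.Session(server.target) as sess:
    sess.run(tf.initialize_all_variables())
```

In practice, each machine runs the same script with its own job_name and task_index, and TensorFlow ships tensors between them over gRPC automatically.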
TensorFlow is the most popular machine learning framework on GitHub
Along with the release of TensorFlow 0.8, Google highlights how enthusiastic the public reaction to its software has been. TensorFlow is the most popular machine learning framework on code repository GitHub, and was the most forked project on the site in 2015, despite only being released in November of that year. (Google's DeepDream, a neural network that overanalyzes images to psychedelic effect, is also among the most popular GitHub projects.)
TensorFlow has already been used in a number of interesting projects, including a program that learns how to play Pong (which also borrows machine learning elements from DeepMind's famous Atari-playing AI), and a neural network that invents fake but realistic-looking Chinese characters. This last project is the creation of designer David Ha, who told The Verge that TensorFlow made it easier for "the broader [AI] community to participate in deep learning."
He says that Google's platform is consistently designed and the results are easy to share, making it more inviting for novices. Ha adds that TensorFlow is not unique, though, and that there are alternative libraries out there. "Once you are familiar with how neural networks and deep learning work, it is not difficult to switch from one library to another," says Ha. "It's kind of like switching from Python to JavaScript — [it] takes a bit of time to understand the quirks of each language."
Examples of Ha's fake kanji. The colors indicate the order of the strokes. (Image credit: David Ha)
If you're curious about how machine learning programs like TensorFlow work, you can check out this great in-browser neural network, powered, funnily enough, by TensorFlow itself. You select the dummy data you want to analyze on the left, pick and combine the properties to look for in the middle, and see how the resulting output matches up with the starting data. It's pretty tricky at first, but it's a great way to understand, at an abstract level, how neural networks sort through information. Before you know it, you'll be building the next AlphaGo.
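If you'd rather experiment in code than in the browser, the same kind of toy exercise takes only a few lines of TensorFlow. Here's a rough sketch, in the 0.x-era Python API, of the sort of thing the demo trains: a tiny network learning to separate 2D points inside a circle from points outside it. The dataset, layer size, and learning rate are illustrative choices, not taken from the demo itself.

```python
import numpy as np
import tensorflow as tf

# Dummy data: label a 2D point 1 if it falls inside the unit circle.
points = np.random.uniform(-2, 2, size=(500, 2)).astype(np.float32)
labels = (np.sum(points ** 2, axis=1) < 1.0).astype(np.float32).reshape(-1, 1)

x = tf.placeholder(tf.float32, [None, 2])
y = tf.placeholder(tf.float32, [None, 1])

# One small hidden layer, in the spirit of the demo's handful of neurons.
w1 = tf.Variable(tf.random_normal([2, 8]))
b1 = tf.Variable(tf.zeros([8]))
hidden = tf.nn.tanh(tf.matmul(x, w1) + b1)

w2 = tf.Variable(tf.random_normal([8, 1]))
b2 = tf.Variable(tf.zeros([1]))
logits = tf.matmul(hidden, w2) + b2

# Note: this is the 0.x-era positional signature (logits, targets).
loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits, y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for step in range(1000):
        _, current_loss = sess.run([train_step, loss],
                                   feed_dict={x: points, y: labels})
    print("final loss:", current_loss)
```

Watching the loss fall as the network figures out the circle's boundary is, at small scale, the same process the demo visualizes with its shifting decision regions.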