The original version of the app was really just a fun little experiment that could be used to classify visual data from your webcam. (I taught it to identify my houseplants!) But Google has added new modes to the system along with the option to export trained models, making Teachable Machine 2.0 a more functional system for building actual AI tools.
Along with image data, Teachable Machine now works with audio and body pose input. Users can upload their own pre-collected datasets, sort data into more than three categories, and download and deploy their models locally or host them in the cloud. That means you could train a basic system using Teachable Machine and get it running on a website or app.
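As a rough sketch of what "running on a website" can look like: Teachable Machine's exported image models can be loaded in the browser with Google's @teachablemachine/image wrapper, whose predict() call resolves to an array of {className, probability} pairs, one per trained class. The helper below just picks the winning class from such an array; the tmImage calls are shown only as comments (they need a browser and a real exported model URL), and the class names are placeholders, not anything from an actual model.

```javascript
// In a real page, predictions would come from the exported model,
// roughly like this (assumes the @teachablemachine/image wrapper):
//
//   const model = await tmImage.load(modelURL, metadataURL);
//   const predictions = await model.predict(webcamCanvas);
//
// predict() resolves to [{ className, probability }, ...].

// Pick the class with the highest probability from a prediction array.
function topPrediction(predictions) {
  return predictions.reduce((best, p) =>
    p.probability > best.probability ? p : best
  );
}

// Example with placeholder class names (say, two houseplants):
const predictions = [
  { className: "monstera", probability: 0.12 },
  { className: "pothos", probability: 0.88 },
];
console.log(topPrediction(predictions).className); // → "pothos"
```

The wrapper itself handles capturing webcam frames and running inference, so a minimal classifier page is little more than the model files plus a loop calling predict().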
Google already offers a no-coding AI trainer called Cloud AutoML, but AutoML is a much more professional tool, with greater scope for customization, scaling, and customer support.
Teachable Machine 2.0, by comparison, is quick and dirty: it’s an on-ramp for new ML practitioners and something that will let users quickly prototype an AI solution. Google notes that Teachable Machine runs entirely on the user’s computer, meaning training data never leaves your device (a reassurance for those worried about privacy).
In the years since the latest wave of machine learning systems took off, plenty of no-coding AI tools have appeared. Experts are sometimes skeptical about the quality of the models they produce, noting they can be inefficient and sloppy, and that, without proper programming skills, the people building with these tools won’t really get the best out of them.
But there’s no denying that visual interfaces remove much of what makes machine learning intimidating and can tempt more people to experiment and tinker with these tools. The hard work of finessing and refining a model can come later if necessary; it’s just good to get people started. You can try playing with Teachable Machine for yourself here.