Google Search is launching a new feature that will let users check their pronunciation of unfamiliar words with the help of machine learning.
When searching for a pronunciation guide, a user will be able to speak into their microphone, and Google will use AI to analyze how they pronounce the word. They’ll then receive feedback on how each syllable matches Google’s expected pronunciation.
“For example, if you’re practicing how to say ‘asterisk,’ the speech recognition technology analyzes how you said the word and then, it recognizes that the last soundbite was pronounced ‘rict’ instead of ‘uhsk,’” Google says. “Based on this, you will receive feedback on how you can improve next time.”
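Google hasn’t published how its scoring actually works, but the syllable-by-syllable comparison it describes can be sketched in a few lines. The function name, the syllable splits, and the phoneme strings below are purely illustrative assumptions, not Google’s data or API:

```python
# Toy sketch of syllable-level pronunciation feedback, loosely modeled on
# Google's description. The syllable breakdowns ("as", "ter", "uhsk") and
# the function itself are illustrative assumptions, not Google's system.

def pronunciation_feedback(expected, heard):
    """Compare expected vs. recognized syllables and report mismatches.

    expected, heard: equal-length lists of per-syllable sound strings,
    e.g. ["as", "ter", "uhsk"] vs. ["as", "ter", "rict"].
    """
    feedback = []
    for i, (want, got) in enumerate(zip(expected, heard), start=1):
        if want == got:
            feedback.append(f"syllable {i}: ok ('{want}')")
        else:
            feedback.append(f"syllable {i}: heard '{got}', expected '{want}'")
    return feedback

# "asterisk" with the last syllable said as 'rict' instead of 'uhsk':
for line in pronunciation_feedback(["as", "ter", "uhsk"],
                                   ["as", "ter", "rict"]):
    print(line)
```

In practice a system like this would compare phoneme sequences produced by a speech recognizer rather than plain strings, but the shape of the feedback is the same: flag the syllables that diverge from the reference pronunciation.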
The feature launches today, but Google says it’s “experimental” and only available on mobile to begin with. The new guides also only work for American English words, though Google says Spanish pronunciations are “soon to follow.”
Google is also improving its word translations and definitions with visual prompts. So if you’re trying to translate “naranja” from Spanish to English, for example, you’ll also see pictures of oranges along with the translated word. If you’re looking up the meaning of the word “seal,” you’ll be shown pictures of mechanical seals, embossed pieces of wax, and the semiaquatic marine mammal. No confusion there.
Google says these picture translations will only work initially in English and for the most easily visualized type of word: nouns. But it plans to expand the coverage in the future.
Both features sound like useful additions to Google’s already impressive linguistic skill set, turning its simple search function into a more well-rounded language coach. With machine learning and the data Google will presumably gather from people using the feature, it’ll likely only improve.