If you own a new Android device or a Google Home smart speaker, there is a good chance you’ve encountered a voice that can speak to you and answer your questions whenever you say “OK Google.” This voice is known as the Google Assistant. The Google Assistant is a virtual personal assistant capable of engaging in two-way conversations with almost anyone.
It is considered an upgrade to, or extension of, Google Now, as well as an expansion of the voice controls featured in OK Google. The Assistant is activated by the words “OK Google” on Android phones such as the Pixel, and responds to both “Hey Google” and “OK Google” on the Home smart speaker. Once these words have been said, the Assistant can be commanded to perform various functions.
Earlier this week at the Developer Days event in Poland, engineer Behshad Behzadi gave a quick demonstration of some of the Assistant’s improved natural language understanding and computer vision technology. He said that his demonstration was “a mixture of things that are live and launched,” and added that each of the features would become available within a few months. Below are some of the features we should expect to see in Google Assistant in 2017 and beyond.
If you’re not in a reading mood, here is a video demonstrating all the new features mentioned below. To focus on the Google Assistant, fast-forward to around the 38-minute mark.
Lens Computer Vision
The Google Lens feature can identify objects, buildings, and text whenever a user points the camera at them. Lens gives the Google Assistant the ability to “see,” allowing it to talk to the user about what it identifies through the camera.
On-the-Spot Translation
To showcase this feature, Behzadi instructed the Assistant to become his Vietnamese translator, and the Assistant performed a real-time translation both through voice and with text on the phone. No specific details were given about which languages would be available for on-the-spot translation.
Better Contextual Understanding
Google Assistant is also learning to focus on the intent of your first question and then answer follow-up questions about the same topic. Once this feature is available, it will let users dig deeper into a subject without having to restate their intent after each question.
Longer and vaguer questions will also be interpreted more accurately thanks to improved natural language understanding. Behzadi said this was made possible mainly by combining signals from the search engine with machine learning.
Currently, users can tell Google Assistant to remember their favorite sports team’s name. In the months to come, users will be able to ask more personalized questions, such as “How is my team doing?”, to get current stats. In future updates, users will also be able to teach the Assistant more about their personal preferences.
Better Understanding in Loud Environments
While no specific details about improved speech recognition were given during the event, Behzadi told the audience that Google Assistant continues to get better at understanding the user’s voice, even in loud environments.
For more cool features from Google, be sure to visit this page.