For the past three years, the Google Assistant has been helping people around the world get things done. The Assistant is now on over one billion devices, available in over 30 languages across 80 countries, and works with over 30,000 unique connected devices for the home from more than 3,500 brands globally. We’ve been working to make your Assistant the fastest, most natural way to get things done, and today at Google I/O we’re sharing our vision for the future.
The next generation Assistant
To power the Google Assistant, we rely on the full computing power of our data centers to support speech transcription and language understanding models. We challenged ourselves to re-invent these models, making them light enough to run on a phone.
Today, we’ve reached a new milestone. Building upon advancements in recurrent neural networks, we developed completely new speech recognition and language understanding models, bringing 100GB of models in the cloud down to less than half a gigabyte. With these new models, the AI that powers the Assistant can now run locally on your phone. This breakthrough enabled us to create a next generation Assistant that processes speech on-device at nearly zero latency, with transcription that happens in real-time, even when you have no network connection.
Running on-device, the next generation Assistant can process and understand your requests as you make them, and deliver answers up to 10 times faster. You can multitask across apps—so creating a calendar invite, finding and sharing a photo with your friends, or dictating an email is faster than ever before. And the Assistant’s new driving mode features a voice-forward dashboard that brings your most relevant activities—like navigation, messaging, calling and media—front and center.