Google has unveiled its next-generation Google Assistant, a milestone that brings new speech recognition and language understanding capabilities to mobile devices.
The speech transcription and language understanding models behind the Assistant once required the full computing power of Google's data centers; the new version shrinks them to run on a phone, Google revealed at its I/O 2019 developer conference.
The artificial intelligence technology that drives the Assistant can now run locally on mobile devices, processing speech on-device with nearly zero latency and transcribing it in real time, even without going online, Google said.
With this new feature, the Assistant can deliver answers up to 10 times faster than its current iteration, according to Google.
The Assistant can also multitask across apps, letting users create a Calendar invite, find and share a photo with friends, or dictate an email faster than ever before, Google said.
The Mountain View, California-based tech giant disclosed that the Assistant has been installed on more than 1 billion devices, is available in over 30 languages across 80 countries, and has been adopted by more than 3,500 brands around the globe.
Google said the next-generation Assistant will become available first on its new Pixel phones, which are set to be released later this year.