Android phones may soon get a whole lot smarter thanks to a new partnership between Google and chip maker Movidius that promises to bring machine intelligence directly into mobile devices.
Movidius specializes in machine vision, and it has already worked with Google on the Project Tango computer-vision platform. Now, through the new collaboration, Google will use Movidius' flagship MA2450 chip to bring deep learning to Android handsets.
Deep learning is a branch of machine learning, often applied to image recognition, in which algorithms learn representations at multiple levels, each corresponding to a different level of abstraction. It typically relies on complex neural networks.
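To make the "multiple levels of abstraction" idea concrete, here is a minimal sketch (not Google's or Movidius' code, and far simpler than anything a production system would run): a two-layer neural network learning XOR, where the hidden layer forms an intermediate representation between raw inputs and the output. The layer sizes, learning rate, and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR is not linearly separable, so a single layer cannot learn it;
# a hidden layer (an extra level of abstraction) makes it learnable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden (first level)
W2 = rng.normal(size=(4, 1))   # hidden -> output (second level)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
initial_loss = None
for _ in range(5000):
    h = sigmoid(X @ W1)        # hidden representation
    p = sigmoid(h @ W2)        # prediction
    loss = float(np.mean((p - y) ** 2))
    if initial_loss is None:
        initial_loss = loss
    # Backpropagate the squared error through both levels
    dp = (p - y) * p * (1 - p)
    dh = (dp @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ dp
    W1 -= lr * X.T @ dh

final_loss = loss
```

A deep network simply stacks many more such layers, which is why running them efficiently on a phone, rather than in the cloud, requires specialized low-power hardware like the MA2450.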
Movidius' MA2450 chip is built for extreme power efficiency, making it eminently well-suited for running neural-network computations locally on smartphones. By deploying its advanced neural computation engine on those chips, Google could give devices the ability to recognize images such as faces and street signs in real time, without relying on an Internet connection and algorithms in the cloud.
Such capabilities could be particularly valuable for vision-impaired users, for example.
"Our collaboration with Movidius is enabling new categories of products to be built that people haven't seen before," said Blaise Agϋera y Arcas, head of Google’s machine intelligence group.
Financial terms of the deal weren't disclosed, nor were details about any specific product plans.
"Google is rapidly expanding their smartphone business into new areas -- this is just one of them," said wireless and telecom analyst Jeff Kagan.
The potential is exciting, but "Google typically throws ideas against the wall all the time," Kagan added. "They wait to see what sticks and then build on that. Everything Google does is not successful."