Google is about to make image and audio recognition truly awesome
Google has already made significant strides in bringing the benefits of advanced deep learning and image recognition algorithms to average consumers with services such as Now and Photos. But when it comes to pulling off its software engineering feats, big G has so far been relying on the exponential increases in computing power achieved by the industry's top chipmakers.
For the time being, this has proven sufficient in both theory and practice, but there's nothing like a dedicated chip designed from the ground up for a very specific task! Google knows engineering like the back of its hand, so its latest move seems all sorts of appropriate. American firm Movidius, a "leader in low-power machine vision for connected devices," announced a working partnership with Google. As part of the deal, the latter is going to integrate the former's processors and software development environment into "a new generation of devices" that will launch "in the not-too-distant future". Google, meanwhile, will aid Movidius' neural network technology roadmap.
More specifically, Google will make use of Movidius' MA2450 chip. It is designed to deliver the performance and power efficiency needed to carry out "complex neural network computations in an ultra-compact form factor". The chip should resolve the challenges involved in embedding deep learning tech inside consumer devices, such as the need for extreme power efficiency.
source: Market Wired via Yahoo Finance
Movidius' products will enable Google's machine intelligence systems to run natively on the device, using locally available computational power rather than relying on data centers for number crunching. User data remains on the device, and the algorithms can work without an internet connection and with lower latency. This should improve the speed and accuracy of image and audio recognition, enabling a "more personal and contextualized" computing experience.
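To make the distinction concrete, here is a minimal, purely illustrative Python sketch contrasting cloud-based and on-device recognition. The function names, the `upload` callback, and the `local_model` object are assumptions made up for this example and do not correspond to any real Google or Movidius API.

```python
# Hypothetical sketch: cloud-based vs. on-device image recognition.
# None of these names reflect an actual Google or Movidius interface.

import time

def classify_in_cloud(image_bytes, upload):
    """Send the image to a remote data center and wait for the label."""
    start = time.time()
    label = upload(image_bytes)       # network round trip: needs connectivity,
    latency = time.time() - start     # adds latency, and the raw image leaves the device
    return label, latency

def classify_on_device(image_bytes, local_model):
    """Run the neural network locally, e.g. on a dedicated vision co-processor."""
    start = time.time()
    label = local_model.predict(image_bytes)  # no network: works offline,
    latency = time.time() - start             # lower latency, data stays on the device
    return label, latency
```

The point of the sketch is simply that on-device inference removes the network hop entirely, which is where the latency, connectivity, and privacy benefits described above come from.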