Contextual object recognition on steroids: Google Lens makes phones' cameras "smart"
Google's keynote at Google I/O 2017 is going full speed ahead as we speak, and one of the first intriguing new things unveiled on stage was Google Lens.
No, Google is not reanimating the failed Glass wearable; it is supercharging the object recognition capabilities of its Assistant and Photos apps. Google Lens is a set of vision-based computing capabilities that also lets you take action based on what you're looking at. You will be able to point your phone's camera at an object and it will automatically scour the web for any information that pertains to it.
For example, point your phone at a flower and Google Lens will do its best to identify the exact plant species. Point your camera at a venue and any relevant data from Google Maps will immediately pop up on your screen. Aiming your phone at text in a foreign language will let Google Assistant instantly translate it into your native language. You will also be able to connect to a Wi-Fi network automatically by simply pointing your camera at the network's SSID and password. Neat!
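Google hasn't detailed what Lens runs under the hood, but its publicly available Cloud Vision and Cloud Translation APIs already expose the same kind of building blocks. Here is a minimal Python sketch of the translate-what-you-see idea, assuming the google-cloud-vision and google-cloud-translate client libraries and a hypothetical menu.jpg snapshot:

```python
from google.cloud import vision
from google.cloud import translate_v2 as translate

vision_client = vision.ImageAnnotatorClient()
translate_client = translate.Client()

# Read a photo of foreign-language text (hypothetical file name)
with open("menu.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# OCR the photo; full_text_annotation holds the stitched-together text
response = vision_client.text_detection(image=image)
text = response.full_text_annotation.text

# Translate into English; the source language is auto-detected
result = translate_client.translate(text, target_language="en")
print(result["translatedText"])
```

This is only an analogy for what Lens does on-device and in real time, not Google's actual implementation.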
Basically, it turns your camera into a search box, delivering relevant results courtesy of Big G's vast object-recognizing neural networks.
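To make the "camera as a search box" idea concrete, the closest public analogue is label detection in Google's Cloud Vision API, which scores an image against thousands of learned concepts. A minimal sketch, assuming the google-cloud-vision Python client and a hypothetical flower.jpg photo:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Load the photo you would otherwise "point" your camera at
with open("flower.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask the neural network for its best guesses about the image's content
response = client.label_detection(image=image)
for label in response.label_annotations:
    # e.g. "Flower: 0.97", "Dahlia: 0.88"
    print(f"{label.description}: {label.score:.2f}")
```

Lens itself goes a step further, attaching actions (search, translate, connect) to whatever it recognizes.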
Google Lens will first arrive in Google Assistant and Google Photos, and will become more deeply intertwined with Google's app ecosystem over time.