Saturday, 20 May 2017

Google Lens is Google’s future

CEO Sundar Pichai introduces Google Lens in his 2017 Google I/O keynote. Photo: Google
If you want to know where Google is headed, look through Google Lens.

The artificially intelligent, augmented reality feature seemed to generate the most interest at Google’s developer conference, which wrapped up Friday. Of all the announcements, it best encapsulated what Google’s transition to an “AI first” company means.

Google CEO Sundar Pichai underscored the tool as a key reflection of Google’s direction, highlighting it in his Google I/O keynote as an example of Google being at an “inflection point with vision.”

“All of Google was built because we started understanding text and web pages. So the fact that computers can understand images and videos has profound implications for our core mission,” he said in his introduction of Lens.

The feature is first being added to Google Photos and to Assistant, the personalized AI software available on a growing number of devices. Lens uses machine learning to examine images seen through your phone’s camera or saved in your photo library, and can act on what it recognizes to complete tasks.
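Google hasn’t said how Lens works under the hood, but the basic mechanic the paragraph above describes, feeding a camera image through a trained vision model and getting back labels that other features can act on, is easy to illustrate with off-the-shelf tools. Below is a minimal sketch, assuming torchvision’s pretrained ImageNet classifier as a stand-in for Google’s proprietary models:

```python
# Minimal sketch of a Lens-style pipeline: image in, labels out.
# torchvision's pretrained ResNet-50 is an illustrative stand-in;
# Google has not published how Lens itself is built.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing: resize, crop, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True)
model.eval()  # inference mode, no training

def label_image(path):
    """Return the top-5 class confidences and indices for one photo."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # add batch dimension
    with torch.no_grad():
        logits = model(batch)
    probs = torch.nn.functional.softmax(logits[0], dim=0)
    return torch.topk(probs, 5)

# e.g. top5 = label_image("flower.jpg")  # hypothetical file path
```

In a product like Lens, those labels would then be mapped to actions, reviews, or knowledge-panel entries rather than simply displayed.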

A few things Lens can do:

  • Tell you what species a flower is, just by viewing it through your phone’s camera.
  • Read a complicated Wi-Fi password through your phone’s camera and automatically log you into the network (a sketch of the text-extraction step follows this list).
  • Offer you reviews and other information about the restaurant or retail store across the street, just by your pointing the camera at the storefront.
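Of the three examples, the Wi-Fi one is the most mechanical, and its text-extraction half is easy to sketch. The snippet below is illustrative only, assuming the open-source Tesseract OCR engine (via the pytesseract wrapper) in place of Lens’s own text recognition, and a router label that prints fields like “SSID:” and “Password:”:

```python
# Sketch of the OCR half of the Wi-Fi use case: a photo of a router
# label goes in, structured credentials come out. pytesseract and the
# label format below are assumptions, not how Lens actually works.
import re
from PIL import Image
import pytesseract  # wrapper around the Tesseract OCR engine

def extract_wifi_credentials(photo_path):
    """OCR a router-label photo and pull out the SSID and password."""
    text = pytesseract.image_to_string(Image.open(photo_path))
    ssid = re.search(r"SSID[:\s]+(\S+)", text, re.IGNORECASE)
    password = re.search(r"Pass(?:word)?[:\s]+(\S+)", text, re.IGNORECASE)
    if not (ssid and password):
        return None  # label didn't match the assumed format
    return {"ssid": ssid.group(1), "password": password.group(1)}

# e.g. creds = extract_wifi_credentials("router_label.jpg")
# The auto-join step would then hand creds to the OS's Wi-Fi API.
```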
Lens was a favorite of several Google I/O attendees for its clear utility. It’s the kind of feature that could make the apps that contain it more uniquely useful. One developer told me it was “the first time AI is more than a gimmick.”
Typing a question into Assistant, for example, can feel like just using Google Search in a separate window. Add this new computer vision capability, though, and you have something a browser search box can’t do.
Lens brings Google’s use of AI into the physical world. It effectively acts as a search box, and it shows Google adapting to younger users’ shift toward visual media, the same preference that has made the social network Snap a magnet for people who would rather communicate with pictures than text.
Lens affirms a consistency of focus for Google. Here is augmented reality at work doing exactly what people know Google can do, which is retrieve information from the web.

But in considering how this new visual search option may play out, compare it to voice search, which for now often reads out whatever sits at the top of search results, sometimes producing answers that are inaccurate, offensive, or lacking in context.
Google also cleverly incorporated Lens into one of the company’s most-used apps, Photos, which has gained half a billion users in the two years since its launch. That placement could make Google’s mobile apps, and with them Google itself, more essential to mobile users, giving the company a place on their phones even if its own hardware, like the Pixel phone, fails to catch on.
Pichai said in his founders’ letter a year ago that part of the shift to being an AI-first company meant computing would become less device-centric. Lens, which lives inside apps rather than on any one device, is an example of that on mobile.
The technology behind Lens is essentially nothing new, and that also tells us something about where Google is going. It is not that Google is done coming up with new technologies, but that the company already has plenty of capabilities left to assemble into useful products.
