Google announced several updates to its visual search tool at the I/O 2019 conference, promising dozens of integrated features, augmented reality functions, and even more capable artificial intelligence.
In addition to the new tools, the Mountain View tech giant also introduced significant visual changes that make Google Lens easier to use, offering new modes integrated into the app's camera view: not only Search, but also Translate, Text, Shopping, and Food.
Now these changes are rolling out. When you open Google Lens and point the camera at a subject, artificial intelligence and machine learning work to identify the scene automatically. If the camera is pointed at text in another language, Lens switches to translation mode; if it detects a restaurant menu, the Food mode is activated instead.
The new modes appear in a carousel at the bottom of the screen, with Search shown first. The user can switch modes manually, or let the system select one automatically based on the scene.
In addition, a new crop feature lets users select a specific part of a photographed or saved image for a more refined, targeted search, allowing Lens to analyze that region and offer more detailed results.
The new version of Google Lens is currently appearing for users running version 9.91 of the Google app beta on Pixel smartphones and Samsung devices. The update appears to be rolling out gradually, so more users should receive the new features soon.