Google Search Adds Multisearch Capabilities for Visual Queries


MOUNTAIN VIEW, Calif. – Google today announced a significant update to its search functionality: multisearch, a new capability that lets users combine text and images in a single query. The feature builds on the company’s existing Google Lens technology.

Users can take a picture or capture a screenshot, then add text to ask questions about the image. For example, someone might snap a photo of a dining table and type “coffee table like this but round”. Google interprets both the visual input and the text request and delivers relevant search results.

The approach aims to make searches more intuitive. People often see things in the real world they want to learn about, and they can now point their camera and ask about them naturally. That is especially helpful when users don’t know the exact name of an item: they can describe it visually and textually at the same time.

Multisearch is rolling out globally and will be available in the Google app on Android and iOS devices. The company said the feature is part of its ongoing effort to improve visual search and to make searching as easy as pointing and asking.

Google sees the tool as useful for both shopping and learning. Shoppers can find similar products by taking photos and adding their preferences, while students can identify plants or work through homework problems visually. It offers a new way to explore information.

Under the hood, the technology relies on advanced machine learning models that understand both the image content and the accompanying text. By connecting the visual elements with the user’s specific query, the system can return more precise answers than image search alone.
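Google has not detailed the underlying architecture, but the general idea can be illustrated with an open-source multimodal model. The sketch below uses CLIP via the Hugging Face transformers library, which is not Google’s production system, to fuse the embedding of a query photo with a text refinement and rank a handful of candidate images against the combined query. The file names, the averaging fusion, and the tiny catalog are illustrative assumptions.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Open-source stand-in for a multimodal retrieval model (not Google's system).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Embed the query photo (hypothetical file name).
query_image = Image.open("query_photo.jpg")
image_inputs = processor(images=query_image, return_tensors="pt")
with torch.no_grad():
    image_emb = model.get_image_features(**image_inputs)

# Embed the text refinement the user typed alongside the photo.
text_inputs = processor(text=["coffee table like this but round"],
                        return_tensors="pt", padding=True)
with torch.no_grad():
    text_emb = model.get_text_features(**text_inputs)

# Naive fusion: average the normalized embeddings into one query vector.
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
query_vec = (image_emb + text_emb) / 2

# Rank candidate catalog images (hypothetical files) by cosine similarity.
catalog_paths = ["item1.jpg", "item2.jpg", "item3.jpg"]
catalog_images = [Image.open(p) for p in catalog_paths]
catalog_inputs = processor(images=catalog_images, return_tensors="pt")
with torch.no_grad():
    catalog_emb = model.get_image_features(**catalog_inputs)
catalog_emb = catalog_emb / catalog_emb.norm(dim=-1, keepdim=True)

scores = (query_vec @ catalog_emb.T).squeeze(0)
ranking = scores.argsort(descending=True)
print([catalog_paths[i] for i in ranking])  # best matches first
```

A production search system would presumably use much larger models, a learned fusion of image and text rather than simple averaging, and an approximate nearest-neighbor index over a vast catalog, but the shape of the task is the same: scoring candidate results against a joint image-and-text query.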


Google encourages users to try the feature and to experiment with different combinations of pictures and text. The goal is to solve everyday problems with greater ease, and the update represents another step toward more natural search interactions.
