TECHNOLOGY

Soon your Google searches will be able to combine text and images


In May, Google executives unveiled experimental new artificial intelligence trained on text and images that they said would make internet search more intuitive. On Wednesday, Google offered a glimpse of how the technology will change the way people search the web.

Starting next year, the Multitask Unified Model, or MUM, will let Google users combine text and image searches using Lens, a smartphone app that is also built into Google Search and other products. You could, for example, take a photo of a shirt with Lens, then search for “socks with this pattern.” Searching “how to fix” on a photo of a bike part will surface instructional videos or blog posts.
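Google has not published MUM’s architecture or any API for it, so the mechanics of a combined image-plus-text query can only be sketched with a stand-in. The sketch below uses the open-source CLIP model; the fusion-by-averaging step, the model choice, and the file names are illustrative assumptions, not Google’s method.

```python
# A minimal sketch of scoring an image + text query against a catalog,
# using open-source CLIP as a stand-in for MUM (an assumption: Google has
# not disclosed how MUM actually fuses the two modalities).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_query(image_path: str, refinement_text: str) -> torch.Tensor:
    """Fuse a photo (e.g., a shirt) with a text refinement
    ("socks with this pattern") into one query embedding."""
    image = Image.open(image_path)
    inputs = processor(text=[refinement_text], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # Naive fusion: average the L2-normalized image and text embeddings.
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img + txt) / 2

# query = embed_query("shirt.jpg", "socks with this pattern")
# Rank catalog items by cosine similarity against `query`.
```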

Google will also use MUM in search results to suggest additional directions for users to explore. If you ask Google how to paint, for example, MUM can surface detailed step-by-step instructions, style tutorials, or guides to using homemade materials. Google plans to bring MUM to YouTube videos in search next week, where the AI will suggest searches below videos based on their transcripts.
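Google has not said how MUM mines transcripts for suggestions. As a rough stand-in, the sketch below pulls distinctive phrases from a transcript with plain TF-IDF; the function name and inputs are invented for the example.

```python
# Crude stand-in for transcript-based search suggestions: rank the phrases
# that make this transcript distinctive relative to background documents.
from sklearn.feature_extraction.text import TfidfVectorizer

def suggest_searches(transcript: str, background: list[str], k: int = 5) -> list[str]:
    """Return the k phrases that stand out in `transcript` vs. `background`."""
    vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
    tfidf = vec.fit_transform(background + [transcript])
    row = tfidf[len(background)].toarray().ravel()  # TF-IDF row for the transcript
    terms = vec.get_feature_names_out()
    top = row.argsort()[::-1][:k]
    return [terms[i] for i in top]

# print(suggest_searches(painting_transcript, other_transcripts))
```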

MUM is trained to form representations of both text and images. Integrating it into Google search results also marks a continuing shift toward language models that rely on vast amounts of text scraped from the web and a neural network architecture called the Transformer. One of the first such efforts came in 2019, when Google injected a language model called BERT into search results to change rankings and summarize the text shown below results.
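To make the re-ranking idea concrete, here is a sketch using a public BERT-derived cross-encoder (an MS MARCO re-ranker from the sentence-transformers library) to score query–passage pairs. Google’s production BERT integration is not public, so this shows the general technique, not Google’s implementation.

```python
# How a BERT-style model can re-rank results: a cross-encoder reads each
# (query, passage) pair jointly and outputs a relevance score. The model
# here is a public stand-in, not Google's production ranker.
from sentence_transformers import CrossEncoder

reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

query = "how to fix a bike derailleur"
passages = [
    "Step-by-step guide to adjusting a rear derailleur with a screwdriver.",
    "The history of the bicycle from 1817 to today.",
]
scores = reranker.predict([(query, p) for p in passages])

# Higher score = more relevant; sort passages by score to re-rank them.
for passage, score in sorted(zip(passages, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {passage}")
```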

The new Google technology will power web searches that start as a photo or screenshot and continue as a text query.

Photo: Google

Pandu Nayak, a Google vice president for search, said BERT represented the biggest change to search in a decade, but that MUM takes Google’s applied understanding of language to the next level.

For example, MUM draws on data from 75 languages instead of English alone, and it is trained on images and text rather than text only. It is 1,000 times larger than BERT when measured by the number of parameters, the connections between artificial neurons in a deep learning system.
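Google has not disclosed MUM’s exact parameter count, but taking BERT-large’s published figure of roughly 340 million parameters as a baseline (an assumption, since the article does not say which BERT variant is meant), a quick back-of-envelope calculation shows what “1,000 times larger” implies:

```python
# Rough scale check. Assumption: BERT-large's published ~340M parameters
# as the baseline; Google has not released MUM's actual size.
bert_large_params = 340_000_000
mum_estimate = 1_000 * bert_large_params
print(f"{mum_estimate:,} parameters")  # 340,000,000,000 -> hundreds of billions
```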

While calling MUM a major milestone in language understanding, Nayak also acknowledged that large language models come with known challenges and risks.

BERT and other Transformer-based models have been shown to absorb biases found in the data used to train them. In some cases, researchers have found that the larger the language model, the worse the amplification of bias and toxic text. People working to detect and fix racist, sexist, and otherwise problematic output from large language models say that scrutinizing the text used to train them is critical to reducing harm, and that the way data is filtered can have negative effects. In April, the Allen Institute for AI reported that block lists used in a popular dataset Google relied on to train its T5 language model can exclude entire groups, such as people who identify as queer, making it difficult for language models to understand text by or about those groups.
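The filtering problem the Allen Institute describes is easy to reproduce in miniature: a keyword block list applied to training documents also removes benign text by and about the groups those keywords name. The block list and documents below are invented for illustration and are not the actual C4 filter.

```python
# Simplified sketch of block-list filtering and its side effect. The list
# and documents are illustrative, not the real C4 "bad words" list.
BLOCKLIST = {"sex", "lesbian"}  # the real list has hundreds of entries

docs = [
    "Advice column on same-sex wedding planning.",
    "Recipe for sourdough bread.",
]

def keep(doc: str) -> bool:
    """Drop any document containing a block-listed word."""
    words = set(doc.lower().replace("-", " ").replace(".", "").split())
    return not (words & BLOCKLIST)

kept = [d for d in docs if keep(d)]
# The wedding-planning page is dropped along with genuinely toxic text, so a
# model trained on `kept` sees less language by or about queer people.
print(kept)  # ['Recipe for sourdough bread.']
```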

Search results for YouTube videos will soon recommend additional search ideas based on the content of their transcripts.

Courtesy Google

Over the past year, multiple AI researchers at Google, including former Ethical AI team co-leads Timnit Gebru and Margaret Mitchell, have said they faced opposition from executives to their work showing that large language models can harm people. Among Google employees, Gebru’s ouster following a dispute over a paper critical of the environmental and social costs of large language models led to allegations of racism, calls for unionization, and demands for stronger whistleblower protections for AI ethics researchers.

In June, five U.S. senators cited multiple instances of algorithmic bias at Alphabet, and Gebru’s dismissal, as reasons to question whether Google products like Search, or Google’s workplace, are safe for Black people. In a letter to executives, the senators wrote, “We are concerned that algorithms will rely on data that reinforces negative stereotypes and either exclude people from seeing ads for housing, employment, credit, and education or show them only predatory opportunities.”


