The way the Internet is searched has not changed in decades. Now Google researchers have proposed a new concept for search engines: they would be built on language models, and searching would feel like a conversation with a human expert.
In 1998, two graduate students at Stanford University published a paper describing a new search engine called Google that exploited the hypertext structure of the web. Google, they claimed, was "designed to efficiently crawl and index the Internet, and to provide more relevant results than existing systems."
The innovation was the PageRank algorithm, which estimated a page's importance from the number of links pointing to it and used that score to rank search results. With PageRank, Google opened the gates to the internet, and Sergey Brin and Larry Page built the company of the same name into one of the largest in the world.
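The core of PageRank can be illustrated with a short sketch. The graph below is a made-up three-page web, and the code is a simplified textbook version of the algorithm, not Google's production implementation:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                # A page with no outgoing links spreads its rank evenly.
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:
                # Each page passes a share of its rank to the pages it links to.
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

web = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
ranks = pagerank(web)
# "c" receives links from both "a" and "b", so it ends up ranked highest.
```

The intuition is exactly the one the article describes: a page linked to by many (and by important) pages accumulates more rank.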
A team of Google researchers recently published a proposal to modernize the way search engines work. They propose replacing the ranking method with a single, large, AI-powered language model, a future version of BERT or GPT-3. Users would no longer have to dig through a long list of web pages on their own. Instead, they would ask questions, and the language model would answer them directly. This approach could change not only search engines, but also the way we interact with them.
But first, we need to fix the problems with current language models. For example, as researchers at Google and other companies point out, AI sometimes generates biased and offensive responses.
Modern search engines
Search engines have gotten faster and more accurate over the years. Results are now ranked using artificial intelligence, and Google uses the BERT language model to better understand search queries. But beyond these innovations, all the major search engines work the same way they did 20 years ago.
A crawler scans the Internet, maintains a list of everything it finds, and indexes the web pages.
Results that match the user's query are then collected from this index and ranked.
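The two-step pipeline above can be sketched as a toy inverted index. The documents are invented for illustration, and the ranking (counting matched query terms) is far simpler than what real engines use:

```python
from collections import defaultdict

documents = {
    "doc1": "google search engine ranks pages by links",
    "doc2": "language models answer questions directly",
    "doc3": "a search engine indexes pages before ranking results",
}

# Step 1: index — map each word to the set of documents containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

# Step 2: collect matching documents for a query and rank them
# by how many query terms each one contains.
def search(query):
    scores = defaultdict(int)
    for word in query.split():
        for doc_id in index.get(word, ()):
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

results = search("search engine pages")
# Only doc1 and doc3 mention the query terms; doc2 is never returned.
```

Note that the output is still a ranked list of documents, which is precisely the limitation the next paragraph describes.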
Even the best modern search engines still return a list of documents that contain the information you are looking for; they don't provide the answer itself. They also handle poorly queries that require combining answers from multiple sources. Imagine asking a doctor for advice and, instead of a direct answer, being handed a list of articles to study.
The concept of search engines of the future
Donald Metzler and his colleagues at Google Research want to create a search engine that acts like a human expert: providing answers in natural language, drawn from multiple documents, and backing them with links to the underlying sources, much as Wikipedia articles do.
Large language models partially meet these requirements. For example, GPT-3 is trained on a large amount of data from the Internet and hundreds of books. It pulls information from several places and provides answers in natural language. However, GPT-3 does not track the sources it uses and cannot provide evidence for its answers. As a result, it is impossible to say how accurate the information it provides is.
Metzler and his colleagues call language models dilettantes: "They are supposed to be smart, but their knowledge is superficial." In their opinion, this problem can be solved by teaching future versions of BERT and GPT-3 to store source records. At the moment, none of these models are capable of this. But in principle it is possible, and research in this direction has already begun.
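The behavior the researchers describe could be sketched, very loosely, as a retrieval step followed by an answer that keeps pointers to its sources. Everything below is hypothetical: the corpus, the URLs, and the naive word-overlap retrieval are stand-ins, and a real system would synthesize the answer with a language model rather than stitch passages together:

```python
def answer_with_sources(query, corpus):
    """corpus maps a source URL to its text; retrieval is naive word overlap."""
    terms = set(query.lower().split())
    # Retrieve sources that share vocabulary with the query.
    sources = [url for url, text in corpus.items()
               if terms & set(text.lower().split())]
    # Placeholder for model-generated text: just join the retrieved passages.
    answer = " ".join(corpus[url] for url in sources)
    return {"answer": answer, "sources": sources}

corpus = {
    "https://example.org/pagerank": "pagerank ranks pages by incoming links",
    "https://example.org/bert": "bert helps interpret search queries",
}
result = answer_with_sources("how does pagerank rank pages", corpus)
# The answer arrives together with the list of sources that support it.
```

The point of the sketch is the return shape: unlike GPT-3 today, the answer carries an explicit record of where it came from, which is what would make its claims checkable.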
Will search engines change in the future, and how? 25.05.2021