Why RAG is unable to resolve the hallucination issue with generative AI

Businesses aiming to incorporate generative AI models into their operations face significant challenges due to hallucinations, which are essentially falsehoods propagated by the models.

Because models lack true intelligence and are merely predicting words, images, speech, music, and other data according to a private schema, they sometimes get things wrong. Sometimes terribly wrong. In a recent Wall Street Journal article, a source recounts an incident in which Microsoft’s generative AI invented meeting attendees and implied that conference calls covered topics that weren’t actually discussed on the call.

As I mentioned previously, hallucinations might be an intractable issue with today’s transformer-based model architectures. However, some providers of generative AI contend that they can be largely eliminated by a technical approach called retrieval augmented generation (RAG).

This is how Squirro, one vendor, presents it:
At the heart of the offering is the concept of Retrieval Augmented LLMs, or Retrieval Augmented Generation (RAG), embedded in the solution. [Our generative AI] makes a unique guarantee of zero hallucinations. It is credible because each piece of information it produces can be traced back to its original source.
Here is a comparable pitch from SiftHub:
Using RAG technology and large language models fine-tuned on industry-specific knowledge, SiftHub enables businesses to produce customized responses with no hallucinations. This ensures greater transparency and decreased risk, and encourages complete trust in using AI for all of their needs.


RAG was pioneered by data scientist Patrick Lewis, a researcher at Meta and University College London and the lead author of the 2020 paper that coined the term. When applied to a model, RAG uses what is essentially a keyword search to retrieve documents that might be relevant to a question (for example, a Wikipedia entry about the Super Bowl), and then asks the model to generate an answer in light of this added context.
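
To make the mechanics concrete, here is a minimal, illustrative sketch of that retrieve-then-generate pattern in Python. The toy corpus, the overlap-based scoring, and the prompt wording are assumptions made for illustration, not any particular vendor’s or paper’s implementation; a real system would pass the assembled prompt to a language model rather than print it.

```python
# Minimal sketch of retrieval augmented generation: find documents that share
# keywords with the question, then fold them into the prompt as extra context.
# Everything here (corpus, scoring, prompt wording) is illustrative.

def keyword_score(query: str, document: str) -> int:
    """Count how many of the query's words also appear in the document."""
    return len(set(query.lower().split()) & set(document.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest keyword overlap."""
    return sorted(corpus, key=lambda d: keyword_score(query, d), reverse=True)[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Ask the model to answer in light of the retrieved context."""
    context = "\n\n".join(documents)
    return (
        "Answer the question using the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

corpus = [
    "The Kansas City Chiefs won the Super Bowl last year.",
    "Retrieval augmented generation pairs a search step with a language model.",
]
query = "Who won the Super Bowl last year?"
prompt = build_prompt(query, retrieve(query, corpus))
print(prompt)  # a real system would hand this prompt to a generative model
```

Note that nothing in this pipeline forces the model to actually rely on the retrieved context when it answers, which is the crux of the argument that follows.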

“The default response from a generative AI model, such as ChatGPT or Llama, when you ask a question is from its ‘parametric memory,’ that is, from the knowledge stored in its parameters as a result of training on massive amounts of web data,” said David Wadden, a research scientist at AI2, the AI-focused research division of the nonprofit Allen Institute. “But, just as you are likely to give more accurate answers if you have a reference [such as a book or a file] in front of you, the same is true in some cases for models.”

RAG is unquestionably useful: it lets you attribute the things a model generates to retrieved documents in order to verify their veracity (and, as a bonus, it helps prevent potentially copyright-infringing regurgitation). RAG also lets businesses that would rather not have their documents used to train a model, such as healthcare and legal firms, allow models to draw on those documents in a more secure and temporary way.

But RAG certainly cannot stop a model from hallucinating. And many vendors gloss over its limitations.

According to Wadden, RAG works best in “knowledge-intensive” scenarios where a user wants to use a model to address an “information need,” for instance, to find out who won last year’s Super Bowl. In these scenarios, the document that answers the question is likely to contain many of the same keywords as the question (such as “Super Bowl” and “last year”), making it relatively easy to find via keyword search.

With more “reasoning-intensive” tasks like coding and math, it is harder to specify in a keyword-based search query the concepts needed to answer a request, much less identify which documents might be relevant.
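
A toy comparison illustrates the gap; the queries and documents below are made up for illustration. The fact-lookup question shares several words with the document that answers it, while the math question shares almost nothing with the document that would actually help.

```python
# Illustrative only: keyword overlap between a query and the document that answers it.
def overlap(a: str, b: str) -> set[str]:
    return set(a.lower().split()) & set(b.lower().split())

fact_query = "who won the super bowl last year"
fact_doc = "The Kansas City Chiefs won the Super Bowl last year"

math_query = "prove that the sum of the first n odd numbers is n squared"
math_doc = "Induction: assume the claim holds for k and show it holds for k + 1"

print(overlap(fact_query, fact_doc))   # several shared keywords -> easy to retrieve
print(overlap(math_query, math_doc))   # almost no shared keywords -> easy to miss
```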

Even for simple questions, models can get “distracted” by irrelevant material in the documents, particularly when the documents are long and the answer isn’t immediately apparent. Or they can, for reasons still unknown, simply ignore the contents of the retrieved documents and rely on their parametric memory instead.

RAG is also expensive in terms of the hardware needed to apply it at scale.

That’s because retrieved documents, whether from the web, an internal database, or somewhere else, have to be kept in memory, at least temporarily, so the model can refer back to them. A further expense is the compute for the additional context a model has to process before generating its response. That is a significant consideration for a technology already notorious for the amount of computing power and electricity it requires even for basic operations.
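
As a rough back-of-envelope illustration (every number below is an assumption, not a measured figure from any provider), the extra context tokens alone add up quickly:

```python
# Hypothetical estimate of the cost of the extra context RAG adds per query.
# All numbers are assumptions for illustration; real prices and sizes vary widely.
TOKENS_PER_RETRIEVED_CHUNK = 800      # assumed average chunk length
CHUNKS_PER_QUERY = 5                  # assumed chunks stuffed into each prompt
QUERIES_PER_DAY = 100_000             # assumed traffic
PRICE_PER_1K_INPUT_TOKENS = 0.01      # assumed dollars per 1,000 prompt tokens

extra_tokens = TOKENS_PER_RETRIEVED_CHUNK * CHUNKS_PER_QUERY      # per query
daily_cost = extra_tokens * QUERIES_PER_DAY / 1_000 * PRICE_PER_1K_INPUT_TOKENS

print(f"Extra context per query: {extra_tokens} tokens")
print(f"Added daily cost from retrieved context alone: ${daily_cost:,.0f}")
# With these assumed numbers: 4,000 extra tokens per query and about $4,000 per day,
# before counting the memory and infrastructure needed to store and serve documents.
```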

That is not to say RAG can’t be improved. Wadden pointed to a number of ongoing efforts to train models to make better use of the documents RAG retrieves.

Some of these efforts involve models that can “decide” when to make use of the documents, or that can opt to skip retrieval altogether if they deem it unnecessary. Others focus on methods for indexing massive document datasets more efficiently, and on improving search through better document representations, ones that go beyond keywords.
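
As a rough sketch of the first idea, the “decide when to retrieve” behavior, here is a toy gate in Python. The heuristic and the stub functions are invented for illustration; published approaches typically train the model itself to emit a retrieve-or-not signal rather than relying on hand-written rules like the one below.

```python
# Toy sketch of selective retrieval: only fetch documents when the query looks
# like it needs external facts. The heuristic below is a hand-written stand-in
# for what research systems learn from data.

def looks_knowledge_intensive(query: str) -> bool:
    """Crude proxy for a learned retrieve/don't-retrieve decision."""
    cues = ("who", "when", "where", "which", "winner", "capital of", "year")
    return any(cue in query.lower() for cue in cues)

def answer(query: str, retrieve_fn, generate_fn) -> str:
    if looks_knowledge_intensive(query):
        context = retrieve_fn(query)              # ground the answer in documents
        return generate_fn(query, context=context)
    return generate_fn(query, context=None)       # fall back to parametric memory

# Stub retriever and generator so the sketch runs end to end.
def fake_retrieve(query: str) -> str:
    return "stub document relevant to: " + query

def fake_generate(query: str, context=None) -> str:
    source = "retrieved context" if context else "parametric memory"
    return f"[answer to '{query}' from {source}]"

print(answer("Who won the Super Bowl last year?", fake_retrieve, fake_generate))
print(answer("Write a haiku about autumn.", fake_retrieve, fake_generate))
```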

“We’re pretty good at retrieving documents based on keywords, but not so good at retrieving documents based on more abstract concepts, like a proof technique needed to solve a math problem,” Wadden said. “Research is needed to build document representations and search techniques that can identify the relevant documents for more abstract generation tasks. I think this is mostly an open question at this point.”

So while RAG can help reduce a model’s hallucinations, it is not the answer to all of AI’s hallucinatory problems. Be wary of any vendor that claims otherwise.
