What is Retrieval-Augmented Generation (RAG)?

Large language models usually give great answers, but because they are limited to the training data used to create the model, over time those answers can become incomplete, or worse, just plain wrong. One way to improve LLM results is retrieval-augmented generation, or RAG. In this video, IBM Senior Research Scientist Marina Danilevsky explains the LLM/RAG framework and how the combination delivers two big advantages: the model gets the most up-to-date, trustworthy facts, and you can see where the model got its information, lending more credibility to what it generates.
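
To make the retrieve-then-generate pattern concrete, here is a minimal sketch of the idea: fetch relevant passages at query time, then build a prompt that includes them along with their source identifiers so the answer is grounded and traceable. The corpus, keyword-overlap scoring, and prompt template are illustrative assumptions, not the system described in the video; a real deployment would use a search index or vector database and a production LLM.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The corpus, scoring method, and prompt template are illustrative
# assumptions, not the implementation discussed in the video.

from typing import List, Tuple

# Toy document store of (source id, text) pairs. In practice this would be
# a search index or vector database over up-to-date, trusted documents.
CORPUS: List[Tuple[str, str]] = [
    ("doc-1", "RAG augments a language model with documents retrieved at query time."),
    ("doc-2", "Citing retrieved sources lets users verify where an answer came from."),
]


def retrieve(query: str, k: int = 2) -> List[Tuple[str, str]]:
    """Rank documents by naive keyword overlap with the query
    (a placeholder for real keyword or dense retrieval)."""
    q_terms = set(query.lower().split())
    scored = [
        (len(q_terms & set(text.lower().split())), source, text)
        for source, text in CORPUS
    ]
    scored.sort(reverse=True)
    return [(source, text) for _, source, text in scored[:k]]


def build_prompt(query: str, passages: List[Tuple[str, str]]) -> str:
    """Prepend retrieved passages (with source ids) so the model can ground
    its answer and the user can see where the facts came from."""
    context = "\n".join(f"[{source}] {text}" for source, text in passages)
    return (
        "Answer using only the context below and cite the sources you use.\n\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )


if __name__ == "__main__":
    question = "Why cite retrieved sources?"
    prompt = build_prompt(question, retrieve(question))
    print(prompt)  # This augmented prompt would then be sent to an LLM.
```

Because the retrieved context is refreshed on every query, updating the document store is enough to keep answers current, and the `[doc-N]` tags in the prompt give the model something concrete to cite back to the user.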