RAG is a powerful technique that supplies your agent with external information and can improve agent performance. However, with RAG alone, an agent can pull in documents that are irrelevant to the user's question.
What if you could catch and fix that before generating an answer? In this video, we’ll show how to use in-the-loop evaluations to filter out noisy results and boost answer quality.
OpenEvals: https://github.com/langchain-ai/opene...
Corrective RAG agent repo: https://github.com/jacoblee93/correct...
0:00 – Intro: What we’re building today
0:28 – What is RAG? (Concept overview)
1:45 – Evaluating RAG with OpenEvals
2:35 – Baseline Agent (No Reflection)
3:26 – Improved Architecture: Reflection Steps
4:09 – Code Walkthrough
6:18 – Live Demo & Trace in LangSmith
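The reflection step described above boils down to grading each retrieved document for relevance before the answer is generated, and dropping the ones that fail. Here is a minimal sketch of that filtering loop; the `grade_relevance` judge is a toy keyword-overlap stand-in for illustration only — in a real agent you would use an LLM-as-judge evaluator such as those in OpenEvals.

```python
import re

def grade_relevance(question: str, document: str) -> bool:
    """Toy judge: pass a document if it shares at least two
    non-trivial words with the question. A real implementation
    would call an LLM-as-judge here instead."""
    q_terms = {w for w in re.findall(r"\w+", question.lower()) if len(w) > 3}
    d_terms = set(re.findall(r"\w+", document.lower()))
    return len(q_terms & d_terms) >= 2

def corrective_rag_filter(question: str, retrieved: list[str]) -> list[str]:
    """The in-the-loop evaluation step: keep only documents the
    judge considers relevant, before any answer is generated."""
    return [doc for doc in retrieved if grade_relevance(question, doc)]

# Example: one off-topic document sneaks into the retrieved set.
docs = [
    "LangSmith traces every agent step for debugging and evaluation.",
    "Bananas are rich in potassium and easy to grow in warm climates.",
    "Agent evaluation with LangSmith helps catch regressions early.",
]
kept = corrective_rag_filter("How does agent evaluation work in LangSmith?", docs)
```

Running this keeps the two LangSmith documents and discards the off-topic one, so the generation step only sees relevant context.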