DRAGIN: Dynamic RAG based on real-time information needs of LLMs

Published: October 27, 2024
Last updated: February 18, 2026
[Figure: LLM solving a query using RIND and QFS]

A recent study introduced a novel approach to RAG, but more importantly it highlighted a number of shortcomings of RAG as we know it. The strategies used for deciding when to make use of RAG, as opposed to other methods, often rely on a set of static rules; the conversational UI usually has a number of parameters which trigger a call to the RAG implementation. The strategies for deciding what to retrieve typically limit themselves to the LLM's most recent sentence or last few tokens. With the lookup trigger not being optimised, unnecessary retrievals take place, and the retrieved data can add unwanted noise. The overhead in text adds additional cost and inference wait time, and potentially timeouts. Finally, RAG does not necessarily maintain and take into consideration the complete contextual span of the conversation.
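To make the critique concrete, here is a minimal sketch of the kind of static-rule trigger described above. The keyword list, the length threshold, and the function names are all hypothetical illustrations, not taken from the study:

```python
# Hypothetical static-rule RAG trigger, of the kind the study critiques.
# The rules below are illustrative, not from the paper.
TRIGGER_KEYWORDS = {"who", "when", "where", "latest", "price"}

def should_retrieve(user_turn: str, max_self_contained_len: int = 40) -> bool:
    """Static heuristic: retrieve if the turn looks factual or is long."""
    words = user_turn.lower().split()
    if any(w in TRIGGER_KEYWORDS for w in words):
        return True
    return len(words) > max_self_contained_len

def build_query(generated_tokens: list[str], window: int = 12) -> str:
    """Typical static 'what to retrieve' rule: just the last few tokens."""
    return " ".join(generated_tokens[-window:])
```

Because both decisions are fixed in advance, this trigger fires on harmless turns and misses genuine knowledge gaps, which is exactly the behaviour dynamic RAG is meant to replace.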

Dynamic RAG

Dynamic Retrieval Augmented Generation actively decides when and what to retrieve during the text generation process.

The two key elements of Dynamic RAG are:

  1. Identifying the optimal moment to activate the retrieval module (when to retrieve), and
  2. Crafting the appropriate query once retrieval is triggered (what to retrieve).

The proposed framework consists of two components: RIND and QFS.

[Figure: Shortcomings of traditional RAG; uncertainty in T-RAG]

RIND

Real-time Information Needs Detection, which takes into consideration:

  1. The LLM’s uncertainty about its own generated content,
  2. The importance of each token, and
  3. The semantic significance of each token.
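A toy sketch of this kind of token-level scoring might look as follows. The entropy-times-attention score, the stopword filter standing in for "semantic significance", and the threshold are simplified illustrations of the idea, not the paper's exact formulation:

```python
import math

# Toy stand-in for semantic significance: filter out function words.
STOPWORDS = {"the", "a", "of", "and", "to"}

def entropy(probs):
    """Shannon entropy of a next-token distribution (model uncertainty)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def rind_scores(tokens, token_probs, attn_weights):
    """Score each token: uncertainty * attention received * semantic flag."""
    scores = []
    for tok, probs, attn in zip(tokens, token_probs, attn_weights):
        semantic = 0.0 if tok.lower() in STOPWORDS else 1.0
        scores.append(entropy(probs) * attn * semantic)
    return scores

def needs_retrieval(scores, threshold=0.5):
    """Trigger retrieval when any token's information need exceeds the bar."""
    return max(scores) > threshold
```

The key point is that the trigger is computed from the model's own state during generation, rather than from static rules fixed before the conversation starts.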

QFS

For the formulation of retrieval queries, a framework called QFS (Query Formulation based on Self-Attention) is introduced. QFS reimagines query formulation by leveraging the LLM’s self-attention across the entire context.
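A minimal sketch of attention-based query formulation, assuming we already have the attention weights from the position that triggered retrieval back over the full context (the function name and all values are illustrative):

```python
def qfs_query(context_tokens, attn_from_trigger, top_k=3):
    """Form a retrieval query from the tokens the trigger position
    attends to most, across the entire context, keeping original order."""
    ranked = sorted(range(len(context_tokens)),
                    key=lambda i: attn_from_trigger[i], reverse=True)
    chosen = sorted(ranked[:top_k])  # restore document order
    return " ".join(context_tokens[i] for i in chosen)
```

Unlike the last-few-tokens heuristic, this lets a token early in the conversation contribute to the query whenever the model still attends to it.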

DRAGIN

The framework is specifically designed to make decisions on when and what to retrieve, based on the LLM’s real-time information needs during the text generation process. DRAGIN is described as a lightweight RAG framework that can be incorporated into any Transformer-based LLM without further training, fine-tuning, or prompt engineering.

Single-Round Retrieval-Augmented LLM

Large Language Models (LLMs) have proven highly effective across various tasks. Nevertheless, their internal knowledge is often insufficient for tasks demanding extensive knowledge. To tackle this issue, Retrieval-Augmented Generation (RAG) strategies are frequently used to boost LLM performance. The most common and straightforward approach is single-round retrieval augmentation, where retrieval is performed once, based on the user’s initial input, before generation begins.
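The single-round pattern can be sketched in a few lines, using a toy word-overlap retriever in place of a real lexical or dense index; all names and the prompt format are illustrative:

```python
def retrieve(query, corpus, top_k=1):
    """Toy lexical retriever: rank passages by word overlap with the query."""
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def single_round_rag(question, corpus, generate):
    """Retrieve once from the user's question, then generate with context."""
    context = "\n".join(retrieve(question, corpus))
    return generate(f"Context:\n{context}\n\nQuestion: {question}")
```

Note that retrieval happens exactly once, before generation: whatever knowledge gaps emerge mid-generation go unserved, which motivates the multi-round approach below.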

Multi-Round Retrieval-Augmented LLM

While single-round retrieval suffices for straightforward tasks or instances with clear user information needs, it falls short for complex tasks like long-form question answering, open-domain summarisation, and chain-of-thought reasoning. Relying solely on the user’s initial input for retrieval may fail to encompass all the external knowledge necessary for the model’s requirements. Consequently, researchers have initiated investigations into multi-round retrieval augmentation.
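The multi-round pattern can be sketched as a loop that alternates generation and retrieval until the model no longer signals an information need. The callback names and loop bound here are hypothetical, not from any specific paper:

```python
def multi_round_rag(question, generate_step, needs_more, retrieve,
                    max_rounds=3):
    """Iteratively generate; when the model signals an information need,
    retrieve again and fold the new evidence into the context."""
    context, answer = [], ""
    for _ in range(max_rounds):
        answer = generate_step(question, context, answer)
        query = needs_more(answer)       # None means no further need
        if query is None:
            break
        context.extend(retrieve(query))  # grow the evidence and re-generate
    return answer
```

DRAGIN's contribution fits inside this loop: RIND plays the role of `needs_more`, and QFS decides what query to hand to `retrieve`.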

Conclusion

As I have mentioned before, complexity is being added to LLM integrations and applications. Added to this, a multi-round querying approach is being taken. This adds additional cost, latency and dependency on one or more LLMs, which should encourage enterprises to use open-sourced, locally hosted LLMs. With the increase in RAG complexity, consideration should be given to RAG Agents, or as LlamaIndex refers to it, Agentic RAG.

Find the full study here
authors
Cobus Greyling
Chief Evangelist