All Articles

Oct 27, 2024
DRAGIN: Dynamic RAG Based On Real-Time Information Needs Of LLMs
A recent study introduced a new approach to RAG, but more importantly, it highlighted a number of shortcomings of RAG as we know it.
Cobus Greyling
Oct 24, 2024
A Study Comparing RAG & Fine-Tuning For Knowledge Base Use-Cases
This study illustrates again that the use-case informs & dictates the technology.
Cobus Greyling
Oct 22, 2024
Get Started Using a Lead Capture Template with a HubSpot Integration
Revolutionize lead capture with Kore.ai's AI chatbot in HubSpot. Automate collection, enhance segmentation, and gain real-time insights to boost sales.
Alenis Fiallo
Oct 21, 2024
RAT — Retrieval Augmented Thoughts
Boost LLM reasoning with RAT, a blend of retrieval & chain-of-thought prompting that improves accuracy.
Cobus Greyling
Oct 16, 2024
Chain-of-Instructions (CoI) Fine-Tuning
This approach draws inspiration from Chain-of-Thought (CoT) prompting, which elicits step-by-step rationales from LLMs.
Cobus Greyling
Oct 15, 2024
Performing Multiple LLM Calls & Voting On The Best Result Are Subject To Scaling Laws
More LLM calls enhance performance on easy queries but diminish it on hard ones. So what scaling laws can be discovered & leveraged? What is the optimal number of LLM calls?
Cobus Greyling