
A New Study Compares RAG & Fine-Tuning For Knowledge Base Use-Cases
This study again illustrates that the use-case informs & dictates the choice of technology.

Get Started Using a Lead Capture Template with a HubSpot Integration
Revolutionize lead capture with Kore.ai's AI chatbot in HubSpot. Automate collection, enhance segmentation, and gain real-time insights to boost sales.

RAT — Retrieval Augmented Thoughts
Synergising RAG With Sophisticated Long-Horizon Reasoning

Chain-of-Instructions (CoI) Fine-Tuning
This approach draws inspiration from Chain-of-Thought (CoT) prompting, which elicits step-by-step rationales from LLMs.

Performing Multiple LLM Calls & Voting On The Best Result Are Subject To Scaling Laws
More LLM calls improve performance on easy queries but degrade it on hard ones. So what scaling laws can be discovered & leveraged, and what is the optimal number of LLM calls?
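The call-and-vote scheme this teaser refers to can be sketched as simple majority voting over repeated samples. This is a minimal illustration, not the study's implementation; `call_llm` is a hypothetical stand-in for sampling a model at non-zero temperature:

```python
from collections import Counter
from typing import Callable

def majority_vote(answers: list[str]) -> str:
    """Pick the answer that appears most often among the samples."""
    return Counter(answers).most_common(1)[0][0]

def answer_by_voting(call_llm: Callable[[str], str], prompt: str, n_calls: int = 5) -> str:
    """Query the model n_calls times and vote on the results.

    n_calls is the knob the scaling question is about: more calls tend
    to help on easy queries but can hurt on hard ones.
    """
    samples = [call_llm(prompt) for _ in range(n_calls)]
    return majority_vote(samples)
```

With a deterministic stub in place of a real model, `answer_by_voting` simply returns whichever answer the stub produces most often across the five calls.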

Exploring the Purpose, Power & Potential of Small Language Models (SLMs)
A few months ago Microsoft open-sourced the Orca Small Language Model. In this article you will find some background on SLMs and practical ways of experimenting with them.










