All Articles

Comparing human, LLM & LLM-RAG responses
A recent study focusing on healthcare and preoperative medicine compared expert human feedback with LLM-generated and RAG-enhanced responses.

Designing conversational UIs that match user intent
In this article I illustrate how to achieve intent alignment using the Kore.ai XO Platform Intent Discovery Tool.

A benchmark for verifying chain-of-thought
A chain of thought is only as strong as its weakest link; a recent study from Google Research introduces a benchmark for verifiers of reasoning chains.

Seven RAG engineering failure points
Explore the seven key failure points in engineering RAG systems, from missing content and retrieval errors to wrong formats and incomplete responses.