Blog
Conversational AI
Improving text embeddings with LLM generated synthetic data

Published: June 12, 2024
Last updated: February 18, 2026

Value Discovered

Text embeddings play a pivotal role in retrieving semantically similar text for RAG implementations and for in-context learning at inference.

This study focusses on creating high-quality text embeddings using only synthetic data and only 1,000 training iterations.

Generally available LLMs were used to generate synthetic data for hundreds of thousands of text embedding examples across a multitude of tasks and languages.

Experiments demonstrate that our method achieves strong performance on highly competitive text embedding benchmarks without using any labelled data. — Source

LLM‑synthetic pairs improve embedding quality at scale

Solved Challenge

Embedding-based retrieval is the foundational component of retrieval-augmented generation (RAG). RAG has proven to be a highly effective approach for letting LLMs reference dynamic external knowledge at inference time without retraining the base model.
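At its core, this retrieval step is just nearest-neighbour search over embedding vectors. A minimal sketch in plain Python, using toy hand-written vectors in place of real embedding-model outputs:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, doc_vecs, top_k=2):
    # Rank stored document vectors by similarity to the query vector.
    scored = sorted(
        ((cosine(query_vec, vec), doc_id) for doc_id, vec in doc_vecs.items()),
        reverse=True,
    )
    return [doc_id for _, doc_id in scored[:top_k]]

# Toy vectors standing in for real embedding-model outputs.
docs = {
    "refund_policy": [0.9, 0.1, 0.0],
    "shipping_info": [0.1, 0.9, 0.1],
    "api_reference": [0.0, 0.2, 0.9],
}
print(retrieve([0.8, 0.2, 0.1], docs))  # refund_policy ranks first
```

The quality of the whole pipeline hinges on how well those vectors capture meaning, which is exactly what the study's synthetic training data aims to improve.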

The study set out to address the overheads of creating large amounts of weakly supervised text pairs, building complex training pipelines, and manually collecting datasets that then need to be annotated.

The two overheads in collecting data for large RAG implementations are:

  • A complex multi-stage training pipeline that demands substantial engineering effort to curate large amounts of relevant pairs.
  • Manually collected datasets that are often constrained by the diversity of tasks and the coverage of languages.

In contrast, the new approach does not rely on any unlabelled documents or queries and can therefore generate more diverse synthetic data.

Considerations

There are a few considerations to keep in mind:

  1. Due to commercial interests, product marketing and fit, certain approaches are favoured and pushed above others. This study is another reminder that the tools and methods at our disposal are growing, and enterprises will have to select the right technology, processes and people for each task.
  2. This study addresses the challenges that arise when semantic search and similarity tasks are implemented over a very wide range of data. For instance, it considered 93 different human languages and hundreds of thousands of embedding tasks, so the sheer scale of the endeavour should be taken into account.
  3. I say this because there are many very capable no-code to low-code semantic / text embedding tools available where reference data can be uploaded via a web interface, after which the data is automatically chunked and vector representations are created. But I get the sense this study is focussing on very large-scale implementations.
  4. Most organisations also need to understand and gain insights from existing customer conversations, and to be able to create user and conversation intents from past conversations; hence solving for the long tail of the intent distribution.
  5. Hence synthetic data should not be seen as a silver bullet while neglecting reinforcement learning from human feedback, or any other form of weak supervision.
  6. Considering the image below, it is clear that this approach follows a two-step process, with the templates shown below. The synthetic data was created using GPT-4.
  7. The first prompt is used to brainstorm a list of potential retrieval tasks; the second then generates (query, positive, hard negative) triplets for each task.
  8. The templated process makes for easy automation, but the cost of generating vast amounts of training data will be high.
  9. There is arguably a case to be made for taking existing vetted and annotated data and augmenting it, especially based on the same user intent, and possibly branching out into other languages.
Example of a generation prompt template
Source
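The two-step flow described in points 6–8 can be sketched as follows. Here `call_llm` is a hypothetical stand-in for a real GPT-4 API call (it returns canned JSON so the sketch runs offline), and the prompt wording is illustrative rather than the paper's actual templates:

```python
import json

def call_llm(prompt):
    # Hypothetical stand-in for a GPT-4 API call; returns canned JSON
    # so the sketch runs without network access.
    if "brainstorm" in prompt:
        return json.dumps(["product FAQ retrieval", "legal clause lookup"])
    return json.dumps({
        "query": "how do I return a damaged item?",
        "positive": "Items damaged in transit can be returned within 30 days...",
        "hard_negative": "Our loyalty programme offers free shipping on...",
    })

def generate_triplets():
    # Step 1: brainstorm a list of candidate retrieval tasks.
    tasks = json.loads(call_llm("brainstorm a list of retrieval tasks"))
    # Step 2: for each task, generate a (query, positive, hard negative) triplet.
    triplets = []
    for task in tasks:
        example = json.loads(call_llm(f"generate one JSON triplet for: {task}"))
        triplets.append((task, example))
    return triplets

for task, example in generate_triplets():
    print(task, "->", example["query"])
```

In a real run, each `call_llm` would hit the model API with the paper's templates, which is exactly where the token costs discussed below accumulate.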

Creating Diverse Synthetic Data

To generate diverse synthetic data, the study developed a taxonomy that categorises embedding tasks into several groups, and then applies a different prompt template to each group.

As illustrated below…

Specific tasks and their functions
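In code terms, the taxonomy amounts to routing each task to its group's template. The group names and template wording below are illustrative placeholders, not the paper's exact taxonomy:

```python
# Illustrative task groups mapped to prompt templates (placeholder wording).
TEMPLATES = {
    "short-long": "Generate a short query and a long relevant passage about {topic}.",
    "long-short": "Generate a long document and a short label for {topic}.",
    "sts": "Generate two sentences about {topic} with a graded similarity score.",
}

def build_prompt(task_group, topic):
    # Pick the template for the task's group and fill in the topic.
    template = TEMPLATES.get(task_group)
    if template is None:
        raise ValueError(f"unknown task group: {task_group}")
    return template.format(topic=topic)

print(build_prompt("short-long", "warranty claims"))
```

Keeping the templates in a single mapping like this is what makes the generation process easy to automate at scale.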

Below are the statistics of the synthetic data: the task-type and language distribution of the generated examples.

Task-type and language distribution of the generated synthetic data
Source

To again emphasise the scale of this exercise: 500,000 examples and 150,000 unique instructions were generated using the Azure OpenAI Service, of which 25% were generated by GPT-35-Turbo and the rest by GPT-4.

The total token consumption was about 180,000,000.
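Those totals imply some easy back-of-the-envelope averages worth knowing when budgeting a similar run:

```python
# Back-of-the-envelope averages from the reported totals.
total_tokens = 180_000_000
examples = 500_000
instructions = 150_000

tokens_per_example = total_tokens / examples
tokens_per_instruction = total_tokens / instructions
print(tokens_per_example)      # 360.0
print(tokens_per_instruction)  # 1200.0
```

At roughly 360 tokens per example, multiplying by per-token API pricing for the model of choice gives a quick cost estimate before committing to a full generation run.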

In terms of data quality, the study found that portions of GPT-35-Turbo outputs did not strictly follow the guidelines specified in the prompt templates.

Nevertheless, the overall quality remained acceptable, and preliminary experiments have demonstrated the benefits of incorporating this data subset.
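Triplets like these are typically consumed by a contrastive training objective that pulls the query embedding toward the positive and pushes it away from the hard negative. Below is a toy InfoNCE-style loss in pure Python; this is a sketch of the general technique, not the paper's actual training code:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def info_nce_loss(query, positive, negatives, temperature=0.05):
    # InfoNCE: -log( exp(s_pos / t) / sum over {positive, negatives} exp(s / t) )
    pos_score = dot(query, positive) / temperature
    all_scores = [pos_score] + [dot(query, n) / temperature for n in negatives]
    log_denominator = math.log(sum(math.exp(s) for s in all_scores))
    return log_denominator - pos_score

q   = [0.6, 0.8]
pos = [0.7, 0.7]          # semantically close to the query
neg = [[0.9, -0.4]]       # hard negative: superficially similar, wrong answer
loss = info_nce_loss(q, pos, neg)
print(loss)  # near zero, since the positive already dominates
```

The loss shrinks as the positive's similarity score grows relative to the negatives', which is exactly the pressure that shapes the embedding space during fine-tuning.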

Conclusion

The paper highlights the significant improvement in text embeddings’ quality by leveraging Large Language Models (LLMs), particularly proprietary ones like GPT-4.

The taxonomy to generate diverse synthetic data across multiple languages is insightful, and can be useful on a smaller scale.

Future direction includes enhancing multilingual performance, exploring open-source LLMs for synthetic data generation, and investigating methods to improve inference efficiency and reduce storage costs for LLM-based text embeddings.

Previously published on Medium. 

Author
Cobus Greyling
Chief Evangelist