Improving Text Embeddings with LLM Generated Synthetic Data

Published: June 12, 2024 | Last updated: November 17, 2025

Value Discovered

Text embeddings play a pivotal role in retrieving semantically similar text for RAG implementations and for in-context learning at inference.
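As a concrete illustration of embedding-based retrieval (a minimal sketch, not the model or code from the paper), the snippet below embeds a few documents and a query with an off-the-shelf sentence-transformers model and ranks the documents by cosine similarity; the model name and example texts are placeholder choices.

```python
# Minimal embedding-based retrieval sketch (illustrative only; not the paper's model).
# Assumes the `sentence-transformers` package is installed; the model name is a placeholder.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works here

documents = [
    "RAG retrieves reference passages before the LLM answers.",
    "Text embeddings map sentences to dense vectors.",
    "Synthetic data can be generated with proprietary LLMs.",
]
query = "How does retrieval-augmented generation find relevant context?"

# Encode and L2-normalise so that a dot product equals cosine similarity.
doc_vecs = model.encode(documents, normalize_embeddings=True)
query_vec = model.encode([query], normalize_embeddings=True)[0]

scores = doc_vecs @ query_vec   # cosine similarities
ranked = np.argsort(-scores)    # best match first
for idx in ranked:
    print(f"{scores[idx]:.3f}  {documents[idx]}")
```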

This study focusses on creating high-quality text embeddings using only synthetic data and only 1,000 training iterations.

Generally available LLMs were used to generate synthetic data for 100,000+ text embedding tasks across a multitude of task types and languages.

Experiments demonstrate that our method achieves strong performance on highly competitive text embedding benchmarks without using any labelled data. — Source

[Figure: LLM-generated synthetic pairs improve embedding quality at scale]

Solved Challenge

Embedding-based retrieval is the foundational component of retrieval-augmented generation (RAG). RAG has proven to be a highly effective approach for allowing LLMs to reference dynamic external knowledge at inference without retraining the base model.
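To show where embedding-based retrieval slots into RAG at inference time, here is a minimal sketch (not Kore.ai's or the paper's pipeline): it assumes a `retrieve(query, k)` helper like the one sketched earlier and simply packs the top-k passages into a grounded prompt for whatever LLM sits downstream.

```python
# Minimal RAG prompt-assembly sketch: retrieved passages are injected into the
# prompt at inference time, so the base model itself is never retrained.
from typing import Callable, List

def build_rag_prompt(query: str,
                     retrieve: Callable[[str, int], List[str]],
                     k: int = 3) -> str:
    """Fetch top-k passages for `query` and pack them into a grounded prompt."""
    passages = retrieve(query, k)
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

# Example with a stub retriever (a real system would rank by embedding similarity).
stub = lambda q, k: ["Embeddings map text to vectors.", "RAG adds retrieved context."][:k]
print(build_rag_prompt("What is RAG?", stub))
```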

The study set out to avoid the usual overheads of building embedding datasets: creating large amounts of weakly supervised text pairs, building complex training pipelines, or manually collecting datasets that then need to be annotated.

The two main overheads in collecting data for large RAG implementations are:

  • Complex multi-stage training pipelines that demand substantial engineering effort to curate large numbers of relevant pairs.
  • Manually collected datasets that are often constrained in task diversity and language coverage.

The new approach does not rely on any unlabelled documents or queries, and can therefore generate more diverse synthetic data.

Considerations

There are a few considerations to keep in mind…

  1. Due to commercial interests, product marketing and product fit, certain approaches are favoured and pushed above others. This study is another reminder that the tools and methods at our disposal are growing, and enterprises will have to select the right technology, processes and people for each task.
  2. This study addresses the challenges that arise when semantic search and similarity tasks are implemented over a very wide range of data; it covered 93 human languages and hundreds of thousands of embedding tasks. The sheer scale of the endeavour should therefore be taken into account.
  3. I say this because there are many very capable no-code to low-code semantic search and text embedding tools available, where reference data can be uploaded via a web interface, after which the data is automatically chunked and vector representations are created. But I get the sense this study is focussed on very large-scale implementations.
  4. There is also a requirement from most organisations to understand and gain insights from existing customer conversations, and to derive user and conversation intents from past conversations, hence solving for the long tail of the intent distribution.
  5. Hence synthetic data should not be seen as a silver bullet that allows reinforcement learning from human feedback, or other forms of weak supervision, to be neglected.
  6. Considering the image below, this method follows a two-step approach, and the templates are shown below. The synthetic data was created using GPT-4.
  7. The first prompt is used to brainstorm a list of potential retrieval tasks; a second prompt then generates (query, positive, hard negative) triplets for each task, as shown in the sketch after the figure below.
  8. The templated process makes automation easy, but the cost of generating vast amounts of training data will be high.
  9. There is arguably a case to be made for taking existing vetted and annotated data and augmenting it, especially around the same user intents, and possibly branching out into other languages.
[Figure: example prompt template and generated output, featuring the query "what is a business plan" — Source]
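Below is a minimal sketch of that two-step flow, assuming the OpenAI Python client and GPT-4 access; the prompt wording and JSON structure here are simplified stand-ins for illustration, not the paper's actual templates, and production code would validate and retry malformed model output.

```python
# Two-step synthetic-data sketch (simplified; prompts are NOT the paper's templates).
# Step 1: brainstorm retrieval tasks.  Step 2: generate a (query, positive, hard
# negative) triplet for each task.  Assumes `openai` is installed and OPENAI_API_KEY is set.
import json
from openai import OpenAI

client = OpenAI()

def chat(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",  # the study used GPT-4 and GPT-35-Turbo via the Azure OpenAI Service
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Step 1: brainstorm candidate retrieval tasks as a JSON list of strings.
tasks = json.loads(chat(
    "Brainstorm 5 diverse text retrieval tasks. "
    "Return only a JSON list of short task descriptions."
))

# Step 2: for each task, generate one (query, positive, hard negative) triplet.
triplets = []
for task in tasks:
    triplets.append(json.loads(chat(
        f"Task: {task}\n"
        "Return a JSON object with keys 'query', 'positive' and 'hard_negative', "
        "where 'positive' answers the query and 'hard_negative' looks relevant but does not."
    )))

print(triplets[0])
```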

Creating Diverse Synthetic Data

To generate diverse synthetic data, the study developed a taxonomy that categorises embedding tasks into several groups, and then applies a different prompt template to each group.

As illustrated below…

[Figure: embedding task groups — specific tasks and their functions]
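To make the taxonomy-plus-templates idea concrete, here is a minimal sketch of how task groups could be mapped to their own generation prompts; the group names and template wording below are illustrative assumptions, not the paper's exact taxonomy or prompts.

```python
# Illustrative taxonomy of embedding task groups mapped to per-group prompt templates.
# Group names and template wording are assumptions for illustration only.
PROMPT_TEMPLATES = {
    "short-long match": (
        "Brainstorm a retrieval task where a short query must find a long document. "
        "Then output a JSON triplet with 'query', 'positive', 'hard_negative'."
    ),
    "long-short match": (
        "Brainstorm a task where a long passage must be matched to a short label or title. "
        "Output a JSON triplet with 'query', 'positive', 'hard_negative'."
    ),
    "semantic textual similarity": (
        "Generate two sentence pairs in the same language: one highly similar pair "
        "and one superficially similar but semantically different pair."
    ),
    "bitext retrieval": (
        "Generate a sentence and its translation into another language, plus a "
        "distractor translation that is fluent but wrong."
    ),
}

def template_for(task_group: str) -> str:
    """Look up the generation prompt for a given task group."""
    return PROMPT_TEMPLATES[task_group]

print(template_for("bitext retrieval"))
```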

Below are the statistics of the synthetic data: the task type and language distribution of the generated examples.

[Figure: task type and language distribution of the generated synthetic data — Source]

To again emphasise the scale of this exercise: 500,000 examples with 150,000 unique instructions were generated using the Azure OpenAI Service, of which roughly 25% were produced by GPT-35-Turbo and the rest by GPT-4.

The total token consumption was about 180,000,000.
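As a rough back-of-the-envelope illustration of the cost implied by that token count, the snippet below multiplies the reported ~180 million tokens by an assumed blended price per million tokens; the price is a placeholder for illustration, not a figure from the paper or from any official price list.

```python
# Back-of-the-envelope cost estimate for the reported token consumption.
# The blended price below is an ASSUMED placeholder, not an official price.
total_tokens = 180_000_000          # reported total token consumption
assumed_price_per_million = 30.0    # USD per 1M tokens (illustrative only)

estimated_cost = total_tokens / 1_000_000 * assumed_price_per_million
print(f"~${estimated_cost:,.0f} at ${assumed_price_per_million}/1M tokens")
# -> ~$5,400 under this assumption; actual spend depends on the real model mix and pricing.
```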

In terms of data quality, the study found that portions of GPT-35-Turbo outputs did not strictly follow the guidelines specified in the prompt templates.

Nevertheless, the overall quality remained acceptable, and preliminary experiments have demonstrated the benefits of incorporating this data subset.

Conclusion

The paper highlights the significant improvement in text embeddings’ quality by leveraging Large Language Models (LLMs), particularly proprietary ones like GPT-4.

The taxonomy used to generate diverse synthetic data across multiple languages is insightful and can be useful on a smaller scale.

Future directions include enhancing multilingual performance, exploring open-source LLMs for synthetic data generation, and investigating methods to improve inference efficiency and reduce storage costs for LLM-based text embeddings.

Previously published on Medium. 

Author
Cobus Greyling, Chief Evangelist