voyage-3-lite Embedding Model
By Voyage AI Innovations Inc
Text embedding model optimized for retrieval quality, latency, and cost. 32K context length.
Text embedding models are neural networks that transform texts into numerical vectors. They are a crucial building block for semantic search/retrieval systems and retrieval-augmented generation (RAG), and they largely determine retrieval quality. voyage-3-lite is a lightweight general-purpose embedding model optimized for latency and cost, which:

1. outperforms OpenAI v3 large and small by 3.82% and 7.58% on average across evaluated domains, respectively;
2. has a 6-8x smaller embedding dimension (512) than OpenAI v3 large (3072) and E5-Mistral (4096), resulting in 6-8x lower vector database costs; and
3. supports a 32K-token context length, compared to OpenAI (8K) and Cohere (512).

Learn more about voyage-3-lite here.
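As a minimal sketch of how such a model is typically used for retrieval (assuming the voyageai Python client is installed and a VOYAGE_API_KEY environment variable is set; the document texts and query below are illustrative only), the snippet embeds a few documents and a query with voyage-3-lite and ranks the documents by cosine similarity of their 512-dimensional vectors.

# Minimal sketch: embed documents and a query with voyage-3-lite,
# then rank documents by cosine similarity.
# Assumes the `voyageai` client is installed and VOYAGE_API_KEY is set.
import numpy as np
import voyageai

vo = voyageai.Client()  # reads VOYAGE_API_KEY from the environment

documents = [
    "Text embedding models map texts to numerical vectors.",
    "Retrieval-augmented generation grounds LLM answers in retrieved documents.",
    "voyage-3-lite produces 512-dimensional embeddings with a 32K-token context.",
]

# input_type tells the model whether it is embedding documents or a query.
doc_emb = np.array(
    vo.embed(documents, model="voyage-3-lite", input_type="document").embeddings
)
query_emb = np.array(
    vo.embed(["How large are voyage-3-lite embeddings?"],
             model="voyage-3-lite", input_type="query").embeddings[0]
)

# Cosine similarity: L2-normalize, then score with a dot product.
doc_emb /= np.linalg.norm(doc_emb, axis=1, keepdims=True)
query_emb /= np.linalg.norm(query_emb)
scores = doc_emb @ query_emb

for idx in np.argsort(-scores):
    print(f"{scores[idx]:.3f}  {documents[idx]}")

In a production retrieval or RAG system, the document embeddings would be stored in a vector database and only the query would be embedded at request time; the 512-dimensional vectors are what keep that storage cost low relative to higher-dimensional models.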