voyage-code-3
Voyage AI introduces voyage-code-3, a next-generation embedding model optimized for code retrieval. It outperforms OpenAI-v3-large and CodeSage-large by averages of 13.80% and 16.81%, respectively, across a suite of 32 code retrieval datasets. It supports embeddings of 2048, 1024, 512, and 256 dimensions and offers multiple quantization options: float (32-bit), int8 (8-bit signed integer), uint8 (8-bit unsigned integer), binary (bit-packed int8), and ubinary (bit-packed uint8). Its 32K-token context length surpasses OpenAI's 8K and CodeSage-large's 1K. voyage-code-3 employs Matryoshka learning to nest a family of shorter embeddings within a single vector, so users can vectorize documents into a 2048-dimensional vector and later use truncated versions (e.g., 256, 512, or 1024 dimensions) without re-invoking the embedding model.
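To make the Matryoshka property concrete, here is a minimal NumPy sketch of truncating an embedding to shorter prefixes and re-normalizing before comparing vectors; the vectors here are random stand-ins for real voyage-code-3 embeddings, which in practice would come from the Voyage API:

    import numpy as np

    rng = np.random.default_rng(0)

    # Random vectors stand in for two 2048-dim voyage-code-3 embeddings;
    # b is a noisy copy of a so the pair is "related".
    a = rng.standard_normal(2048).astype(np.float32)
    b = a + 0.5 * rng.standard_normal(2048).astype(np.float32)

    def truncate(v, dim):
        # Keep the first `dim` coordinates of a Matryoshka embedding, then
        # re-normalize so cosine similarity stays well scaled.
        v = v[:dim]
        return v / np.linalg.norm(v)

    for dim in (2048, 1024, 512, 256):
        sim = float(truncate(a, dim) @ truncate(b, dim))
        print(f"{dim:>4} dims: cosine similarity = {sim:.4f}")

With real Matryoshka embeddings, similarities computed on the truncated prefixes track the full-dimension ones closely, which is what makes this kind of post-hoc shortening safe.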
voyage-3-large
Voyage AI has unveiled voyage-3-large, a cutting-edge general-purpose and multilingual embedding model that leads across eight evaluated domains, including law, finance, and code, outperforming OpenAI-v3-large and Cohere-v3-English by averages of 9.74% and 20.71%, respectively, on evaluations spanning 100 datasets. Enabled by Matryoshka learning and quantization-aware training, it supports embeddings of 2048, 1024, 512, and 256 dimensions along with multiple quantization options (32-bit floating point, signed and unsigned 8-bit integer, and binary precision), significantly reducing vector database costs with minimal impact on retrieval quality. Notably, voyage-3-large offers a 32K-token context length, surpassing OpenAI's 8K and Cohere's 512 tokens.
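To illustrate why the lower-precision options cut vector database costs so sharply, the sketch below (NumPy only; the per-vector scaling is a simplifying assumption, not Voyage's actual quantization scheme) converts a stand-in 1024-dimensional float32 vector to int8 and to bit-packed binary:

    import numpy as np

    rng = np.random.default_rng(0)
    emb = rng.standard_normal(1024).astype(np.float32)  # stand-in embedding
    emb /= np.linalg.norm(emb)

    # int8: map each coordinate into [-127, 127] with a per-vector scale.
    scale = 127.0 / np.abs(emb).max()
    emb_i8 = np.round(emb * scale).astype(np.int8)

    # binary: keep only the sign of each coordinate, packed 8 dims per byte.
    emb_bin = np.packbits(emb > 0)

    print(emb.nbytes, emb_i8.nbytes, emb_bin.nbytes)  # 4096, 1024, 128 bytes

Per vector, storage drops from 4096 bytes to 1024 (4x) and to 128 (32x), which is where the database savings come from.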
LexVec
LexVec is a word embedding model that achieves state-of-the-art results on multiple natural language processing tasks by factorizing the Positive Pointwise Mutual Information (PPMI) matrix with stochastic gradient descent. The factorization penalizes errors on frequent co-occurrences more heavily while also accounting for negative co-occurrences. Pre-trained vectors are available, including a Common Crawl model (58 billion tokens, 2 million words, 300 dimensions) and an English Wikipedia 2015 + NewsCrawl model (7 billion tokens, 368,999 words, 300 dimensions). Evaluations show that LexVec matches or outperforms models such as word2vec on word similarity and analogy tasks. The implementation is open source under the MIT License and is available on GitHub.
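The core idea is compact enough to sketch. The toy example below builds a symmetric-window co-occurrence matrix, computes PPMI, and factorizes it with SGD; unlike real LexVec, it samples (word, context) pairs uniformly rather than weighting frequent co-occurrences more heavily, so it illustrates the objective rather than the actual training procedure:

    import numpy as np

    # Toy corpus; real LexVec is trained on billions of tokens.
    corpus = "the cat sat on the mat the dog sat on the rug".split()
    vocab = sorted(set(corpus))
    idx = {w: i for i, w in enumerate(vocab)}
    V, window = len(vocab), 2

    # Symmetric-window co-occurrence counts.
    counts = np.zeros((V, V))
    for i, w in enumerate(corpus):
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if j != i:
                counts[idx[w], idx[corpus[j]]] += 1

    # PPMI(w, c) = max(0, log(p(w, c) / (p(w) * p(c)))).
    total = counts.sum()
    p_w = counts.sum(axis=1) / total
    joint = np.maximum(counts / total, 1e-12)
    ppmi = np.maximum(np.log(joint / np.outer(p_w, p_w)), 0.0)

    # SGD factorization: learn W, C such that W[w] @ C[c] ~= PPMI[w, c].
    dim, lr = 10, 0.05
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(V, dim))
    C = rng.normal(scale=0.1, size=(V, dim))
    for _ in range(20000):
        w, c = rng.integers(V), rng.integers(V)
        err = W[w] @ C[c] - ppmi[w, c]
        gW = err * C[c]            # save before C[c] is updated
        C[c] -= lr * err * W[w]
        W[w] -= lr * gW

    print("reconstruction MSE:", float(((W @ C.T - ppmi) ** 2).mean()))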
EmbeddingGemma
EmbeddingGemma is a lightweight yet powerful 308-million-parameter multilingual text embedding model, optimized to run entirely on everyday devices such as phones, laptops, and tablets, enabling fast, offline embedding generation that protects user privacy. Built on the Gemma 3 architecture, it supports over 100 languages, processes up to 2,000 input tokens, and leverages Matryoshka Representation Learning (MRL) to offer flexible embedding dimensions (768, 512, 256, or 128) that trade off speed, storage, and precision. Its GPU- and EdgeTPU-accelerated inference delivers embeddings in milliseconds (under 15 ms for 256 tokens on EdgeTPU), while quantization-aware training keeps memory usage under 200 MB without compromising quality. This makes it well suited to real-time, on-device tasks such as semantic search, retrieval-augmented generation (RAG), classification, clustering, and similarity detection, whether for personal file search, mobile chatbots, or custom domain use.
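As a hedged sketch of how such a model is typically used with MRL truncation, here is an example via the Sentence Transformers library, which supports Matryoshka slicing through its truncate_dim argument; the model identifier google/embeddinggemma-300m is an assumption and should be verified against the official model card, along with any recommended task prompts:

    # pip install sentence-transformers
    from sentence_transformers import SentenceTransformer

    # Model id and the 256-dim truncation are assumptions for illustration.
    model = SentenceTransformer("google/embeddinggemma-300m", truncate_dim=256)

    docs = ["How do I reset my password?",
            "Steps to change your account password"]
    emb = model.encode(docs, normalize_embeddings=True)
    print(emb.shape)  # (2, 256): a 256-dim MRL slice of the full 768-dim vector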