BAAI/bge-m3

BGE-M3 is a versatile text embedding model: it supports multi-functionality, multi-linguality, and multi-granularity, performing dense, sparse, and multi-vector retrieval across more than 100 languages with inputs of up to 8192 tokens. It can be used in a retrieval pipeline that combines hybrid retrieval with re-ranking to achieve higher accuracy and stronger generalization. BGE-M3 has shown state-of-the-art performance on several benchmarks, including MKQA, MLDR, and NarrativeQA, and can serve as a drop-in replacement for other embedding models such as DPR and BGE-v1.5.
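A common way to use dense and sparse retrieval together in such a pipeline is to combine their relevance scores before re-ranking. The sketch below is illustrative only: the weights, document names, and scores are made up, not values produced or prescribed by BGE-M3.

```python
def hybrid_score(dense_score, sparse_score, w_dense=0.7, w_sparse=0.3):
    """Weighted combination of dense and sparse retrieval scores.

    The weights are illustrative defaults, not values from BGE-M3.
    """
    return w_dense * dense_score + w_sparse * sparse_score

# Toy (dense, sparse) scores for three hypothetical candidate documents;
# re-rank the candidates by their combined hybrid score.
candidates = {
    "doc_a": (0.82, 0.10),
    "doc_b": (0.40, 0.95),
    "doc_c": (0.75, 0.50),
}
ranked = sorted(candidates, key=lambda d: hybrid_score(*candidates[d]), reverse=True)
```

In a full pipeline, the top candidates from this hybrid stage would then be passed to a re-ranker for the final ordering.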
Settings
- Service tier: the tier used for processing the request. When set to 'priority', the request is processed with higher priority.
- Normalize: whether to normalize the computed embeddings.
- Dimensions: the number of dimensions in the embedding. If not provided, the model's default is used; if a value larger than the model's default is provided, the embedding is padded with zeros. (Default: empty, 32 ≤ dimensions ≤ 8192)
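The normalization and zero-padding behaviors described above can be sketched in plain Python. This is an illustration of the documented semantics, not the service's actual implementation.

```python
import math

def l2_normalize(vec):
    # Scale the vector to unit length (no-op for the zero vector).
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm else list(vec)

def pad_dimensions(vec, dimensions):
    # If the requested size exceeds the model's native size,
    # extend the embedding with trailing zeros.
    if dimensions <= len(vec):
        return list(vec)
    return list(vec) + [0.0] * (dimensions - len(vec))

# e.g. pad_dimensions([0, 0.5, 1], 5) yields [0, 0.5, 1, 0.0, 0.0]
```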
Example output (two 3-dimensional embedding vectors):
[
  [0, 0.5, 1],
  [1, 0.5, 0]
]
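Embeddings like the two example vectors above are typically compared with cosine similarity. A minimal, self-contained computation:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# The two example embedding vectors from the output above.
v1, v2 = [0, 0.5, 1], [1, 0.5, 0]
sim = cosine_similarity(v1, v2)  # 0.25 / 1.25 ≈ 0.2
```

Note that if embeddings are returned normalized (unit length), the denominator is 1 and a plain dot product suffices.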
© 2025 Deep Infra. All rights reserved.