
intfloat/multilingual-e5-large

The Multilingual-E5-large model is a 24-layer text embedding model with an embedding size of 1024, trained on a mixture of multilingual datasets and supporting 100 languages. The model achieves state-of-the-art results on the Mr. TyDi benchmark, outperforming other models such as BM25 and mDPR. The model is intended for use in text retrieval and semantic similarity tasks, and should be used with the "query: " and "passage: " prefixes for input texts to achieve optimal performance.

Multilingual-E5-large

Multilingual E5 Text Embeddings: A Technical Report. Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang, Rangan Majumder, Furu Wei, arXiv 2024

This model has 24 layers and the embedding size is 1024.

Supported Languages

This model is initialized from xlm-roberta-large and continually trained on a mixture of multilingual datasets. It supports 100 languages from xlm-roberta, but low-resource languages may see performance degradation.
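A minimal usage sketch with the Hugging Face transformers API is below. The masked average pooling followed by L2 normalization is the pooling recipe commonly used with E5-style models; the `encode` helper name is my own, and inputs must already carry the "query: " / "passage: " prefixes described in this card:

```python
import torch
import torch.nn.functional as F

def average_pool(last_hidden_states: torch.Tensor,
                 attention_mask: torch.Tensor) -> torch.Tensor:
    """Masked mean over the sequence dimension (padding tokens excluded)."""
    masked = last_hidden_states.masked_fill(
        ~attention_mask.bool().unsqueeze(-1), 0.0)
    return masked.sum(dim=1) / attention_mask.sum(dim=1, keepdim=True)

def encode(texts):
    """Embed already-prefixed texts; downloads the model on first call."""
    from transformers import AutoTokenizer, AutoModel
    tokenizer = AutoTokenizer.from_pretrained("intfloat/multilingual-e5-large")
    model = AutoModel.from_pretrained("intfloat/multilingual-e5-large")
    batch = tokenizer(texts, max_length=512, padding=True,
                      truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    emb = average_pool(out.last_hidden_state, batch["attention_mask"])
    return F.normalize(emb, p=2, dim=1)  # unit-length embeddings

# e.g. encode(["query: how tall is Mount Everest",
#              "passage: Mount Everest is 8,849 m tall."])
```

Because the returned embeddings are L2-normalized, a matrix product between query and passage embeddings directly yields cosine similarity scores.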

Benchmark Results on Mr. TyDi

| Model | Avg MRR@10 | ar | bn | en | fi | id | ja | ko | ru | sw | te | th |
|-------|------------|----|----|----|----|----|----|----|----|----|----|----|
| BM25 | 33.3 | 36.7 | 41.3 | 15.1 | 28.8 | 38.2 | 21.7 | 28.1 | 32.9 | 39.6 | 42.4 | 41.7 |
| mDPR | 16.7 | 26.0 | 25.8 | 16.2 | 11.3 | 14.6 | 18.1 | 21.9 | 18.5 | 7.3 | 10.6 | 13.5 |
| BM25 + mDPR | 41.7 | 49.1 | 53.5 | 28.4 | 36.5 | 45.5 | 35.5 | 36.2 | 42.7 | 40.5 | 42.0 | 49.2 |
| multilingual-e5-small | 64.4 | 71.5 | 66.3 | 54.5 | 57.7 | 63.2 | 55.4 | 54.3 | 60.8 | 65.4 | 89.1 | 70.1 |
| multilingual-e5-base | 65.9 | 72.3 | 65.0 | 58.5 | 60.8 | 64.9 | 56.6 | 55.8 | 62.7 | 69.0 | 86.6 | 72.7 |
| multilingual-e5-large | 70.5 | 77.5 | 73.2 | 60.8 | 66.8 | 68.5 | 62.5 | 61.6 | 65.8 | 72.7 | 90.2 | 76.2 |
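For reference, the MRR@10 metric used in the table can be sketched as follows; `ranked_rels` is a hypothetical input of per-query binary relevance labels in ranked order, not data from the benchmark itself:

```python
def mrr_at_10(ranked_rels):
    """Mean Reciprocal Rank, cut off at rank 10.

    ranked_rels: list of per-query lists of 0/1 relevance labels,
    ordered by the system's ranking (best-scored passage first).
    """
    total = 0.0
    for rels in ranked_rels:
        for rank, rel in enumerate(rels[:10], start=1):
            if rel:
                total += 1.0 / rank  # reciprocal rank of first hit
                break  # only the first relevant result counts
    return total / len(ranked_rels)
```

For example, a query whose first relevant passage appears at rank 2 contributes 0.5; a query with no relevant passage in the top 10 contributes 0.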

MTEB Benchmark Evaluation

Check out unilm/e5 to reproduce evaluation results on the BEIR and MTEB benchmarks.

FAQ

1. Do I need to add the prefix "query: " and "passage: " to input texts?

Yes, this is how the model was trained; otherwise you will see performance degradation.

Here are some rules of thumb:

  • Use "query: " and "passage: " respectively for asymmetric tasks such as passage retrieval in open QA and ad-hoc information retrieval.

  • Use the "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, and paraphrase retrieval.

  • Use the "query: " prefix if you want to use embeddings as features, e.g. for linear probing classification or clustering.
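The prefix rules above can be sketched as a tiny helper (the `add_prefix` name is hypothetical, not part of any library):

```python
def add_prefix(texts, kind="query"):
    """Prepend the E5 'query: ' or 'passage: ' prefix to each input text."""
    if kind not in ("query", "passage"):
        raise ValueError("kind must be 'query' or 'passage'")
    return [f"{kind}: {text}" for text in texts]

queries = add_prefix(["how tall is Mount Everest"])           # asymmetric: query side
passages = add_prefix(["Everest is 8,849 m tall."], "passage")  # asymmetric: passage side
# For symmetric tasks (similarity, bitext mining), prefix both sides with "query: ".
```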

2. Why are my reproduced results slightly different from those reported in the model card?

Different versions of transformers and pytorch can cause negligible but non-zero performance differences.

3. Why are the cosine similarity scores distributed around 0.7 to 1.0?

This is known and expected behavior: we use a low temperature of 0.01 for the InfoNCE contrastive loss.

For text embedding tasks such as text retrieval or semantic similarity, what matters is the relative order of the scores rather than their absolute values, so this should not be an issue.
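Since ranking is what matters, a small NumPy sketch (with illustrative vectors, not real model embeddings) makes the point concrete:

```python
import numpy as np

def cosine_scores(query_vec: np.ndarray, passage_vecs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query and a matrix of passage vectors."""
    q = query_vec / np.linalg.norm(query_vec)
    p = passage_vecs / np.linalg.norm(passage_vecs, axis=1, keepdims=True)
    return p @ q  # dot product of unit vectors = cosine similarity

# Illustrative vectors: both scores land in a narrow high band (> 0.9),
# yet the ranking between the two passages is unambiguous.
scores = cosine_scores(np.array([1.0, 0.1]),
                       np.array([[1.0, 0.2], [1.0, 0.4]]))
ranking = np.argsort(-scores)  # index of the best-matching passage first
```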

Citation

If you find our paper or models helpful, please consider citing them as follows:

@article{wang2024multilingual,
  title={Multilingual E5 Text Embeddings: A Technical Report},
  author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
  journal={arXiv preprint arXiv:2402.05672},
  year={2024}
}

Limitations

Long texts will be truncated to at most 512 tokens.