thenlper/
The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and currently come in three sizes: GTE-large, GTE-base, and GTE-small. The models are trained on a large-scale corpus of relevance text pairs covering a wide range of domains and scenarios, which makes them applicable to various downstream text embedding tasks, including information retrieval, semantic textual similarity, and text reranking.
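For local experimentation, the models can be loaded directly from the Hugging Face Hub. Below is a minimal sketch; the thenlper/gte-base checkpoint and mean pooling over the last hidden states are assumptions based on common usage of these models, not details stated on this page.

```python
# Minimal local-inference sketch (checkpoint and pooling are assumptions; see above).
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

MODEL_ID = "thenlper/gte-base"  # assumed; gte-large and gte-small work the same way

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

texts = ["what is the capital of China?", "Beijing is the capital of China."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    last_hidden = model(**batch).last_hidden_state  # [batch, seq, hidden]

# Mean-pool token embeddings, masking out padding positions.
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (last_hidden * mask).sum(dim=1) / mask.sum(dim=1)

# L2-normalize so cosine similarity reduces to a dot product.
embeddings = F.normalize(embeddings, p=2, dim=1)
print(f"similarity: {(embeddings[0] @ embeddings[1]).item():.4f}")
```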
The service tier used for processing the request. When set to 'priority', the request is processed with higher priority.
Whether to normalize the computed embeddings.
The number of dimensions in the embedding. If not provided, the model's default is used. If the provided value is larger than the model's default, the embedding is padded with zeros. (Default: empty, 32 ≤ dimensions ≤ 8192)
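As a rough sketch of how these parameters might be sent, assuming an OpenAI-style embeddings endpoint: the base URL, the Authorization header, and the field names model, input, service_tier, normalize, and dimensions are all assumptions for illustration, not taken from this page.

```python
# Hypothetical request sketch; endpoint URL, auth scheme, and field names
# are assumptions, not documented on this page.
import os
import requests

resp = requests.post(
    "https://api.example.com/v1/embeddings",  # placeholder URL
    headers={"Authorization": f"Bearer {os.environ['API_KEY']}"},
    json={
        "model": "thenlper/gte-base",   # assumed model identifier
        "input": ["first sentence", "second sentence"],
        "service_tier": "priority",     # process with higher priority
        "normalize": True,              # return unit-length vectors
        "dimensions": 512,              # 32-8192; zero-padded beyond the model default
    },
    timeout=30,
)
resp.raise_for_status()
embeddings = resp.json()  # assumed envelope: a plain list of vectors, as sampled below
```

A sample response, with one embedding vector per input: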
[
[
0,
0.5,
1
],
[
1,
0.5,
0
]
]
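Downstream tasks such as semantic textual similarity typically compare these vectors with cosine similarity. Here is a self-contained sketch using the two sample vectors above; note that when normalization is enabled, both norms are 1 and this reduces to a plain dot product.

```python
# Cosine similarity between the two sample embedding vectors above.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine([0, 0.5, 1], [1, 0.5, 0]))  # 0.2: the sample vectors are mostly dissimilar
```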