sentence-transformers/
This model is a multilingual version of the OpenAI CLIP-ViT-B32 model, which maps text and images to a common dense vector space. It combines a text embedding model that supports 50+ languages with the image encoder from CLIP. The model was trained using Multilingual Knowledge Distillation: a multilingual DistilBERT model was trained as a student to align with the vector space of the original CLIP image encoder across many languages.
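For illustration, a minimal usage sketch with the sentence-transformers Python library, following the standard pattern for this model family. The checkpoint names assume this page describes the clip-ViT-B-32-multilingual-v1 model (consistent with the description above), and the image path is a placeholder:

from sentence_transformers import SentenceTransformer, util
from PIL import Image

# Multilingual text encoder (DistilBERT student aligned to CLIP's vector space)
text_model = SentenceTransformer("sentence-transformers/clip-ViT-B-32-multilingual-v1")
# Original CLIP ViT-B/32 image encoder (the teacher's vector space)
image_model = SentenceTransformer("clip-ViT-B-32")

image_embedding = image_model.encode(Image.open("two_dogs.jpg"))  # placeholder image file
text_embeddings = text_model.encode(["Two dogs playing", "Zwei Hunde spielen"])

# Captions in any supported language score against the same image embedding
print(util.cos_sim(image_embedding, text_embeddings))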
Settings
Service tier: The service tier used for processing the request. When set to 'priority', the request is processed with higher priority.
Normalize: Whether to normalize the computed embeddings.
Dimensions: The number of dimensions in the embedding. If not provided, the model's default is used. If the provided value is larger than the model's default, the embedding is padded with zeros. (Default: empty, 32 ≤ dimensions ≤ 8192)
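As a sketch of how these settings might be passed in a request body, assuming a JSON embeddings endpoint; the URL, token, and exact parameter names are assumptions based on the descriptions above, not a documented schema:

import requests

API_URL = "https://example.com/v1/embeddings"  # placeholder endpoint
payload = {
    "model": "sentence-transformers/clip-ViT-B-32-multilingual-v1",  # assumed checkpoint
    "input": ["Two dogs playing in the snow"],
    "service_tier": "priority",  # assumed name: request higher-priority processing
    "normalize": True,           # assumed name: normalize the computed embeddings
    "dimensions": 512,           # assumed name: must satisfy 32 <= dimensions <= 8192
}
response = requests.post(API_URL, json=payload, headers={"Authorization": "Bearer <token>"})
print(response.json())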
[
[
0,
0.5,
1
],
[
1,
0.5,
0
]
]
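For illustration only, a short sketch of what normalizing embeddings does to small vectors like the pair above (assuming plain L2 normalization; the values are the placeholder vectors shown, not real model output):

import numpy as np

vectors = np.array([[0.0, 0.5, 1.0], [1.0, 0.5, 0.0]])
# L2-normalize each row so cosine similarity reduces to a dot product
normalized = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
print(normalized @ normalized.T)  # pairwise cosine similarities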