
sentence-transformers/clip-ViT-B-32

The CLIP model maps text and images into a shared vector space, enabling applications such as image search, zero-shot image classification, and image clustering. After installing the sentence-transformers library, the model can be used with a few lines of code (see the sketch below), and its quality is typically reported as zero-shot accuracy on the ImageNet validation set. Multilingual versions of the model are also available, covering 50+ languages.
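A minimal sketch of zero-shot image classification with the sentence-transformers library: the image and candidate captions are encoded into the shared space and ranked by cosine similarity. The image filename here is a placeholder.

```python
from sentence_transformers import SentenceTransformer, util
from PIL import Image

# Load the CLIP model (weights are downloaded on first use)
model = SentenceTransformer("sentence-transformers/clip-ViT-B-32")

# Encode an image and a set of candidate captions into the shared space
img_emb = model.encode(Image.open("two_dogs.jpg"))  # placeholder local file
text_emb = model.encode([
    "Two dogs in the snow",
    "A cat on a table",
    "A city at night",
])

# Cosine similarity ranks the captions; the highest score is the
# zero-shot prediction for the image
cos_scores = util.cos_sim(img_emb, text_emb)
print(cos_scores)
```

The same encode-then-compare pattern supports image search (encode a text query, rank a corpus of image embeddings) and clustering (cluster the image embeddings directly).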

Public
$0.0005/sec
Web inference is not supported yet; please check the API tab.

 


