
openai/clip-vit-base-patch32

The CLIP model was developed by OpenAI to investigate the robustness of computer vision models. It uses a Vision Transformer architecture and was trained on a large dataset of image-caption pairs. The model shows promise in various computer vision tasks but also has limitations, including difficulties with fine-grained classification and potential biases in certain applications.

Public
$0.0005 / sec

Input

- image: the image file to classify
- candidate_labels: one or more text labels to score against the image

Output

dog (0.90)

cat (0.10)
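The input/output pattern above can be reproduced locally with the Hugging Face `transformers` library, which hosts this exact checkpoint. This is a minimal sketch: the placeholder image is generated in code (swap in your own file), model weights download on first use, and the printed probabilities will of course depend on the image supplied.

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Load the checkpoint and its matching preprocessor
# (weights are downloaded from the Hugging Face Hub on first use).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder image so the snippet is self-contained;
# replace with Image.open("your_photo.jpg") for real inputs.
image = Image.new("RGB", (224, 224), color="white")
candidate_labels = ["dog", "cat"]

# Encode the image and every candidate label in one batch.
inputs = processor(
    text=candidate_labels, images=image, return_tensors="pt", padding=True
)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds one similarity score per label;
# softmax turns them into the probabilities shown above.
probs = outputs.logits_per_image.softmax(dim=-1)[0]
for label, p in zip(candidate_labels, probs):
    print(f"{label} ({p:.2f})")
```

Because the scores are softmaxed over the candidate labels, the probabilities always sum to 1: CLIP ranks the labels you supply rather than choosing from a fixed class vocabulary.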

Model Information
