
sentence-transformers/multi-qa-mpnet-base-dot-v1

This sentence-transformers model maps sentences and paragraphs to a 768-dimensional dense vector space and is designed for semantic search. It was trained with a contrastive learning objective on 215 million question-answer pairs from diverse sources, including WikiAnswers, PAQ, Stack Exchange, MS MARCO, GOOAQ, Amazon QA, Yahoo Answers, SearchQA, ELI5, and Natural Questions.


Visibility: Public
Price: $0.005 / Mtoken
Maximum input tokens: 512
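
The model is designed for dot-product similarity (the "dot" in its name): queries and passages are embedded into the same space, and candidates are ranked by their dot product with the query. A minimal sketch of that workflow using the sentence-transformers library locally (the example sentences are illustrative):

from sentence_transformers import SentenceTransformer, util

# Load the model locally via the sentence-transformers library.
model = SentenceTransformer("sentence-transformers/multi-qa-mpnet-base-dot-v1")

query = "How many people live in London?"
docs = [
    "Around 9 million people live in London.",
    "London is known for its financial district.",
]

# Encode query and documents into 768-dimensional vectors.
query_emb = model.encode(query)
doc_emb = model.encode(docs)

# The model was trained for dot-product scoring: higher = more relevant.
scores = util.dot_score(query_emb, doc_emb)[0].tolist()
for doc, score in sorted(zip(docs, scores), key=lambda x: -x[1]):
    print(f"{score:.2f}  {doc}")

The embeddings returned by the hosted API below can be scored the same way.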

HTTP/cURL API

You can use cURL or any other HTTP client to run inferences:

curl -X POST \
    -H "Authorization: bearer $DEEPINFRA_TOKEN" \
    -F 'inputs=["I like chocolate"]' \
    'https://api.deepinfra.com/v1/inference/sentence-transformers/multi-qa-mpnet-base-dot-v1'

which will return a response similar to the following (the embedding values are placeholders, truncated to three dimensions; real vectors have 768, one per input):

{
  "embeddings": [
    [
      0.0,
      0.5,
      1.0
    ]
  ],
  "input_tokens": 42,
  "request_id": null,
  "inference_status": {
    "status": "unknown",
    "runtime_ms": 0,
    "cost": 0.0,
    "tokens_generated": 0,
    "tokens_input": 0
  }
}
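
The same request can be issued from Python. The sketch below mirrors the multipart form call above using the requests library (the second example sentence is illustrative):

import json
import os

import requests

API_URL = ("https://api.deepinfra.com/v1/inference/"
           "sentence-transformers/multi-qa-mpnet-base-dot-v1")

resp = requests.post(
    API_URL,
    headers={"Authorization": f"bearer {os.environ['DEEPINFRA_TOKEN']}"},
    # Multipart form field, mirroring the cURL -F flag above:
    # the inputs field carries a JSON-encoded array of strings.
    files={"inputs": (None, json.dumps(["I like chocolate", "I enjoy cocoa"]))},
)
resp.raise_for_status()
embeddings = resp.json()["embeddings"]  # one 768-dimensional vector per input
print(len(embeddings), len(embeddings[0]))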

Input fields

inputs (array)

Sequences to embed.

Default value: []


normalize (boolean)

Whether to normalize the computed embeddings.

Default value: false


image (string)

Image to embed.


webhook (file)

The webhook to call when inference is done. By default, the output is returned in the response of your inference request.
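
As an example of the normalize field: with normalization enabled, the returned vectors are unit-length, so the dot product of two embeddings equals their cosine similarity. A sketch building on the request above (sending the boolean as a form-field string is an assumption; note that this model's relevance scores were trained for the raw, unnormalized dot product):

import json
import os

import requests

API_URL = ("https://api.deepinfra.com/v1/inference/"
           "sentence-transformers/multi-qa-mpnet-base-dot-v1")

resp = requests.post(
    API_URL,
    headers={"Authorization": f"bearer {os.environ['DEEPINFRA_TOKEN']}"},
    files={
        "inputs": (None, json.dumps(["How big is London?",
                                     "London has about 9 million inhabitants."])),
        # Request unit-length vectors; boolean sent as a form-field string here.
        "normalize": (None, "true"),
    },
)
resp.raise_for_status()
query_emb, doc_emb = resp.json()["embeddings"]

# With normalized vectors, the dot product equals cosine similarity.
cosine = sum(q * d for q, d in zip(query_emb, doc_emb))
print(f"cosine similarity: {cosine:.3f}")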
