nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large

A multilingual MiniLMv2 model trained on 16 languages, using a shared vocabulary and language-specific embeddings. The model is based on the transformer architecture and was developed by Microsoft Research. It supports various natural language processing tasks such as language translation, question answering, and text classification.

Public
$0.0005 / sec

HTTP/cURL API

You can use cURL or any other HTTP client to run inferences:

curl -X POST \
    -d '{"input": "Where is my <mask>?"}'  \
    -H "Authorization: bearer $DEEPINFRA_TOKEN"  \
    -H 'Content-Type: application/json'  \
    'https://api.deepinfra.com/v1/inference/nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large'
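The same request can be built with Python's standard library alone. This is a minimal sketch: `build_request` is a helper name of our own, not part of any DeepInfra SDK, and it simply reproduces the cURL call above.

```python
import json
import urllib.request

# The DeepInfra endpoint from the cURL example above.
API_URL = ("https://api.deepinfra.com/v1/inference/"
           "nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large")

def build_request(prompt: str, token: str) -> urllib.request.Request:
    """Build the same POST request the cURL example sends."""
    body = json.dumps({"input": prompt}).encode("utf-8")
    headers = {
        "Authorization": f"bearer {token}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(API_URL, data=body, headers=headers,
                                  method="POST")

# To actually run the inference (requires a valid token and network access):
# with urllib.request.urlopen(build_request("Where is my <mask>?", token)) as resp:
#     result = json.load(resp)
```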

which will give you back something similar to:

{
  "results": [
    {
      "sequence": "where is my father?",
      "score": 0.08898820728063583,
      "token": 2269,
      "token_str": "father"
    },
    {
      "sequence": "where is my mother?",
      "score": 0.07864926755428314,
      "token": 2388,
      "token_str": "mother"
    }
  ],
  "request_id": null,
  "inference_status": {
    "status": "unknown",
    "runtime_ms": 0,
    "cost": 0.0,
    "tokens_generated": 0,
    "tokens_input": 0
  }
}
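Each entry in `results` is one candidate fill for the `<mask>` token, with its probability-like `score`. A minimal sketch of picking the best candidate from a parsed response (using the sample values shown above; sorting by score avoids assuming the list is pre-sorted):

```python
# Sample parsed response, matching the JSON shown above.
response = {
    "results": [
        {"sequence": "where is my father?", "score": 0.08898820728063583,
         "token": 2269, "token_str": "father"},
        {"sequence": "where is my mother?", "score": 0.07864926755428314,
         "token": 2388, "token_str": "mother"},
    ],
}

# Pick the highest-scoring fill for the <mask> token.
best = max(response["results"], key=lambda r: r["score"])
print(best["token_str"])  # -> father
print(best["sequence"])   # -> where is my father?
```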

Input fields

input (string)

Text prompt; should include exactly one <mask> token.


webhook (file)

The webhook to call when inference is done; by default you will get the output in the response of your inference request.
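As a sketch, a webhook could be supplied alongside the input in the request body. This assumes the field is sent as a callback URL string; check the Input Schema below for the exact field type, and note the URL here is a placeholder.

```python
import json

# Hypothetical payload shape: "webhook" is assumed to be a callback URL
# passed alongside "input". The URL below is a placeholder, not a real
# endpoint; the result would be POSTed there instead of returned inline.
payload = {
    "input": "Where is my <mask>?",
    "webhook": "https://example.com/deepinfra-callback",
}
print(json.dumps(payload))
```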

Input Schema

Output Schema