
sberbank-ai/ruRoberta-large

The ruRoberta-large model was trained by the SberDevices team for mask-filling tasks. It is an encoder model with a BBPE tokenizer, has 355 million parameters, and was trained on 250 GB of data. The NLP Core Team R&D, including Dmitry Zmitrovich, contributed to its development.

Public
$0.0005 / sec

HTTP/cURL API

You can use cURL or any other HTTP client to run inferences:

curl -X POST \
    -d '{"input": "Where is my <mask>?"}'  \
    -H "Authorization: bearer $DEEPINFRA_TOKEN"  \
    -H 'Content-Type: application/json'  \
    'https://api.deepinfra.com/v1/inference/sberbank-ai/ruRoberta-large'

which will return something similar to:

{
  "results": [
    {
      "sequence": "where is my father?",
      "score": 0.08898820728063583,
      "token": 2269,
      "token_str": "father"
    },
    {
      "sequence": "where is my mother?",
      "score": 0.07864926755428314,
      "token": 2388,
      "token_str": "mother"
    }
  ],
  "request_id": null,
  "inference_status": {
    "status": "unknown",
    "runtime_ms": 0,
    "cost": 0.0,
    "tokens_generated": 0,
    "tokens_input": 0
  }
}
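The same call can be made from Python. The following sketch uses only the standard library and assumes your API key is in a `DEEPINFRA_TOKEN` environment variable (the endpoint URL and payload shape are taken from the cURL example above; the helper names are illustrative, not part of any SDK):

```python
import json
import os
import urllib.request

API_URL = "https://api.deepinfra.com/v1/inference/sberbank-ai/ruRoberta-large"

def build_request(prompt: str, token: str) -> urllib.request.Request:
    """Assemble the POST request for the mask-filling endpoint."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps({"input": prompt}).encode(),
        headers={
            "Authorization": f"bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def fill_mask(prompt: str) -> list:
    """Send the prompt and return the list of candidate completions."""
    req = build_request(prompt, os.environ["DEEPINFRA_TOKEN"])
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]

if __name__ == "__main__":
    # Each result carries the filled-in sequence, the token, and its score.
    for r in fill_mask("Where is my <mask>?"):
        print(r["token_str"], r["score"])
```

Results arrive sorted by score, so the first entry is the model's most likely completion.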

Input fields

input (string)

Text prompt; should include exactly one <mask> token
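Because the input must contain exactly one <mask> token, a small client-side check can catch malformed prompts before spending an inference call. This is a hypothetical helper, not part of the API:

```python
def validate_prompt(prompt: str) -> None:
    """Reject prompts that don't contain exactly one <mask> placeholder."""
    n = prompt.count("<mask>")
    if n != 1:
        raise ValueError(
            f"input must contain exactly one <mask> token, found {n}"
        )

# validate_prompt("Where is my <mask>?") passes silently;
# a prompt with zero or multiple masks raises ValueError.
```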


webhook (file)

The webhook to call when inference is done; by default, the output is returned in the response to your inference request

Input Schema

Output Schema