sberbank-ai/ruRoberta-large

The ruRoberta-large model was trained by the SberDevices team for the mask-filling task. It is an encoder model with a BBPE tokenizer and 355 million parameters, trained on 250 GB of data. The NLP Core Team RnD, including Dmitry Zmitrovich, contributed to its development.

Input

A text prompt; it must include exactly one <mask> token.

Output

Example completions with their probabilities:

  • where is my father? (0.09)
  • where is my mother? (0.08)
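
Below is a minimal sketch of this mask-filling interface using the Hugging Face transformers library. It assumes the checkpoint is published on the Hub as sberbank-ai/ruRoberta-large; the Russian prompt is purely illustrative.

```python
# Sketch: fill-mask inference with ruRoberta-large via Hugging Face transformers.
# Assumes the Hub id "sberbank-ai/ruRoberta-large"; the prompt is illustrative.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="sberbank-ai/ruRoberta-large")

# The prompt must contain exactly one <mask> token (RoBERTa-style mask).
for candidate in fill_mask("где мой <mask>?"):
    # Each candidate is a dict with the filled-in sequence and its score.
    print(f"{candidate['sequence']} ({candidate['score']:.2f})")
```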

ruRoberta-large

The model was trained by the SberDevices team.

  • Task: mask filling
  • Type: encoder
  • Tokenizer: BBPE
  • Dictionary size: 50,257
  • Number of parameters: 355 M
  • Training data volume: 250 GB
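
As a quick sanity check, the specs above can be compared against the published checkpoint. This is a minimal sketch, again assuming the Hugging Face Hub id sberbank-ai/ruRoberta-large.

```python
# Sketch: verify the listed specs (assumes Hub id "sberbank-ai/ruRoberta-large").
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "sberbank-ai/ruRoberta-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Dictionary size listed above: 50,257.
print(tokenizer.vocab_size)
# Parameter count listed above: ~355M.
print(sum(p.numel() for p in model.parameters()))
```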

Authors

  • NLP Core Team RnD: Dmitry Zmitrovich