roberta-large

The RoBERTa model was pre-trained on a corpus consisting of 11,038 books (BookCorpus), English Wikipedia, 63 million English news articles (CC-News), and a dataset containing a subset of Common Crawl data. It achieved state-of-the-art results on GLUE, SuperGLUE, and other multi-task benchmarks while being less sensitive to hyperparameter choices than BERT. RoBERTa uses a robustly optimized training procedure and dynamic masking: unlike BERT's static masking, the masked positions change across passes over the training data.

Public
$0.0005/sec

HTTP/cURL API

Input fields

input (string)

Text prompt; it should include exactly one <mask> token.


webhook (file)

The webhook to call when inference is done. By default, the output is returned in the response to your inference request.
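
To illustrate the call shape, here is a minimal cURL sketch of a fill-mask request. The endpoint path, bearer-token header, and DEEPINFRA_TOKEN variable are assumptions based on Deep Infra's usual conventions rather than details stated on this page; the Input Schema below is authoritative.

# Hypothetical request; endpoint path and auth header are assumptions.
curl -X POST "https://api.deepinfra.com/v1/inference/roberta-large" \
  -H "Authorization: bearer $DEEPINFRA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"input": "The goal of life is <mask>."}'

Note that the prompt contains exactly one <mask> token, as required by the input field above.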

Input Schema

Output Schema

