distilroberta-base

DistilRoBERTa is a distilled version of the RoBERTa-base model, with 6 layers, 768 dimensions, and 12 heads, totaling 82M parameters. It is trained on OpenWebTextCorpus, a reproduction of OpenAI's WebText dataset, and achieves comparable performance to RoBERTa while being twice as fast. The model is designed for masked language modeling and can be fine-tuned for downstream tasks, but it also comes with potential biases and limitations, including significant gender and ethnicity biases in its predictions.

Input

Text prompt; must include exactly one <mask> token

Output

where is my father? (0.09)

where is my mother? (0.08)
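The example output above can be reproduced locally with the Transformers fill-mask pipeline. A minimal sketch, assuming the Hugging Face Hub identifier `distilroberta-base`; the actual input prompt is not shown above, so the one below is an illustrative guess consistent with the completions listed:

```python
from transformers import pipeline

# Fill-mask pipeline over the distilroberta-base checkpoint.
# The prompt is an illustrative guess consistent with the example
# outputs above; the demo's actual input is not shown.
unmasker = pipeline("fill-mask", model="distilroberta-base")

# By default the pipeline returns the top 5 completions with scores.
for pred in unmasker("where is my <mask>?"):
    print(f"{pred['sequence']} ({pred['score']:.2f})")
```

Note that RoBERTa-family tokenizers use `<mask>` (not BERT's `[MASK]`); passing the wrong mask token raises an error.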

Model Card for DistilRoBERTa base

Table of Contents

  1. Model Details
  2. Uses
  3. Bias, Risks, and Limitations
  4. Training Details
  5. Evaluation
  6. Environmental Impact
  7. Citation
  8. How To Get Started With the Model

Model Details

Model Description

This model is a distilled version of the RoBERTa-base model. It follows the same training procedure as DistilBERT. The code for the distillation process can be found here. This model is case-sensitive: it makes a difference between english and English.

The model has 6 layers, 768 dimensions, and 12 heads, totaling 82M parameters (compared to 125M parameters for RoBERTa-base). On average, DistilRoBERTa is twice as fast as RoBERTa-base.
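These figures can be checked directly; a sketch, assuming the transformers library and the `distilroberta-base` Hub checkpoint:

```python
from transformers import AutoConfig, AutoModel

# Architecture hyperparameters are available without downloading weights.
config = AutoConfig.from_pretrained("distilroberta-base")
print(config.num_hidden_layers)    # 6
print(config.hidden_size)          # 768
print(config.num_attention_heads)  # 12

# Counting parameters requires loading the model itself.
model = AutoModel.from_pretrained("distilroberta-base")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.0f}M parameters")  # approximately 82M
```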

We encourage users of this model card to check out the RoBERTa-base model card to learn more about usage, limitations and potential biases.

  • Developed by: Victor Sanh, Lysandre Debut, Julien Chaumond, Thomas Wolf (Hugging Face)
  • Model type: Transformer-based language model
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Related Models: RoBERTa-base model card
  • Resources for more information:

Uses

Direct Use and Downstream Use

You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at models like GPT-2.
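As a sketch of the fine-tuning setup described above, one can attach a task head to the pretrained encoder; the task, label count, and example sentence here are hypothetical:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")

# Load the pretrained encoder with a freshly initialized (untrained)
# sequence-classification head; num_labels=2 is a hypothetical binary task.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilroberta-base", num_labels=2
)

inputs = tokenizer("DistilRoBERTa is twice as fast.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2]); meaningful only after fine-tuning
```

The head's weights are random until the model is fine-tuned on labeled data (e.g., with the Trainer API).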

Out of Scope Use

The model should not be used to intentionally create hostile or alienating environments for people. The model was not trained to produce factual or true representations of people or events, so using it to generate such content is out of scope for its abilities.

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

Training Details

DistilRoBERTa was pre-trained on OpenWebTextCorpus, a reproduction of OpenAI's WebText dataset (roughly 4 times less training data than was used for the teacher model, RoBERTa). See the RoBERTa-base model card for further details on training.

Evaluation

When fine-tuned on downstream tasks, this model achieves the following results (see GitHub Repo):

GLUE test results:

| Task  | MNLI | QQP  | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE  |
|-------|------|------|------|-------|------|-------|------|------|
| Score | 84.0 | 89.4 | 90.8 | 92.5  | 59.3 | 88.3  | 86.6 | 67.9 |

Environmental Impact

Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).

  • Hardware Type: More information needed
  • Hours used: More information needed
  • Cloud Provider: More information needed
  • Compute Region: More information needed
  • Carbon Emitted: More information needed

Citation

@article{Sanh2019DistilBERTAD,
  title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
  author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.01108}
}

APA

  • Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.