DistilBERT is a smaller, faster, and cheaper version of BERT, a popular language model. It was trained on the same data as BERT, including BookCorpus and English Wikipedia, but with a few key differences in the preprocessing and training procedures. Despite its smaller size, DistilBERT achieves similar results to BERT on various natural language processing tasks.
This model is a distilled version of the BERT base model. It was introduced in this paper. The code for the distillation process can be found here. This model is uncased: it does not make a difference between english and English.
DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained with three objectives:

- Distillation loss: the model was trained to return the same probabilities as the BERT base model.
- Masked language modeling (MLM): part of the original training objective of the BERT base model, in which the model must predict randomly masked words in a sentence.
- Cosine embedding loss: the model was trained to generate hidden states as close as possible to those of the BERT base model.
This way, the model learns the same inner representation of the English language as its teacher model, while being faster for inference and downstream tasks.
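To illustrate how these three objectives could be combined, here is a minimal sketch of a single distillation training step in PyTorch. The function name, loss weights, and temperature are illustrative assumptions, not the exact values or code used to train the released model.

```python
import torch
import torch.nn.functional as F

def distillation_step(student_logits, teacher_logits, mlm_labels,
                      student_hidden, teacher_hidden,
                      temperature=2.0, alpha_ce=5.0, alpha_mlm=2.0, alpha_cos=1.0):
    """Illustrative combination of the three pretraining objectives described above."""
    # Distillation loss: match the teacher's (softened) output distribution
    loss_ce = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Masked language modeling loss on the original masked-token targets
    loss_mlm = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        mlm_labels.view(-1),
        ignore_index=-100,  # positions that were not masked are ignored
    )

    # Cosine embedding loss: keep student hidden states close to the teacher's
    target = torch.ones(
        student_hidden.size(0) * student_hidden.size(1),
        device=student_hidden.device,
    )
    loss_cos = F.cosine_embedding_loss(
        student_hidden.view(-1, student_hidden.size(-1)),
        teacher_hidden.view(-1, teacher_hidden.size(-1)),
        target,
    )

    return alpha_ce * loss_ce + alpha_mlm * loss_mlm + alpha_cos * loss_cos
```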
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
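For example, the raw model can be tried out with the fill-mask pipeline from the transformers library. The checkpoint name distilbert-base-uncased is assumed here to refer to this model on the Hub; the prompt must contain exactly one [MASK] token.

```python
from transformers import pipeline

# Load the uncased DistilBERT checkpoint (assumed name: "distilbert-base-uncased")
unmasker = pipeline("fill-mask", model="distilbert-base-uncased")

# Returns the most likely completions for the single [MASK] position
print(unmasker("Hello I'm a [MASK] model."))
```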
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation you should look at models like GPT-2.
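As a sketch of the typical fine-tuning setup, the snippet below loads the pretrained weights with a freshly initialized sequence classification head. The checkpoint name and the two-label setup are assumptions made for illustration.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed checkpoint name; the classification head is newly initialized and must be fine-tuned
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

inputs = tokenizer("This movie was surprisingly good.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits.shape)  # one score per label, e.g. torch.Size([1, 2])
```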
DistilBERT was pretrained on the same data as BERT, which is BookCorpus, a dataset consisting of 11,038 unpublished books, and English Wikipedia (excluding lists, tables, and headers).
The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form:
[CLS] Sentence A [SEP] Sentence B [SEP]
With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text, usually longer than a single sentence. The only constraint is that the combined length of the two "sentences" is less than 512 tokens.
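A quick way to see this input format is to encode a sentence pair with the model's tokenizer. The snippet below assumes the checkpoint name distilbert-base-uncased.

```python
from transformers import AutoTokenizer

# Assumed checkpoint name for the uncased DistilBERT tokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# Encoding a sentence pair produces the [CLS] Sentence A [SEP] Sentence B [SEP] layout
encoded = tokenizer("The man went to the store.", "He bought some milk.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
```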
The details of the masking procedure for each sentence are the following:

- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by [MASK].
- In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace).
- In the 10% remaining cases, the masked tokens are left as is.
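The following is a minimal, self-contained sketch of this 80/10/10 masking scheme; it is illustrative only and not the actual preprocessing code used for training.

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15):
    """Illustrative sketch of the masking procedure described above."""
    inputs, labels = [], []
    for tok in tokens:
        if random.random() < mask_prob:
            labels.append(tok)  # the model must predict the original token here
            r = random.random()
            if r < 0.8:
                inputs.append("[MASK]")               # 80%: replace with [MASK]
            elif r < 0.9:
                inputs.append(random.choice(vocab))   # 10%: replace with a random token
            else:
                inputs.append(tok)                    # 10%: keep the token unchanged
        else:
            inputs.append(tok)
            labels.append(None)  # not masked, ignored by the loss
    return inputs, labels
```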
The model was trained on 8 16 GB V100 GPUs for 90 hours. See the training code for all hyperparameter details.
When fine-tuned on downstream tasks, this model achieves the following results:
GLUE test results:
| Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE |
|---|---|---|---|---|---|---|---|---|
|  | 82.2 | 88.5 | 89.2 | 91.3 | 51.3 | 85.8 | 87.5 | 59.9 |
@article{Sanh2019DistilBERTAD,
title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
journal={ArXiv},
year={2019},
volume={abs/1910.01108}
}