A pre-trained multilingual model that uses a masked language modeling objective to learn a bidirectional representation of languages. It was trained on 104 languages with the largest Wikipedias, and its inputs are in the form of [CLS] Sentence A [SEP] Sentence B [SEP]. The model is primarily aimed at being fine-tuned on tasks that use the whole sentence, potentially masked, to make decisions.
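As a quick illustration of that input layout, here is a minimal sketch (assuming the Hugging Face transformers library and the bert-base-multilingual-cased checkpoint hosted on the Hub) showing how the tokenizer encodes a sentence pair:

```python
# Minimal sketch, assuming the "bert-base-multilingual-cased" checkpoint:
# encode a sentence pair and inspect the [CLS] Sentence A [SEP] Sentence B [SEP] layout.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoded = tokenizer("How are you?", "I am fine, thanks.")

# The token list begins with [CLS], separates the two sentences with [SEP],
# and ends with a final [SEP].
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
```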
Pretrained model on the top 104 languages with the largest Wikipedias using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model is case sensitive: it makes a difference between english and English.
Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team.
BERT is a transformers model pretrained on a large corpus of multilingual data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This allows the model to learn a bidirectional representation of the sentence (see the masking sketch after this list).
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not, and the model has to predict whether the two sentences followed each other.
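As a rough illustration of the MLM objective, here is a minimal sketch using transformers' DataCollatorForLanguageModeling, which applies the same kind of 15% random masking described above (the checkpoint name bert-base-multilingual-cased is assumed):

```python
# Illustrative sketch of MLM-style masking; not the original pretraining code.
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

encoded = tokenizer("BERT learns by predicting the words that were masked out.")
batch = collator([encoded])

# Some input ids are replaced by [MASK]; the labels keep the original ids at those
# positions. With a 15% per-token probability, a short sentence may occasionally
# show no mask on a given run.
print(tokenizer.decode(batch["input_ids"][0]))
```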
This way, the model learns an inner representation of the languages in the training set that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs.
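For example, a minimal feature-extraction sketch in PyTorch (assuming the transformers library and the bert-base-multilingual-cased checkpoint); the resulting vectors could feed any standard classifier:

```python
# Minimal sketch: extract one fixed-size feature vector per sentence.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

sentences = ["I love this movie.", "Ce film est excellent."]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Use the [CLS] token's hidden state as the sentence representation.
features = outputs.last_hidden_state[:, 0, :]  # shape: (batch_size, hidden_size)
print(features.shape)
```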
You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at a model like GPT2.
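Here is a minimal usage sketch for masked language modeling with the fill-mask pipeline (the checkpoint name bert-base-multilingual-cased is assumed); the prompt should contain exactly one [MASK] token:

```python
# Minimal sketch: fill in a single [MASK] token and print the top predictions.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-multilingual-cased")
for prediction in unmasker("Hello, I'm a [MASK] model."):
    # Each prediction carries the filled-in sequence and its score.
    print(prediction["sequence"], round(prediction["score"], 3))
```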
The BERT model was pretrained on the 104 languages with the largest Wikipedias. You can find the complete list here.
@article{DBLP:journals/corr/abs-1810-04805,
  author        = {Jacob Devlin and
                   Ming{-}Wei Chang and
                   Kenton Lee and
                   Kristina Toutanova},
  title         = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language Understanding},
  journal       = {CoRR},
  volume        = {abs/1810.04805},
  year          = {2018},
  url           = {http://arxiv.org/abs/1810.04805},
  archivePrefix = {arXiv},
  eprint        = {1810.04805},
  timestamp     = {Tue, 30 Oct 2018 20:39:56 +0100},
  biburl        = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}