albert-base-v2

The ALBERT model is a variant of BERT that uses parameter-reduction techniques (factorized embedding parameterization and cross-layer parameter sharing) to lower memory consumption while keeping strong performance on downstream NLP tasks. It was pretrained on a combination of BookCorpus and English Wikipedia, and achieved state-of-the-art results on several benchmark datasets. Fine-tuning ALBERT on specific tasks can further improve its performance.

Public
$0.0005 / sec

Input

Text prompt; it should include exactly one [MASK] token.

Output

where is my father? (0.09)

where is my mother? (0.08)
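
Output in this format can also be reproduced locally with the transformers library. The following is a minimal sketch, not the hosted API; it assumes a prompt such as "where is my [MASK]?", and the exact completions and scores may differ from those shown above:

    from transformers import pipeline

    # Fill-mask pipeline for albert-base-v2; the prompt must contain exactly one [MASK] token.
    unmasker = pipeline("fill-mask", model="albert-base-v2")

    for prediction in unmasker("where is my [MASK]?"):
        # Each prediction carries the completed sequence and its probability.
        print(f'{prediction["sequence"]} ({prediction["score"]:.2f})')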

ALBERT Base v2

Pretrained model on the English language using a masked language modeling (MLM) objective. It was introduced in this paper and first released in this repository. This model, like all ALBERT models, is uncased: it does not make a difference between english and English.

Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by the Hugging Face team.

Model description

ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives:

  • Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
  • Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text.

This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the ALBERT model as inputs.
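
As a concrete illustration of this feature-extraction use, the sketch below pulls the hidden states out of the pretrained model with PyTorch; it is only a sketch, and the downstream classifier trained on these features is left out:

    import torch
    from transformers import AlbertTokenizer, AlbertModel

    tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
    model = AlbertModel.from_pretrained("albert-base-v2")

    # Encode a sentence and extract its contextual features.
    inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # last_hidden_state has shape (batch, sequence_length, 768); these vectors
    # can be fed to a standard classifier as input features.
    print(outputs.last_hidden_state.shape)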

ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint; however, the computational cost remains similar to that of a BERT-like architecture with the same number of hidden layers, as it has to iterate through the same number of (repeating) layers.

This is the second version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks.

This model has the following configuration:

  • 12 repeating layers
  • 128 embedding dimension
  • 768 hidden dimension
  • 12 attention heads
  • 11M parameters
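
These numbers can be checked against the published configuration. The sketch below assumes the standard transformers classes, and the exact parameter total may vary slightly by library version:

    from transformers import AlbertConfig, AlbertModel

    config = AlbertConfig.from_pretrained("albert-base-v2")
    print(config.num_hidden_layers)    # 12 repeating layers
    print(config.embedding_size)       # 128 embedding dimension
    print(config.hidden_size)          # 768 hidden dimension
    print(config.num_attention_heads)  # 12 attention heads

    # Because the repeating layers share weights, the total stays around 11M
    # parameters even though the model iterates through 12 layers.
    model = AlbertModel.from_pretrained("albert-base-v2")
    print(sum(p.numel() for p in model.parameters()))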

Intended uses & limitations

You can use the raw model for either masked language modeling or sentence order prediction, but it's mostly intended to be fine-tuned on a downstream task. See the model hub to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at a model like GPT2.
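
For the fine-tuning route, a minimal sequence-classification setup might look like the sketch below; the text, label and number of classes are placeholders for illustration only:

    import torch
    from transformers import AlbertForSequenceClassification, AlbertTokenizer

    tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
    # Adds an untrained classification head on top of the pretrained encoder.
    model = AlbertForSequenceClassification.from_pretrained("albert-base-v2", num_labels=2)

    inputs = tokenizer("This movie was great!", return_tensors="pt")
    labels = torch.tensor([1])  # placeholder label

    # One forward/backward pass; in practice this runs inside a training loop.
    outputs = model(**inputs, labels=labels)
    outputs.loss.backward()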

Limitations and bias

Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions.

This bias will also affect all fine-tuned versions of this model.
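
The original model card probes this with fill-mask prompts that differ only in a gendered word; a sketch of that kind of probe follows (the particular completions and scores depend on the checkpoint and library version):

    from transformers import pipeline

    unmasker = pipeline("fill-mask", model="albert-base-v2")

    # Compare completions for prompts that differ only in a gendered word;
    # systematic differences in the predicted occupations point to learned bias.
    for prompt in ["The man worked as a [MASK].", "The woman worked as a [MASK]."]:
        print(prompt)
        for prediction in unmasker(prompt, top_k=5):
            print("  ", prediction["token_str"], round(prediction["score"], 3))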

Training data

The ALBERT model was pretrained on BookCorpus, a dataset consisting of 11,038 unpublished books, and on English Wikipedia (excluding lists, tables and headers).

BibTeX entry and citation info

@article{DBLP:journals/corr/abs-1909-11942,
  author    = {Zhenzhong Lan and
               Mingda Chen and
               Sebastian Goodman and
               Kevin Gimpel and
               Piyush Sharma and
               Radu Soricut},
  title     = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language
               Representations},
  journal   = {CoRR},
  volume    = {abs/1909.11942},
  year      = {2019},
  url       = {http://arxiv.org/abs/1909.11942},
  archivePrefix = {arXiv},
  eprint    = {1909.11942},
  timestamp = {Fri, 27 Sep 2019 13:04:21 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}