We extract contextual embedding features from CamemBERT, a fill-mask language model, for the task of sentiment analysis. We use the tokenizer's tokenize and encode functions to convert each sentence into a numerical representation, then feed it into the CamemBERT model to obtain contextual embeddings. We extract the hidden states from all 12 self-attention layers plus the input embedding layer, yielding 13 embedding vectors per sentence.
This model can be used for Fill-Mask tasks.
CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.
Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).
This model was pretrained on a subcorpus of OSCAR multilingual corpus. Some of the limitations and risks associated with the OSCAR dataset, which are further detailed in the OSCAR dataset card, include the following:
- The quality of some OSCAR sub-corpora may be lower than expected, particularly for the lowest-resource languages.
- Because OSCAR is constructed from Common Crawl, personal and sensitive information might be present.
OSCAR or Open Super-large Crawled Aggregated coRpus is a multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the Ungoliant architecture.
| Model | #params | Arch. | Training data |
|---|---|---|---|
| camembert-base | 110M | Base | OSCAR (138 GB of text) |
| camembert/camembert-large | 335M | Large | CCNet (135 GB of text) |
| camembert/camembert-base-ccnet | 110M | Base | CCNet (135 GB of text) |
| camembert/camembert-base-wikipedia-4gb | 110M | Base | Wikipedia (4 GB of text) |
| camembert/camembert-base-oscar-4gb | 110M | Base | Subsample of OSCAR (4 GB of text) |
| camembert/camembert-base-ccnet-4gb | 110M | Base | Subsample of CCNet (4 GB of text) |
The model developers evaluated CamemBERT on four downstream tasks for French: part-of-speech (POS) tagging, dependency parsing, named entity recognition (NER), and natural language inference (NLI).
```bibtex
@inproceedings{martin2020camembert,
  title={CamemBERT: a Tasty French Language Model},
  author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
  booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
  year={2020}
}
```