DeBERTa (Decoding-Enhanced BERT with Disentangled Attention) is a novel language model that improves upon BERT and RoBERTa using disentangled attention and an enhanced mask decoder. It achieves state-of-the-art results on various NLU tasks while requiring fewer computational resources than its predecessors.
DeBERTa improves the BERT and RoBERTa models using disentangled attention and an enhanced mask decoder. It outperforms BERT and RoBERTa on a majority of NLU tasks with 80GB of training data.
Please check the official repository for more details and updates.
This is the DeBERTa V2 xlarge model with 24 layers and a hidden size of 1536. It has 900M parameters in total and was trained on 160GB of raw data.
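For reference, below is a minimal sketch of how the model could be loaded for masked-token prediction with the Hugging Face `transformers` library. The checkpoint name `microsoft/deberta-v2-xlarge` and the example prompt are assumptions; substitute this card's actual model ID if it differs.

```python
from transformers import pipeline

# Minimal fill-mask sketch; the checkpoint name below is an assumption and
# should be replaced with this card's model ID if it differs.
fill_mask = pipeline("fill-mask", model="microsoft/deberta-v2-xlarge")

# The input should contain exactly one [MASK] token.
for prediction in fill_mask("Paris is the [MASK] of France."):
    print(f"{prediction['sequence']}  (score: {prediction['score']:.3f})")
```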
If you find DeBERTa useful for your work, please cite the following paper:
@inproceedings{he2021deberta,
title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=XPZIaotutsD}
}