xlm-roberta-base

The XLM-RoBERTa model is a multilingual version of RoBERTa, pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper "Unsupervised Cross-lingual Representation Learning at Scale" by Conneau et al. and first released in this repository. The model learns an inner representation of 100 languages that can be used to extract features useful for downstream tasks.
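As a sketch of the feature-extraction use mentioned above, the checkpoint can be loaded with the Hugging Face `transformers` library (assumed installed, along with `torch`) and used to embed sentences in different languages with the same model; the mean-pooling step here is one common convention, not part of the model itself:

```python
# Minimal sketch: multilingual feature extraction with xlm-roberta-base.
# Assumes the `transformers` and `torch` packages are installed.
from transformers import AutoTokenizer, AutoModel
import torch

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")

# One model handles text in any of its 100 training languages.
sentences = ["Hello, world!", "Bonjour le monde !", "こんにちは世界"]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token embeddings (masking padding) into one fixed-size
# vector per sentence; base models have a hidden size of 768.
mask = inputs["attention_mask"].unsqueeze(-1)
embeddings = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)
print(embeddings.shape)
```

These sentence vectors can then feed a downstream classifier or similarity search; for masked-word prediction instead, the same checkpoint works with `pipeline("fill-mask", model="xlm-roberta-base")`.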

Visibility: Public
Price: $0.0005/sec
Type: demoapi
Version: 42f548f32366559214515ec137cdd16002968bf6
Updated: 2023-03-03T06:40:13+00:00


© 2023 Deep Infra. All rights reserved.