The National Library of Sweden has released three pre-trained language models based on BERT and ALBERT for Swedish text: a BERT base model, a BERT model fine-tuned for named entity recognition (NER), and an experimental ALBERT model. They were trained on approximately 15–20 GB of text drawn from sources including books, news, government publications, Swedish Wikipedia, and internet forums.
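As a minimal sketch of how the base model might be loaded with the Hugging Face transformers library: the hub identifier `KB/bert-base-swedish-cased` is an assumption not stated in the text above and should be verified against the model card.

```python
# Sketch: load the Swedish BERT base model and encode a sentence.
# The identifier "KB/bert-base-swedish-cased" is assumed; check the hub.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("KB/bert-base-swedish-cased")
model = AutoModel.from_pretrained("KB/bert-base-swedish-cased")

# Tokenize a Swedish sentence and run it through the encoder.
inputs = tokenizer("Kungliga biblioteket ligger i Stockholm.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, sequence length, hidden size)
```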
The BERT base model can be queried directly for masked-token prediction; the text prompt should include exactly one [MASK] token. For example, the hosted inference widget returns completions such as "where is my father?" (0.09) and "where is my mother?" (0.08).
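A sketch of reproducing that query with the transformers fill-mask pipeline: the prompt "where is my [MASK]?" is inferred from the example predictions above, and the model identifier is the same assumed one as before.

```python
# Sketch: run a fill-mask query against the Swedish BERT base model.
# The prompt is inferred from the widget output shown above (an assumption).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="KB/bert-base-swedish-cased")

for prediction in fill_mask("where is my [MASK]?"):
    # Each prediction carries the completed sentence and its probability.
    print(f"{prediction['sequence']} ({prediction['score']:.2f})")
```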