A pre-trained language model based on RoBERTa, fine-tuned on the SQuAD2.0 dataset for extractive question answering. It achieves 79.87% exact match and 82.91% F1 on the SQuAD2.0 dev set. Deepset is the company behind the open-source NLP framework Haystack and offers related resources such as a distilled roberta-base-squad2, German BERT, and the GermanQuAD datasets and models.
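
For extractive question answering, a model like this can be used through the Hugging Face transformers question-answering pipeline. The snippet below is a minimal sketch; it assumes the model is published on the Hub under the ID deepset/roberta-base-squad2 (the exact ID is not stated above) and that the transformers library is installed.

# Minimal sketch: extractive QA with the transformers pipeline.
# Assumes the Hub model ID "deepset/roberta-base-squad2" and an installed transformers package.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

result = qa(
    question="What framework does deepset maintain?",
    context="Deepset is the company behind the open-source NLP framework Haystack.",
)
# The pipeline returns the extracted answer span with a confidence score
# and character offsets into the context.
print(result["answer"], result["score"])

The pipeline handles tokenization, span prediction, and the SQuAD2.0-style "no answer" case internally, which is why it is the simplest way to try the model before integrating it into a larger system such as Haystack.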