We present a fine-tuned BERT-base uncased model for question answering on SQuAD1.1. Our model achieves an exact match (EM) score of 80.9104 and an F1 score of 88.2302 without any hyperparameter search.
This model was fine-tuned from the HuggingFace BERT base uncased checkpoint on SQuAD1.1. The model is uncased: it makes no distinction between english and English.
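A minimal usage sketch with the `transformers` question-answering pipeline. The model id `csarron/bert-base-uncased-squad-v1` is an assumption; substitute this model's actual Hub id if it differs.

```python
# Sketch: query the fine-tuned QA model via the transformers pipeline.
# The model id below is an assumption, not confirmed by this card.
from transformers import pipeline

qa = pipeline("question-answering", model="csarron/bert-base-uncased-squad-v1")

result = qa(
    question="What is the capital of France?",
    context="Paris is the capital and largest city of France.",
)
# result is a dict with "answer", "score", "start", and "end" keys
print(result["answer"], round(result["score"], 2))
```

The pipeline handles tokenization and span extraction internally; for batched or lower-level use, `AutoTokenizer` and `AutoModelForQuestionAnswering` can be loaded from the same checkpoint.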
Dataset | Split | # samples
---|---|---
SQuAD1.1 | train | 90.6K
SQuAD1.1 | eval | 11.1K
Model size: 418M
Metric | Value | Original (Table 2)
---|---|---
EM | 80.9 | 80.8
F1 | 88.2 | 88.5
Note that the above results did not involve any hyperparameter search.
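For reference, the EM and F1 numbers above follow the standard SQuAD evaluation: answers are lowercased, punctuation and the articles a/an/the are stripped, and F1 is computed as token overlap between prediction and gold answer. A self-contained sketch of those two metrics:

```python
import re
import string
from collections import Counter

def normalize(s: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and articles, collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(pred: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(pred) == normalize(gold))

def f1(pred: str, gold: str) -> float:
    """Harmonic mean of token-level precision and recall after normalization."""
    pred_toks = normalize(pred).split()
    gold_toks = normalize(gold).split()
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("the Eiffel Tower", "Eiffel Tower"))  # 1.0 (article ignored)
print(round(f1("Eiffel Tower in Paris", "Eiffel Tower"), 3))  # 0.667
```

The dataset-level scores reported above are these per-example values averaged over the eval split (with the maximum taken over the multiple gold answers SQuAD provides per question).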
Created by Qingqing Cao | GitHub | Twitter
Made with ❤️ in New York.