We present a BERT-based language model, bert-large-uncased-whole-word-masking-squad2, fine-tuned on the SQuAD2.0 dataset for extractive question answering. The model achieves strong exact-match (EM) and F1 scores on SQuAD2.0.
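To illustrate what "extractive" means here: the model scores each context token as a possible answer start and answer end, and the answer is the highest-scoring span. The sketch below shows that span-selection step with made-up scores; the helper name and the score values are illustrative, not the model's actual output.

```python
# Sketch of how an extractive QA head selects an answer span: given per-token
# start and end scores (made up here for illustration), pick the (start, end)
# pair with the highest combined score, subject to start <= end and a span
# length cap.
def best_span(start_scores, end_scores, max_len=15):
    best = (0, 0)
    best_score = float("-inf")
    for i, s in enumerate(start_scores):
        # Only consider end positions at or after the start, within max_len.
        for j in range(i, min(i + max_len, len(end_scores))):
            score = s + end_scores[j]
            if score > best_score:
                best_score = score
                best = (i, j)
    return best

tokens = ["the", "model", "was", "trained", "on", "squad2.0"]
start_scores = [0.1, 0.2, 0.1, 0.3, 0.2, 2.5]
end_scores = [0.1, 0.1, 0.2, 0.1, 0.3, 2.8]
s, e = best_span(start_scores, end_scores)
print(" ".join(tokens[s:e + 1]))  # → squad2.0
```

In the real model, the start and end scores are the logits produced by the QA head for each token of the question-plus-context input; SQuAD2.0 additionally allows a "no answer" prediction, which this sketch omits.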
webhook — The webhook to call when inference is done. By default, you will get the output in the response of your inference request.