
deepset/bert-large-uncased-whole-word-masking-squad2

We present a BERT-based language model called bert-large-uncased-whole-word-masking-squad2, trained on the SQuAD2.0 dataset for extractive question answering. The model achieves high exact-match (EM) and F1 scores.



This is a bert-large model, fine-tuned on the SQuAD2.0 dataset for the task of extractive question answering.
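For a quick smoke test, the model can be queried through the Hugging Face transformers question-answering pipeline. A minimal sketch; the question/context pair is purely illustrative:

```python
# Minimal sketch: querying the model via the transformers QA pipeline.
# The question/context pair is illustrative only.
from transformers import pipeline

model_name = "deepset/bert-large-uncased-whole-word-masking-squad2"
qa = pipeline("question-answering", model=model_name, tokenizer=model_name)

result = qa(
    question="What jumps over the lazy dog?",
    context="The quick brown fox jumps over the lazy dog.",
)
print(result["answer"], result["score"])  # answer span plus a confidence score
```

Because the model was trained on SQuAD2.0, which includes unanswerable questions, it can also indicate that the context contains no answer.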

Overview

Language model: bert-large
Language: English
Downstream task: Extractive QA
Training data: SQuAD 2.0
Eval data: SQuAD 2.0
Code: See an example QA pipeline on Haystack (and the sketch below)
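As a hedged sketch of such a pipeline, assuming Haystack 1.x (where readers live under haystack.nodes; Haystack 2.x uses different import paths and component names), the model can be loaded as a reader like this:

```python
# Minimal sketch, assuming Haystack 1.x APIs (haystack.nodes, haystack.schema).
from haystack.nodes import FARMReader
from haystack.schema import Document

reader = FARMReader(
    model_name_or_path="deepset/bert-large-uncased-whole-word-masking-squad2"
)

# Run extractive QA over an in-memory document; the text is illustrative.
prediction = reader.predict(
    query="What jumps over the lazy dog?",
    documents=[Document(content="The quick brown fox jumps over the lazy dog.")],
)
print(prediction["answers"][0].answer)
```

In a full Haystack system the reader would typically sit behind a retriever that selects candidate documents from a document store before extraction.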

About us

deepset is the company behind the open-source NLP framework Haystack, which is designed to help you build production-ready NLP systems for question answering, summarization, ranking, and more.


Get in touch and join the Haystack community

For more info on Haystack, visit our GitHub repo and Documentation.

We also have a Discord community open to everyone!

Twitter | LinkedIn | Discord | GitHub Discussions | Website

By the way: we're hiring!