A whole word masking model fine-tuned on SQuAD is a transformer-based language model pretrained on a large corpus of English text with a masked language modeling objective: 15% of the words in a sentence were randomly masked, with all subword tokens of a chosen word masked together, and the model had to predict the missing words. The model was then fine-tuned on the SQuAD dataset for extractive question answering, achieving high scores on both the F1 and exact match metrics.
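As a minimal sketch of using such a model for extractive question answering, the snippet below loads it through the Hugging Face `pipeline` API; the checkpoint name `bert-large-uncased-whole-word-masking-finetuned-squad` is assumed here as a representative whole-word-masking model fine-tuned on SQuAD.

```python
# Minimal extractive QA sketch; the checkpoint name is an assumption,
# chosen as a representative whole-word-masking model fine-tuned on SQuAD.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

result = qa(
    question="What fraction of the words were masked during pretraining?",
    context=(
        "The model was pretrained with a masked language modeling objective, "
        "where 15% of the words in a sentence were randomly masked."
    ),
)
print(result["answer"], result["score"])  # extracted answer span and its confidence
```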
webhook
The webhook to call when inference is done. By default, the output is returned in the response of your inference request.
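For illustration, a request might pass the webhook URL alongside the inputs as sketched below; the endpoint URL, headers, and payload fields other than `webhook` are hypothetical, not a documented API.

```python
# Hypothetical request shape: only the "webhook" parameter comes from the
# docs above; the endpoint, headers, and other field names are assumptions.
import requests

response = requests.post(
    "https://api.example.com/v1/inference",  # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
    json={
        "inputs": {
            "question": "When was SQuAD released?",
            "context": "SQuAD was introduced by researchers at Stanford in 2016.",
        },
        # If "webhook" is omitted, the output is returned directly in this response.
        "webhook": "https://example.com/hooks/inference-done",
    },
)
print(response.json())
```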