Microsoft's MiniLM-L12-H384-uncased language model, fine-tuned on the SQuAD 2.0 question-answering benchmark, achieves strong results: an exact-match score of 76.13% and an F1 score of 79.54%. Fine-tuning used a batch size of 12, a learning rate of 4e-5, and 4 epochs. The authors suggest using the model as a starting point for fine-tuning on downstream NLP tasks.
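The reported hyperparameters can be expressed as a Hugging Face `Trainer` configuration. This is a minimal sketch, assuming the `transformers` library; the output directory name is illustrative, and preparing the tokenized SQuAD 2.0 dataset is out of scope here.

```python
from transformers import TrainingArguments

# Sketch of the reported fine-tuning setup from the model card:
# batch size 12, learning rate 4e-5, 4 epochs.
args = TrainingArguments(
    output_dir="minilm-l12-squad2",    # hypothetical path, not from the card
    per_device_train_batch_size=12,    # batch size 12
    learning_rate=4e-5,                # learning rate 4e-5
    num_train_epochs=4,                # 4 epochs
)
```

These arguments would then be passed to a `Trainer` together with a `AutoModelForQuestionAnswering` checkpoint and the preprocessed dataset.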