Deepset presents tinyroberta-squad2, a distilled version of their roberta-base-squad2 model that achieves similar performance while being faster. The model is trained on SQuAD 2.0 and uses Haystack's infrastructure with 4x Tesla V100 GPUs. It achieved 78.69% exact match and 81.92% F1 score on the SQuAD 2.0 dev set.
You can use cURL or any other HTTP client to run inferences:
curl -X POST \
-d '{"question": "Who jumped?", "context": "The quick brown fox jumped over the lazy dog."}' \
-H "Authorization: bearer $DEEPINFRA_TOKEN" \
-H 'Content-Type: application/json' \
'https://api.deepinfra.com/v1/inference/deepset/tinyroberta-squad2'
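The same request can be made from Python. A minimal sketch using only the standard library, assuming the `DEEPINFRA_TOKEN` environment variable is set:

```python
import json
import os
import urllib.request

# Endpoint from the cURL example above.
API_URL = "https://api.deepinfra.com/v1/inference/deepset/tinyroberta-squad2"

payload = {
    "question": "Who jumped?",
    "context": "The quick brown fox jumped over the lazy dog.",
}
token = os.environ.get("DEEPINFRA_TOKEN")

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"bearer {token}",
        "Content-Type": "application/json",
    },
    method="POST",
)

if token:  # only call the API when a token is available
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
        print(result["answer"], result["score"])
```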
which will give you back something similar to:
{
  "answer": "fox",
  "score": 0.1803228110074997,
  "start": 16,
  "end": 19,
  "request_id": null,
  "inference_status": {
    "status": "unknown",
    "runtime_ms": 0,
    "cost": 0.0,
    "tokens_generated": 0,
    "tokens_input": 0
  }
}
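The `start` and `end` fields are character offsets into the context string you sent, so the answer span can be recovered directly from it. A small sketch using the response above:

```python
# Context and (abridged) response from the example above.
context = "The quick brown fox jumped over the lazy dog."
response = {"answer": "fox", "score": 0.1803228110074997, "start": 16, "end": 19}

# Slice the original context with the returned offsets.
span = context[response["start"]:response["end"]]
print(span)  # → fox
```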
webhook
The webhook to call when inference is done; by default you will get the output in the response of your inference request.