The National Library of Sweden has released three pre-trained language models based on BERT and ALBERT for Swedish text. The models include a BERT base model, a BERT fine-tuned for named entity recognition, and an experimental ALBERT model. They were trained on approximately 15-20 GB of text data from various sources such as books, news, government publications, Swedish Wikipedia, and internet forums.
You can use cURL or any other HTTP client to run inferences:
curl -X POST \
-d '{"input": "Where is my [MASK]?"}' \
-H "Authorization: bearer $DEEPINFRA_TOKEN" \
-H 'Content-Type: application/json' \
'https://api.deepinfra.com/v1/inference/KB/bert-base-swedish-cased'
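The same request can be built from any language; a minimal Python sketch using only the standard library (it constructs the request shown above without sending it, and assumes DEEPINFRA_TOKEN is set in your environment):

```python
import json
import os
import urllib.request

# Read the API token from the environment, as in the curl example.
token = os.environ.get("DEEPINFRA_TOKEN", "")

# Build the same POST request as the curl command above.
req = urllib.request.Request(
    "https://api.deepinfra.com/v1/inference/KB/bert-base-swedish-cased",
    data=json.dumps({"input": "Where is my [MASK]?"}).encode("utf-8"),
    headers={
        "Authorization": f"bearer {token}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# To actually send it:
# with urllib.request.urlopen(req) as resp:
#     result = json.loads(resp.read())
```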
which will give you back something similar to:
{
"results": [
{
"sequence": "where is my father?",
"score": 0.08898820728063583,
"token": 2269,
"token_str": "father"
},
{
"sequence": "where is my mother?",
"score": 0.07864926755428314,
"token": 2388,
"token_str": "mother"
}
],
"request_id": null,
"inference_status": {
"status": "unknown",
"runtime_ms": 0,
"cost": 0.0,
"tokens_generated": 0,
"tokens_input": 0
}
}
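The `results` array above can be parsed programmatically; a short Python sketch that extracts the highest-scoring completion from a response in this shape (the sample response is hard-coded here so no network call is needed):

```python
import json

# A trimmed copy of the response shown above.
response_text = """
{
  "results": [
    {"sequence": "where is my father?", "score": 0.08898820728063583,
     "token": 2269, "token_str": "father"},
    {"sequence": "where is my mother?", "score": 0.07864926755428314,
     "token": 2388, "token_str": "mother"}
  ]
}
"""

def top_fill(response):
    # Sort candidates by score, highest first, and return the best one.
    results = sorted(response["results"], key=lambda r: r["score"], reverse=True)
    best = results[0]
    return best["token_str"], best["score"]

data = json.loads(response_text)
token_str, score = top_fill(data)
print(token_str)  # father
```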
webhook: The webhook to call when inference is done. By default, the output is returned directly in the response of your inference request.