The Bio+Clinical BERT model is initialized from BioBERT and trained on all MIMIC notes. The model was pre-trained using a rules-based section splitter and the SciSpacy tokenizer, with a batch size of 32, a max sequence length of 128, and a learning rate of 5·10^-5 for 150,000 steps.
You can use cURL or any other HTTP client to run inferences:
curl -X POST \
-d '{"input": "Where is my [MASK]?"}' \
-H "Authorization: bearer $DEEPINFRA_TOKEN" \
-H 'Content-Type: application/json' \
'https://api.deepinfra.com/v1/inference/emilyalsentzer/Bio_ClinicalBERT'
which will give you back something similar to:
{
  "results": [
    {
      "sequence": "where is my father?",
      "score": 0.08898820728063583,
      "token": 2269,
      "token_str": "father"
    },
    {
      "sequence": "where is my mother?",
      "score": 0.07864926755428314,
      "token": 2388,
      "token_str": "mother"
    }
  ],
  "request_id": null,
  "inference_status": {
    "status": "unknown",
    "runtime_ms": 0,
    "cost": 0.0,
    "tokens_generated": 0,
    "tokens_input": 0
  }
}
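The same call can be made from Python with only the standard library. A minimal sketch: the request is sent only when DEEPINFRA_TOKEN is set in the environment; otherwise it falls back to the sample response above so the snippet still demonstrates picking the highest-scoring fill for the [MASK] token.

```python
import json
import os
import urllib.request

URL = "https://api.deepinfra.com/v1/inference/emilyalsentzer/Bio_ClinicalBERT"
payload = {"input": "Where is my [MASK]?"}

token = os.environ.get("DEEPINFRA_TOKEN")
if token:
    # POST the masked sentence to the inference endpoint
    req = urllib.request.Request(
        URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        response = json.load(resp)
else:
    # no token configured: reuse the sample response shown above
    response = {
        "results": [
            {"sequence": "where is my father?", "score": 0.0890,
             "token": 2269, "token_str": "father"},
            {"sequence": "where is my mother?", "score": 0.0786,
             "token": 2388, "token_str": "mother"},
        ]
    }

# the prediction with the highest score is the model's best fill for [MASK]
best = max(response["results"], key=lambda r: r["score"])
print(best["token_str"])
```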
webhook
The webhook to call when inference is done. By default, you will get the output in the response of your inference request.
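For asynchronous use, the webhook can be supplied in the request body. A sketch of such a call, assuming the field is named webhook and the callback URL (https://example.com/my-callback here) is a placeholder; check the DeepInfra docs for the exact parameter shape:

```shell
# request body with a hypothetical webhook field; the callback URL is a placeholder
BODY='{"input": "Where is my [MASK]?", "webhook": "https://example.com/my-callback"}'

# only send the request when a token is configured
if [ -n "$DEEPINFRA_TOKEN" ]; then
  curl -X POST \
    -d "$BODY" \
    -H "Authorization: bearer $DEEPINFRA_TOKEN" \
    -H 'Content-Type: application/json' \
    'https://api.deepinfra.com/v1/inference/emilyalsentzer/Bio_ClinicalBERT'
fi
```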