
google/flan-t5-xl

Fine-tuned T5 model on a collection of datasets phrased as instructions

Public
$0.0005 / sec

HTTP/cURL API

You can use cURL or any other HTTP client to run inferences:

curl -X POST \
    -d '{"input": "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"}'  \
    -H "Authorization: bearer $(deepctl auth token)"  \
    -H 'Content-Type: application/json'  \
    'https://api.deepinfra.com/v1/inference/google/flan-t5-xl'

which will give you back something similar to:

{
  "results": [
    {
      "generated_text": "Haiku is a Japanese poem that is around 108 characters long. A tweet is ..."
    }
  ],
  "request_id": null,
  "inference_status": {
    "status": "unknown",
    "runtime_ms": 0,
    "cost": 0.0,
    "tokens_generated": 0,
    "tokens_input": 0
  }
}
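The same request can be made from any HTTP client. Below is a minimal sketch using Python's standard library; reading the token from a `DEEPINFRA_TOKEN` environment variable is an assumption here (the cURL example above fetches it with `deepctl auth token` instead), as is the helper name `build_request`:

```python
# Hypothetical sketch: calling the DeepInfra inference endpoint with Python's
# standard library instead of cURL. DEEPINFRA_TOKEN is an assumption --
# substitute however you store your API token.
import json
import os
import urllib.request

API_URL = "https://api.deepinfra.com/v1/inference/google/flan-t5-xl"


def build_request(prompt: str, token: str) -> urllib.request.Request:
    """Build a POST request with the same body and headers as the cURL call."""
    body = json.dumps({"input": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Authorization": f"bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Only send the request when a token is actually available.
if os.environ.get("DEEPINFRA_TOKEN"):
    req = build_request(
        "Answer the following yes/no question by reasoning step-by-step. "
        "Can you write a whole Haiku in a single tweet?",
        os.environ["DEEPINFRA_TOKEN"],
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    print(result["results"][0]["generated_text"])
```

The response is parsed as JSON, so the generated text is available at `results[0].generated_text`, matching the sample response above.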

Input fields

input (string)

Text to generate from.


max_length (integer)

Maximum length of the generated text.

Default value: 200

Range: 1 ≤ max_length ≤ 2048


webhook (file)

The webhook to call when inference is done. By default you will get the output in the response of your inference request.
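Optional fields are passed alongside `input` in the same JSON body. A small sketch of building such a body, where the range check simply mirrors the documented 1–2048 constraint on `max_length` (the helper name `make_payload` is hypothetical):

```python
# Sketch: build a request body that overrides max_length. The range check
# mirrors the documented constraint (1 <= max_length <= 2048).
import json


def make_payload(prompt: str, max_length: int = 200) -> str:
    if not 1 <= max_length <= 2048:
        raise ValueError("max_length must be between 1 and 2048")
    return json.dumps({"input": prompt, "max_length": max_length})
```

The resulting string can be sent as the `-d` argument of the cURL call shown earlier.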
