A transformer model trained on the Pile dataset for autoregressive language modeling. With 125 million parameters, the model generates high-quality text from a prompt. However, its outputs may reflect limitations and biases, including profanity and offensive content, so users should exercise caution when deploying the model for real-world applications.
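For reference, a minimal request sketch using the parameters documented below. The endpoint URL, the Bearer-token header, the input field name, and the response shape are all assumptions for illustration, not the service's confirmed API.

```python
import requests

# Hypothetical endpoint and token: placeholders, not the service's confirmed API.
API_URL = "https://api.example.com/v1/inference/my-125m-model"
API_TOKEN = "YOUR_API_TOKEN"

payload = {
    "input": "Once upon a time",
    "max_new_tokens": 128,      # 1 <= value <= 100000, default 2048
    "temperature": 0.7,         # 0 = deterministic, default 0.7
    "top_p": 0.9,               # nucleus sampling threshold, default 0.9
    "top_k": 0,                 # 0 disables top-k filtering (default)
    "repetition_penalty": 1.2,  # >1 discourages repetition, default 1.2
    "num_responses": 1,         # up to 2; incompatible with streaming
}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # response shape is an assumption; inspect the actual payload
```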
max_new_tokens
integer. Maximum number of new tokens to generate.
Default value: 2048
Range: 1 ≤ max_new_tokens ≤ 100000
temperature
number. Temperature to use for sampling. A value of 0 makes the output deterministic; values greater than 1 encourage more diversity.
Default value: 0.7
Range: 0 ≤ temperature ≤ 100
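To make the temperature behavior concrete, here is a minimal sampling sketch in the common formulation (logits divided by temperature before softmax, with 0 treated as greedy argmax); the model's server-side implementation may differ in detail.

```python
import numpy as np

def sample_with_temperature(logits: np.ndarray, temperature: float, rng=None) -> int:
    """Pick a token id from raw logits, scaled by temperature.

    temperature == 0 is treated as greedy (deterministic) decoding.
    """
    rng = rng or np.random.default_rng()
    if temperature == 0:
        return int(np.argmax(logits))
    scaled = logits / temperature          # t < 1 sharpens, t > 1 flattens
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))
```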
top_p
number. Sample from the smallest set of highest-probability tokens whose cumulative probability exceeds p. Lower values focus on the most probable tokens; higher values admit more low-probability tokens.
Default value: 0.9
Range: 0 < top_p ≤ 1
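A sketch of nucleus (top-p) filtering consistent with the description above; the exact cutoff and tie-breaking rules used by the service are assumptions.

```python
import numpy as np

def top_p_filter(probs: np.ndarray, p: float) -> np.ndarray:
    """Keep the smallest set of highest-probability tokens whose cumulative
    probability reaches p; zero out the rest and renormalize."""
    order = np.argsort(probs)[::-1]                   # token ids, most probable first
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, p)) + 1  # first index where sum >= p
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()
```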
top_k
integer. Sample only from the k most probable tokens. A value of 0 disables top-k filtering.
Default value: 0
Range: 0 ≤ top_k < 100000
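A corresponding top-k sketch; again, this illustrates the standard technique rather than the service's exact implementation.

```python
import numpy as np

def top_k_filter(probs: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k most probable tokens; k == 0 disables the filter."""
    if k == 0 or k >= len(probs):
        return probs
    keep = np.argpartition(probs, -k)[-k:]  # indices of the k largest entries
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()
```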
repetition_penalty
number. Repetition penalty. A value of 1 means no penalty; values greater than 1 discourage repetition, and values smaller than 1 encourage it.
Default value: 1.2
Range: 0.01 ≤ repetition_penalty ≤ 5
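One widely used formulation of the repetition penalty (popularized by the CTRL paper and adopted in several open-source decoders) divides positive logits of already-generated tokens by the penalty and multiplies negative ones; whether this service uses exactly this rule is an assumption.

```python
import numpy as np

def apply_repetition_penalty(logits: np.ndarray, generated_ids: list[int],
                             penalty: float) -> np.ndarray:
    """Penalize tokens that already appear in the output.

    For penalty > 1, positive logits shrink and negative logits grow more
    negative, making repeats less likely; penalty < 1 does the opposite.
    """
    out = logits.copy()
    for token_id in set(generated_ids):
        if out[token_id] > 0:
            out[token_id] /= penalty
        else:
            out[token_id] *= penalty
    return out
```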
num_responses
integer. Number of output sequences to return. Incompatible with streaming.
Default value: 1
Range: 1 ≤ num_responses ≤ 2
webhook
file. The webhook to call when inference is done. By default, the output is returned in the response to your inference request.
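Using the webhook option requires an HTTP endpoint that can receive the callback. A minimal standard-library receiver sketch follows; the callback's payload shape is an assumption, so inspect what actually arrives before depending on specific fields.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and log whatever the service sends; the payload shape is
        # an assumption, so inspect it before relying on specific fields.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            print(json.loads(body))
        except json.JSONDecodeError:
            print(body)
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```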
stream
boolean. Whether to stream tokens. Defaults to false. Currently supported only for Llama 2 text generation models; token-by-token updates are sent over SSE (server-sent events).
Default value: false
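A sketch of consuming the SSE stream from Python; the endpoint URL, request fields, and the [DONE] end-of-stream sentinel are assumptions for illustration.

```python
import json
import requests

# Hypothetical endpoint: the URL, payload, and event format are assumptions.
API_URL = "https://api.example.com/v1/inference/llama-2-7b-chat"

with requests.post(
    API_URL,
    json={"input": "Tell me a story", "stream": True},
    headers={"Authorization": "Bearer YOUR_API_TOKEN"},
    stream=True,  # keep the HTTP connection open for SSE
    timeout=300,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line or not line.startswith(b"data:"):
            continue  # skip SSE comments and keep-alives
        chunk = line[len(b"data:"):].strip()
        if chunk == b"[DONE]":  # common end-of-stream sentinel (assumed)
            break
        print(json.loads(chunk))  # token-by-token updates
```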