A 15.5B parameter model trained on 80+ programming languages from The Stack (v1.2) dataset, using a GPT-2 architecture with multi-query attention and Fill-in-the-Middle objective. The model is capable of generating code snippets provided some context, but the generated code is not guaranteed to work as intended and may contain bugs or exploits. The model is licensed under the BigCode OpenRAIL-M v1 license agreement.
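As a minimal sketch of prompting such a model with the Fill-in-the-Middle objective through the Hugging Face transformers library: the checkpoint id bigcode/starcoder and the FIM special tokens below are assumptions based on the public BigCode release and may differ from this deployment.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

checkpoint = "bigcode/starcoder"  # assumed Hub id, not stated in this document
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")  # requires `accelerate`

# Fill-in-the-Middle: the model generates the span between prefix and suffix.
prompt = (
    "<fim_prefix>def remove_non_ascii(s: str) -> str:\n"
    '    """Remove non-ASCII characters from a string."""\n'
    "    <fim_suffix>\n"
    "    return result<fim_middle>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0]))
```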
max_new_tokens
integer: Maximum length (in tokens) of the newly generated text
Default value: 2048
Range: 1 ≤ max_new_tokens ≤ 100000
temperature
number: Temperature to use for sampling. 0 means the output is deterministic; values greater than 1 encourage more diversity
Default value: 0.7
Range: 0 ≤ temperature ≤ 100
top_p
number: Sample from the smallest set of tokens whose cumulative probability exceeds p. Lower values focus on the most probable tokens; higher values sample more low-probability tokens
Default value: 0.9
Range: 0 < top_p ≤ 1
top_k
integer: Sample only from the k most probable tokens. 0 disables top-k filtering
Default value: 0
Range: 0 ≤ top_k < 100000
repetition_penalty
number: Repetition penalty. A value of 1 means no penalty; values greater than 1 discourage repetition, values smaller than 1 encourage it (how these sampling parameters combine is sketched below)
Default value: 1.2
Range: 0.01 ≤ repetition_penalty ≤ 5
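The four parameters above (temperature, top_p, top_k, repetition_penalty) are standard sampling controls. The following is an illustrative sketch of how they are commonly applied to next-token logits; it is not this service's internal implementation, and the function sample_next_token is purely hypothetical.

```python
import numpy as np

def sample_next_token(logits, generated_ids, temperature=0.7, top_p=0.9,
                      top_k=0, repetition_penalty=1.2, rng=np.random.default_rng()):
    logits = logits.astype(np.float64).copy()

    # repetition_penalty: down-weight tokens that already appear in the output.
    for t in set(generated_ids):
        logits[t] = logits[t] / repetition_penalty if logits[t] > 0 else logits[t] * repetition_penalty

    # temperature: 0 means greedy (deterministic) decoding.
    if temperature == 0:
        return int(np.argmax(logits))
    logits = logits / temperature

    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # top_k: keep only the k most probable tokens (0 disables the filter).
    if top_k > 0:
        k = min(top_k, len(probs))
        kth = np.sort(probs)[-k]
        probs[probs < kth] = 0.0
        probs /= probs.sum()

    # top_p (nucleus): keep the smallest set of tokens whose cumulative
    # probability exceeds p, then renormalize and sample.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, top_p) + 1
    probs[order[cutoff:]] = 0.0
    probs /= probs.sum()

    return int(rng.choice(len(probs), p=probs))
```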
num_responses
integer: Number of output sequences to return. Incompatible with streaming
Default value: 1
Range: 1 ≤ num_responses ≤ 2
webhook
file: The webhook to call when inference is done. By default, the output is returned in the response of your inference request
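A minimal sketch of a receiver for such a webhook, assuming the service sends a JSON POST body; the route, port, and the output field name are assumptions, since the actual payload format is not documented here.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/inference-done", methods=["POST"])  # hypothetical route
def inference_done():
    payload = request.get_json(silent=True) or {}
    # "output" is an assumed field name; inspect the real payload before relying on it.
    print("generated text:", payload.get("output"))
    return "", 204

if __name__ == "__main__":
    app.run(port=8000)
```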
stream
boolean: Whether to stream tokens. Defaults to false and is currently only supported for Llama 2 text generation models; when enabled, token-by-token updates are sent over SSE (Server-Sent Events)
Default value: false
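A sketch of consuming such a token-by-token SSE stream with Python's requests library; the endpoint URL, authentication header, request body, and event payload fields are assumptions for illustration only.

```python
import json
import requests

API_URL = "https://example.com/v1/generate"   # hypothetical endpoint
headers = {"Authorization": "Bearer YOUR_TOKEN"}  # assumed auth scheme

with requests.post(API_URL, headers=headers, stream=True,
                   json={"prompt": "def quicksort(arr):", "stream": True}) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        # SSE data lines are prefixed with "data: "
        if line and line.startswith("data: "):
            event = json.loads(line[len("data: "):])
            print(event.get("token", ""), end="", flush=True)
```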