openai/whisper-timestamped-medium.en

Whisper is a set of multilingual, robust speech recognition models trained by OpenAI that achieve state-of-the-art results in many languages. Whisper models were trained to predict approximate timestamps on speech segments (most of the time to within one second), but out of the box they do not predict word-level timestamps. This variant adds an implementation that predicts word-level timestamps and provides a more accurate estimation of speech segments when transcribing with Whisper models.

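This variant appears to be based on the open-source whisper-timestamped Python package, which wraps the standard openai-whisper API. The snippet below is a minimal local sketch under that assumption (pip install whisper-timestamped); the audio file name is illustrative.

```python
# Minimal local sketch, assuming the open-source whisper-timestamped package
# (pip install whisper-timestamped); the file name is illustrative.
import json

import whisper_timestamped as whisper

audio = whisper.load_audio("speech.wav")
model = whisper.load_model("medium.en")

# Returns a Whisper-style result dict in which every segment additionally
# carries per-word timestamps.
result = whisper.transcribe(model, audio)

print(json.dumps(result, indent=2, ensure_ascii=False))
```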

Public
$0.0005 / sec

Input

Please upload an audio file

task to perform (transcribe or translate)

language that the audio is in; uses detected language if None. (Default: empty)

temperature to use for sampling (Default: 0)

patience value to use in beam decoding (Default: 1)

token ids to suppress during sampling. (Default: -1)

optional text to provide as a prompt for the first window. (Default: empty)

whether to provide the previous output of the model as a prompt for the next window

temperature increment to use when falling back because decoding fails to meet either of the thresholds below (Default: 0.2)

gzip compression ratio threshold (Default: 2.4)

average log probability threshold (Default: -1)

threshold on the probability of the <|nospeech|> token (Default: 0.6)
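
As a rough guide, here is a sketch of how the inputs above map onto openai-whisper-style decoding options when calling the whisper-timestamped package locally. The keyword names (temperature, patience, suppress_tokens, initial_prompt, condition_on_previous_text, compression_ratio_threshold, logprob_threshold, no_speech_threshold) come from the underlying Whisper API and are assumptions about how the hosted endpoint wires them up; beam_size is added only because Whisper applies patience during beam-search decoding.

```python
# Sketch only: keyword names follow the openai-whisper / whisper-timestamped
# Python API and are assumptions about how this page's inputs are wired up.
import numpy as np
import whisper_timestamped as whisper

model = whisper.load_model("medium.en")
audio = whisper.load_audio("speech.wav")  # "Please upload an audio file"

# The fallback increment is not a direct transcribe() argument; as in the
# Whisper CLI, it expands the base temperature into a tuple of fallbacks.
temperature, increment = 0.0, 0.2
temperatures = tuple(np.arange(temperature, 1.0 + 1e-6, increment))

result = whisper.transcribe(
    model,
    audio,
    task="transcribe",                 # task to perform
    language=None,                     # use the detected language if None
    temperature=temperatures,          # sampling temperature(s)
    beam_size=5,                       # assumed; patience applies to beam search
    patience=1.0,                      # beam decoding patience
    suppress_tokens="-1",              # token ids to suppress during sampling
    initial_prompt=None,               # prompt for the first window
    condition_on_previous_text=True,   # previous output as prompt for next window
    compression_ratio_threshold=2.4,   # gzip compression ratio threshold
    logprob_threshold=-1.0,            # average log probability threshold
    no_speech_threshold=0.6,           # <|nospeech|> probability threshold
)
```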

Output
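
The output follows the usual Whisper result structure, with each segment additionally carrying a list of per-word timings. Below is a small sketch of reading it, assuming the field names documented by the whisper-timestamped project ("segments", "words", "text", "start", "end", "confidence"); the exact schema returned by this hosted variant may differ.

```python
# Sketch of reading word-level timings; field names follow the
# whisper-timestamped project and are assumptions about this variant's schema.
import whisper_timestamped as whisper

model = whisper.load_model("medium.en")
result = whisper.transcribe(model, whisper.load_audio("speech.wav"))

for segment in result["segments"]:
    for word in segment["words"]:
        print(f'{word["start"]:7.2f}s  {word["end"]:7.2f}s  '
              f'{word["text"]}  (confidence {word["confidence"]:.2f})')
```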