Running Whisper using DeepInfra

Running speech recognition

Whisper is a speech-to-text model from OpenAI. Given an audio file with voice data, it produces a transcription with per-sentence timestamps. There are different model sizes (small, base, large, etc.) and English-only variants; see more at deepinfra.com. By default, Whisper segments its timestamps by sentence. We also host whisper-timestamped, which can provide timestamps for individual words in the audio. You can use it with either our REST API or our deepctl command line tool. Here is how to use it with the command line tool:

deepctl infer -m 'openai/whisper-timestamped-medium.en' \
              -i audio=@/home/user/all-in-01.mp3

You can pass audio formats like mp3 and wav.
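The same inference can also be run over the REST API. The sketch below is a minimal example, assuming the inference endpoint follows the pattern https://api.deepinfra.com/v1/inference/{model} and that authentication uses a bearer token; check the model's documentation page for your token and the exact endpoint:

# Minimal sketch; the endpoint pattern and bearer-token auth are
# assumptions to verify against the model's documentation page.
curl -H "Authorization: bearer $DEEPINFRA_TOKEN" \
     -F audio=@/home/user/all-in-01.mp3 \
     'https://api.deepinfra.com/v1/inference/openai/whisper-timestamped-medium.en'

The response is JSON; assuming the hosted model follows the open-source whisper-timestamped output format, it contains the recognized text along with segment- and word-level timestamps.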

To see additional parameters and how to call this model, check out the documentation page or use the command line tool:

deepctl model info -m openai/whisper-base
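Once you know a parameter's name from the info output, you can pass it as an additional input. The line below is a sketch assuming the model accepts a language parameter; substitute whatever inputs the info command actually lists:

# Assumes a 'language' input exists for this model;
# confirm with 'deepctl model info' before relying on it.
deepctl infer -m 'openai/whisper-base' \
              -i audio=@/home/user/all-in-01.mp3 \
              -i language=en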