
How to use OpenAI Whisper with per-sentence and per-word timestamp segmentation using DeepInfra

Published on 2023.04.05 by Yessen Kanapin


Getting started

To use DeepInfra's API, you'll need an API key.

  1. Sign up or log in to your DeepInfra account
  2. Navigate to the Dashboard / API Keys section
  3. Create a new API key if you don't have one already

You'll use this API key in your requests to authenticate with our services.
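One common way to keep the key out of your code and shell history is to read it from an environment variable. The variable name `DEEPINFRA_API_KEY` below is just a convention chosen for this sketch, not something the API requires:

```python
# Minimal sketch: read the DeepInfra API key from an environment variable
# (the name DEEPINFRA_API_KEY is this example's convention) and build the
# Authorization header expected by the API.
import os

def auth_header():
    """Return the Bearer auth header for DeepInfra requests."""
    api_key = os.environ["DEEPINFRA_API_KEY"]
    return {"Authorization": f"Bearer {api_key}"}
```

You would then pass this header with every request, exactly as the `-H "Authorization: Bearer ..."` flag does in the curl example below.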

Running speech recognition

Whisper is a speech-to-text model from OpenAI. Given an audio file containing speech, it produces a transcription with per-sentence timestamps. The model comes in several sizes (small, base, large, etc.) and English-only variants; see deepinfra.com for the full list. By default, Whisper segments its output by sentence. We also host whisper-timestamped, which additionally provides a timestamp for each word in the audio. You can use it with our REST API. Here's how:

curl -X POST \
  -F "audio=@/home/user/all-in-01.mp3" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  'https://api.deepinfra.com/v1/inference/openai/whisper-timestamped-medium.en'
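The same call can be made from Python. The sketch below assumes the third-party `requests` library and the whisper-timestamped output shape (a `segments` list whose entries carry a `words` list with `text`, `start`, and `end` fields); check the response against the model's documentation page before relying on these field names:

```python
# Sketch: call DeepInfra's whisper-timestamped endpoint and print per-word
# timestamps. Response field names (segments -> words -> text/start/end)
# are assumed from the whisper-timestamped output format.
import os

API_URL = "https://api.deepinfra.com/v1/inference/openai/whisper-timestamped-medium.en"

def extract_words(result):
    """Flatten per-word timestamps out of a parsed JSON response."""
    words = []
    for segment in result.get("segments", []):
        for word in segment.get("words", []):
            words.append((word["text"], word["start"], word["end"]))
    return words

def transcribe(path, api_key):
    """Upload an audio file and return the parsed JSON response."""
    import requests  # third-party; imported lazily so the helpers above work without it

    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"audio": f},
        )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__" and "DEEPINFRA_API_KEY" in os.environ:
    result = transcribe("/home/user/all-in-01.mp3", os.environ["DEEPINFRA_API_KEY"])
    for text, start, end in extract_words(result):
        print(f"{start:6.2f}-{end:6.2f}  {text}")
```

Splitting the pure `extract_words` helper from the network call keeps the timestamp handling easy to test and reuse with any response you have already saved to disk.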

To see additional parameters and how to call this model, check out the documentation page for complete API reference and examples.

If you have any questions, just reach out to us on our Discord server.