Transcription

Scripts for transcribing audio files using HF Buckets and Jobs.

Quick Start

```bash
# 1. Download audio from Internet Archive straight into a bucket
hf jobs uv run \
    -v bucket/user/audio-files:/output \
    download-ia.py SUSPENSE /output

# 2. Transcribe: audio bucket in, transcript bucket out
hf jobs uv run --flavor l4x1 -s HF_TOKEN \
    -e UV_TORCH_BACKEND=cu124 \
    -v bucket/user/audio-files:/input:ro \
    -v bucket/user/transcripts:/output \
    cohere-transcribe.py /input /output --language en --compile
```

No separate download or upload step: buckets are mounted directly into the job as volumes via hf-mount.
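Because the buckets appear inside the job as ordinary directories, the transcription script only needs to walk one directory and write into the other. A minimal sketch of that pattern (the `transcribe_file` function here is a hypothetical stand-in for the real model call in `cohere-transcribe.py`):

```python
from pathlib import Path


def transcribe_file(path: Path) -> str:
    # Hypothetical stand-in for the actual model inference call.
    return f"transcript of {path.name}"


def run(input_dir: str, output_dir: str) -> int:
    """Transcribe every .mp3 under input_dir, writing .txt files to output_dir."""
    in_dir, out_dir = Path(input_dir), Path(output_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    count = 0
    for audio in sorted(in_dir.glob("*.mp3")):
        # /input and /output are the mounted buckets; plain file I/O is all it takes.
        (out_dir / audio.with_suffix(".txt").name).write_text(transcribe_file(audio))
        count += 1
    return count
```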

Scripts

Transcription

| Script | Model | Backend | Speed (A100) |
|---|---|---|---|
| cohere-transcribe.py | Cohere Transcribe (2B) | transformers | 161x RT |
| cohere-transcribe-vllm.py | Cohere Transcribe (2B) | vLLM nightly | 214x RT |

cohere-transcribe.py (recommended) — uses model.transcribe() with automatic long-form chunking, overlap, and reassembly. Stable dependencies.

cohere-transcribe-vllm.py — experimental vLLM variant. Faster but requires nightly vLLM and has minor duplication at chunk boundaries.
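The chunk-boundary duplication mentioned above is the classic failure mode of overlap-based long-form transcription: adjacent chunks share audio, so their transcripts share text that must be deduplicated on reassembly. A simplified sketch of that stitching step (this is an illustration of the idea, not the actual algorithm inside `model.transcribe()`):

```python
def merge_overlapping(chunks: list[str]) -> str:
    """Stitch per-chunk transcripts by dropping the longest suffix/prefix overlap."""
    merged = chunks[0]
    for nxt in chunks[1:]:
        # Find the longest prefix of the next chunk that the merged text
        # already ends with, then append only the non-overlapping remainder.
        k = min(len(merged), len(nxt))
        while k > 0 and not merged.endswith(nxt[:k]):
            k -= 1
        merged += nxt[k:]
    return merged
```

For example, `merge_overlapping(["the quick brown", "brown fox jumps"])` returns `"the quick brown fox jumps"`; when the overlap detection misfires, the shared words appear twice, which is exactly the artifact seen at chunk boundaries in the vLLM variant.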

Options

| Flag | Default | Description |
|---|---|---|
| --language | required | en, de, fr, it, es, pt, el, nl, pl, ar, vi, zh, ja, ko |
| --compile | off | torch.compile the encoder (one-time warmup, faster afterwards) |
| --batch-size | 16 | Batch size for inference |
| --max-files | all | Limit the number of files processed (for testing) |
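The table above maps directly onto a standard argparse setup. A hypothetical reconstruction of the flag wiring (names and defaults follow the table; the real script may differ in details):

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Sketch of the CLI surface described in the options table.
    p = argparse.ArgumentParser(description="Transcribe audio files")
    p.add_argument("input_dir")
    p.add_argument("output_dir")
    p.add_argument("--language", required=True,
                   choices=["en", "de", "fr", "it", "es", "pt", "el", "nl",
                            "pl", "ar", "vi", "zh", "ja", "ko"])
    p.add_argument("--compile", action="store_true",
                   help="torch.compile the encoder (one-time warmup)")
    p.add_argument("--batch-size", type=int, default=16,
                   help="batch size for inference")
    p.add_argument("--max-files", type=int, default=None,
                   help="limit files processed (default: all)")
    return p
```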

Benchmarks

CBS Suspense (1940s radio drama), 66 episodes, 33 hours of audio:

| GPU | Time | RTFx |
|---|---|---|
| A100-SXM4-80GB | 12.3 min (full set) | 161x realtime |
| L4 | ~64 s per 30-min episode | 28x realtime |
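The RTFx column is simply audio duration divided by wall-clock time, so the A100 figure follows directly from the numbers above:

```python
def rtfx(audio_hours: float, wall_minutes: float) -> float:
    """Real-time factor: minutes of audio transcribed per minute of wall-clock time."""
    return (audio_hours * 60) / wall_minutes


# 33 hours of audio in 12.3 minutes on the A100:
# rtfx(33, 12.3) ≈ 161x realtime
```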

Data

| Script | Description |
|---|---|
| download-ia.py | Download audio from the Internet Archive into a mounted bucket |

Notes

  - Gated model: accept the terms on the model page before use.
  - Tokenizer workaround: cohere-transcribe.py applies a one-line patch for a tokenizer compatibility issue; it will be removed once the upstream fix lands (model discussion).