[–] [email protected] 8 points 1 day ago (1 children)

I've been working on something similar-ish on and off.

There are three good solutions built on open-source models that I came across:

  • KenLM/STT
  • DeepSpeech
  • Vosk

Vosk has the best models, but they are large. You can't use the gigaspeech model, for example (which is useful even with non-US English), to live-generate subs on many devices, because of its memory requirements. So my guess is that whatever VLC provides will probably suck to some extent, because it has to stay fast and lightweight enough.

What also sets vosk-api apart is that you can ask it to return multiple alternatives per utterance (10 is a common choice).
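For reference, here's a minimal sketch of requesting n-best alternatives with the vosk Python bindings (the model directory and audio file are placeholders, not anything specific I'm using):

```python
# Minimal sketch: ask vosk-api for n-best alternatives instead of a single
# transcript. "model-dir" and "audio.wav" are placeholders.
import json
import wave

from vosk import Model, KaldiRecognizer

wf = wave.open("audio.wav", "rb")            # 16 kHz mono PCM works best
model = Model("model-dir")                   # an unpacked vosk model directory
rec = KaldiRecognizer(model, wf.getframerate())
rec.SetMaxAlternatives(10)                   # request up to 10 hypotheses

while True:
    data = wf.readframes(4000)
    if len(data) == 0:
        break
    if rec.AcceptWaveform(data):
        res = json.loads(rec.Result())
        # each alternative has a "confidence" and a "text" field
        for alt in res.get("alternatives", []):
            print(alt["confidence"], alt["text"])

print(json.loads(rec.FinalResult()))
```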

One core idea in my tool is to combine all the alternatives into one text. So suppose the model predicts the text to be either "... still he ..." or "... silly ...". My tool can give you "... (still he|silly) ..." instead of 50/50 chancing it.
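Roughly the idea, as a toy sketch (not my actual code): align two alternatives word by word and keep both readings wherever they disagree.

```python
# Toy sketch of merging two alternative transcripts into one hedged text.
# Not the real tool -- just difflib aligning word lists and emitting
# "(reading A|reading B)" wherever the hypotheses disagree.
from difflib import SequenceMatcher

def merge_alternatives(alt_a: str, alt_b: str) -> str:
    a, b = alt_a.split(), alt_b.split()
    out = []
    for op, i1, i2, j1, j2 in SequenceMatcher(a=a, b=b).get_opcodes():
        if op == "equal":
            out.extend(a[i1:i2])
        else:
            # The hypotheses disagree here: keep both readings.
            out.append("(" + " ".join(a[i1:i2]) + "|" + " ".join(b[j1:j2]) + ")")
    return " ".join(out)

print(merge_alternatives("but still he went on", "but silly went on"))
# -> but (still he|silly) went on
```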

[–] [email protected] 6 points 1 day ago

I love the approach you're taking! So many times, even in shows with official subs, they're wrong because of homonyms, and I'd really appreciate a hedged transcript.