# whisper-large-v3-turbo-ko

OpenAI Whisper large-v3-turbo fine-tuned for Korean speech recognition.

## Training details

| Item | Value |
|---|---|
| Base model | openai/whisper-large-v3-turbo (809M params) |
| Method | LoRA (rank 32, 1.6% of parameters trainable) |
| Data | google/fleurs `ko_kr` + silence/noise data |
| GPU | NVIDIA H200 NVL (141 GB VRAM) |
| Loss | 0.947 → 0.200 (79% reduction) |
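The loss figure above is a simple percentage reduction, and the adapter can in principle be applied on top of the base checkpoint with `peft`. Below is a minimal sketch: the arithmetic helper reproduces the 79% number, and `load_merged_model` is a hypothetical loading path that assumes this repo hosts a standard PEFT LoRA adapter for the Transformers Whisper implementation (not confirmed by the card).

```python
def pct_reduction(start: float, end: float) -> float:
    """Percentage reduction from start to end (e.g. training loss)."""
    return (start - end) / start * 100.0

# 0.947 -> 0.200 gives the ~79% reduction quoted in the table.

def load_merged_model():
    """Hypothetical sketch: load the LoRA adapter onto the base model and
    merge the weights. Assumes a standard PEFT adapter layout in the repo."""
    from transformers import WhisperForConditionalGeneration
    from peft import PeftModel

    base = WhisperForConditionalGeneration.from_pretrained(
        "openai/whisper-large-v3-turbo"
    )
    model = PeftModel.from_pretrained(base, "tellang/whisper-large-v3-turbo-ko")
    # Fold the LoRA deltas into the base weights for plain inference.
    return model.merge_and_unload()
```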

## Usage

### faster-whisper (CTranslate2)
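A minimal transcription sketch with faster-whisper. It assumes a CTranslate2 conversion of this model is available under the same repo id `tellang/whisper-large-v3-turbo-ko` (an assumption; the card does not name the converted repo), and that a CUDA GPU is present.

```python
MODEL_ID = "tellang/whisper-large-v3-turbo-ko"  # assumed CT2-converted repo

def join_segments(segments) -> str:
    """Concatenate faster-whisper segment texts into one transcript."""
    return " ".join(seg.text.strip() for seg in segments)

def transcribe(audio_path: str) -> str:
    """Transcribe a Korean audio file with faster-whisper."""
    # Imported lazily so join_segments works without faster-whisper installed.
    from faster_whisper import WhisperModel

    model = WhisperModel(MODEL_ID, device="cuda", compute_type="float16")
    segments, _info = model.transcribe(audio_path, language="ko", beam_size=5)
    return join_segments(segments)

# Usage (requires GPU and a model download):
#   text = transcribe("sample.wav")
```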

### sonote CLI

๋ผ์ด์„ ์Šค

MIT
