Whisper Fine-Tuned with LoRA for Russian

This model is a fine-tuned version of openai/whisper-small with LoRA adapters for Russian speech transcription. It was fine-tuned on the Farfield part of the SberGolos dataset.
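For context, LoRA keeps the pretrained weight frozen and trains only a low-rank update scaled by alpha / r. A minimal sketch of the idea (illustrative dimensions, not the exact training code):

```python
import torch

# Frozen weight W plus a low-rank update B @ A scaled by alpha / r:
# only A and B (r * (d_in + d_out) values) are trained instead of all of W.
d_in, d_out, r, alpha = 768, 768, 16, 32

W = torch.randn(d_out, d_in)          # frozen pretrained weight
A = torch.randn(r, d_in) * 0.01       # trainable down-projection
B = torch.zeros(d_out, r)             # trainable up-projection, zero-initialized

x = torch.randn(4, d_in)              # a batch of inputs
base = x @ W.T
lora = (alpha / r) * (x @ A.T @ B.T)  # low-rank correction
y = base + lora

print(y.shape)  # torch.Size([4, 768])
```

Because B starts at zero, the adapted layer initially behaves exactly like the frozen one, and training only moves it away gradually.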

Usage

from huggingface_hub import hf_hub_download
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

repo_id = "UDZH/whisper-small-lora-finetuned-ru"

# Load the Whisper processor and base model
processor = WhisperProcessor.from_pretrained("openai/whisper-small")
whisper_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small").to("cuda")

# Download and load the LoRA weights from the Hub
lora_path = hf_hub_download(repo_id=repo_id, filename="whisper_lora_weights.pth")
lora_weights = torch.load(lora_path, map_location="cuda")

# Minimal LoRA wrapper. The structure and parameter names must match the
# ones used during training so that load_state_dict finds the checkpoint keys.
class LoRALayer(torch.nn.Module):
    def __init__(self, base_layer, r=16, alpha=32, dropout=0.4):
        super().__init__()
        self.base_layer = base_layer
        self.scaling = alpha / r
        self.dropout = torch.nn.Dropout(dropout)
        self.lora_A = torch.nn.Parameter(torch.zeros(r, base_layer.in_features))
        self.lora_B = torch.nn.Parameter(torch.zeros(base_layer.out_features, r))
        torch.nn.init.kaiming_uniform_(self.lora_A, a=5 ** 0.5)

    def forward(self, x):
        lora_out = self.dropout(x) @ self.lora_A.T @ self.lora_B.T
        return self.base_layer(x) + self.scaling * lora_out

# Collect the target Linear layers first, then replace them with LoRA
# wrappers (mutating the model while iterating named_modules is fragile)
targets = []
for name, module in whisper_model.named_modules():
    if isinstance(module, torch.nn.Linear) and any(
        k in name for k in ["q_proj", "v_proj", "k_proj", "out_proj", "fc1", "fc2"]
    ):
        targets.append(name)

for name in targets:
    parent = whisper_model.get_submodule(".".join(name.split(".")[:-1]))
    module = whisper_model.get_submodule(name)
    lora_layer = LoRALayer(module, r=16, alpha=32, dropout=0.4).to("cuda")
    setattr(parent, name.split(".")[-1], lora_layer)

missing_keys, unexpected_keys = whisper_model.load_state_dict(lora_weights, strict=False)

print(f"LoRA weights loaded. missing={len(missing_keys)}, unexpected={len(unexpected_keys)}")
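With the adapters applied, inference works like the base model. A minimal sketch (the base model is loaded here so the snippet runs on its own; in practice, use the LoRA-patched model from the loading code above, and replace the silent placeholder with real 16 kHz mono audio):

```python
import numpy as np
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Placeholder input: 2 seconds of silence at 16 kHz. Replace with real audio
# loaded e.g. via librosa or torchaudio, resampled to 16 kHz mono.
audio = np.zeros(16000 * 2, dtype=np.float32)

processor = WhisperProcessor.from_pretrained("openai/whisper-small")
# Base model loaded for a self-contained example; substitute the
# LoRA-patched whisper_model from above for actual Russian transcription.
whisper_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
whisper_model.eval()

inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    generated = whisper_model.generate(
        inputs.input_features,
        language="russian",
        task="transcribe",
    )

text = processor.batch_decode(generated, skip_special_tokens=True)[0]
print(text)
```

Pinning `language="russian"` and `task="transcribe"` keeps Whisper from auto-detecting the language or translating to English.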
