# TeleLogs RCA - Qwen3-32B LoRA Adapter

A LoRA adapter for Qwen/Qwen3-32B, fine-tuned for the AI Telco Troubleshooting Challenge hosted on Zindi.

## Model Details

| Attribute    | Value          |
|--------------|----------------|
| Base Model   | Qwen/Qwen3-32B |
| Adapter Type | LoRA           |
| Rank         | 128            |

## Usage

### With vLLM

```bash
docker run -d --gpus all \
    -v /path/to/Qwen3-32B:/model \
    -v /path/to/this/adapter:/adapter \
    -p 8001:8000 vllm/vllm-openai:latest \
    --model /model --enable-lora --lora-modules telelogs=/adapter \
    --max-model-len 16000 --max-lora-rank 128
```
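Once the container is up, the adapter is served under the model name `telelogs` through vLLM's OpenAI-compatible API on host port 8001. A minimal sketch of building a chat-completions request for it (the question text and sampling parameters are placeholder assumptions, not values from the challenge):

```python
import json
import urllib.request

VLLM_URL = "http://localhost:8001/v1/chat/completions"  # host port mapped in the docker run above

def build_rca_request(question: str) -> dict:
    """Build an OpenAI-style chat-completions payload targeting the LoRA adapter."""
    return {
        "model": "telelogs",  # name registered via --lora-modules telelogs=/adapter
        "messages": [{"role": "user", "content": question}],
        "temperature": 0.0,   # deterministic decoding for multiple-choice answers
        "max_tokens": 512,
    }

payload = build_rca_request(
    "Symptom: sustained packet loss on one cell. Most likely root cause? A) ... B) ..."
)

# Uncomment to query a running server:
# req = urllib.request.Request(
#     VLLM_URL,
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(json.load(urllib.request.urlopen(req))["choices"][0]["message"]["content"])
```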

### With PEFT

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-32B")
model = PeftModel.from_pretrained(base_model, "vadery/telelogs-qwen3-32b-lora")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-32B")
```

## Task Description

Root cause analysis for telecom network issues. Given a multiple-choice question describing network symptoms, the model identifies the most likely root cause.

## License

CC BY 4.0
