# FinGPT × Qwen3-8B Sentiment Specialist (Round 1 LoRA)

A LoRA adapter fine-tuned on Qwen3-8B for financial sentiment analysis, trained on the FinGPT sentiment dataset.
Part of a self-taught multi-agent financial AI project. Full training code,
architecture notes, and roadmap are available on GitHub:
🔗 github.com/hadioma/FinGPT-Portfolio
## What This Model Does

Classifies financial news and tweets into one of three sentiment labels:

- `positive`
- `neutral`
- `negative`
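Since the model generates free text rather than a constrained token, downstream code may want to normalize its completion to one of the three labels. A minimal sketch, assuming a fallback-to-`neutral` convention; the helper name and fallback are my own, not part of the repo:

```python
def normalize_sentiment(raw: str) -> str:
    """Map a raw model completion to one of the three labels.

    Falls back to "neutral" when no label is found (a hypothetical
    convention, not something the model card specifies).
    """
    text = raw.strip().lower()
    for label in ("positive", "negative", "neutral"):
        if label in text:
            return label
    return "neutral"

print(normalize_sentiment("Positive."))                  # → positive
print(normalize_sentiment("The sentiment is negative"))  # → negative
```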
## Example
```python
from unsloth import FastLanguageModel
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
from peft import set_peft_model_state_dict

# Load base model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3-8B",
    max_seq_length=512,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model, r=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16, lora_dropout=0, bias="none",
    use_gradient_checkpointing=False,
)

# Load this adapter
state_dict = load_file(hf_hub_download(
    "hadioma/RetrainedQwen3-7-FinGPT", "adapter_model.safetensors"
))
set_peft_model_state_dict(model, state_dict)
FastLanguageModel.for_inference(model)

# Run inference
SYSTEM = ("You are an expert financial analyst. "
          "Reason carefully, cite your logic, and provide structured, professional analysis.")
messages = [
    {"role": "system", "content": SYSTEM},
    {"role": "user", "content":
        "What is the sentiment of this news? Please choose an answer from "
        "{negative/neutral/positive}.\n"
        "Input: Apple reported record quarterly earnings, beating analyst expectations by 15%."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=10, do_sample=False)
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(response)  # → positive
```
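When classifying many headlines, only the `Input:` line of the user message changes, so the prompt can be built with a small helper that reuses the exact training-time system prompt. A sketch; `build_messages` is my own name, not part of the repo:

```python
# Exact system prompt the adapter was trained with (see below).
SYSTEM = ("You are an expert financial analyst. "
          "Reason carefully, cite your logic, and provide structured, professional analysis.")

def build_messages(headline: str) -> list[dict]:
    """Build the chat messages for one headline (hypothetical helper)."""
    user = ("What is the sentiment of this news? Please choose an answer from "
            "{negative/neutral/positive}.\n"
            f"Input: {headline}")
    return [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": user},
    ]

msgs = build_messages("Oil prices slump as demand outlook weakens.")
# Each messages list can then go through tokenizer.apply_chat_template as above.
```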
## Training Details
| Parameter | Value |
|---|---|
| Base model | unsloth/Qwen3-8B (4-bit quantized) |
| LoRA rank | r=8, alpha=16 |
| Target modules | q/k/v/o/gate/up/down proj |
| Dataset | FinGPT/fingpt-sentiment-train (76.8K samples) |
| Max sequence length | 512 |
| Batch size (effective) | 16 (1 device × 16 grad accum) |
| Epochs | 1 (~10,131 steps) |
| Optimizer | paged_adamw_8bit |
| Precision | bf16 |
| Hardware | NVIDIA RTX 5070 Ti Laptop GPU (12GB VRAM) |
| Framework | Unsloth 2026.2 + HuggingFace PEFT 0.18 |
## System Prompt Used During Training

```
You are an expert financial analyst. Reason carefully, cite your logic,
and provide structured, professional analysis.
```

⚠️ You must use this exact system prompt at inference time for best results.
## Intended Use
- Financial news sentiment classification
- Part of a multi-agent financial NLP pipeline
- Component in the FinGPT ecosystem
## Out-of-Scope Use
- General-purpose chat or instruction following (use Round 2 adapter for that)
- Non-financial text sentiment
- Real trading decisions β this is not financial advice
## Multi-Agent Architecture

This adapter is Agent 1 in a planned multi-agent system:
```
Base Qwen3-8B (frozen)
├── adapter_round1     ← this model (sentiment specialist)
├── adapter_round2     (sentiment + Q&A + headline)
└── adapter_forecaster (stock price forecasting, planned)
```
All agents share the same frozen base model. The orchestrator hot-swaps adapters at inference time with no model reload.
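The hot-swap pattern can be sketched without the model itself: the orchestrator keeps one adapter active per request and switches by name. With HF PEFT this maps onto `PeftModel.load_adapter` / `set_adapter`; the routing table and class below are my own illustration, not the project's actual orchestrator:

```python
class AdapterOrchestrator:
    """Routes tasks to named LoRA adapters over one frozen base model.

    `swap` stands in for PEFT's model.set_adapter(name): only the active
    adapter's weights are applied, so the base model is never reloaded.
    """

    ROUTES = {  # hypothetical task → adapter mapping
        "sentiment": "adapter_round1",
        "qa": "adapter_round2",
        "forecast": "adapter_forecaster",
    }

    def __init__(self):
        self.active = None

    def swap(self, task: str) -> str:
        adapter = self.ROUTES[task]
        if adapter != self.active:   # skip redundant swaps
            self.active = adapter    # real code: model.set_adapter(adapter)
        return self.active

orch = AdapterOrchestrator()
print(orch.swap("sentiment"))  # → adapter_round1
print(orch.swap("qa"))         # → adapter_round2
```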
## Citation

If you use this model, please also cite the FinGPT paper:

```bibtex
@article{yang2023fingpt,
  title={FinGPT: Open-Source Financial Large Language Models},
  author={Yang, Hongyang and Liu, Xiao-Yang and Wang, Christina Dan},
  journal={FinLLM Symposium at IJCAI 2023},
  year={2023}
}
```
## Links

- 🔗 GitHub: hadioma/FinGPT-Portfolio
- 🤗 HuggingFace: hadioma
- 📈 FinGPT: AI4Finance-Foundation/FinGPT
Trained locally on a consumer laptop as a self-taught ML portfolio project.
Nothing here constitutes financial advice.