# Gemma-4 Trading Brain
A specialized AI trading assistant fine-tuned from Google's Gemma-4-E4B-it (~8B parameters) on comprehensive trading knowledge including candlestick patterns, technical indicators, chart patterns, risk management, trading psychology, and market structure analysis.
## What This Model Knows
| Domain | Coverage |
|---|---|
| Candlestick Patterns | 30+ patterns: Hammer, Engulfing, Doji, Morning/Evening Star, Harami, Tweezer, Marubozu, Dragonfly, Gravestone, Abandoned Baby, etc. |
| Technical Indicators | RSI, MACD, Bollinger Bands, Moving Averages, Stochastic, Williams %R, ADX, ATR, Volume Profile, OBV, Fibonacci, Ichimoku Cloud, VWAP |
| Chart Patterns | Head & Shoulders, Double/Triple Top-Bottom, Triangles, Flags, Pennants, Wedges, Cup & Handle, Rectangles, Diamonds |
| Risk Management | Position sizing, Kelly Criterion, R-Multiples, drawdown math, correlation risk, trailing stops, pyramiding, max drawdown limits |
| Trading Psychology | FOMO, revenge trading, confirmation bias, loss aversion, discipline, trading journal, flow state, 12+ cognitive biases |
| Market Structure | Support/resistance, trend definition, market phases, breakouts, liquidity zones, order flow, smart money concepts, multi-timeframe analysis |
| Trade Scenarios | Real-world scenarios: earnings gaps, confluence setups, winner management, counter-trend evaluation, liquidity sweeps, portfolio management |
## Quick Start

### 1. Chat with the Trading Brain (Web UI)

```bash
# Install dependencies and run the Gradio app
pip install gradio transformers peft accelerate bitsandbytes torch
python app.py
```

Then open http://localhost:7860 in your browser.
### 2. Use in Python

```python
from inference import TradingBrain

brain = TradingBrain()

# Ask about any trading topic
print(brain.chat("What does a Morning Star pattern signal?"))

# Analyze candlestick data
ohlc = [
    {"open": 100, "high": 105, "low": 98, "close": 102, "volume": 10000},
    {"open": 102, "high": 103, "low": 99, "close": 100, "volume": 8000},
    {"open": 100, "high": 108, "low": 99, "close": 107, "volume": 15000},
]
print(brain.analyze_candles(ohlc, timeframe="1H"))

# Analyze indicators
indicators = {
    "RSI(14)": 68,
    "MACD": 0.45,
    "MACD Signal": 0.30,
    "BB Upper": 110,
    "BB Lower": 95,
    "Price": 107,
}
print(brain.analyze_indicators(indicators))
```
## Training the Model

### Prerequisites

- Google Colab (free T4 GPU) or Kaggle (free T4 GPU)
- A Hugging Face account and access token
- Accept the Gemma license for google/gemma-4-E4B-it

### One-Command Training

```bash
# Install dependencies
pip install transformers trl torch datasets peft accelerate bitsandbytes trackio

# Log in to Hugging Face (get a token from https://huggingface.co/settings/tokens)
huggingface-cli login

# Run training
python train.py
```
The script will:
- Download 6 trading datasets from HuggingFace Hub
- Generate 400+ synthetic trading knowledge conversations
- Fine-tune Gemma-4-E4B-it with QLoRA (4-bit)
- Push the LoRA adapter to your Hub repo automatically
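The exact sample schema used by `train.py` isn't shown here; as a hedged sketch, a synthetic trading Q&A pair might be shaped into the chat-style `messages` format commonly used for SFT like this (the schema and helper name are assumptions, not taken from the script):

```python
# Sketch: shaping one synthetic trading Q&A pair into a chat-style
# "messages" record for SFT. The schema is an assumption; the real
# train.py may use a different format.
def make_sample(question: str, answer: str) -> dict:
    return {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }

sample = make_sample(
    "What does a Hammer candle signal?",
    "A Hammer is a potential bullish reversal after a downtrend; "
    "wait for confirmation before acting.",
)
print(sample["messages"][0]["role"])  # user
```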
### Hardware Requirements
| GPU | VRAM | Batch Size | Speed | Cost |
|---|---|---|---|---|
| T4 (Colab Free) | 16GB | 1 | ~2h/epoch | Free |
| L4 | 24GB | 2 | ~1h/epoch | ~$0.75/hr |
| A100 | 80GB | 4 | ~20min/epoch | ~$3/hr |
### Training Hyperparameters

```text
Base Model:    google/gemma-4-E4B-it (~8B params)
Method:        QLoRA (4-bit quantization)
LoRA Rank:     r=64, alpha=128
Epochs:        3
Batch Size:    1 (effective 8 via gradient accumulation)
Learning Rate: 2e-4
Max Length:    2048 tokens
Optimizer:     AdamW with cosine scheduler
Datasets:      6 trading datasets + synthetic knowledge (~600+ samples)
```
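These hyperparameters would translate into PEFT/Transformers config objects roughly as follows. This is a sketch, not the actual contents of `train.py`; the dropout value and quantization details are assumptions:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization for QLoRA (assumed settings; train.py may differ)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# LoRA rank/alpha from the table above, targeting the attention and
# MLP projections listed in the Architecture section
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,  # assumption; not stated in this card
    task_type="CAUSAL_LM",
)
```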
## Datasets Used

| Dataset | Source | Samples | Content |
|---|---|---|---|
| mrzlab630/trading-candles | HF Hub | ~100K | Candlestick pattern Q&A (open, high, low, close) |
| 0xMaka/trading-candles-subset-qa | HF Hub | ~50K | Candlestick recognition in SQuAD format |
| yymYYM/stock_trading_QA | HF Hub | ~5K | Stock trading Q&A (indicators, order types, strategies) |
| lumalik/Quant-Trading-Instruct | HF Hub | ~200 | Quant trading algorithm templates and explanations |
| ksrepo/trading_in_the_zone_1 | HF Hub | ~300 | Trading psychology from Mark Douglas's classic |
| robustfeng/livermore_strategies | HF Hub | ~200 | Jesse Livermore trend trading strategies |
| Synthetic | Generated | ~400 | Expert-written trading knowledge covering all domains |
## Example Conversations

**Q: What does a Bullish Engulfing pattern mean?**

A Bullish Engulfing pattern consists of a small bearish candle followed by a larger bullish candle that completely "engulfs" the previous candle's body. It signals a strong bullish reversal, especially after a downtrend. Wait for confirmation (the next candle closing higher, a volume increase) before entering long, and set a stop-loss below the pattern low.
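The two-candle definition above can be checked mechanically. A minimal sketch (body-engulfment rule only, ignoring trend context and confirmation):

```python
def is_bullish_engulfing(prev: dict, curr: dict) -> bool:
    """Two-candle check: a bearish body fully engulfed by the next bullish body."""
    prev_bearish = prev["close"] < prev["open"]
    curr_bullish = curr["close"] > curr["open"]
    engulfs = curr["open"] <= prev["close"] and curr["close"] >= prev["open"]
    return prev_bearish and curr_bullish and engulfs

# The last two candles from the Quick Start OHLC example
prev = {"open": 102, "high": 103, "low": 99, "close": 100}
curr = {"open": 100, "high": 108, "low": 99, "close": 107}
print(is_bullish_engulfing(prev, curr))  # True
```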
**Q: RSI is 75, MACD is crossing below its signal line, and price is at the upper Bollinger Band. What should I watch for?**

This is a bearish confluence cluster: three independent indicators align. (1) RSI 75 = overbought; (2) a bearish MACD crossover = momentum shifting down; (3) an upper-BB touch = a statistical extreme. Watch for price rejection at resistance, volume drying up on rallies, and a bearish candlestick pattern forming. Consider taking profits on longs or preparing a short entry on confirmation. Never short just because indicators say overbought; wait for price-action confirmation.
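That confluence logic can be scored as a toy function over the indicator dict from the Quick Start example (the thresholds are illustrative assumptions, not trading rules):

```python
def bearish_confluence(ind: dict) -> int:
    """Count how many illustrative bearish conditions align (0-3)."""
    score = 0
    if ind["RSI(14)"] >= 70:              # overbought
        score += 1
    if ind["MACD"] < ind["MACD Signal"]:  # bearish momentum cross
        score += 1
    if ind["Price"] >= ind["BB Upper"]:   # statistical extreme
        score += 1
    return score

print(bearish_confluence({
    "RSI(14)": 75, "MACD": 0.25, "MACD Signal": 0.30,
    "BB Upper": 110, "BB Lower": 95, "Price": 110,
}))  # 3
```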
**Q: How do I size a position if I have a $25,000 account and my stop is $3 away?**

Using the 1% rule: max risk = $25,000 × 1% = $250. Position size = $250 ÷ $3 ≈ 83 shares. If trading a $50 stock, that's $4,150 notional exposure (16.6% of the account). Always check that notional exposure doesn't exceed your comfort level; if it does, either find a tighter stop or skip the trade. The 1% rule means 20 consecutive losses cost only about 20% of the account (roughly 18% when each loss compounds on a shrinking balance).
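The arithmetic above, as a worked sketch:

```python
def position_size(account: float, risk_pct: float, stop_distance: float) -> int:
    """Shares to buy so a stop-out loses at most risk_pct of the account."""
    max_risk = account * risk_pct      # $25,000 x 1% = $250
    return int(max_risk // stop_distance)

print(position_size(25_000, 0.01, 3.0))  # 83

# Drawdown after 20 consecutive 1% losses, each compounding on the
# remaining balance: about 18.2% (vs. 20% without compounding)
balance = 1.0
for _ in range(20):
    balance *= 1 - 0.01
print(round(1 - balance, 3))  # 0.182
```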
## Architecture

```text
┌─────────────────────────────────────────────┐
│  google/gemma-4-E4B-it (Base, ~8B params)   │
│  ┌───────────────────────────────────────┐  │
│  │  LoRA Adapter (r=64, ~50M trainable)  │  │
│  │  ├── q_proj, k_proj, v_proj, o_proj   │  │
│  │  └── gate_proj, up_proj, down_proj    │  │
│  └───────────────────────────────────────┘  │
└─────────────────────────────────────────────┘
                   │ 4-bit quantized
                   ▼
         Trading Knowledge Embedding
```

Memory footprint:

- Full model: ~16GB (4-bit)
- Training VRAM: ~12GB (T4 compatible)
- Inference VRAM: ~10GB
## Repository Structure

```text
.
├── train.py                        # Main training script
├── generate_trading_knowledge.py   # Synthetic dataset generator
├── inference.py                    # Python API for using the model
├── app.py                          # Gradio web interface
├── README.md                       # This file
└── adapter_model/                  # LoRA weights (pushed to Hub)
```
## ⚠️ Disclaimer
This model is for educational purposes only. It does not provide financial advice. Trading involves substantial risk of loss. Always:
- Do your own research
- Use proper risk management
- Consult a financial advisor for personalized advice
- Never trade with money you cannot afford to lose
The model's knowledge is based on historical patterns and textbook trading concepts. Markets are dynamic and past patterns do not guarantee future results.
## Links

- Base Model: google/gemma-4-E4B-it
- Training Framework: TRL
- Paper: Open-FinLLMs (financial-domain fine-tuning recipe)
- Paper: QuantAgent (multi-agent trading with LLMs)
## Citation

```bibtex
@software{gemma4_trading_brain,
  author = {r3bel-thewanderer},
  title = {Gemma-4 Trading Brain},
  year = {2025},
  url = {https://huggingface.co/r3bel-thewanderer/gemma-4-trading-brain}
}
```