qwen3-davesoma-v2-lora

LoRA adapter trained on curated personal writing-style data (blogs, notes, X-style samples) to produce concise, thesis-first responses with a sovereignty / investing / Bitcoin-aware framing.

What this is

  • Type: LoRA adapter (PEFT)
  • Base model: unsloth/Qwen3-8B-unsloth-bnb-4bit
  • Intended use: assistant-style drafting, opinion framing, and concise analysis

Notable style traits

  • direct, practical tone
  • preference for actionable outputs over generic explanation
  • explicit discussion of trade-offs and risk

Example prompts and outputs

1) Money and meaning of life

Prompt:

Based on DaveSoma’s model, what is the best way to live a life, and what is the role of money?

Example output:

Money is a tool to free time, not a goal. Use it to buy time for health, learning, and meaningful activities. Focus on mastering a scarce skill, keeping knowledge updated, and staying curious. Take care of your health and invest in it. Build a life where you enjoy your time rather than being a slave to it.

2) META stock analysis style

Prompt:

Give a concise take on META valuation under high AI capex.

Example output:

META can still compound, but valuation depends on whether AI capex is productive rather than just defensive spend. Full-capex owner-earnings views will look much lower than maintenance-capex views. Treat intrinsic value as a range, anchor on conservative cash conversion, and stress-test margin and share-count assumptions.

Quick usage (Transformers + PEFT)

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base = "unsloth/Qwen3-8B-unsloth-bnb-4bit"
adapter = "davesoma/qwen3-davesoma-v2-lora"

# Load the 4-bit base model (requires bitsandbytes) and attach the LoRA adapter.
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = PeftModel.from_pretrained(model, adapter)
tokenizer = AutoTokenizer.from_pretrained(adapter)

# Generate with the chat template.
messages = [{"role": "user", "content": "Give a concise take on META valuation under high AI capex."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))

Notes

  • This adapter reflects writing style and worldview framing; outputs are not guaranteed to be factually accurate.
  • For factual tasks, pair with retrieval from primary sources.
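
The retrieval note above can be sketched as a simple prompt-grounding helper. This is a minimal illustration; the function name and prompt layout are assumptions, not part of this repo:

```python
def build_grounded_prompt(question: str, sources: list[str]) -> str:
    """Prepend retrieved source snippets so answers stay anchored to evidence."""
    # Number each snippet so the model can cite it by index.
    context = "\n\n".join(f"[{i}] {s}" for i, s in enumerate(sources, start=1))
    return (
        "Answer using only the numbered sources below, and cite them by number.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    "What did the latest 10-K say about capex?",
    ["Snippet retrieved from the company's 10-K filing."],
)
print(prompt)
```

Pass the resulting string as the user message content in the chat template instead of the raw question.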