Dominick Wirzba
Chronuid
AI & ML interests
None yet
Recent Activity
reacted to philipp-zettl's post 1 day ago
I've been cooking something neat over the past weeks 👨‍🍳
We all know that training LLMs requires a lot of resources, especially compute in the form of GPUs, and is super slow and inefficient when done on CPUs.
The big players use giant clusters of Nvidia H100s.
But if I look at the profiles of my fellow home brewers, all we can get our hands on are those pesky consumer RTXs. If you're lucky, you've got yourself a 5080 with 16 GB of VRAM or something.
To be frank, I don't have that 1.3k of disposable cash lying around ¯\_(ツ)_/¯
But I can write Rust and I like building ML libraries.
So I asked myself the question(s):
- Can I train SLMs at home on my hardware?
- How hard can it be to build an ML library that can stream data between RAM and VRAM on demand, like llama.cpp's unified memory feature [^1]? (see the sketch after this list)
- How hard can it be to implement bf16 support?
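To make the streaming question concrete, here is a minimal PyTorch sketch of the on-demand RAM-to-VRAM paging idea. The `OffloadCache` class and its names are illustrative assumptions, not the actual library's API; the 1 GiB reserve mirrors the reclaim target visible in the debug logs below.

```
# Illustrative sketch (PyTorch), not the library's actual API: CPU tensors are the
# master copies, and GPU copies form a cache that is paged in on demand and
# evicted (LRU) whenever free VRAM would drop below a reserve threshold.
import torch

class OffloadCache:
    def __init__(self, reserve_bytes=1 << 30):  # keep ~1 GiB free, like the logs below
        self.reserve = reserve_bytes
        self.resident = {}  # name -> GPU tensor; dict insertion order doubles as LRU order

    def fetch(self, name, cpu_tensor):
        hit = self.resident.pop(name, None)
        if hit is None:
            # Evict least-recently-used GPU copies until the new tensor fits.
            while self.resident:
                free, _total = torch.cuda.mem_get_info()
                if free >= self.reserve + cpu_tensor.nbytes:
                    break
                self.resident.pop(next(iter(self.resident)))
                torch.cuda.empty_cache()  # hand freed blocks back to the driver
            hit = cpu_tensor.to("cuda", non_blocking=True)  # stream RAM -> VRAM
        self.resident[name] = hit  # re-insert as most recently used
        return hit
```

As for bf16: in PyTorch it is a one-liner (`tensor.to(torch.bfloat16)`), but in a from-scratch library it means implementing the 1-bit sign / 8-bit exponent / 7-bit mantissa layout and its f32 conversions yourself.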
The answers are wild, trust me!
Image 1: Metrics from last night's build on my "tiny" RTX 2060 (6 GB VRAM)
Image 2: Metrics from my most recent build on my RTX 4070 Laptop (8 GB VRAM)
The majority of my time went into the shared memory, but it's stable and I'm very excited!
Here are some debug logs, à la "trust me bro":
```
----
Currently available: 1112735744, attempting to reclaim: 1073741824
--- VRAM STATE [backward pass] ---
Driver Used: 6744 MB / 7805 MB
Data on GPU: 1641 MB
Grads on GPU: 3459 MB
CPU Offloaded: 18230 MB
---------------------------------
Currently available: 1079181312, attempting to reclaim: 1073741824
--- VRAM STATE [backward pass] ---
Driver Used: 6776 MB / 7805 MB
Data on GPU: 1561 MB
Grads on GPU: 3279 MB
CPU Offloaded: 18590 MB
-----------------------------
```
Final models get exported in `safetensors` format and are compatible with PyTorch and `transformers`, for accessibility.
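As a quick sanity check of that compatibility, here is a small sketch of loading such an export back in Python; the file name is a placeholder, not a published checkpoint.

```
# Load a safetensors export back into plain PyTorch tensors.
# "model.safetensors" is a placeholder path.
from safetensors.torch import load_file

state_dict = load_file("model.safetensors")
for name, tensor in state_dict.items():
    # bf16 weights come back as torch.bfloat16, no conversion needed
    print(name, tuple(tensor.shape), tensor.dtype)
```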
[^1]: https://github.com/ggml-org/llama.cpp/blob/master/docs/build.md#unified-memory

reacted to sergiopaniego's post with 🔥 about 1 month ago
Qwen3.5 dense (smol 🤏) models just dropped
- natively multimodal
- 0.8B · 2B · 4B · 9B (+ base variants)
- 262K context extensible to 1M
- built-in thinking
fine-tune them with TRL out of the box: SFT, GRPO, DPO, and more!
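Here is a minimal SFT sketch, assuming a recent TRL release; the model and dataset ids are placeholders, not checkpoints named in this post.

```
# Minimal TRL supervised fine-tuning sketch; model and dataset ids are placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # any chat-style dataset works

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # swap in the checkpoint you want to tune
    args=SFTConfig(output_dir="qwen-sft"),
    train_dataset=dataset,
)
trainer.train()
```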
examples: https://huggingface.co/docs/trl/example_overview
collection: https://huggingface.co/collections/Qwen/qwen35

reacted to sergiopaniego's post with 🔥 about 1 month ago
did you know you can train agentic models with RL by deploying the environments on HF Spaces? 🤗
with TRL + OpenEnv, your training script connects to remote environments hosted as Spaces
want to train faster? → just add more Spaces (TRL handles the parallelization natively)
we used this to train a model to solve the trolley problem in CARLA. 2 HF Spaces running a full driving simulator, each on a T4 GPU
full write-up with code and results → https://huggingface.co/blog/sergiopaniego/bringing-carla-to-openenv-trl
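A hypothetical sketch of the pattern, not the actual OpenEnv client API: the training loop drives each environment over HTTP, so an environment can live in a Space, and adding Space URLs adds parallel rollouts.

```
# Hypothetical illustration of remote environments (not the real OpenEnv client):
# each HF Space exposes the environment over HTTP, and the trainer steps it remotely.
import requests

SPACE_URLS = [  # placeholder endpoints; one per deployed environment Space
    "https://your-org-carla-env-1.hf.space",
    "https://your-org-carla-env-2.hf.space",
]

def remote_step(base_url: str, action: dict) -> dict:
    # One environment step: POST the action, receive observation/reward/done back.
    resp = requests.post(f"{base_url}/step", json=action, timeout=60)
    resp.raise_for_status()
    return resp.json()
```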