Update README: fix model references to HuggingFaceTB/nanowhale-*
README.md CHANGED
@@ -1,9 +1,10 @@
-#
+# nanowhale-100m 🐳
 
 A small ~110M parameter language model implementing the **DeepSeek-V4 architecture**, fine-tuned for chat/instruction following. Trained from scratch — no weights from DeepSeek-V4 were used.
 
 - **Pretrained base model**: [HuggingFaceTB/nanowhale-100m-base](https://huggingface.co/HuggingFaceTB/nanowhale-100m-base)
 - **This model**: SFT on [HuggingFaceTB/smol-smoltalk](https://huggingface.co/datasets/HuggingFaceTB/smol-smoltalk)
+- **Training code**: [github.com/huggingface/nanowhale](https://github.com/huggingface/nanowhale)
 
 ## Architecture
 
@@ -52,19 +53,19 @@ This model implements key DeepSeek-V4 innovations at a miniature scale:
 import torch
 from safetensors.torch import load_file
 from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
+from huggingface_hub import hf_hub_download
 
 # Load model (recommended: manual load for reliability)
-config = AutoConfig.from_pretrained("cmpatino/smol-deepseek-v4-100m", trust_remote_code=True)
+config = AutoConfig.from_pretrained("HuggingFaceTB/nanowhale-100m", trust_remote_code=True)
 model = AutoModelForCausalLM.from_config(config, trust_remote_code=True).float()
 
 # Download and load weights
-
-weights_path = hf_hub_download("cmpatino/smol-deepseek-v4-100m", "model.safetensors")
+weights_path = hf_hub_download("HuggingFaceTB/nanowhale-100m", "model.safetensors")
 state_dict = load_file(weights_path)
 model.load_state_dict(state_dict, strict=True)
 model = model.cuda().eval()
 
-tokenizer = AutoTokenizer.from_pretrained("cmpatino/smol-deepseek-v4-100m")
+tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/nanowhale-100m")
 
 # Chat
 messages = [{"role": "user", "content": "What are 3 benefits of exercise?"}]
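The hunk above ends at `messages = [...]`; the unchanged generation lines are not part of the diff, and only the final `print` call appears in the next hunk's header. A minimal sketch of those elided steps, continuing from the variables defined in the snippet above (the generation settings here are assumptions, not values from the README):

```python
# Continuing from the snippet above: model (fp32, on GPU), tokenizer, and messages already exist.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant-turn prefix from the chat template
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output = model.generate(
        input_ids,
        max_new_tokens=256,   # assumed limit, not from the card
        do_sample=True,       # assumed sampling settings, not from the card
        temperature=0.7,
    )

# Decode only the newly generated tokens, matching the print call shown in the hunk header.
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```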
@@ -80,6 +81,7 @@ print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
 - **Tiny model**: 110M params with 129K vocabulary — most capacity goes to embeddings. Generations are often incoherent or factually wrong.
 - **Undertrained**: Only 5K pretrain + 3K SFT steps. Production models train for 100K+ steps on trillions of tokens.
 - **Educational purpose**: This model demonstrates the DeepSeek-V4 architecture at small scale. It is **not** suitable for any production use.
+- **bf16 NaN**: Use fp32 — the Hyper-Connections architecture produces values that overflow bf16 range at this scale.
 - **Custom code**: Requires `trust_remote_code=True`.
 
 ## Hardware
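As a back-of-the-envelope check on the "most capacity goes to embeddings" point in the limitations hunk: with a 129K-token vocabulary, the embedding matrix alone can dominate a 110M-parameter budget. The hidden size and the tied-embedding assumption below are hypothetical, chosen only to illustrate the scale; the card does not state them here.

```python
# Rough parameter-budget check for the "most capacity goes to embeddings" claim.
vocab_size = 129_000
hidden_size = 512            # hypothetical value for illustration; not stated in this card
total_params = 110_000_000

embedding_params = vocab_size * hidden_size  # assumes tied input/output embeddings
print(f"Embedding share of total parameters: {embedding_params / total_params:.0%}")  # ~60%
```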