How to use from vLLM

Install vLLM from pip and serve the model:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "0labs-in/V1.3-CSD"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "0labs-in/V1.3-CSD",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
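The same OpenAI-compatible endpoint can also be called from Python. A minimal sketch using only the standard library, assuming the server started by `vllm serve` above is listening on the default `localhost:8000`:

```python
# Sketch: call the vLLM OpenAI-compatible chat endpoint from Python.
# URL and model name assume the `vllm serve` command shown above.
import json
import urllib.request

payload = {
    "model": "0labs-in/V1.3-CSD",
    "messages": [
        {"role": "user", "content": "What is the capital of France?"}
    ],
}

req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With the server running:
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```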
Use Docker
docker model run hf.co/0labs-in/V1.3-CSD
V1.3-CSD

Sky v1.3 CSD is a 0labs research checkpoint trained with Cognitive Scaffolding Decay (CSD).

The CSD curriculum runs in three stages:

  1. Long scaffold examples for code understanding.
  2. Medium bridge examples for reduced explanation.
  3. Clean concise examples for daily professional coding use.

This repository contains a standalone saved inference checkpoint and tokenizer/runtime files.

Research Metrics

Training ran on an AMD MI300X.

| Stage           | Rows | LR   | Train Loss |
|-----------------|------|------|------------|
| stage1_scaffold | 915  | 5e-7 | 1.025      |
| stage2_bridge   | 1121 | 5e-7 | 1.043      |
| stage3_clean    | 677  | 4e-7 | 0.840      |

Quick private objective eval:

| Model                             | Objective Score |
|-----------------------------------|-----------------|
| Sky v1.3 5.5B production baseline | 20 / 24         |
| V1.3-CSD                          | 22 / 24         |

These automatic scores are conservative checks. Rubric categories still need human or judge-model grading for paper-quality results.

Colab Loading Note

Download the repository with snapshot_download() into a local folder before loading. This avoids dynamic module import issues caused by the dots in the repository name.

from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="0labs-in/V1.3-CSD",
    local_dir="/content/sky_v1_3_csd",
    repo_type="model",
)

Then load from /content/sky_v1_3_csd with trust_remote_code=True.
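A minimal loading sketch with transformers, assuming the snapshot_download() cell above has already populated the local folder (the dtype choice follows the BF16 checkpoint; environment-dependent, so it only runs where the weights are present):

```python
# Sketch: load the locally downloaded checkpoint with transformers.
# Assumes snapshot_download() above has populated /content/sky_v1_3_csd.
from transformers import AutoModelForCausalLM, AutoTokenizer

local_path = "/content/sky_v1_3_csd"

tokenizer = AutoTokenizer.from_pretrained(local_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    local_path,
    trust_remote_code=True,
    torch_dtype="bfloat16",  # checkpoint tensors are stored in BF16
)
```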

Checkpoint format: Safetensors · 6B params · BF16