Tags: Text Generation, Transformers, Safetensors, English, sky_v1_3, conversational, coding, reasoning, multimodal, sky, 0labs, custom_code
Instructions for using 0labs-in/Sky-V1_3-5.5B with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use 0labs-in/Sky-V1_3-5.5B with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="0labs-in/Sky-V1_3-5.5B", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)

# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("0labs-in/Sky-V1_3-5.5B", trust_remote_code=True, dtype="auto")
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use 0labs-in/Sky-V1_3-5.5B with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "0labs-in/Sky-V1_3-5.5B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "0labs-in/Sky-V1_3-5.5B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'

Use Docker
docker model run hf.co/0labs-in/Sky-V1_3-5.5B
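Because the vLLM server speaks the OpenAI-compatible API, you can also call it from the official openai Python client instead of curl. A minimal sketch, assuming the server started above is listening on localhost:8000:

from openai import OpenAI

# vLLM does not check the API key by default, but the client requires one
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="0labs-in/Sky-V1_3-5.5B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)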
- SGLang
How to use 0labs-in/Sky-V1_3-5.5B with SGLang:
Install from pip and serve model
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "0labs-in/Sky-V1_3-5.5B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "0labs-in/Sky-V1_3-5.5B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'

Use Docker images
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "0labs-in/Sky-V1_3-5.5B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "0labs-in/Sky-V1_3-5.5B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
- Docker Model Runner
How to use 0labs-in/Sky-V1_3-5.5B with Docker Model Runner:
docker model run hf.co/0labs-in/Sky-V1_3-5.5B
Sky V1.3 5.5B
Sky V1.3 5.5B is a production-ready 0labs assistant model created by Atharvsinh Jadav.
It is packaged as a standalone saved-model folder for inference: the repository includes the model weights, tokenizer files, configuration files, and the required custom runtime files.
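To see exactly what ships in that folder before downloading, you can enumerate the repository contents with huggingface_hub. A minimal sketch, using only the repo id from this card:

from huggingface_hub import list_repo_files

# prints the weights, tokenizer, config, and custom runtime files in the repo
for f in list_repo_files("0labs-in/Sky-V1_3-5.5B"):
    print(f)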
Identity
- Name: Sky v1.3
- Organization: 0labs
- Creator: Atharvsinh Jadav
- Intended use: coding, debugging, reasoning, general chat, and assistant workflows
Quick Use
Because the repository name contains a dot ("5.5B"), which can break the dynamic module import that trust_remote_code relies on, Colab users should download the snapshot into a dot-free local folder before loading:
import torch
from huggingface_hub import snapshot_download
from transformers import AutoTokenizer, AutoModelForCausalLM
repo_id = "0labs-in/Sky-V1_3-5.5B"
local_dir = "/content/sky_v1_3_5_5b"
snapshot_download(repo_id=repo_id, local_dir=local_dir, repo_type="model")
tokenizer = AutoTokenizer.from_pretrained(local_dir, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
local_dir,
torch_dtype=torch.bfloat16 if torch.cuda.is_available() and torch.cuda.is_bf16_supported() else torch.float16,
trust_remote_code=True,
attn_implementation="sdpa",  # use PyTorch scaled-dot-product attention
device_map="auto",
)
messages = [{"role": "user", "content": "who are you?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
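# the custom runtime expects an extra input_mode flag alongside the token ids;
# 0 presumably selects the text-only chat modality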
inputs["input_mode"] = torch.tensor([0], device=model.device)
with torch.no_grad():
output = model.generate(
**inputs,
max_new_tokens=160,
do_sample=False,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))
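The example above decodes greedily (do_sample=False). For more varied answers you can enable sampling instead; a sketch reusing the same model, tokenizer, and inputs, where the temperature and top_p values are illustrative rather than tuned for this model:

with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=160,
        do_sample=True,   # sample instead of greedy decoding
        temperature=0.7,  # illustrative value, not model-specific
        top_p=0.9,        # illustrative value, not model-specific
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(output[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))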
Smoke Test Summary
Production smoke test passed with 0 failures across:
- normal chat
- reasoning
- math
- coding
- instruction following
- identity guard
- safety refusal behavior
Notes
Use trust_remote_code=True because this repository includes custom model runtime files.