Qwen3 1.7B

This repository follows the structure of a Qwen3-style causal language model on Hugging Face. It includes model configuration, tokenizer assets, generation configuration, and weight artifacts.

Contents:

  • config.json – model configuration (Qwen3 fields)
  • generation_config.json – text generation defaults
  • tokenizer.json – tokenizer (fast) with special tokens
  • tokenizer_config.json – tokenizer settings
  • special_tokens_map.json – mapping for special tokens
  • vocab.json, merges.txt – BPE assets
  • model.safetensors – weights (safetensors, single file)
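The configuration can be inspected without downloading the weights. A minimal sketch using AutoConfig (the helper name and the placeholder repo id are illustrative; replace "<your-namespace>/<your-model-id>" with the real repo id):

```python
from transformers import AutoConfig

def summarize_config(repo_id: str) -> str:
    """Fetch config.json only (no weights) and report a few key fields."""
    cfg = AutoConfig.from_pretrained(repo_id)
    return (f"{cfg.model_type}: {cfg.num_hidden_layers} layers, "
            f"hidden size {cfg.hidden_size}, vocab {cfg.vocab_size}")

# Usage (replace with the real repo id):
# print(summarize_config("<your-namespace>/<your-model-id>"))
```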

Notes:

  • The tokenizer uses a compact vocabulary and special tokens typical for causal LMs.

Example usage:

from transformers import AutoTokenizer, AutoModelForCausalLM
tok = AutoTokenizer.from_pretrained("<your-namespace>/<your-model-id>")
model = AutoModelForCausalLM.from_pretrained("<your-namespace>/<your-model-id>")
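Building on the loading snippet above, a sketch of end-to-end text generation (the function name and sampling settings are illustrative defaults, not part of this repo; replace the placeholder repo id before running):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

def generate(repo_id: str, prompt: str, max_new_tokens: int = 64) -> str:
    """Load the model and tokenizer, then sample a continuation of `prompt`."""
    tok = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id)
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=0.8,
        )
    return tok.decode(out[0], skip_special_tokens=True)

# Usage (replace with the real repo id):
# print(generate("<your-namespace>/<your-model-id>", "Once upon a time"))
```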

How to publish to the Hub:

  1. Create a new repo on the Hugging Face Hub (private or public).
  2. Run huggingface-cli login.
  3. From this folder, run git init and git remote add origin <your-hf-repo-url>.
  4. Make sure Git LFS is set up (git lfs install), since the Hub stores large weight files such as model.safetensors via LFS.
  5. Run git add -A, git commit -m "init", and git push -u origin main.
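As an alternative to the git workflow above, the same upload can be done from Python with huggingface_hub. A sketch (the helper name is illustrative; it assumes you are already logged in and that the placeholder repo id is replaced with the real one):

```python
from huggingface_hub import HfApi

def publish(folder: str, repo_id: str) -> None:
    """Create the repo if needed, then upload every file in `folder`."""
    api = HfApi()
    api.create_repo(repo_id, exist_ok=True)  # no-op if the repo already exists
    api.upload_folder(folder_path=folder, repo_id=repo_id)

# Usage (replace with the real repo id):
# publish(".", "<your-namespace>/<your-model-id>")
```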
Stats:

  • Downloads last month: 75
  • Model size: 9.45M params (safetensors)
  • Tensor type: F32