---
license: apache-2.0
language:
- en
tags:
- text-generation-inference
- transformers
- smolify
- dslm
pipeline_tag: text-generation
inference:
  parameters:
    temperature: 1
    top_p: 0.95
    top_k: 64
---
# 🤗 smolified-code-helper-model

> **Intelligence, Distilled.**

This is a **Domain-Specific Language Model (DSLM)** generated by the **Smolify Foundry**.

It was synthetically distilled from state-of-the-art (SOTA) reasoning engines into a high-efficiency architecture, optimized for deployment on edge hardware (CPU/NPU) and in low-VRAM environments.
## 📦 Asset Details
- **Origin:** Smolify Foundry (Job ID: `aa61ab1e`)
- **Architecture:** DSLM-Micro (270M-parameter class)
- **Training Method:** Proprietary Neural Distillation
- **Optimization:** 4-bit quantized / FP16 mixed precision
- **Dataset:** [Link to Dataset](https://huggingface.co/datasets/programmerGodbyte/smolified-code-helper-model)

## 🚀 Usage (Inference)
This model works with the standard Transformers API and with dedicated inference backends such as vLLM; the example below uses Transformers.

```python
# Example: running the model with Transformers
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "programmerGodbyte/smolified-code-helper-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are an expert C++ coder. Provide well-commented, formatted code snippets covering a wide range of C++ programming tasks, including basic syntax, data structures, algorithms, and common utility functions. Each snippet should be concise and demonstrate a clear concept."},
    {"role": "user", "content": "I need a basic C++ code snippet for summing the elements of an array. Super simple."},
]

# Build the prompt text; strip the leading <bos> so the tokenizer does not add it twice
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
).removeprefix("<bos>")

inputs = tokenizer(text, return_tensors="pt").to(model.device)
_ = model.generate(
    **inputs,
    max_new_tokens=1000,
    do_sample=True,  # required for temperature/top_p/top_k to take effect
    temperature=1.0,
    top_p=0.95,
    top_k=64,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
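
For the vLLM route mentioned above, a minimal serving sketch looks like the following. This assumes vLLM is installed and supports the model's architecture; the sampling values mirror this card's metadata:

```shell
# Launch an OpenAI-compatible server for the model (downloads weights on first run)
vllm serve programmerGodbyte/smolified-code-helper-model --port 8000

# Query it with the sampling defaults from this card's metadata
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "programmerGodbyte/smolified-code-helper-model",
        "messages": [{"role": "user", "content": "Sum the elements of a C++ array."}],
        "temperature": 1.0,
        "top_p": 0.95
      }'
```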

## ⚖️ License & Ownership
These model weights are a sovereign asset owned by **programmerGodbyte**.
Generated via [Smolify.ai](https://smolify.ai).

[<img src="https://smolify.ai/smolify.gif" width="100"/>](https://smolify.ai)