# **QWQ R1 [Reasoning] Distill 1.5B CoT**

QWQ R1 [Reasoning] Distill 1.5B CoT is a fine-tuned language model designed for advanced reasoning and instruction-following tasks. It builds on the DeepSeek R1 distillation of Qwen2.5 (1.5B) and has been fine-tuned on chain-of-thought (CoT) reasoning datasets, with a focus on step-by-step problem-solving. The model is optimized for tasks that require logical reasoning, detailed explanations, and multi-step problem-solving, making it well suited to instruction-following, text generation, and complex reasoning applications.
# **Quickstart with Transformers**

The following code snippet shows how to load the tokenizer and model with `apply_chat_template` and generate a response.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/QwQ-R1-Distill-1.5B-CoT"

# Load the model weights and the matching tokenizer
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "How many r in strawberry?"
messages = [
    {"role": "system", "content": "You are a helpful and harmless assistant. You are Qwen developed by Alibaba. You should think step-by-step."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the chat template, then tokenize it
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# Generate, then strip the prompt tokens from each output sequence
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
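
For longer or more varied reasoning traces, sampling parameters can be passed to `generate`. A minimal sketch; the `temperature`, `top_p`, and token-budget values below are illustrative assumptions, not settings published for this model:

```python
# Sampling-based generation; the parameter values are assumed examples
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.6,
    top_p=0.95
)
```

Reasoning-tuned models often benefit from a generous `max_new_tokens` budget, since the chain-of-thought itself consumes output tokens.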
# **Intended Use**
**QWQ R1 [Reasoning] Distill 1.5B CoT** is specifically designed for tasks requiring advanced reasoning, structured thinking, and detailed explanations. Its intended applications include:

1. **Instruction-Following Tasks**: Performing step-by-step tasks based on user instructions (a prompt sketch follows this list).
2. **Logical Reasoning**: Solving problems that demand multi-step logical processing and inference.
3. **Text Generation**: Crafting coherent and contextually appropriate text for various domains.
4. **Educational Tools**: Assisting in learning environments, providing explanations for complex topics, or guiding through reasoning exercises.
5. **Problem-Solving**: Addressing computational or real-world problems requiring chain-of-thought reasoning.
6. **AI-Assisted Decision-Making**: Supporting users in making informed decisions with logical analysis.
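
As an illustration of the instruction-following use case, the quickstart setup can be reused with a multi-step prompt. A minimal sketch, assuming `model` and `tokenizer` from the quickstart above; the prompt text is a made-up example:

```python
# Hypothetical multi-step instruction prompt, reusing the quickstart setup
messages = [
    {"role": "system", "content": "You are a helpful assistant. You should think step-by-step."},
    {"role": "user", "content": "List three steps to check whether 221 is prime, then carry them out."}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=512)
print(tokenizer.batch_decode(
    [out[len(inp):] for inp, out in zip(model_inputs.input_ids, generated_ids)],
    skip_special_tokens=True
)[0])
```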
# **Limitations**
While the model excels in reasoning and explanation tasks, it has certain constraints:

1. **Context Length**: Limited ability to process or generate outputs for inputs exceeding its maximum token limit (a minimal length check is sketched after this list).
2. **Domain Knowledge**: It may lack detailed expertise in niche domains not covered during training.
3. **Dependence on Training Data**: Performance can be influenced by biases or gaps in the datasets it was fine-tuned on.
4. **Real-Time Reasoning**: Struggles with tasks requiring dynamic understanding of real-time data or rapidly changing contexts.
5. **Mathematical Precision**: May produce errors in calculations or fail to interpret ambiguous mathematical problems.
6. **Factual Accuracy**: Occasionally generates incorrect or outdated information when dealing with facts.
7. **Language Nuances**: Subtle linguistic or cultural nuances might be misunderstood or misrepresented.
8. **Complex CoT Chains**: For extremely lengthy or convoluted reasoning chains, the model may lose track of earlier context or steps.
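
For the context-length limitation (item 1), a simple guard is to measure the prompt against the tokenizer's configured limit before calling `generate`. A minimal sketch, assuming `tokenizer` and `text` from the quickstart; the 512-token generation reserve is an assumed budget:

```python
# Reserve room for generated tokens; 512 is an assumed budget
max_new_tokens = 512
input_ids = tokenizer(text, return_tensors="pt").input_ids

# tokenizer.model_max_length is the tokenizer's configured context limit
# (for some checkpoints this is a very large sentinel value, so treat it with care)
budget = tokenizer.model_max_length - max_new_tokens
if input_ids.shape[1] > budget:
    # Truncate from the left so the most recent context survives
    input_ids = input_ids[:, -budget:]
```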