Jackrong committed · verified
Commit 16ec723 · Parent(s): cc7e8cb

Update README.md

Files changed (1): README.md (+15 −15)
README.md CHANGED
@@ -11,7 +11,7 @@ tags:
 - reasoning
 - chain-of-thought
 - lora
-pipeline_tag: text-generation
+pipeline_tag: image-text-to-image
 datasets:
 - Jackrong/Qwen3.5-reasoning-700x
 - nohurry/Opus-4.6-Reasoning-3000x-filtered
@@ -19,29 +19,30 @@ datasets:
 
 # 🌟 Qwen3.5-9B-Claude-4.6-Opus-Reasoning-Distilled
 
-🔥 **Update (April 5): To help beginners and enthusiasts better understand and reproduce the fine-tuning process of this model, I have prepared the complete training notebook, codebase, and a comprehensive companion PDF guide! Please check the resource links below.**
+🔥 **Update (April 5):** I’ve released the complete training notebook, codebase, and a comprehensive PDF guide to help beginners and enthusiasts understand and reproduce this model's fine-tuning process.
 
 > ❤️ Special thanks to the [**Unsloth**](https://unsloth.ai) open-source library and [@KyleHessling1](https://x.com/kylehessling1) for their support.
 
 ## 📚 Resources & Guides
 
-If you want to dive into how this model was trained, or wish to reproduce the results locally or on Colab, please visit my GitHub repository:
-👉 **🔗[Jackrong-llm-finetuning-guide](https://github.com/R6410418/Jackrong-llm-finetuning-guide.git)**
+👉 **[GitHub Repository: Jackrong-llm-finetuning-guide](https://github.com/R6410418/Jackrong-llm-finetuning-guide.git)**
+Visit the repo to dive into the codebase and reproduce the results locally or on Colab.
 
-### 📥 Core Technical Document Direct Download
-You can click the link below to directly access the complete technical manual for the Qwopus3.5 training:
-
-* **🔗[Qwopus3-5-27b-Colab_complete_guide_to_llm_finetuning.pdf](https://github.com/R6410418/Jackrong-llm-finetuning-guide/blob/8eb33234856054d23675064177de1ac10b54a609/guidePDF/Qwopus3-5-27b-Colab_complete_guide_to_llm_finetuning.pdf)**
-* Covers the entire workflow, starting with an introduction to Google Colab and Unsloth.
-* Details the complete pipeline with step-by-step explanations—from downloading the base model and normalizing heterogeneous data sources into a unified format, to configuring trainer hyperparameters and finally publishing to Hugging Face.
-* Feedback is highly welcome! If you spot any shortcomings or areas for improvement, please let me know, and I will update it promptly.
+### 📥 Core Technical Document
+**🔗 [Qwopus3.5-27b Complete Fine-Tuning Guide (PDF)](https://github.com/R6410418/Jackrong-llm-finetuning-guide/blob/main/guidePDF/Qwopus3-5-27b-Colab_complete_guide_to_llm_finetuning.pdf)**
+* **The Full Pipeline:** A step-by-step walkthrough—from downloading the base model and unifying heterogeneous data, to configuring trainer hyperparameters and publishing to Hugging Face.
+* **Beginner Friendly:** Includes an introductory guide to getting started with Google Colab and Unsloth.
+* *Feedback welcome! If you spot any areas for improvement, please let me know and I will update it promptly.*
 
 > **A Note:**
-> My goal in writing this guide goes beyond merely detailing a single training workflow. I want to convey a broader message: fine-tuning, post-training, and even medium-scale pre-training are not unattainable technical rituals, nor are they the exaggerated hype often packaged by social media. More often than not, all you need is a Google account, a standard laptop, and relentless curiosity.
+> My goal isn't just to detail a workflow, but to demystify LLM training. Beyond the social media hype, fine-tuning isn't an unattainable ritual—often, all you need is a Google account, a standard laptop, and relentless curiosity.
 >
-> *No one starts as an expert. But every expert was once brave enough to begin.*
+> *No one starts as an expert, but every expert was once brave enough to begin.*
 >
-> All fine-tuning training and testing for this project were conducted at my own expense. If you find this model or the guide helpful, a **Star ⭐️ on GitHub** would be the greatest encouragement for me. Thank you so much! 🙏
+> All training and testing for this project were self-funded. If you find this model or guide helpful, a **Star ⭐️ on GitHub** would be the greatest encouragement. Thank you! 🙏
+
+> [!Note]
+> The Claude series model optimizations are named under the **Qwopus3.5 series**, with the latest version being **🌟Qwopus3.5-v3**.
 
 
 ## 📢 Announcement
@@ -108,7 +109,6 @@ The dataset consists of high-quality, filtered reasoning distillation data:
 | Dataset Name | Description / Purpose |
 |--------------|-----------------------|
 | [nohurry/Opus-4.6-Reasoning-3000x-filtered](https://huggingface.co/datasets/nohurry/Opus-4.6-Reasoning-3000x-filtered) | Provides comprehensive Claude 4.6 Opus reasoning trajectories. |
-| [TeichAI/claude-4.5-opus-high-reasoning-250x](https://huggingface.co/datasets/TeichAI/claude-4.5-opus-high-reasoning-250x) | Injecting high-intensity, structured reasoning instances. |
 | [Jackrong/Qwen3.5-reasoning-700x](https://huggingface.co/datasets/Jackrong/Qwen3.5-reasoning-700x) | Additional curated reasoning samples designed to strengthen structured step-by-step problem solving and improve reasoning diversity. |
 
 ## 🌟 Core Skills & Capabilities
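The guide's pipeline mentions normalizing heterogeneous data sources into a unified format before training. A minimal sketch of what that step could look like is below; note that the per-dataset field names (`instruction`/`reasoning`/`output` vs. `prompt`/`response`) and the `<think>` wrapping convention are assumptions for illustration, not the actual schemas of the datasets listed above.

```python
# Hypothetical sketch of the "unify heterogeneous data" step.
# Field layouts below are assumed, not the real dataset schemas.

def to_chat_example(record: dict) -> dict:
    """Map a record from either assumed source layout to one
    unified chat format, wrapping any reasoning trace in
    <think> tags (an assumed convention for this sketch)."""
    if "instruction" in record:  # layout A (assumed)
        prompt = record["instruction"]
        answer = f"<think>{record['reasoning']}</think>\n{record['output']}"
    else:                        # layout B (assumed)
        prompt = record["prompt"]
        answer = record["response"]
    return {"messages": [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": answer},
    ]}

if __name__ == "__main__":
    a = to_chat_example({"instruction": "2+2?", "reasoning": "add", "output": "4"})
    b = to_chat_example({"prompt": "hi", "response": "hello"})
    print(a["messages"][1]["content"])  # <think>add</think> then the answer
```

Once every source emits the same `messages` structure, the merged examples can be fed to a single chat template during fine-tuning regardless of where each record came from.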