Jackrong committed on
Commit eed3545 · verified · 1 Parent(s): 9b8af17

Update README.md

Files changed (1): README.md (+14 −13)
README.md CHANGED
@@ -19,29 +19,30 @@ datasets:
   - Roman1111111/claude-opus-4.6-10000x
 ---
 # 🌟 Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-v2
-🔥 **Update (April 5): To help beginners and enthusiasts better understand and reproduce the fine-tuning process of this model, I have prepared the complete training notebook, codebase, and a comprehensive companion PDF guide! Please check the resource links below.**
+🔥 **Update (April 5):** I’ve released the complete training notebook, codebase, and a comprehensive PDF guide to help beginners and enthusiasts understand and reproduce this model's fine-tuning process.
 
 > ❤️ Special thanks to the [**Unsloth**](https://unsloth.ai) open-source library and [@KyleHessling1](https://x.com/kylehessling1) for their support.
 
 ## 📚 Resources & Guides
 
-If you want to dive into how this model was trained, or wish to reproduce the results locally or on Colab, please visit my GitHub repository:
-👉 **🔗[Jackrong-llm-finetuning-guide](https://github.com/R6410418/Jackrong-llm-finetuning-guide.git)**
+👉 **[GitHub Repository: Jackrong-llm-finetuning-guide](https://github.com/R6410418/Jackrong-llm-finetuning-guide.git)**
+Visit the repo to dive into the codebase and reproduce the results locally or on Colab.
 
-### 📥 Core Technical Document Direct Download
-You can click the link below to directly access the complete technical manual for the Qwopus3.5 training:
-
-* **🔗[Qwopus3-5-27b-Colab_complete_guide_to_llm_finetuning.pdf](https://github.com/R6410418/Jackrong-llm-finetuning-guide/blob/8eb33234856054d23675064177de1ac10b54a609/guidePDF/Qwopus3-5-27b-Colab_complete_guide_to_llm_finetuning.pdf)**
-* Covers the entire workflow, starting with an introduction to Google Colab and Unsloth.
-* Details the complete pipeline with step-by-step explanations—from downloading the base model and normalizing heterogeneous data sources into a unified format, to configuring trainer hyperparameters and finally publishing to Hugging Face.
-* Feedback is highly welcome! If you spot any shortcomings or areas for improvement, please let me know, and I will update it promptly.
+### 📥 Core Technical Document
+**🔗 [Qwopus3.5-27b Complete Fine-Tuning Guide (PDF)](https://github.com/R6410418/Jackrong-llm-finetuning-guide/blob/main/guidePDF/Qwopus3-5-27b-Colab_complete_guide_to_llm_finetuning.pdf)**
+* **The Full Pipeline:** A step-by-step walkthrough—from downloading the base model and unifying heterogeneous data, to configuring trainer hyperparameters and publishing to Hugging Face.
+* **Beginner Friendly:** Includes an introductory guide to getting started with Google Colab and Unsloth.
+* *Feedback welcome! If you spot any areas for improvement, please let me know and I will update it promptly.*
 
 > **A Note:**
-> My goal in writing this guide goes beyond merely detailing a single training workflow. I want to convey a broader message: fine-tuning, post-training, and even medium-scale pre-training are not unattainable technical rituals, nor are they the exaggerated hype often packaged by social media. More often than not, all you need is a Google account, a standard laptop, and relentless curiosity.
+> My goal isn't just to detail a workflow, but to demystify LLM training. Beyond the social media hype, fine-tuning isn't an unattainable ritual—often, all you need is a Google account, a standard laptop, and relentless curiosity.
 >
-> *No one starts as an expert. But every expert was once brave enough to begin.*
+> *No one starts as an expert, but every expert was once brave enough to begin.*
 >
-> All fine-tuning training and testing for this project were conducted at my own expense. If you find this model or the guide helpful, a **Star ⭐️ on GitHub** would be the greatest encouragement for me. Thank you so much! 🙏
+
+> [!Note]
+> The Claude series model optimizations are named under the **Qwopus3.5 series**, with the latest version being **🌟Qwopus3.5-v3**.
 
 ---
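
The pipeline step the README describes — normalizing heterogeneous data sources into a unified format before training — can be sketched roughly as below. This is a minimal illustrative sketch only: the record shapes and field names (`instruction`/`input`/`output`, `question`/`answer`, `messages`) are common community conventions assumed here, not the actual schemas used in the Jackrong-llm-finetuning-guide repo.

```python
# Illustrative sketch: map records from differently-shaped sources into one
# unified chat format. Field names are assumptions, not the repo's schema.

def to_chat(record):
    """Normalize a raw record into [{"role": ..., "content": ...}, ...]."""
    if "messages" in record:  # already chat-formatted
        return record["messages"]
    if "instruction" in record:  # Alpaca-style instruction tuning record
        prompt = record["instruction"]
        if record.get("input"):
            prompt += "\n\n" + record["input"]
        return [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": record["output"]},
        ]
    if "question" in record:  # plain question/answer pair
        return [
            {"role": "user", "content": record["question"]},
            {"role": "assistant", "content": record["answer"]},
        ]
    raise ValueError(f"unrecognized record shape: {sorted(record)}")


if __name__ == "__main__":
    raw = [
        {"instruction": "Translate to French", "input": "Hello", "output": "Bonjour"},
        {"question": "What is 2+2?", "answer": "4"},
    ]
    # Every source collapses to the same {"messages": [...]} rows,
    # ready for a chat template / SFT trainer.
    unified = [{"messages": to_chat(r)} for r in raw]
    print(len(unified))  # 2
```

Once every source is reduced to the same `messages` shape, a single chat template can render the whole mixed corpus for supervised fine-tuning.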