---
license: other
base_model:
- Qwen/Qwen3.5-4B
library_name: llama.cpp
tags:
- gguf
- qwen
- qwen3.5
- code
- coder
- conversational
- text-generation
- withinusai
language:
- en
datasets:
- WithinUsAI/Python_GOD_Coder_50k
- reedmayhew/gemini-3.1-pro-2048-reasoning-1100x
- m-a-p/Code-Feedback
- crownelius/Opus-4.6-Reasoning-2100x-formatted
- crownelius/Opus4.6-No-Reasoning-260x
- crownelius/Creative_Writing_Multiturn_Enhanced
- HuggingFaceH4/llava-instruct-mix-vsft
- Roman1111111/gemini-3-pro-10000x-hard-high-reasoning
model_type: gguf
inference: false
---

# WithIn-Us-Coder-4B.gguf

**WithIn-Us-Coder-4B.gguf** is a GGUF release from **WithIn Us AI**, built for local inference and coding-focused assistant use cases. It is based on **Qwen/Qwen3.5-4B** and distributed in quantized GGUF formats for efficient deployment in llama.cpp-compatible runtimes.

## Model Summary

This model is intended as a coding-oriented conversational assistant, with an emphasis on:

- code generation
- code reasoning
- implementation planning
- debugging assistance
- instruction following
- general assistant-style chat for development workflows

This repository currently provides the following GGUF variants:

- `WithIn-Us-Coder-4B.Q4_K_M.gguf`
- `WithIn-Us-Coder-4B.Q5_K_M.gguf`

## Creator

**WithIn Us AI** created this model release, including the fine-tuning and merging concept, the process, the packaging and naming, and the GGUF distribution.

## Base Model

This model is based on:

- **Qwen/Qwen3.5-4B**

Credit and appreciation go to the original creators of the base LLM architecture and weights.

## Training Data

The repository metadata lists the following datasets as part of the model's training and fine-tuning lineage:

- `WithinUsAI/Python_GOD_Coder_50k`
- `reedmayhew/gemini-3.1-pro-2048-reasoning-1100x`
- `m-a-p/Code-Feedback`
- `crownelius/Opus-4.6-Reasoning-2100x-formatted`
- `crownelius/Opus4.6-No-Reasoning-260x`
- `crownelius/Creative_Writing_Multiturn_Enhanced`
- `HuggingFaceH4/llava-instruct-mix-vsft`
- `Roman1111111/gemini-3-pro-10000x-hard-high-reasoning`

**Attribution note:** WithIn Us AI does not claim ownership of third-party base models or third-party datasets. Full credit, thanks, and attribution belong to the original model and dataset creators.

## Intended Use

This model is intended for:

- local coding assistants
- offline development help
- code explanation
- bug-fixing support
- prompt-based code generation
- experimentation in llama.cpp and other GGUF-compatible environments

### Suggested Use Cases

- generating Python, JavaScript, C++, and other programming-language snippets
- explaining code blocks
- rewriting or improving functions
- brainstorming implementation strategies
- creating scaffolding and prototypes
- assisting with debugging and refactoring

## Out-of-Scope Use

This model is not guaranteed to be reliable for:

- high-stakes legal advice
- medical advice
- financial decision-making
- autonomous execution without review
- security-critical production decisions without human verification

Users should always validate generated code before deployment.

## Quantization Formats

This repository currently includes:

- **Q4_K_M** for a smaller memory footprint and faster local inference
- **Q5_K_M** for improved output quality while remaining efficient

Choose the quantization level based on your hardware budget and quality needs.

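As an illustrative sketch (not official instructions), one of the quants can be fetched and run locally with llama.cpp's `llama-cli`. The repository id below is a placeholder, and generation flags may vary across llama.cpp versions:

```shell
# Fetch one quant from the Hub, then chat with it via llama.cpp's llama-cli.
# <repo-id> is a placeholder for this repository's Hub id.
huggingface-cli download <repo-id> WithIn-Us-Coder-4B.Q4_K_M.gguf --local-dir .

# -c sets the context window, -ngl offloads layers to the GPU (when llama.cpp
# is built with GPU support), and --temp controls sampling temperature.
llama-cli -m WithIn-Us-Coder-4B.Q4_K_M.gguf -c 4096 -ngl 99 --temp 0.7 \
  -p "Write a Python function that reverses a linked list."
```

Any GGUF-compatible runtime (LM Studio, Ollama with a Modelfile, llama-cpp-python, and similar) should work equivalently.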
## Prompting Notes

Because this is a coding-focused conversational model, the best results usually come from prompts that are:

- specific
- structured
- explicit about the language, framework, or goal
- clear about the desired output format

Example prompt style:

> Write a Python function that parses a CSV file, removes duplicate rows by email, and saves the cleaned result. Include error handling and comments.

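For illustration, a response to the prompt above might look like the sketch below. Actual model output will vary; the function name `clean_csv` and the `email` column name are assumptions drawn from the prompt:

```python
import csv

def clean_csv(input_path, output_path):
    """Read a CSV file, drop rows whose 'email' duplicates an earlier row
    (first occurrence wins), and write the cleaned rows to output_path."""
    seen = set()
    cleaned = []
    try:
        with open(input_path, newline="", encoding="utf-8") as f:
            reader = csv.DictReader(f)
            if reader.fieldnames is None or "email" not in reader.fieldnames:
                raise ValueError("input CSV has no 'email' column")
            for row in reader:
                # Normalize the email so "A@X.com" and "a@x.com" match.
                key = row["email"].strip().lower()
                if key not in seen:
                    seen.add(key)
                    cleaned.append(row)
            fieldnames = reader.fieldnames
    except OSError as exc:
        raise RuntimeError(f"could not read {input_path}") from exc

    with open(output_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(cleaned)
    return len(cleaned)  # number of rows kept
```

The key point is the level of structure a specific prompt elicits: explicit error handling, comments, and a clear contract.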
## Limitations

Like other language models, this model may:

- hallucinate APIs or library behavior
- generate insecure or inefficient code
- make reasoning mistakes
- produce outdated patterns
- require prompt iteration for best results

Human review is strongly recommended, especially for production code.

## License

This repository uses a **custom WithIn Us AI license approach**.

- The base model may be subject to its original upstream license and terms.
- Third-party datasets remain the property of their respective creators and licensors.
- WithIn Us AI claims authorship of the fine-tuning and merging concept, process, packaging, naming, and release structure for this model distribution.
- This repository does **not** claim ownership of third-party datasets or the underlying upstream base model.

The exact custom terms, where published, are provided in this repository's `LICENSE` file.

## Acknowledgments

Special thanks to:

- **Qwen** for the base model
- all third-party dataset creators listed above
- the open-source GGUF / llama.cpp ecosystem
- the broader Hugging Face community

## Files

Current repository files include:

- `WithIn-Us-Coder-4B.Q4_K_M.gguf`
- `WithIn-Us-Coder-4B.Q5_K_M.gguf`

## Disclaimer

This model may generate incorrect, biased, insecure, or incomplete outputs. Use responsibly, validate important results, and review all generated code before real-world use.