nuhmanpk committed
Commit d025e7f · verified · 1 Parent(s): 789f1a1

Update README.md

Files changed (1)
  1. README.md +296 -32
README.md CHANGED

---
license: mit
dataset_info:
  features:
  - name: title
    dtype: large_string
  - name: source
    dtype: large_string
  - name: url
    dtype: large_string
  - name: category
    dtype: large_string
  - name: language
    dtype: large_string
  - name: content
    dtype: large_string
  - name: chunk_id
    dtype: int64
  - name: last_updated
    dtype: large_string
  splits:
  - name: train
    num_bytes: 976709227
    num_examples: 784505
  download_size: 497571067
  dataset_size: 976709227
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Dev Knowledge Base (Programming Documentation Dataset)

A large-scale, structured dataset of programming documentation collected from official sources across languages, frameworks, tools, and AI ecosystems.

Follow me on GitHub: https://github.com/nuhmanpk

---

## Overview

This dataset contains cleaned and structured documentation content scraped from official developer docs across multiple domains such as:

* Programming languages
* Frameworks (frontend, backend)
* DevOps & infrastructure tools
* Databases
* Machine learning & AI libraries

All content is chunked (~800 characters) and optimized for:

* Retrieval-Augmented Generation (RAG)
* Developer copilots
* Code assistants
* Semantic search

---

## Dataset Structure

Each row represents a chunk of documentation.

| Column       | Description                                |
| ------------ | ------------------------------------------ |
| title        | Page title or endpoint                     |
| source       | Source name (e.g., react, python, fastapi) |
| url          | Original documentation URL                 |
| category     | Type (language, framework, database, etc.) |
| language     | Programming language                       |
| content      | Cleaned text chunk                         |
| chunk_id     | Chunk index within the page                |
| last_updated | Timestamp                                  |
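
For example, chunks for a single source can be pulled out with the standard `datasets` filtering API. This is a small sketch; `"fastapi"` is just one of the source values listed in the next section:

```python
from datasets import load_dataset

ds = load_dataset("nuhmanpk/dev-knowledge-base", split="train")

# Keep only FastAPI documentation chunks, using the `source` column
fastapi_docs = ds.filter(lambda row: row["source"] == "fastapi")
print(fastapi_docs.num_rows, fastapi_docs[0]["title"])
```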

---

## Sources Included

### Languages

python, javascript, typescript, go, rust, java, csharp, dart, swift, kotlin

### Frontend & Frameworks

react, nextjs, vue, nuxt, svelte, sveltekit, angular, astro, qwik, solidjs

### Backend & APIs

fastapi, django, flask, express, nestjs, hono, elysia

### Runtime & Tooling

nodejs, deno, bun, vite, webpack, turborepo, nx, pnpm, biome

### UI Libraries

tailwind, shadcn_ui, chakra_ui, mui

### Mobile & Desktop

react_native, expo, flutter, tauri, electron

### Machine Learning & AI

numpy, pandas, pytorch, tensorflow, scikit_learn, xgboost, lightgbm,
transformers, langchain, llamaindex, openai, vllm, ollama, haystack,
mastra, pydantic_ai, langfuse, mcp

### Databases

postgresql, mysql, sqlite, mongodb, redis, supabase, firebase,
planetscale, neon, convex, drizzle_orm, qdrant, turso

### DevOps & Infrastructure

docker, kubernetes, terraform, ansible,
github_actions, gitlab_ci, git, opentelemetry, inngest, temporal

### Other

claude_agent_sdk

Full crawl configuration available here:

---

## Chunk Distribution

Example distribution after cleaning and removing Zig:

| Source       | Chunks   |
| ------------ | -------- |
| python       | ~15,000  |
| javascript   | ~4,000   |
| go           | ~8,000   |
| react        | ~3,000   |
| nextjs       | ~4,000   |
| docker       | ~4,000   |
| kubernetes   | ~14,000  |
| transformers | ~14,000  |
| firebase     | ~300,000 |
| redis        | ~17,000  |
| git          | ~14,000  |
| flutter      | ~14,000  |
| supabase     | ~10,000  |

Total: **~785,000 chunks across 80+ sources**
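
These per-source counts can be reproduced from the published split with a quick sketch like the one below (it uses the `datasets` library introduced in the next section; exact numbers may differ slightly from the table above):

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("nuhmanpk/dev-knowledge-base", split="train")

# Count chunks per source using the `source` column
counts = Counter(ds["source"])
for source, n in counts.most_common(10):
    print(f"{source:>15} {n:,}")
```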

---

## How to Use (Hugging Face)

### Install

```bash
pip install datasets
```

### Load Dataset

```python
from datasets import load_dataset

dataset = load_dataset("nuhmanpk/dev-knowledge-base")

print(dataset["train"][0])
```
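
The full download is roughly 500 MB, so if you only want to sample the data, streaming mode avoids downloading everything up front:

```python
from datasets import load_dataset

# Stream the train split instead of materializing the full download
stream = load_dataset("nuhmanpk/dev-knowledge-base", split="train", streaming=True)

for i, row in enumerate(stream):
    print(row["source"], row["title"])
    if i == 4:
        break
```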

---

## Example Use Cases

### 1. Semantic Search

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# `dataset` is the DatasetDict loaded in the "Load Dataset" example above
model = SentenceTransformer("all-MiniLM-L6-v2")

# Slicing a Dataset returns a dict of columns, so take the "content" column directly
docs = dataset["train"][:1000]["content"]
embeddings = model.encode(docs, normalize_embeddings=True)

query = "how to build api with fastapi"
q_emb = model.encode([query], normalize_embeddings=True)

# With normalized embeddings, the dot product equals cosine similarity
scores = np.dot(embeddings, q_emb.T).squeeze()
print(docs[scores.argmax()])
```

---

### 2. RAG Pipeline

```text
User Query → Embed → Vector DB → Retrieve → LLM → Answer
```

Use with:

* FAISS
* Qdrant
* Pinecone
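
A minimal sketch of the retrieval half of that pipeline with FAISS (assumes `faiss-cpu` and `sentence-transformers` are installed; the subset size and embedding model are illustrative choices, not part of the dataset):

```python
import faiss
import numpy as np
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
ds = load_dataset("nuhmanpk/dev-knowledge-base", split="train").select(range(5000))

# Embed the chunks and build an inner-product index; vectors are L2-normalized,
# so inner product equals cosine similarity
emb = model.encode(ds["content"], normalize_embeddings=True)
index = faiss.IndexFlatIP(emb.shape[1])
index.add(np.asarray(emb, dtype="float32"))

def retrieve(query: str, k: int = 3) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q, dtype="float32"), k)
    return [ds[int(i)]["content"] for i in ids[0]]

# The retrieved chunks would then be passed to an LLM as context
for chunk in retrieve("how do I define a route in fastapi"):
    print(chunk[:120], "...")
```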

---

### 3. Fine-tuning

Convert to instruction format:

```json
{
  "instruction": "Explain JWT authentication",
  "input": "",
  "output": "<documentation chunk>"
}
```
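
One possible way to do that conversion (the instruction template below is illustrative; the dataset ships raw documentation chunks only, so instructions have to be templated or generated separately):

```python
import json
from datasets import load_dataset

ds = load_dataset("nuhmanpk/dev-knowledge-base", split="train")

def to_instruction(row: dict) -> dict:
    # Hypothetical template: ask the model to explain the documented topic
    return {
        "instruction": f"Explain the following topic from the {row['source']} documentation: {row['title']}",
        "input": "",
        "output": row["content"],
    }

with open("dev_kb_instructions.jsonl", "w", encoding="utf-8") as f:
    for row in ds.select(range(1000)):
        f.write(json.dumps(to_instruction(row), ensure_ascii=False) + "\n")
```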

---

### 4. Developer Chatbot

Build:

* AI coding assistant
* StackOverflow-style search
* Internal dev knowledge system

---

## Data Processing Pipeline

* Async crawling with rate limiting
* HTML parsing (BeautifulSoup)
* Navigation/content filtering
* Chunking (~800 chars; see the sketch below)
* Cleaning & binary removal
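
The crawler itself is not bundled with the dataset, but the chunking step is easy to approximate. A rough sketch of ~800-character chunking (boundary handling here is illustrative, not the precise logic used to build the dataset):

```python
def chunk_text(text: str, size: int = 800) -> list[str]:
    """Split cleaned page text into ~`size`-character chunks, breaking on whitespace."""
    chunks, start = [], 0
    while start < len(text):
        end = min(start + size, len(text))
        # Prefer to cut at the last whitespace inside the window
        if end < len(text):
            cut = text.rfind(" ", start, end)
            if cut > start:
                end = cut
        chunks.append(text[start:end].strip())
        start = end
    return chunks

page = "FastAPI is a modern, fast web framework for building APIs with Python. " * 40
print(len(chunk_text(page)), "chunks")
```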

Crawler implementation:

---

## Limitations

* Some duplicate content may exist
* Chunk-level context only (not full pages)
* No semantic labeling yet
* Some sources are larger than others

---

## Future Improvements

* Deduplication
* Better chunking (semantic splitting)
* Q/A generation
* Code extraction
* Metadata enrichment

---

## License

This dataset is built from publicly available documentation.
Refer to individual sources for licensing.

---

## Author

https://github.com/nuhmanpk

---

## Quick Example

```python
from datasets import load_dataset

ds = load_dataset("nuhmanpk/dev-knowledge-base")

for row in ds["train"].select(range(3)):
    print(row["source"], "→", row["content"][:150])
```

---

## Summary

A large, structured, and practical dataset for building developer-focused AI systems, from code assistants to full RAG pipelines.

---