---
license: mit
dataset_info:
  features:
  - name: title
    dtype: string
  - name: source
    dtype: string
  - name: url
    dtype: string
  - name: category
    dtype: string
  - name: language
    dtype: string
  - name: content
    dtype: string
  - name: chunk_id
    dtype: int64
  - name: chunk_length
    dtype: int64
  - name: last_updated
    dtype: string
  splits:
  - name: train
    num_bytes: 401051216
    num_examples: 426107
  - name: test
    num_bytes: 941198
    num_examples: 1000
  download_size: 180107389
  dataset_size: 401992414
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
task_categories:
- question-answering
- summarization
- text-generation
language:
- en
tags:
- code
pretty_name: DevBase
size_categories:
- 100K<n<1M
---

# DevBase

---

### 4. Developer Chatbot

Build:

* An AI coding assistant
* StackOverflow-style search
* An internal dev knowledge system

---

## Data Processing Pipeline

* Async crawling with rate limiting
* HTML parsing (BeautifulSoup)
* Navigation/content filtering
* Chunking (~800 characters)
* Cleaning and binary removal

---

## Limitations

* Some duplicate content may exist
* Chunk-level context only (not full pages)
* No semantic labeling yet
* Source coverage is uneven; some sources contribute far more chunks than others

---

## Future Improvements

* Deduplication
* Better chunking (semantic splitting)
* Q/A generation
* Code extraction
* Metadata enrichment

---

## License

This dataset is built from publicly available documentation. Refer to the individual sources for their licensing terms.

---

## Author

https://github.com/nuhmanpk

---

## Quick Example

```python
from datasets import load_dataset

# Load both splits of the dataset from the Hugging Face Hub
ds = load_dataset("nuhmanpk/dev-knowledge-base")

# Preview the first three training chunks
for row in ds["train"].select(range(3)):
    print(row["source"], "→", row["content"][:150])
```

---

## Summary

A large, structured, and practical dataset for building developer-focused AI systems, from code assistants to full RAG pipelines.

---
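The pipeline above parses HTML with BeautifulSoup and filters out navigation elements. As a self-contained illustration of that filtering idea, here is a stdlib-only sketch; the `ContentExtractor` class, the set of skipped tags, and the `extract_text` helper are all assumptions for this example, not the dataset's actual crawler code:

```python
from html.parser import HTMLParser


class ContentExtractor(HTMLParser):
    """Collect visible page text, skipping navigation/boilerplate blocks."""

    SKIP = {"nav", "header", "footer", "script", "style", "aside"}

    def __init__(self):
        super().__init__()
        self.depth = 0   # nesting depth inside skipped tags
        self.parts = []  # collected text fragments

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        # Keep text only when we are not inside any skipped block
        if self.depth == 0 and data.strip():
            self.parts.append(data.strip())


def extract_text(html: str) -> str:
    """Return the main visible text of an HTML page."""
    parser = ContentExtractor()
    parser.feed(html)
    return " ".join(parser.parts)
```

A real crawler would use BeautifulSoup's tag removal for the same effect; the stack-depth counter here simply handles nested skipped tags.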
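The ~800-character chunking step could look like the following minimal sketch. This is an assumption about the approach, not the dataset's actual implementation; the `chunk_text` name and the paragraph-packing strategy are illustrative:

```python
def chunk_text(text: str, max_len: int = 800) -> list[str]:
    """Greedily pack whole paragraphs into chunks of at most max_len chars."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current = ""
    for para in paragraphs:
        # Start a new chunk if adding this paragraph would overflow
        if current and len(current) + len(para) + 2 > max_len:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    # Hard-split any single paragraph that alone exceeds max_len
    final: list[str] = []
    for chunk in chunks:
        while len(chunk) > max_len:
            final.append(chunk[:max_len])
            chunk = chunk[max_len:]
        final.append(chunk)
    return final
```

Splitting on paragraph boundaries first keeps chunks semantically coherent, which matters for the "better chunking (semantic splitting)" improvement noted above.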
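For the developer-chatbot and search use cases, a retriever ranks content chunks against a user query. A toy keyword-overlap scorer is sketched below; the sample rows are made up, and a production RAG system would use embedding similarity instead:

```python
def score(query: str, doc: str) -> float:
    """Fraction of query terms that appear in the document (case-insensitive)."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)


def top_k(query: str, rows: list[dict], k: int = 3) -> list[dict]:
    """Return the k chunk rows with the highest keyword overlap."""
    return sorted(rows, key=lambda r: score(query, r["content"]), reverse=True)[:k]
```

Usage with hypothetical rows shaped like the dataset's records:

```python
rows = [
    {"content": "python asyncio event loop"},
    {"content": "rust borrow checker"},
]
best = top_k("python event loop", rows, k=1)[0]
```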