---
tags:
- embeddings
- vector-database
- information-retrieval
- stackoverflow
- question-answering
- semantic-search
pretty_name: StackOverflow Vector Dataset
license: cc-by-sa-4.0
---

# Dataset Summary

This dataset contains the vector-retrieval component of Stack2Graph, a retrieval-oriented representation of Stack Overflow content introduced in the paper *Stack2Graph: A Structured Knowledge Representation of Stack Overflow Data for Retrieval-based Question Answering*.

The vector dataset is derived from Stack Overflow questions that match a supported programming-language tag set. It is designed for dense and sparse retrieval workflows and can be exported from the local preprocessing caches, directly during embedding, or from an existing Qdrant vector index.

As described in the paper, Stack2Graph combines a knowledge graph with a vector database. This Hugging Face dataset card covers only the vector-dataset artifact; the complementary knowledge-graph artifact is documented separately.

# Repository Layout

The uploaded repository stores compressed archive batches for the vector dataset under the `vector_dataset` directory:

```text
README.md
vector_dataset/
  archive_manifest.json
  part_0.tar.gz
  part_1.tar.gz
  part_2.tar.gz
  ...
```

Each archive contains a subset of the exported Parquet shards and may also include the `dataset_manifest.json` file produced by the exporter.

The `archive_manifest.json` file records:

- the archive batch size
- the number of archive files
- the exact export files included in each archive
- a source manifest used to detect whether staged archives still match the current vector export
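
After fetching the repository (for example with `huggingface_hub.snapshot_download(repo_id=..., repo_type="dataset")`), the archive batches can be unpacked with the standard library. The sketch below is self-contained: it fabricates one tiny `part_0.tar.gz` in a temporary directory so the extraction loop can run without a download; the file names inside it are placeholders.

```python
import tarfile
import tempfile
from pathlib import Path

# Stand-in for a downloaded dataset snapshot: one small archive batch.
root = Path(tempfile.mkdtemp())
archive_dir = root / "vector_dataset"
archive_dir.mkdir()

shard = root / "part-00000.parquet"
shard.write_bytes(b"placeholder parquet bytes")
with tarfile.open(archive_dir / "part_0.tar.gz", "w:gz") as tar:
    tar.add(shard, arcname="question_metadata/python/part-00000.parquet")

# Extract every archive batch into a single export directory.
export_dir = root / "export"
for archive in sorted(archive_dir.glob("part_*.tar.gz")):
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(export_dir)

print(sorted(p.relative_to(export_dir).as_posix()
             for p in export_dir.rglob("*.parquet")))
# ['question_metadata/python/part-00000.parquet']
```

Extracting all batches into one directory reassembles the export layout described in the next section.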

# Export Layout

The underlying vector export is organized by category and programming language.

When parent-child indexing is enabled, the exported layout follows this pattern:

```text
question_metadata/<language>/part-*.parquet
chunk_records/<language>/part-*.parquet
dataset_manifest.json
```

When parent-child indexing is disabled, the export instead uses:

```text
question_records/<language>/part-*.parquet
dataset_manifest.json
```
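
Per-language shards can be discovered with a simple glob over this layout. A minimal stdlib sketch; the directory tree and file names here are fabricated placeholders matching the pattern above:

```python
from pathlib import Path
import tempfile

# Fabricate the parent-child export layout with empty placeholder files.
export_dir = Path(tempfile.mkdtemp())
for category in ("question_metadata", "chunk_records"):
    for language in ("python", "rust"):
        shard_dir = export_dir / category / language
        shard_dir.mkdir(parents=True)
        (shard_dir / "part-00000.parquet").touch()
(export_dir / "dataset_manifest.json").touch()

# Group the chunk-record shards (the main retrieval rows) by language.
shards_by_language = {}
for shard in sorted(export_dir.glob("chunk_records/*/part-*.parquet")):
    shards_by_language.setdefault(shard.parent.name, []).append(shard.name)

print(shards_by_language)
# {'python': ['part-00000.parquet'], 'rust': ['part-00000.parquet']}
```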

# Record Types

## `question_metadata`

Question metadata shards contain one row per retained question and provide the canonical textual payload used to interpret dense and sparse vector rows.

Columns:

- `question_id`
- `language`
- `question_title`
- `question_full_text`
- `dense_text`
- `tags`

## `question_records`

Question record shards contain one row per retained question when parent-child indexing is disabled.

Columns:

- `question_id`
- `language`
- `question_title`
- `question_full_text`
- `dense_text`
- `tags`
- `dense_vector`
- `sparse_indices`
- `sparse_values`
- `export_source`

## `chunk_records`

Chunk record shards contain one row per child chunk when parent-child indexing is enabled.

Columns:

- `chunk_id`
- `question_id`
- `language`
- `chunk_index`
- `chunk_text`
- `sparse_text`
- `dense_vector`
- `sparse_indices`
- `sparse_values`
- `export_source`

# Embedding and Sparse Features

The current pipeline combines:

- dense embeddings from `nomic-ai/nomic-embed-code`
- sparse lexical features derived from `BAAI/bge-m3`

Dense vectors are stored in the `dense_vector` column when they are available in the export source.

Sparse vectors are stored in split form:

- `sparse_indices`
- `sparse_values`

This split representation is designed to be Parquet-friendly and easy to reconstruct into sparse retrieval structures in downstream systems.
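
For example, a row's `sparse_indices`/`sparse_values` pair can be rebuilt into a dictionary and scored against a query with a sparse dot product. A minimal pure-Python sketch; the index and weight numbers are made up:

```python
def to_sparse_dict(indices, values):
    """Rebuild a {token_index: weight} map from the split Parquet columns."""
    return dict(zip(indices, values))

def sparse_dot(query, document):
    """Dot product over the token indices the two sparse vectors share."""
    return sum(weight * document[idx]
               for idx, weight in query.items() if idx in document)

# Made-up sparse rows, shaped like `sparse_indices` / `sparse_values`.
doc = to_sparse_dict([3, 17, 512], [0.8, 0.3, 0.5])
query = to_sparse_dict([17, 99, 512], [1.0, 0.2, 0.4])

print(sparse_dot(query, doc))  # 1.0*0.3 + 0.4*0.5 = 0.5
```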

# Export Provenance

The `export_source` column indicates where a given vector row came from:

- `embed`: written directly during the embedding run
- `qdrant`: exported from an existing Qdrant collection
- `cache`: exported from local preprocessing caches

Important note:

- exports from `embed` and `qdrant` include dense vectors
- exports from `cache` may omit dense vectors and therefore set `dense_vector` to `null`
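
Downstream code should therefore guard on `dense_vector` before dense retrieval. A minimal sketch over rows represented as plain dictionaries; the example rows are fabricated:

```python
rows = [
    {"question_id": 1, "export_source": "embed", "dense_vector": [0.1, 0.9]},
    {"question_id": 2, "export_source": "cache", "dense_vector": None},
    {"question_id": 3, "export_source": "qdrant", "dense_vector": [0.7, 0.2]},
]

# Keep only rows usable for dense retrieval; cache exports may lack vectors.
dense_ready = [row for row in rows if row["dense_vector"] is not None]

print([row["question_id"] for row in dense_ready])  # [1, 3]
```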

# Parent-Child Indexing

The default vector pipeline uses parent-child indexing.

In this mode:

- each question remains available as `question_metadata`
- each question body is additionally split into overlapping child chunks
- chunk records are the main retrieval rows for vector search
- `question_id` links each chunk back to its parent question

The current default chunking parameters in the repository are:

- child chunk tokens: `384`
- child chunk stride: `256`

These values are configurable at export time through the repository pipeline.
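
The defaults imply 384-token windows advanced by a 256-token stride, so neighboring chunks share 128 tokens. This can be sketched as a simple sliding window; the repository's actual tokenizer is not shown here, so placeholder string tokens stand in for real tokenizer output:

```python
def chunk_tokens(tokens, size=384, stride=256):
    """Split a token list into overlapping windows of `size`, stepping by `stride`."""
    if len(tokens) <= size:
        return [tokens]
    chunks = []
    start = 0
    while start < len(tokens):
        chunks.append(tokens[start:start + size])
        if start + size >= len(tokens):
            break
        start += stride
    return chunks

# Placeholder tokens stand in for real tokenizer output.
tokens = [f"tok{i}" for i in range(900)]
chunks = chunk_tokens(tokens)

print([len(c) for c in chunks])          # [384, 384, 384, 132]
print(chunks[0][256:] == chunks[1][:128])  # True: 128 shared tokens
```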

# Supported Language Tags

The current pipeline retains questions when at least one tag matches this set:

- `java`, `c`, `c++`, `python`, `c#`, `javascript`, `assembly`, `php`, `perl`, `ruby`, `vb.net`, `swift`, `r`, `objective-c`, `go`, `sql`, `matlab`, `typescript`, `scala`, `kotlin`, `rust`, `lua`, `haskell`, `cobol`, `fortran`, `lisp`, `erlang`, `elixir`, `f#`, `dart`, `shell`, `bash`, `powershell`, `css`, `html`, `.net`, `julia`, `prolog`, `abap`

The `language` field in export rows is the normalized language bucket used by the repository pipeline.
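
The retention rule amounts to a set intersection between a question's tags and the supported set. A minimal sketch with a truncated tag set and fabricated questions:

```python
SUPPORTED_TAGS = {"python", "rust", "go", "sql"}  # truncated; see the full list above

questions = [
    {"question_id": 1, "tags": ["python", "pandas"]},
    {"question_id": 2, "tags": ["photoshop"]},
    {"question_id": 3, "tags": ["sql", "postgresql", "go"]},
]

# Keep a question when at least one of its tags is a supported language tag.
retained = [q for q in questions if SUPPORTED_TAGS.intersection(q["tags"])]

print([q["question_id"] for q in retained])  # [1, 3]
```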

# Construction Pipeline

The dataset is built from the Stack Overflow XML dump through the following stages:

1. XML data is ingested into SQL tables.
2. Questions tagged with at least one supported programming-language tag are selected.
3. Question text is cleaned and normalized.
4. Sparse lexical features are generated.
5. Optional parent-child chunk records are generated.
6. Dense embeddings are computed and uploaded to Qdrant.
7. The final vector dataset is exported as Parquet shards and packaged into `.tar.gz` archive batches for Hugging Face upload.

The main repository components involved are:

- `vectorDatabase/getQuestions.py`
- `vectorDatabase/preprocessing.py`
- `vectorDatabase/embed.py`
- `vectorDatabase/export_vector_dataset.py`
- `vectorDatabase/upload_hf_vector_dataset.py`

# Intended Uses

This dataset is intended for:

- retrieval-augmented generation
- semantic search over Stack Overflow questions
- hybrid dense+sparse retrieval
- chunk-level code and technical-text retrieval
- evaluation of retrieval pipelines over programming-language-specific Stack Overflow subsets
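
One common hybrid pattern over these records combines a dense cosine score with the sparse dot product in a weighted sum. The sketch below uses fabricated vectors and an assumed 50/50 weighting (`alpha` is a tuning knob, not a value prescribed by the pipeline):

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def sparse_dot(query, document):
    """Dot product over shared sparse token indices."""
    return sum(w * document[i] for i, w in query.items() if i in document)

def hybrid_score(query, doc, alpha=0.5):
    """Weighted sum of dense and sparse scores; alpha is an assumed knob."""
    return (alpha * cosine(query["dense"], doc["dense_vector"])
            + (1 - alpha) * sparse_dot(query["sparse"], doc["sparse"]))

# Fabricated query and document rows.
query = {"dense": [1.0, 0.0], "sparse": {17: 1.0}}
docs = [
    {"id": "a", "dense_vector": [1.0, 0.0], "sparse": {17: 0.5}},
    {"id": "b", "dense_vector": [0.0, 1.0], "sparse": {17: 0.9}},
]

ranked = sorted(docs, key=lambda d: hybrid_score(query, d), reverse=True)
print([d["id"] for d in ranked])  # ['a', 'b']
```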

# Limitations

- The dataset only includes questions that match the repository's supported language-tag list.
- A question may appear in multiple language buckets if it carries multiple supported language tags in earlier pipeline stages.
- Dense vectors may be missing when the dataset is exported from local caches rather than from embedding output or Qdrant.
- Chunk boundaries are heuristic and token-window-based, not semantic.
- Sparse features are lexical approximations and should not be treated as full inverted-index statistics.
- The export inherits the biases, moderation artifacts, language imbalance, and temporal drift of Stack Overflow content.
- Textual content is derived from Stack Overflow posts and may contain markup artifacts, noisy formatting, or incomplete context.

# Licensing

This dataset is distributed under the `CC-BY-SA-4.0` license.

If you redistribute derived artifacts or use this dataset in downstream resources, you must preserve the attribution and share-alike requirements of `CC-BY-SA-4.0`.

# Citation

If you use this dataset, cite:

TODO