Dataset Summary
This dataset contains the vector-retrieval component of Stack2Graph, a retrieval-oriented representation of Stack Overflow content introduced in the paper Stack2Graph: A Structured Knowledge Representation of Stack Overflow Data for Retrieval-based Question Answering.
The vector dataset is derived from Stack Overflow questions that match a supported programming-language tag set. It is designed for dense and sparse retrieval workflows and can be exported from local preprocessing caches, directly during the embedding run, or from an existing Qdrant vector index.
As described in the paper, Stack2Graph combines a knowledge graph with a vector database. This Hugging Face dataset card covers only the vector-dataset artifact. The complementary knowledge-graph artifact is documented separately.
Repository Layout
The uploaded repository stores compressed archive batches for the vector dataset under the vector_dataset directory.
README.md
vector_dataset/
    archive_manifest.json
    part_0.tar.gz
    part_1.tar.gz
    part_2.tar.gz
    ...
Each archive contains a subset of the exported Parquet shards and may also include the dataset_manifest.json file produced by the exporter.
The archive_manifest.json file records:
- the archive batch size
- the number of archive files
- the exact export files included in each archive
- a source manifest used to detect whether staged archives still match the current vector export
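A staged manifest of this shape can be inspected with the standard library. The field names in the sketch below are hypothetical stand-ins chosen for illustration; the exporter's exact schema is not specified here.

```python
import json

# Hypothetical archive_manifest.json payload -- the real field names produced
# by the exporter may differ. This only illustrates the kind of bookkeeping
# the manifest records: batch size, archive count, per-archive file lists.
manifest_text = json.dumps({
    "archive_batch_size": 2,
    "archive_count": 3,
    "archives": {
        "part_0.tar.gz": ["chunk_records/python/part-00000.parquet"],
        "part_1.tar.gz": ["chunk_records/java/part-00000.parquet"],
        "part_2.tar.gz": ["question_metadata/python/part-00000.parquet"],
    },
})

manifest = json.loads(manifest_text)

# List staged archives and check each one maps to concrete export files.
staged_archives = sorted(manifest["archives"])
```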
Export Layout
The underlying vector export is organized by category and programming language.
When parent-child indexing is enabled, the exported layout follows this pattern:
question_metadata/<language>/part-*.parquet
chunk_records/<language>/part-*.parquet
dataset_manifest.json
When parent-child indexing is disabled, the export instead uses:
question_records/<language>/part-*.parquet
dataset_manifest.json
Record Types
question_metadata
Question metadata shards contain one row per retained question and provide the canonical textual payload used to interpret dense and sparse vector rows.
Columns:
question_id, language, question_title, question_full_text, dense_text, tags
question_records
Question record shards contain one row per retained question when parent-child indexing is disabled.
Columns:
question_id, language, question_title, question_full_text, dense_text, tags, dense_vector, sparse_indices, sparse_values, export_source
chunk_records
Chunk record shards contain one row per child chunk when parent-child indexing is enabled.
Columns:
chunk_id, question_id, language, chunk_index, chunk_text, sparse_text, dense_vector, sparse_indices, sparse_values, export_source
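Because chunk rows carry only the chunk-level text, a typical consumption pattern is to join retrieved chunks back to their parent question's metadata via question_id. A minimal pandas sketch with illustrative values:

```python
import pandas as pd

# Tiny stand-in frames mirroring the column layouts above (values are illustrative).
question_metadata = pd.DataFrame({
    "question_id": [101],
    "language": ["python"],
    "question_title": ["How do I reverse a list?"],
})
chunk_records = pd.DataFrame({
    "chunk_id": ["101-0", "101-1"],
    "question_id": [101, 101],
    "chunk_index": [0, 1],
    "chunk_text": ["You can call list.reverse()", "or use slicing: lst[::-1]"],
})

# Attach the parent question's title to each retrieved chunk row.
hits = chunk_records.merge(question_metadata, on="question_id", how="left")
```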
Embedding and Sparse Features
The current pipeline combines:
- dense embeddings from nomic-ai/nomic-embed-code
- sparse lexical features derived from BAAI/bge-m3
Dense vectors are stored in the dense_vector column when they are available in the export source.
Sparse vectors are stored in split form:
- sparse_indices
- sparse_values
This split representation is designed to be Parquet-friendly and easy to reconstruct into sparse retrieval structures in downstream systems.
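For example, one split row can be rebuilt into a SciPy CSR row. The dimensionality below is an assumption for illustration; the true dimension is determined by the BGE-M3 sparse vocabulary.

```python
import numpy as np
from scipy.sparse import csr_matrix

# One exported sparse row in split form (illustrative values).
sparse_indices = [7, 19, 423]
sparse_values = [0.42, 0.13, 0.88]
vocab_size = 1000  # assumption; the real dimension comes from the BGE-M3 vocabulary

# Rebuild a 1 x vocab_size CSR row for use in sparse retrieval structures.
row = csr_matrix(
    (
        np.asarray(sparse_values, dtype=np.float32),
        (np.zeros(len(sparse_indices), dtype=np.int64), np.asarray(sparse_indices)),
    ),
    shape=(1, vocab_size),
)
```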
Export Provenance
The export_source column indicates where a given vector row came from:
- embed: written directly during the embedding run
- qdrant: exported from an existing Qdrant collection
- cache: exported from local preprocessing caches
Important note:
- exports from embed and qdrant include dense vectors
- exports from cache may omit dense vectors and therefore set dense_vector to null
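Downstream code should therefore guard against null dense vectors before dense retrieval. A small pandas sketch with illustrative rows:

```python
import pandas as pd

# Illustrative rows: cache-sourced exports may carry dense_vector = None.
rows = pd.DataFrame({
    "question_id": [1, 2, 3],
    "export_source": ["embed", "qdrant", "cache"],
    "dense_vector": [[0.1, 0.2], [0.3, 0.4], None],
})

# Keep only rows that are usable for dense retrieval.
dense_ready = rows[rows["dense_vector"].notna()]
```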
Parent-Child Indexing
The default vector pipeline uses parent-child indexing.
In this mode:
- each question remains available as question_metadata
- each question body is additionally split into overlapping child chunks
- chunk records are the main retrieval rows for vector search
- question_id links each chunk back to its parent question
The current default chunking parameters in the repository are:
- child chunk tokens: 384
- child chunk stride: 256
These values are configurable at export time through the repository pipeline.
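The windowing itself can be sketched as a simple token-slice loop. This illustrates the stated parameters only; it is not the repository's exact implementation, and the real pipeline's tokenizer and boundary handling may differ.

```python
def chunk_tokens(tokens, window=384, stride=256):
    """Split a token sequence into overlapping child chunks.

    Each chunk holds up to `window` tokens; consecutive chunks start
    `stride` tokens apart, so adjacent chunks overlap by window - stride.
    """
    chunks = []
    start = 0
    while start < len(tokens):
        chunks.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break  # the final window already covers the tail
        start += stride
    return chunks

# A 1000-token body yields 4 chunks with 128 overlapping tokens each.
tokens = list(range(1000))
chunks = chunk_tokens(tokens)
```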
Supported Language Tags
The current pipeline retains questions when at least one tag matches this set:
java, c, c++, python, c#, javascript, assembly, php, perl, ruby, vb.net, swift, r, objective-c, go, sql, matlab, typescript, scala, kotlin, rust, lua, haskell, cobol, fortran, lisp, erlang, elixir, f#, dart, shell, bash, powershell, css, html, .net, julia, prolog, abap
The language field in export rows is the normalized language bucket used by the repository pipeline.
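The retention rule reduces to a set-membership check over a question's tags. A minimal sketch using the tag set above:

```python
# Supported tag set, as listed in this card.
SUPPORTED_TAGS = {
    "java", "c", "c++", "python", "c#", "javascript", "assembly", "php",
    "perl", "ruby", "vb.net", "swift", "r", "objective-c", "go", "sql",
    "matlab", "typescript", "scala", "kotlin", "rust", "lua", "haskell",
    "cobol", "fortran", "lisp", "erlang", "elixir", "f#", "dart", "shell",
    "bash", "powershell", "css", "html", ".net", "julia", "prolog", "abap",
}

def is_retained(question_tags):
    """A question is kept if at least one of its tags is in the supported set."""
    return any(tag in SUPPORTED_TAGS for tag in question_tags)
```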
Construction Pipeline
The dataset is built from the Stack Overflow XML dump through the following stages:
- XML data is ingested into SQL tables.
- Questions tagged with at least one supported programming-language tag are selected.
- Question text is cleaned and normalized.
- Sparse lexical features are generated.
- Optional parent-child chunk records are generated.
- Dense embeddings are computed and uploaded to Qdrant.
- The final vector dataset is exported as Parquet shards and packaged into .tar.gz archive batches for Hugging Face upload.
The main repository components involved are:
- vectorDatabase/getQuestions.py
- vectorDatabase/preprocessing.py
- vectorDatabase/embed.py
- vectorDatabase/export_vector_dataset.py
- vectorDatabase/upload_hf_vector_dataset.py
Intended Uses
This dataset is intended for:
- retrieval-augmented generation
- semantic search over Stack Overflow questions
- hybrid dense+sparse retrieval
- chunk-level code and technical-text retrieval
- evaluation of retrieval pipelines over programming-language-specific Stack Overflow subsets
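As one illustration of hybrid dense+sparse retrieval, per-candidate scores from the two channels can be fused with a weighted sum. This card does not prescribe a fusion method; reciprocal rank fusion or engine-side hybrid search (e.g. in Qdrant) are common alternatives.

```python
import numpy as np

def hybrid_scores(dense_scores, sparse_scores, alpha=0.5):
    """Fuse per-candidate dense and sparse scores with a weighted sum.

    alpha = 1.0 means dense-only ranking, alpha = 0.0 sparse-only.
    Assumes both score arrays are already on comparable scales.
    """
    d = np.asarray(dense_scores, dtype=np.float64)
    s = np.asarray(sparse_scores, dtype=np.float64)
    return alpha * d + (1.0 - alpha) * s

# Two candidates: the first is dense-strong, the second sparse-strong.
scores = hybrid_scores([0.9, 0.2], [0.1, 0.8], alpha=0.7)
```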
Limitations
- The dataset only includes questions that match the repository's supported language-tag list.
- A question may appear in multiple language buckets if it carries multiple supported language tags in earlier pipeline stages.
- Dense vectors may be missing when the dataset is exported from local caches rather than from embedding output or Qdrant.
- Chunk boundaries are heuristic and token-window-based, not semantic.
- Sparse features are lexical approximations and should not be treated as full inverted-index statistics.
- The export inherits the biases, moderation artifacts, language imbalance, and temporal drift of Stack Overflow content.
- Textual content is derived from Stack Overflow posts and may contain markup artifacts, noisy formatting, or incomplete context.
Licensing
This dataset is distributed under the CC-BY-SA-4.0 license.
If you redistribute derived artifacts or use this dataset in downstream resources, you should preserve the attribution and share-alike requirements of CC-BY-SA-4.0.
Citation
If you use this dataset, cite:
TODO