Dataset Summary

This dataset contains the vector-retrieval component of Stack2Graph, a retrieval-oriented representation of Stack Overflow content introduced in the paper Stack2Graph: A Structured Knowledge Representation of Stack Overflow Data for Retrieval-based Question Answering.

The vector dataset is derived from Stack Overflow questions that match a supported programming-language tag set. It is designed for dense and sparse retrieval workflows and can be exported from the local preprocessing caches, directly during embedding, or from an existing Qdrant vector index.

As described in the paper, Stack2Graph combines a knowledge graph with a vector database. This Hugging Face dataset card covers only the vector-dataset artifact. The complementary knowledge-graph artifact is documented separately.

Repository Layout

The uploaded repository stores compressed archive batches for the vector dataset under the vector_dataset directory.

README.md
vector_dataset/
  archive_manifest.json
  part_0.tar.gz
  part_1.tar.gz
  part_2.tar.gz
  ...

Each archive contains a subset of the exported Parquet shards and may also include the dataset_manifest.json file produced by the exporter.

The archive_manifest.json file records:

  • the archive batch size
  • the number of archive files
  • the exact export files included in each archive
  • a source manifest used to detect whether staged archives still match the current vector export
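As a sketch of how a consumer might use this manifest, the helper below locates the archive that contains a given export file. The key names (archive_batch_size, archives, and so on) are hypothetical placeholders for illustration; the real archive_manifest.json may use different field names.

```python
import json

def archive_for_file(manifest, export_file):
    """Return the name of the archive containing export_file, or None."""
    for archive_name, files in manifest.get("archives", {}).items():
        if export_file in files:
            return archive_name
    return None

# Hypothetical manifest content; real key names may differ.
manifest = {
    "archive_batch_size": 1,
    "archive_count": 2,
    "archives": {
        "part_0.tar.gz": ["chunk_records/python/part-0.parquet"],
        "part_1.tar.gz": ["chunk_records/java/part-0.parquet"],
    },
}
```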

Export Layout

The underlying vector export is organized by category and programming language.

When parent-child indexing is enabled, the exported layout follows this pattern:

question_metadata/<language>/part-*.parquet
chunk_records/<language>/part-*.parquet
dataset_manifest.json

When parent-child indexing is disabled, the export instead uses:

question_records/<language>/part-*.parquet
dataset_manifest.json
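A small sketch of how the two layouts above translate into glob patterns for shard discovery; the function and its parent_child flag are illustrative, not part of the repository pipeline.

```python
from fnmatch import fnmatch

def shard_patterns(parent_child, language):
    """Glob patterns matching the export layout described above."""
    if parent_child:
        return [f"question_metadata/{language}/part-*.parquet",
                f"chunk_records/{language}/part-*.parquet"]
    return [f"question_records/{language}/part-*.parquet"]

# Check a shard path against the parent-child layout:
patterns = shard_patterns(True, "python")
```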

Record Types

question_metadata

Question metadata shards contain one row per retained question and provide the canonical textual payload used to interpret dense and sparse vector rows.

Columns:

  • question_id
  • language
  • question_title
  • question_full_text
  • dense_text
  • tags

question_records

Question record shards contain one row per retained question when parent-child indexing is disabled.

Columns:

  • question_id
  • language
  • question_title
  • question_full_text
  • dense_text
  • tags
  • dense_vector
  • sparse_indices
  • sparse_values
  • export_source

chunk_records

Chunk record shards contain one row per child chunk when parent-child indexing is enabled.

Columns:

  • chunk_id
  • question_id
  • language
  • chunk_index
  • chunk_text
  • sparse_text
  • dense_vector
  • sparse_indices
  • sparse_values
  • export_source
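Because chunk rows carry question_id, a retrieval hit on a chunk can be resolved back to its parent question. The sketch below shows the join on toy rows with a subset of the columns; the row values are invented for illustration.

```python
# Toy rows mirroring a subset of the documented columns.
question_metadata = [
    {"question_id": 101, "language": "python",
     "question_title": "How do I read a file?"},
]
chunk_records = [
    {"chunk_id": "101_0", "question_id": 101, "chunk_index": 0, "chunk_text": "..."},
    {"chunk_id": "101_1", "question_id": 101, "chunk_index": 1, "chunk_text": "..."},
]

# Index parent questions by question_id, then resolve chunks through it.
by_question = {row["question_id"]: row for row in question_metadata}

def parent_of(chunk):
    return by_question[chunk["question_id"]]
```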

Embedding and Sparse Features

The current pipeline combines:

  • dense embeddings from nomic-ai/nomic-embed-code
  • sparse lexical features derived from BAAI/bge-m3

Dense vectors are stored in the dense_vector column when they are available in the export source.

Sparse vectors are stored in split form:

  • sparse_indices
  • sparse_values

This split representation is designed to be Parquet-friendly and easy to reconstruct into sparse retrieval structures in downstream systems.
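One minimal way to reconstruct and score such split vectors downstream, using plain dictionaries rather than any particular sparse-matrix library:

```python
def to_sparse_dict(indices, values):
    """Rebuild a sparse vector from the split sparse_indices/sparse_values columns."""
    return dict(zip(indices, values))

def sparse_dot(query, doc):
    """Dot product over the indices shared by two sparse vectors."""
    return sum(weight * doc.get(idx, 0.0) for idx, weight in query.items())

doc = to_sparse_dict([3, 17, 42], [0.5, 1.2, 0.3])
query = to_sparse_dict([17, 99], [2.0, 1.0])
score = sparse_dot(query, doc)  # only index 17 overlaps: 1.2 * 2.0
```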

Export Provenance

The export_source column indicates where a given vector row came from:

  • embed: written directly during the embedding run
  • qdrant: exported from an existing Qdrant collection
  • cache: exported from local preprocessing caches

Important note:

  • exports from embed and qdrant include dense vectors
  • exports from cache may omit dense vectors and therefore set dense_vector to null
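Given this, downstream code should check for null dense vectors before building a dense index. A minimal sketch over toy rows (the values are invented):

```python
rows = [
    {"question_id": 1, "export_source": "embed",  "dense_vector": [0.1, 0.2]},
    {"question_id": 2, "export_source": "qdrant", "dense_vector": [0.3, 0.4]},
    {"question_id": 3, "export_source": "cache",  "dense_vector": None},
]

# Rows safe to load into a dense index.
dense_ready = [r for r in rows if r["dense_vector"] is not None]

# Cache-sourced rows that would need re-embedding before dense search.
needs_embedding = [r for r in rows
                   if r["export_source"] == "cache" and r["dense_vector"] is None]
```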

Parent-Child Indexing

The default vector pipeline uses parent-child indexing.

In this mode:

  • each question remains available as question_metadata
  • each question body is additionally split into overlapping child chunks
  • chunk records are the main retrieval rows for vector search
  • question_id links each chunk back to its parent question

The current default chunking parameters in the repository are:

  • child chunk tokens: 384
  • child chunk stride: 256

These values are configurable at export time through the repository pipeline.
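The overlapping-window scheme can be sketched as follows. This assumes "stride" means the step between consecutive chunk starts (so adjacent 384-token chunks overlap by 128 tokens); the repository may define it differently, and real chunking operates on tokenizer output rather than the integer stand-ins used here.

```python
def window_chunks(tokens, chunk_tokens=384, stride=256):
    """Split a token sequence into overlapping child chunks.

    Assumption: stride is the offset between chunk starts, giving a
    chunk_tokens - stride token overlap between neighbors.
    """
    chunks = []
    for start in range(0, len(tokens), stride):
        chunks.append(tokens[start:start + chunk_tokens])
        if start + chunk_tokens >= len(tokens):
            break  # last window already covers the tail
    return chunks

tokens = list(range(1000))  # stand-in for a tokenized question body
chunks = window_chunks(tokens)
```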

Supported Language Tags

The current pipeline retains questions when at least one tag matches this set:

  • java, c, c++, python, c#, javascript, assembly, php, perl, ruby, vb.net, swift, r, objective-c, go, sql, matlab, typescript, scala, kotlin, rust, lua, haskell, cobol, fortran, lisp, erlang, elixir, f#, dart, shell, bash, powershell, css, html, .net, julia, prolog, abap

The language field in export rows is the normalized language bucket used by the repository pipeline.
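The retention rule above amounts to a simple set-membership check, sketched here with the documented tag list (the function name is illustrative):

```python
SUPPORTED_TAGS = {
    "java", "c", "c++", "python", "c#", "javascript", "assembly", "php",
    "perl", "ruby", "vb.net", "swift", "r", "objective-c", "go", "sql",
    "matlab", "typescript", "scala", "kotlin", "rust", "lua", "haskell",
    "cobol", "fortran", "lisp", "erlang", "elixir", "f#", "dart", "shell",
    "bash", "powershell", "css", "html", ".net", "julia", "prolog", "abap",
}

def is_retained(tags):
    """A question is kept when at least one tag is in the supported set."""
    return any(t in SUPPORTED_TAGS for t in tags)
```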

Construction Pipeline

The dataset is built from the Stack Overflow XML dump through the following stages:

  1. XML data is ingested into SQL tables.
  2. Questions tagged with at least one supported programming-language tag are selected.
  3. Question text is cleaned and normalized.
  4. Sparse lexical features are generated.
  5. Optional parent-child chunk records are generated.
  6. Dense embeddings are computed and uploaded to Qdrant.
  7. The final vector dataset is exported as Parquet shards and packaged into .tar.gz archive batches for Hugging Face upload.
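Step 7 (batching shards into .tar.gz archives) can be sketched with the standard library as below. The function name and batch size are illustrative, not the repository's actual exporter code.

```python
import tarfile
import tempfile
from pathlib import Path

def package_batches(files, out_dir, batch_size=2):
    """Group export files into part_N.tar.gz archives of batch_size files each."""
    archives = []
    for n, start in enumerate(range(0, len(files), batch_size)):
        archive = Path(out_dir) / f"part_{n}.tar.gz"
        with tarfile.open(archive, "w:gz") as tar:
            for f in files[start:start + batch_size]:
                tar.add(f, arcname=Path(f).name)
        archives.append(archive)
    return archives

# Demo on throwaway stub files standing in for Parquet shards.
with tempfile.TemporaryDirectory() as d:
    files = []
    for i in range(3):
        p = Path(d) / f"part-{i}.parquet"
        p.write_bytes(b"stub")
        files.append(str(p))
    names = [a.name for a in package_batches(files, d, batch_size=2)]
```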

The main repository components involved are:

  • vectorDatabase/getQuestions.py
  • vectorDatabase/preprocessing.py
  • vectorDatabase/embed.py
  • vectorDatabase/export_vector_dataset.py
  • vectorDatabase/upload_hf_vector_dataset.py

Intended Uses

This dataset is intended for:

  • retrieval-augmented generation
  • semantic search over Stack Overflow questions
  • hybrid dense+sparse retrieval
  • chunk-level code and technical-text retrieval
  • evaluation of retrieval pipelines over programming-language-specific Stack Overflow subsets

Limitations

  • The dataset only includes questions that match the repository's supported language-tag list.
  • A question may appear in multiple language buckets if it carries multiple supported language tags in earlier pipeline stages.
  • Dense vectors may be missing when the dataset is exported from local caches rather than from embedding output or Qdrant.
  • Chunk boundaries are heuristic and token-window-based, not semantic.
  • Sparse features are lexical approximations and should not be treated as full inverted-index statistics.
  • The export inherits the biases, moderation artifacts, language imbalance, and temporal drift of Stack Overflow content.
  • Textual content is derived from Stack Overflow posts and may contain markup artifacts, noisy formatting, or incomplete context.

Licensing

This dataset is distributed under the CC-BY-SA-4.0 license.

If you redistribute derived artifacts or use this dataset in downstream resources, you should preserve the attribution and share-alike requirements of CC-BY-SA-4.0.

Citation

If you use this dataset, cite:

TODO
