
Parquet shards are incrementally uploaded under `data/` with the following columns:

  • repo_name: str
  • text: str

All repos were downloaded from https://huggingface.co/datasets/thepowerfuldeez/the-stack-v2-train-smol-ids-updated, then formatting, linting, and import sorting were run on every file:

  • Ruff + Black + ty for Python
  • Biome for JS / TS / HTML / CSS / JSON / GraphQL
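The per-language routing described above can be sketched as a small helper; the function name and extension lists are illustrative and not part of the dataset's actual pipeline:

```python
from pathlib import Path

# Hypothetical mapping from file extension to the tools described above
PYTHON_TOOLS = ["ruff", "black", "ty"]
BIOME_EXTS = {".js", ".ts", ".html", ".css", ".json", ".graphql"}

def pick_formatters(path: str) -> list[str]:
    """Return the formatter/linter stack that would handle this file."""
    suffix = Path(path).suffix.lower()
    if suffix == ".py":
        return PYTHON_TOOLS
    if suffix in BIOME_EXTS:
        return ["biome"]
    return []  # other file types are left untouched

print(pick_formatters("app/main.py"))   # ['ruff', 'black', 'ty']
print(pick_formatters("web/index.ts"))  # ['biome']
```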

Total token count: ~100B
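The ~100B figure can be sanity-checked with a rough bytes-per-token heuristic; the ratio of ~4 bytes of source text per token is an assumption for illustration, not a measured property of this dataset:

```python
def estimate_tokens(total_bytes: int, bytes_per_token: float = 4.0) -> float:
    """Rough token estimate assuming ~4 bytes of source text per token."""
    return total_bytes / bytes_per_token

# e.g. ~400 GB of text corresponds to roughly 100B tokens under this heuristic
approx = estimate_tokens(400 * 10**9)
print(f"~{approx / 1e9:.0f}B tokens")  # ~100B tokens
```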

Example:

```python
from datasets import load_dataset

# Stream rows instead of downloading all shards up front
ds = load_dataset(
    "thepowerfuldeez/the-stack-v2-train-smol-ids-updated-content",
    split="train",
    streaming=True,
)
for row in ds.take(3):
    print(row)
```