---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 417527081
    num_examples: 64226
  download_size: 128042399
  dataset_size: 417527081
---

# LangMap-TheStack-typescript-100M

Code finetuning dataset for **typescript**, streamed from [bigcode/the-stack](https://huggingface.co/datasets/bigcode/the-stack).

- **Target tokens**: 100,000,000
- **Tokenizer**: `allenai/OLMo-3-1025-7B`
- **Schema**: `{"text": [...]}` (sanitised source code)
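The 100M-token target suggests examples were streamed and accumulated until a token budget was reached. A minimal sketch of that accumulation logic, assuming a budget cutoff over streamed records (the helper name and the whitespace "tokenizer" below are illustrative stand-ins, not the actual pipeline or the OLMo tokenizer):

```python
from typing import Callable, Iterable, Iterator


def take_until_token_budget(
    records: Iterable[dict],
    count_tokens: Callable[[str], int],
    budget: int,
) -> Iterator[dict]:
    """Yield {"text": ...} records until the token budget is exhausted."""
    total = 0
    for record in records:
        total += count_tokens(record["text"])
        yield record
        if total >= budget:
            break


# Illustrative stand-in data and tokenizer (whitespace split, not OLMo's).
records = [
    {"text": "const x = 1;"},
    {"text": "let y = 2;"},
    {"text": "var z = 3;"},
]
taken = list(take_until_token_budget(records, lambda s: len(s.split()), budget=8))
print(len(taken))  # → 2 (the first two records reach the 8-token budget)
```

In the real pipeline the `count_tokens` callable would wrap the `allenai/OLMo-3-1025-7B` tokenizer and `records` would be the streamed typescript subset of the-stack.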