---
license: mit
language:
- en
tags:
- code
---
# Filtered StarCoder Dataset Mini

## Dataset Description

This dataset contains filtered and processed code samples from 10 popular programming languages: C, C++, C#, Go, Java, JavaScript, Python, Ruby, Scala, and TypeScript. The dataset was created by filtering source code based on quality metrics, removing outliers, and standardizing the format for machine learning and code analysis applications.

### Key Features

- **Cleaned and Filtered Code**: Samples have been processed to remove outliers in terms of line length and code size
- **Quality Metrics**: Each sample includes metadata about average line length and line count
- **Multi-language Support**: 10 programming languages represented in separate subsets
- **Consistent Format**: All samples follow the same Parquet structure for easy processing

### Dataset Size

The complete dataset is approximately 12GB in size. Individual language files vary in size, with the largest being C++ (2GB) and the smallest being Scala (665MB).

### Dataset Statistics

| Language   | Sample Count | Avg. Line Length | Avg. Line Count |
|------------|--------------|------------------|-----------------|
| C          | 1,752,078    | 22.54           | 74.52            |
| C++        | 1,769,333    | 23.51           | 103.56           |
| C#         | 1,763,508    | 25.77           | 51.53            |
| Go         | 1,751,120    | 20.68           | 81.79            |
| Java       | 1,779,659    | 25.48           | 64.59            |
| JavaScript | 1,718,133    | 23.30           | 51.22            |
| Python     | 1,764,099    | 26.51           | 66.16            |
| Ruby       | 1,756,771    | 22.31           | 33.86            |
| Scala      | 952,890      | 28.31           | 53.92            |
| TypeScript | 1,738,885    | 24.14           | 43.39            |

## Dataset Structure

The dataset is organized with separate Parquet files for each programming language:
- `c.parquet` - C language samples
- `cpp.parquet` - C++ language samples
- `c-sharp.parquet` - C# language samples
- `go.parquet` - Go language samples
- `java.parquet` - Java language samples
- `javascript.parquet` - JavaScript language samples
- `python.parquet` - Python language samples
- `ruby.parquet` - Ruby language samples
- `scala.parquet` - Scala language samples
- `typescript.parquet` - TypeScript language samples

Within each file, data is stored with the following schema:

```
- language: string (the programming language of the code sample)
- code: string (the complete code content)
- avg_line_length: float (average character count per line)
- line_count: integer (total number of lines in the code)
```

Each sample is stored as a row in the Parquet file with these four columns.

## How to Access the Dataset

### Using the Hugging Face `datasets` Library

This dataset is hosted on the Hugging Face Hub and can be easily accessed using the `datasets` library.

#### Install the Required Library

```bash
pip install datasets
```

#### Import Library

```python
from datasets import load_dataset
```

#### Load the Entire Dataset

```python
dataset = load_dataset(
    "jugalgajjar/Filtered-StarCoder-Dataset-Mini"
)
```

#### Load a Specific Language

```python
dataset = load_dataset(
    "jugalgajjar/Filtered-StarCoder-Dataset-Mini",
    data_files="scala.parquet"
)
```

#### Stream Data

```python
dataset = load_dataset(
    "jugalgajjar/Filtered-StarCoder-Dataset-Mini",
    data_files="scala.parquet",
    streaming=True
)
```

#### Access Data Content (After Downloading)

```python
try:
    for example in dataset["train"].take(5):
        print(example)
        print("-"*25)
except Exception as e:
    print(f"An error occurred: {e}")
```

### Manual Download

You can also manually download specific language files from the Hugging Face repository page:

1. Visit `https://huggingface.co/datasets/jugalgajjar/Filtered-StarCoder-Dataset-Mini`
2. Navigate to the "Files" tab
3. Click on the language file you want to download (e.g., `python.parquet`)
4. Use the download button to save the file locally
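A single file can also be fetched programmatically with the `huggingface_hub` library (a sketch; requires `pip install huggingface_hub` and network access):

```python
from huggingface_hub import hf_hub_download

# Download one language file from the dataset repo into the local HF cache
path = hf_hub_download(
    repo_id="jugalgajjar/Filtered-StarCoder-Dataset-Mini",
    filename="python.parquet",
    repo_type="dataset",
)
print(path)  # local cache path of the downloaded file
```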

## Dataset Creation

This dataset was created through the following process:

1. Original code samples were collected from the [StarCoder dataset](https://huggingface.co/datasets/bigcode/starcoderdata)

2. Statistical analysis was performed to identify quality metrics
3. Outliers were removed using IQR (Interquartile Range) method
4. Samples were filtered to remove excessively long or short code examples
5. Data was normalized and standardized across languages
6. Metadata (average line length and line count) was calculated for each sample
7. Final data was serialized in the efficient Parquet format for optimal storage and access speed

The processing pipeline included steps to:
- Remove code samples with abnormal line lengths (potential formatting issues)
- Filter out extremely long files (exceeding the 90th percentile)
- Ensure consistent formatting and structure
- Generate useful metadata for each example
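The IQR-based outlier removal described above can be sketched in pure Python (the 1.5×IQR fences, field name, and sample values are illustrative assumptions, not the exact pipeline):

```python
def iqr_bounds(values):
    """Return (low, high) fences using the standard 1.5*IQR rule."""
    xs = sorted(values)
    n = len(xs)
    q1 = xs[n // 4]          # approximate first quartile
    q3 = xs[(3 * n) // 4]    # approximate third quartile
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

def filter_outliers(samples, key):
    """Keep only samples whose metric lies within the IQR fences."""
    low, high = iqr_bounds([s[key] for s in samples])
    return [s for s in samples if low <= s[key] <= high]

# Hypothetical samples: line counts with one extreme outlier
samples = [{"line_count": v} for v in [40, 50, 55, 60, 70, 5000]]
kept = filter_outliers(samples, "line_count")
print([s["line_count"] for s in kept])  # -> [40, 50, 55, 60, 70]
```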

## Citation

If you use this dataset in your research or project, please cite it as follows:

```bibtex
@misc{fscdmini2025,
  author = {Jugal Gajjar and Kamalasankari Subramaniakuppusamy and Kaustik Ranaware},
  title = {Filtered StarCoder Dataset Mini},
  year = {2025},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/datasets/jugalgajjar/Filtered-StarCoder-Dataset-Mini}}
}
```

## License

This dataset is released under the MIT License. See the LICENSE file for more details.