size_categories:
- 100K<n<1M
---

# PrimeVul Embeddings for PU Learning

Pre-extracted [CLS] token embeddings from two code models for all functions in the PrimeVul v0.1 vulnerability detection dataset, plus the raw PrimeVul v0.1 JSONL source files.

## CodeBERT Embeddings (root .npz files)

Each .npz file contains frozen CodeBERT embeddings (768-dimensional vectors) for C/C++ functions, along with their labels and CWE type annotations. These were extracted once using a frozen CodeBERT model and are used for downstream PU (positive-unlabeled) learning experiments without requiring GPU access.
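The archives load with plain NumPy. A minimal sketch of the format, built on a tiny synthetic stand-in: the `cwe_types` key and the 768-dimensional vectors are documented here, while the `embeddings` and `labels` key names are assumptions for illustration.

```python
import numpy as np

# Tiny stand-in with the same layout as the real splits (train.npz holds
# 175,797 rows). Key names "embeddings" and "labels" are assumptions;
# "cwe_types" is documented.
np.savez(
    "demo.npz",
    embeddings=np.zeros((4, 768), dtype=np.float32),   # [CLS] vectors
    labels=np.array([0, 1, 0, 1], dtype=np.int32),     # 0 = benign, 1 = vulnerable
    cwe_types=np.array(["", "CWE-787", "", "CWE-416"], dtype="U20"),
)

data = np.load("demo.npz")   # no special flags needed
print(sorted(data.files))    # ['cwe_types', 'embeddings', 'labels']
print(data["embeddings"].shape, data["embeddings"].dtype)  # (4, 768) float32
```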
No special flags needed. All arrays use standard numpy dtypes (float32, int32, U20, int64).

## VulBERTa Embeddings (vulberta/ folder)

Same format as CodeBERT but extracted from claudios/VulBERTa-mlm, a RoBERTa model pretrained on C/C++ vulnerability code. Same functions, same labels, same idxs -- only the embedding vectors differ.

| File | Functions | Shape |
|------|-----------|-------|
| vulberta/train.npz | 175,797 | (175797, 768) |
| vulberta/valid.npz | 23,948 | (23948, 768) |
| vulberta/test.npz | 24,788 | (24788, 768) |
| vulberta/test_paired.npz | 870 | (870, 768) |

VulBERTa embeddings have higher L2 magnitude (~27 vs ~21 for CodeBERT) but the same 768 dimensions. Load the same way: np.load("vulberta/train.npz").
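The magnitude comparison above is just a mean row-wise L2 norm. A minimal sketch of computing it, using synthetic stand-in vectors in place of the real embedding array (whose key name is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.standard_normal((1000, 768)).astype(np.float32)  # stand-in for data["embeddings"]

# Mean L2 norm over rows -- the statistic behind "~27 vs ~21".
mean_l2 = float(np.linalg.norm(emb, axis=1).mean())
print(round(mean_l2, 1))
```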
## Raw PrimeVul v0.1 data (raw/ folder)

The raw/ folder contains the original PrimeVul v0.1 JSONL files from the PrimeVul project. Each line is a JSON object with fields including func (source code), target (0/1 label), cwe (list of CWE strings), cve (CVE identifier), and project metadata.

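Each JSONL line parses with the standard json module. A minimal sketch on two inline dummy records: only the field names (func, target, cwe, cve) come from the description above; the values are fabricated for illustration.

```python
import json

# Two dummy records mirroring the documented fields (func, target, cwe, cve).
lines = [
    '{"func": "int add(int a, int b) { return a + b; }", "target": 0, "cwe": [], "cve": ""}',
    '{"func": "void f(char *s) { char b[8]; strcpy(b, s); }", "target": 1, "cwe": ["CWE-787"], "cve": ""}',
]

records = [json.loads(line) for line in lines]
vulnerable = [r for r in records if r["target"] == 1]
print(len(records), len(vulnerable), vulnerable[0]["cwe"])  # 2 1 ['CWE-787']
```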
## Extraction details

### CodeBERT

- Model: microsoft/codebert-base (RoBERTa architecture, 125M parameters)
- Extraction: frozen model, [CLS] token from final layer
- Tokenization: max_length=512, truncation=True, padding=max_length
- Source data: PrimeVul v0.1 (chronological train/valid/test splits)
- Extracted on: Google Colab, A100 GPU, ~23 minutes for all splits

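The bullets above correspond to a straightforward extraction loop. A minimal sketch using the transformers library (downloads the model on first run); only the model name and tokenizer settings come from this README -- the batching and saving logic of the actual extraction script are assumptions.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base").eval()  # frozen

funcs = ["int add(int a, int b) { return a + b; }"]
batch = tok(funcs, max_length=512, truncation=True,
            padding="max_length", return_tensors="pt")

with torch.no_grad():  # no gradients: the model stays frozen
    out = model(**batch)

cls = out.last_hidden_state[:, 0, :].numpy()  # [CLS] token from the final layer
print(cls.shape)  # (1, 768)
```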
### VulBERTa

- Model: claudios/VulBERTa-mlm (RoBERTa architecture, 125M parameters, pretrained on C/C++ vulnerability code)
- Extraction: frozen model, [CLS] token from final layer
- Tokenization: max_length=512, truncation=True, padding=max_length
- Source data: PrimeVul v0.1 (same functions as CodeBERT)
- Extracted on: Google Colab, A100 GPU, ~23 minutes for all splits

## Citation

If you use this data, please cite the PrimeVul dataset: