## Dataset Card: Terminal Log Boundary Prediction (Streaming)
### 📋 Dataset Purpose & Model 0 Overview

This dataset is designed to train **"Model 0"** for the Winter 2026 iteration of the **AutoDocs** project.
You can access the official repository here:
[AutoDocs (Winter 2026) Repository](https://github.com/CSC392-CSC492-Building-AI-ML-systems/AutoDocs-Winter2026/tree/main)
### 🗂️ Dataset Structure
The dataset is in `JSONL` format; each row contains three primary fields:
* **`instruction`**: The system prompt defining "new" vs. "old" events.
* **`input`**: The sliding-window data, split into:
  * `### CONTEXT`: Up to 14 historical XML chunks (i.e., up to 14 timestamps).
  * `### TARGET LINE`: The 15th chunk to be classified (i.e., the 15th timestamp).
* **`label / output`**: Formatted as `{timestamp}, {class} event`.
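Concretely, one row of the `JSONL` file can be sketched as below. The XML tag name, timestamps, and prompt wording are illustrative assumptions, not values copied from the actual dataset:

```python
import json

# Hypothetical example of a single dataset row (field contents are illustrative).
row = {
    "instruction": "Classify the TARGET LINE as a new event or an old event.",
    "input": (
        "### CONTEXT\n"
        '<log t="12:00:01">Get:1 http://archive.ubuntu.com ...</log>\n'
        '<log t="12:00:02">Get:2 http://archive.ubuntu.com ...</log>\n'
        "### TARGET LINE\n"
        '<log t="12:00:03">user@host:~$ </log>'
    ),
    # Follows the documented `{timestamp}, {class} event` format.
    "output": "12:00:03, new event",
}

# Each line of the JSONL file is one such object serialized as JSON.
line = json.dumps(row)
print(line)
```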
### 🎯 The Model's Goal

#### Objective
The primary objective of the model is the **binary classification of sequential data**. It is engineered to process continuous, timestamped terminal logs formatted in XML and to determine whether a specific line represents a **"Boundary"** between logical events.

#### Methodology: Sliding-Window Approach
Instead of ingesting a massive log file in its entirety, the dataset employs a **sliding-window approach**. The model analyzes a short historical context to evaluate the **Target Line** (the most recent entry):

* **Pattern Recognition**: The model looks at the previous 14 timesteps (e.g., "The terminal has been downloading packages for the last 14 timesteps").
* **Boundary Prediction**: It predicts whether the Target Line breaks that pattern and establishes a new boundary (e.g., "The download finished and the shell prompt returned"), or whether it represents a continuation of the ongoing process.
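The sliding-window construction can be sketched as follows. The window size of 14 comes from the dataset description; the helper name and log format are assumptions, not the project's actual pipeline code:

```python
WINDOW = 14  # historical timesteps shown to the model before the target line

def make_windows(lines):
    """Yield (context, target) pairs: up to 14 history lines plus the line to classify."""
    for i, target in enumerate(lines):
        # Early lines have fewer than 14 predecessors, hence "up to 14".
        context = lines[max(0, i - WINDOW):i]
        yield context, target

# Hypothetical timestamped log lines.
log = [f'<log t="12:00:{s:02d}">...</log>' for s in range(20)]
pairs = list(make_windows(log))
```

Each pair maps directly onto one dataset row: the context lines fill `### CONTEXT`, and the target fills `### TARGET LINE`.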
### ✂️ Rules of Truncation
Raw terminal logs (like `apt-get` installations) can easily overflow an LLM's
context window. To prevent this, the data engineering pipeline applies a
…

…is compressed:
* This preserves the chronological timeline and sequence of events without bloating the token count.
### ⚖️ Data Sampling & Balancing
In a typical terminal log, over 95% of the lines are "Old Events," which
would lead the model to simply guess the majority class. To force actual
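A common way to counter this imbalance is to downsample the majority class before training. The sketch below assumes a 1:1 target ratio; the project's actual sampling ratio and pipeline are not specified here:

```python
import random

random.seed(0)

# Hypothetical labelled windows: ~95% "old" events, ~5% "new" (boundary) events.
rows = [{"label": "old"}] * 95 + [{"label": "new"}] * 5

new_rows = [r for r in rows if r["label"] == "new"]
old_rows = [r for r in rows if r["label"] == "old"]

# Downsample "old" events to match the "new" count, then shuffle so the
# model cannot exploit ordering.
balanced = new_rows + random.sample(old_rows, k=len(new_rows))
random.shuffle(balanced)
```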