---
license: mit
task_categories:
  - text-classification
tags:
  - code
pretty_name: model 0 training dataset
size_categories:
  - 1K<n<10K
---

## Dataset Card: Terminal Log Boundary Prediction (Streaming)

### 📋 Dataset Purpose & Model 0 Overview

This dataset is designed to train "Model 0" for the Winter 2026 iteration of the AutoDocs project. You can access the official repository here:
AutoDocs (Winter 2026) Repository

#### Objective

The model's primary objective is binary classification of sequential data. It is engineered to process continuous, timestamped terminal logs formatted as XML and determine whether a specific line represents a "Boundary" between logical events.

#### Methodology: Sliding-Window Approach

Instead of ingesting a massive log file in its entirety, the pipeline employs a sliding-window approach. The model analyzes a short historical context to evaluate the Target Line (the most recent entry):

- Pattern Recognition: The model looks at the previous 14 timesteps (e.g., "The terminal has been downloading packages").
- Boundary Prediction: It predicts whether the Target Line breaks that pattern (e.g., "The download finished and the shell prompt returned") or represents the continuation of the ongoing process.
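The windowing described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the project's actual code; `WINDOW_SIZE` and `make_windows` are hypothetical names:

```python
# Hypothetical sketch of the sliding-window construction described above.
# Assumes `log_lines` is a chronologically ordered list of XML chunks.
WINDOW_SIZE = 14  # historical timesteps shown to the model

def make_windows(log_lines):
    """Yield (context, target) pairs: up to 14 history chunks plus 1 target line."""
    for i in range(len(log_lines)):
        context = log_lines[max(0, i - WINDOW_SIZE):i]  # up to 14 previous chunks
        target = log_lines[i]                           # the line to classify
        yield context, target

windows = list(make_windows([f"line{n}" for n in range(20)]))
```

Early lines naturally get shorter contexts; only from the 15th line onward does the full 14-chunk history exist.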

๐Ÿ—‚๏ธ Dataset Structure

The dataset is in JSONL format; each row contains three primary fields:

- `instruction`: The system prompt defining "new" vs. "old" events.
- `input`: The sliding-window data, split into:
  - `### CONTEXT`: Up to 14 historical XML chunks (one per timestep).
  - `### TARGET LINE`: The 15th chunk (the 15th timestep), which is to be classified.
- `label` / `output`: Formatted as `{timestamp}, {class} event`.
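A row can be loaded and split back into its parts with standard JSON tooling. The snippet below is a minimal sketch (not the project's actual loader), using the field names and label format from this card:

```python
import json

# Parse one dataset row (content abridged; field names follow the card).
row = json.loads("""
{"instruction": "Your task is to analyze terminal XML logs...",
 "input": "### CONTEXT (Previous Events):\\n<system_output timestamp=\\"10.01\\">demo@server:~$ apt update</system_output>\\n\\n### TARGET LINE:\\n<user_input timestamp=\\"12.40\\">s</user_input>",
 "output": "12.40, old event"}
""")

# Split the input back into its two sections and parse the label.
context, target = row["input"].split("### TARGET LINE:\n")
timestamp, label = row["output"].split(", ", 1)
```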

โœ‚๏ธ Rules of Truncation

Raw terminal logs (like apt-get installations) can easily overflow an LLM's context window. To prevent this, the data engineering pipeline applies a strict Two-Phase Truncation rule:

#### Phase 1: Intra-Chunk Truncation (Line Limit)

If a single `<system_output>` block contains more than 15 lines of text, it is sliced: the first 5 lines and the last 5 lines are preserved, and the middle is replaced with the marker `... [TRUNCATED X LINES] ...`. Note that `<user_input>` tags are never truncated, to preserve human-interaction signals.
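The Phase 1 rule can be sketched as a small helper. This is an illustrative implementation under the thresholds stated above; the real pipeline's function names and signatures may differ:

```python
def truncate_chunk(text, max_lines=15, keep=5):
    """First-5/last-5 truncation for an oversized <system_output> body (sketch)."""
    lines = text.splitlines()
    if len(lines) <= max_lines:
        return text  # short chunks, and all <user_input> chunks, pass through untouched
    hidden = len(lines) - 2 * keep
    return "\n".join(lines[:keep] + [f"... [TRUNCATED {hidden} LINES] ..."] + lines[-keep:])
```

A 20-line block thus collapses to 11 lines: 5 head lines, the marker, and 5 tail lines.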

#### Phase 2: Window-Level Compression (Context Limit)

If the entire 14-chunk context window exceeds 25 total lines, the window is compressed:

- The 5 oldest chunks and the 5 most recent chunks are kept fully intact.
- For the chunks in the middle, the text is completely stripped out, leaving only the XML tags (e.g., `<system_output timestamp="X">... [TRUNCATED TO SAVE SPACE] ...</system_output>`).
- This preserves the chronological timeline and sequence of events without bloating the token count.
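The Phase 2 rule above can be sketched as follows. This is a hypothetical reconstruction assuming each chunk is a single self-contained XML element string; `MAX_WINDOW_LINES` and `KEEP_EDGES` mirror the thresholds stated in this card:

```python
import re

MAX_WINDOW_LINES = 25  # threshold from the card
KEEP_EDGES = 5         # chunks kept intact at each end

def compress_window(chunks):
    """Strip body text from middle chunks, keeping only their XML tags (sketch)."""
    total_lines = sum(c.count("\n") + 1 for c in chunks)
    if total_lines <= MAX_WINDOW_LINES or len(chunks) <= 2 * KEEP_EDGES:
        return chunks

    def strip_body(chunk):
        # Replace everything between the opening and closing tag with a marker,
        # preserving the tags (and their timestamps) for the timeline.
        return re.sub(r">.*<", ">... [TRUNCATED TO SAVE SPACE] ...<", chunk, flags=re.S)

    middle = [strip_body(c) for c in chunks[KEEP_EDGES:-KEEP_EDGES]]
    return chunks[:KEEP_EDGES] + middle + chunks[-KEEP_EDGES:]
```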

โš–๏ธ Data Sampling & Balancing

In a typical terminal log, over 95% of the lines are "Old Events," which would lead the model to simply guess the majority class. To force actual learning, this dataset uses Negative Downsampling:

- New Events (Positives): 100% of detected boundaries are kept.
- Old Events (Negatives): Downsampled to exactly a 2:1 ratio (two old events for every one new event).

#### Hard Negative Mining

When selecting which "Old Events" to keep for the 2:1 ratio, the algorithm prioritizes Hard Negatives. Specifically, it targets `<user_input>` tags that contain a newline character (`\n`). This teaches the model the difficult lesson that a user pressing "Enter" is often just a completion of an input phase, not necessarily a new logical event.
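The combined downsampling and hard-negative strategy might look like the sketch below. The `label` and `target` field names and the helper name are illustrative assumptions, not the pipeline's actual schema:

```python
import random

def downsample_negatives(rows, ratio=2, seed=0):
    """Keep all positives; keep `ratio` negatives per positive, preferring
    hard negatives (sketch; field names are hypothetical)."""
    positives = [r for r in rows if r["label"] == "new event"]
    negatives = [r for r in rows if r["label"] == "old event"]
    # Hard negatives: <user_input> targets containing a newline (user pressed Enter).
    hard = [r for r in negatives if r["target"].startswith("<user_input") and "\n" in r["target"]]
    easy = [r for r in negatives if r not in hard]
    budget = ratio * len(positives)
    random.Random(seed).shuffle(easy)  # fill the remainder with random easy negatives
    kept = hard[:budget] + easy[:max(0, budget - len(hard))]
    return positives + kept
```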

#### Example Data Row

```json
{
  "instruction": "Your task is to analyze terminal XML logs...",
  "input": "### CONTEXT (Previous Events):\n<system_output timestamp=\"10.01\">demo@server:~$ apt update</system_output>\n<system_output timestamp=\"10.05\">Reading lists...</system_output>\n\n### TARGET LINE:\n<user_input timestamp=\"12.40\">s</user_input>",
  "output": "12.40, old event"
}
```