---
license: mit
task_categories:
- text-classification
tags:
- code
pretty_name: model 0 training dataset
size_categories:
- 1K<n<10K
---

# Model 0 Training Dataset

This dataset trains a text-classification model to read terminal XML logs and decide, for each target line, whether it begins a **new event** or continues the previous one (an **old event**).

### ✂️ Context Truncation & Compression

#### Phase 1: Chunk-Level Truncation

If the body of a single XML chunk contains more than 15 lines of text, it is sliced: the first 5 lines and the last 5 lines are preserved, and the middle is replaced with a marker: `... [TRUNCATED X LINES] ...`. Chunks that record user input are **never** truncated, so that human-interaction signals are preserved.

#### Phase 2: Window-Level Compression (Context Limit)

If the entire 14-chunk context window exceeds 25 total lines, the window is compressed:

* The **5 oldest chunks** and the **5 most recent chunks** are kept fully intact.
* For the chunks in the **middle**, the text is completely stripped out, leaving only the XML tags around a placeholder (e.g., `... [TRUNCATED TO SAVE SPACE] ...`).
* This preserves the chronological timeline and sequence of events without bloating the token count.

### ⚖️ Data Sampling & Balancing

In a typical terminal log, over 95% of the lines are "Old Events," which would lead the model to simply guess the majority class. To force actual learning, this dataset uses **Negative Downsampling**:

* **New Events (Positives):** 100% of detected boundaries are kept.
* **Old Events (Negatives):** Downsampled to exactly a **2:1 ratio** (two old events for every one new event).

#### Hard Negative Mining

When selecting which "Old Events" to keep for the 2:1 ratio, the algorithm prioritizes **Hard Negatives**. Specifically, it targets user-input chunks that contain a newline character (`\n`). This teaches the model the difficult lesson that a user pressing "Enter" is often just the completion of an input phase, not necessarily a new logical event.

#### Example Data Row

```json
{
  "instruction": "Your task is to analyze terminal XML logs...",
  "input": "### CONTEXT (Previous Events):\ndemo@server:~$ apt update\nReading lists...\n\n### TARGET LINE:\ns",
  "output": "12.40, old event"
}
```
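
The chunk-level truncation described above (keep the first and last 5 lines, replace the middle with a marker, never touch user-input chunks) can be sketched as follows. The function and constant names (`truncate_chunk`, `MAX_LINES`, `KEEP`, `is_user_input`) are illustrative, not taken from the dataset's actual preprocessing code:

```python
MAX_LINES = 15  # chunk bodies longer than this get sliced
KEEP = 5        # lines preserved at each end


def truncate_chunk(text: str, is_user_input: bool = False) -> str:
    """Slice an over-long chunk body, keeping KEEP lines at each end."""
    lines = text.splitlines()
    # User-input chunks are never truncated (human-interaction signal).
    if is_user_input or len(lines) <= MAX_LINES:
        return text
    hidden = len(lines) - 2 * KEEP
    return "\n".join(
        lines[:KEEP]
        + [f"... [TRUNCATED {hidden} LINES] ..."]
        + lines[-KEEP:]
    )
```

A 20-line body thus collapses to 11 lines: the first 5, a `... [TRUNCATED 10 LINES] ...` marker, and the last 5.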