CICFlow-Meter ModernBERT LoRA
Collection
A collection of ModernBERT‑based binary classifiers fine‑tuned with LoRA adapters at ranks 4, 8, and 16 for efficient network flow analysis.
This model adapts ModernBERT‑base with LoRA (Low‑Rank Adaptation), a parameter‑efficient fine‑tuning method.
It is designed for binary classification tasks where high recall and controlled false positive rates are important.
LoRA adapters allow efficient fine‑tuning by updating only small low‑rank matrices, reducing memory and compute requirements.
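As a minimal sketch of the idea: LoRA freezes the pretrained weight W and learns a low-rank update B·A in its place. The dimensions below are illustrative (768 mirrors a ModernBERT-base hidden size, rank 8 is the middle adapter rank in this collection); they are assumptions, not the exact shapes used in training.

```python
import numpy as np

# Illustrative dimensions: a d x k weight matrix and a rank-r adapter.
d, k, r = 768, 768, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))         # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01  # trainable, small random init
B = np.zeros((d, r))                    # trainable, zero init: update starts at 0

delta_W = B @ A                         # low-rank update, same shape as W

full_params = d * k                     # parameters in a full update
lora_params = d * r + r * k             # parameters actually trained

print(delta_W.shape)                    # (768, 768)
print(lora_params / full_params)        # ~0.02, i.e. ~2% of a full update
```

Because B starts at zero, the adapted model is exactly the base model at step 0, and only the small A and B matrices receive gradients.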
Training uses Asymmetric Focal Loss, which emphasizes hard negatives while keeping positive weighting mild. This helps balance recall and false positive rate.
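A per-example sketch of an asymmetric focal loss is below. The exact formulation and gamma values used in training are not published here; `gamma_pos=0.0` and `gamma_neg=2.0` are illustrative choices that match the description (mild positive weighting, strong down-weighting of easy negatives).

```python
import math

def asymmetric_focal_loss(p, y, gamma_pos=0.0, gamma_neg=2.0):
    """Illustrative asymmetric focal loss for a single example.

    p: predicted probability of the positive class, y: true label (0 or 1).
    gamma_pos=0 leaves positives as plain cross-entropy (mild weighting);
    gamma_neg>0 multiplies negative loss by p**gamma_neg, so confident
    (easy) negatives are down-weighted and hard negatives dominate.
    """
    eps = 1e-12
    if y == 1:
        return -((1.0 - p) ** gamma_pos) * math.log(p + eps)
    return -(p ** gamma_neg) * math.log(1.0 - p + eps)

# An easy negative (p=0.1) contributes far less than a hard negative (p=0.6).
easy = asymmetric_focal_loss(0.1, 0)
hard = asymmetric_focal_loss(0.6, 0)
```

With `gamma_neg=0` the negative branch reduces to ordinary cross-entropy, which makes the effect of the modulating factor easy to check in isolation.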
Validation is performed every 5000 steps, with early stopping to prevent overfitting.
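The evaluation cadence described above could be configured as follows with the `transformers` Trainer. This is a hedged sketch: the output directory, patience, and monitored metric are assumptions, only the 5000-step evaluation interval comes from the text.

```python
from transformers import TrainingArguments, EarlyStoppingCallback

# Hypothetical configuration: evaluate every 5000 steps and stop early
# once the monitored metric stops improving.
args = TrainingArguments(
    output_dir="modernbert-lora-cicflow",  # hypothetical path
    eval_strategy="steps",
    eval_steps=5000,
    save_steps=5000,
    load_best_model_at_end=True,           # required for early stopping
    metric_for_best_model="eval_loss",     # assumed metric
)

# Stop after 3 evaluations without improvement (assumed patience).
early_stopping = EarlyStoppingCallback(early_stopping_patience=3)
# Pass args and callbacks=[early_stopping] when constructing the Trainer.
```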
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline
from peft import PeftModel

# Base ModernBERT model
base_model_name = "answerdotai/ModernBERT-base"

# LoRA adapter checkpoint
adapter_model_name = "AINovice2005/ModernBERT-base-lora-cicflow-1m-r4"

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(base_model_name)

# Load base masked language model
base_model = AutoModelForMaskedLM.from_pretrained(base_model_name)

# Attach LoRA adapter
model = PeftModel.from_pretrained(base_model, adapter_model_name)

# Move to device
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

# Build fill-mask pipeline
fill_mask = pipeline(
    "fill-mask",
    model=model,
    tokenizer=tokenizer,
    device=0 if device == "cuda" else -1,
)

# Example usage
text = "The network traffic shows a [MASK] pattern."
outputs = fill_mask(text)

for o in outputs:
    print(f"Token: {o['token_str']}, Score: {o['score']:.4f}")
```
LoRA adapter (PEFT 0.18.1)
Base model: answerdotai/ModernBERT-base
Training configuration and evaluation logs