BERT-as-a-Judge: A Robust Alternative to Lexical Methods for Efficient Reference-Based LLM Evaluation Paper • 2604.09497 • Published 6 days ago • 21
Learned Hallucination Detection in Black-Box LLMs using Token-level Entropy Production Rate Paper • 2509.04492 • Published Sep 1, 2025 • 10
MLM vs CLM Collection Research material on pre-training encoders, with an extensive comparison of the masked language modeling paradigm versus causal language modeling. • 5 items • Updated Dec 1, 2025
MLM versus CLM for NLP tasks Collection Related paper: "Should We Still Pretrain Encoders with Masked Language Modeling?" • 51 items • Updated 3 days ago
EuroBERT Encoding model Collection Suite of models for improved integration into RAG (for information retrieval), designed for ease of use and practicality in industrial contexts • 5 items • Updated Sep 11, 2025 • 1
artefactory/Argimi-Legal-French-Jurisprudence Viewer • Updated Jul 22, 2025 • 768k • 197 • 10
Should We Still Pretrain Encoders with Masked Language Modeling? Paper • 2507.00994 • Published Jul 1, 2025 • 81