
Daily Papers

by AK and the research community

May 7

DurableUn: Quantization-Induced Recovery Attacks in Machine Unlearning

Machine unlearning aims to remove specified training data to satisfy privacy regulations such as the GDPR. However, existing evaluations assume identical precision at unlearning and deployment, overlooking that production LLMs are deployed at low-bit precision. We show that INT4 quantization systematically restores forgotten content even when models pass compliance audits at bfloat16 (BF16); we term this the quantization recovery attack (QRA). We conduct the first systematic study of unlearning robustness under adapter-space INT4 quantization in the NF4+LoRA regime, evaluating seven methods on LLaMA-3-8B-Instruct across TOFU, MUSE-News, and WikiBio-WPU. INT8 is benign; INT4 induces recovery of up to 22x, worsening with dataset difficulty. We identify the FA-RA-Q-INT4 trilemma: no method simultaneously achieves strong forgetting, high utility, and quantization robustness. A dense Pareto sweep reveals a sharp phase transition: once robustness is achieved, retain accuracy collapses regardless of further tuning. To address this, we propose DURABLEUN-SAF (Sharpness-Aware Forgetting), a quantization-aware objective that uses Straight-Through Estimator gradients through INT4 rounding. DURABLEUN-SAF is the only method to achieve a stable empirical (0.047, {BF16, INT8, INT4})-durability certificate: Q-INT4 = 0.043 ± 0.002, certification rate 3/3, versus SalUn's 1/3 at its own published hyperparameters. We call for Q-INT4 to be adopted as a standard evaluation metric alongside FA and RA.
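The recovery mechanism can be illustrated with a minimal sketch (an assumed mechanism for intuition, not the paper's code): under symmetric INT4 quantization a weight is snapped to one of 16 levels, so an unlearning update smaller than half a quantization step rounds back to the original value. All weight values below are hypothetical.

```python
def quantize_int4(w: float, scale: float) -> float:
    """Symmetric INT4 quantize-dequantize: snap w to one of the
    16 integer levels in [-8, 7], then rescale."""
    q = max(-8, min(7, round(w / scale)))  # round to nearest INT4 level
    return q * scale

# Hypothetical weight before and after unlearning (illustrative values only).
w_original = 0.500
w_unlearned = 0.470   # unlearning nudged the weight by 0.03
scale = 0.1           # quantization step; the update is < scale / 2

# At BF16-like full precision the weights differ...
assert w_original != w_unlearned
# ...but both snap to the same INT4 level, undoing the unlearning update.
assert quantize_int4(w_original, scale) == quantize_int4(w_unlearned, scale)
```

This is why a model can pass a BF16 compliance audit yet regurgitate forgotten content once deployed at INT4: the quantizer erases small unlearning deltas.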

  • 2 authors · May 3

Probabilistic Assessment of Engineered Timber Reusability after Moisture Exposure

Engineered timber is pivotal to low-carbon construction, but moisture uptake during its service life can compromise structural reliability and impede reuse within a circular economy model. Despite growing interest, quantitative standards for classifying the reusability of moisture-exposed timber are still lacking. This study develops a probabilistic framework to determine the post-exposure reusability of engineered timber. Laminated specimens were soaked to full saturation, dried to 25% moisture content, and subjected to destructive three-point flexural testing. Structural integrity was quantified by a residual-performance metric that assigns 80% weight to the retained flexural modulus and 20% to the retained maximum load, benchmarked against unexposed controls. A hierarchical Bayesian multinomial logistic model with horseshoe priors, calibrated through Markov chain Monte Carlo sampling, jointly infers the decision threshold separating three Modern Methods of Construction (MMC) reuse levels and predicts those levels from five field-measurable features: density, moisture content, specimen size, grain orientation, and surface hardness. Results indicate that a single wet-dry cycle preserves 70% of specimens above the 0.90 residual-performance threshold (Level 1), whereas repeated cycling lowers the mean residual to 0.78 and reallocates many specimens to Levels 2-3. The proposed framework yields quantified decision boundaries and a streamlined on-site testing protocol, providing a foundation for robust quality assurance standards.
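The residual-performance metric and the Level 1 threshold stated in the abstract can be written down directly; note that the Level 2/3 boundary below is an illustrative placeholder, not a value from the study.

```python
def residual_performance(modulus_ratio: float, load_ratio: float) -> float:
    """Weighted retention metric from the abstract: 80% retained flexural
    modulus, 20% retained maximum load, each a ratio to unexposed controls."""
    return 0.8 * modulus_ratio + 0.2 * load_ratio

def reuse_level(r: float, level2_cut: float = 0.60) -> int:
    """Level 1 uses the 0.90 threshold given in the abstract; the Level 2/3
    boundary (level2_cut) is an assumed placeholder for illustration."""
    if r >= 0.90:
        return 1
    return 2 if r >= level2_cut else 3

# A specimen retaining 95% of its modulus and 90% of its load capacity:
r = residual_performance(modulus_ratio=0.95, load_ratio=0.90)
assert abs(r - 0.94) < 1e-9 and reuse_level(r) == 1
```

In the paper the boundaries themselves are inferred by the hierarchical Bayesian model rather than fixed, so this sketch only shows how the metric feeds the classification.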

  • 5 authors · May 29, 2025

Grading Handwritten Engineering Exams with Multimodal Large Language Models

Handwritten STEM exams capture open-ended reasoning and diagrams, but manual grading is slow and difficult to scale. We present an end-to-end workflow for grading scanned handwritten engineering quizzes with multimodal large language models (LLMs) that preserves the standard exam process (A4 paper, unconstrained student handwriting). The lecturer provides only a handwritten reference solution (100%) and a short set of grading rules; the reference is converted into a text-only summary that conditions grading without exposing the reference scan. Reliability is achieved through a multi-stage design with a format/presence check to prevent grading blank answers, an ensemble of independent graders, supervisor aggregation, and rigid templates with deterministic validation to produce auditable, machine-parseable reports. We evaluate the frozen pipeline in a clean-room protocol on a held-out real course quiz in Slovenian, including hand-drawn circuit schematics. With state-of-the-art backends (GPT-5.2 and Gemini-3 Pro), the full pipeline achieves ≈8-point mean absolute difference to lecturer grades with low bias and an estimated manual-review trigger rate of ≈17% at D_max = 40. Ablations show that trivial prompting and removing the reference solution substantially degrade accuracy and introduce systematic over-grading, confirming that structured prompting and reference grounding are essential.
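The ensemble-plus-supervisor stage can be sketched as follows; the median rule and the interpretation of D_max as a disagreement bound that triggers manual review are assumptions for illustration, not the paper's exact aggregation.

```python
from statistics import median

def aggregate(scores: list[int], d_max: int = 40) -> tuple[float, bool]:
    """Supervisor aggregation over an ensemble of independent graders
    (illustrative rule, not the paper's exact one): take the median score
    and flag the answer for manual review when the grader spread exceeds
    d_max, an assumed disagreement bound."""
    spread = max(scores) - min(scores)
    return median(scores), spread > d_max

# Graders broadly agree: accept the median automatically.
final, needs_review = aggregate([72, 78, 75])
assert final == 75 and needs_review is False

# Graders disagree sharply (spread 55 > 40): route to a human.
final, needs_review = aggregate([30, 85, 60])
assert needs_review is True
```

A deterministic rule like this is what makes the reported manual-review trigger rate measurable: it is just the fraction of answers whose ensemble spread exceeds the bound.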

  • 4 authors · Jan 2