EnyiJiang's Collections

AI Safety and Alignment

updated Feb 2

  • Latent Adversarial Regularization for Offline Preference Optimization

    Paper • 2601.22083 • Published Jan 29 • 14

  • RAMP: Boosting Adversarial Robustness Against Multiple l_p Perturbations for Universal Robustness

    Paper • 2402.06827 • Published Feb 9, 2024

  • Misaligning Reasoning with Answers -- A Framework for Assessing LLM CoT Robustness

    Paper • 2505.17406 • Published May 23, 2025

  • Towards Universal Certified Robustness with Multi-Norm Training

    Paper • 2410.03000 • Published Oct 3, 2024

  • EnyiJiang/Gemma-2-2B-it-GANPO

    0.7B • Updated Feb 1 • 5

  • EnyiJiang/Llama-3-8B-Instruct-GANPO

    2B • Updated Feb 1 • 8