Hugging Face
ttttonyhe's Collections
Prompt Injection Defense (updated 12 days ago)
enkryptai/SecAlign-8B-DPO • Text Generation • 8B parameters • Updated Jul 12, 2025 • 2 likes
facebook/Meta-SecAlign-8B • Updated Nov 11, 2025 • 1.31k downloads • 11 likes