- **Granite Guardian Models** (Collection, 14 items): safety models for detecting risks, toxicity, and hallucinations in LLM workflows.
- **Larimar: Large Language Models with Episodic Memory Control** (Paper 2403.11901, published Mar 18, 2024)
- **Alignment Studio: Aligning Large Language Models to Particular Contextual Regulations** (Paper 2403.09704, published Mar 8, 2024)