---
license: mit
base_model: google/gemma-3-270m-it
tags:
  - causal-reasoning
  - semantic-loss
  - fine-tuned
  - d-separation
language:
  - en
---

# Gemma D-Separation Semantic V2

A fine-tune of Gemma 3 270M-IT for d-separation causal reasoning, trained with a semantic loss under dynamic lambda scheduling.
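The training objective is not reproduced in this card. As a rough sketch of what "semantic loss with dynamic lambda scheduling" can look like (the linear warm-up ramp and the `combined_loss` helper below are illustrative assumptions, not the paper's exact formulation):

```python
def combined_loss(ce_loss, sem_loss, step, total_steps, lambda_max=1.0):
    # Total objective = cross-entropy + lambda(step) * semantic loss.
    # Assumed schedule: lambda ramps linearly from 0 to lambda_max over
    # training, so the semantic term phases in gradually.
    lam = lambda_max * min(1.0, step / max(1, total_steps))
    return ce_loss + lam * sem_loss
```

See the cited paper for the actual loss definition and schedule.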

## Performance
- Standard accuracy: 68.6%
- Adversarial accuracy: 67.8%
- F1 score: 25.0% (vs. 7.6% for the collapsed baseline)

## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

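# Load the fine-tuned model and tokenizer from the Hub.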
model = AutoModelForCausalLM.from_pretrained("ludwigw/gemma-dseparation-semantic-v2")
tokenizer = AutoTokenizer.from_pretrained("ludwigw/gemma-dseparation-semantic-v2")
```
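
A minimal generation sketch; the prompt wording below is an illustrative assumption, since the exact query format the model was fine-tuned on is not documented here:

```python
import torch

# Hypothetical d-separation query (adapt to your own graph/query format).
messages = [
    {"role": "user",
     "content": "In the DAG A -> B -> C, are A and C d-separated given B? Answer Yes or No."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=16, do_sample=False)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```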

## Citation
```bibtex
@article{deshmukh2026semantic,
  title={On Semantic Loss Fine-Tuning Approach for Preventing Model Collapse in Causal Reasoning},
  author={Deshmukh, Pratik and Gupta, Atirek},
  journal={arXiv preprint arXiv:2605.05438},
  year={2026}
}
```