# EF-LLM: Executive Function Behavior Prediction (7B LoRA-64)
Fine-tuned Qwen2.5-7B-Instruct with LoRA on human executive function (EF) behavioral data.
## Model Description
Trained on behavioral data from 190 subjects completing 7 executive function tasks:
| Task | EF Component | Model Accuracy |
|---|---|---|
| Stroop | Inhibitory Control | 96.8% |
| Go/No-Go | Inhibitory Control | 93.4% |
| Task Switching | Cognitive Flexibility | 90.9% |
| Digit Span Forward | Updating/WM | 82.5% |
| Digit Span Backward | Updating/WM | 79.6% |
| Working Memory 1750ms | Updating/WM | 89.1% |
| Working Memory 750ms | Updating/WM | 85.1% |
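The per-task accuracies above can be read as trial-level agreement: the fraction of trials on which the model's predicted keypress matches the human subject's actual response. A minimal sketch of that metric, using hypothetical trial data (the function name and example keypresses are illustrative, not from the training pipeline):

```python
def task_accuracy(model_keys, human_keys):
    """Fraction of trials where the model's predicted keypress matches the human response."""
    assert len(model_keys) == len(human_keys)
    return sum(m == h for m, h in zip(model_keys, human_keys)) / len(model_keys)

# Hypothetical keypresses for one subject's Stroop block
model_keys = ["F", "J", "F", "J", "F"]
human_keys = ["F", "J", "F", "J", "J"]
print(task_accuracy(model_keys, human_keys))  # → 0.8
```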
## Performance
- Behavioral alignment: r = 0.866 (subject-level mean accuracy)
- EF factor structure: PCA variance explained, PC1 = 33.5%, PC2 = 32.3%
- Training: 999 steps, BF16, LoRA r=64, α=128
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

tokenizer = AutoTokenizer.from_pretrained("Zhang9804/ef-llm-7b-lora", trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    torch_dtype=torch.float16,  # adapter was trained in BF16; torch.bfloat16 also works on supported GPUs
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base_model, "Zhang9804/ef-llm-7b-lora")
model.eval()

# Prompts are in Chinese, matching the training data. English translation:
# "You are taking part in a Stroop color-word experiment. If the font color is
#  red, press F; if the font color is green, press J.
#  You see a red ####. You press <<F>>.
#  You see the word '红' (red) in green. You press <<J>>.
#  You see the word '绿' (green) in red. You press <<"
context = (
    "你正在参加一个Stroop色词实验。如果文字颜色是红色,按F键;如果文字颜色是绿色,按J键。\n\n"
    "你看到红色的####。你按<<F>>。\n"
    "你看到绿色的\u201c红\u201d字。你按<<J>>。\n"
    "你看到红色的\u201c绿\u201d字。你按<<"
)
messages = [
    # System prompt: "You are a participant in a cognitive psychology experiment;
    # please respond according to the experimental rules."
    {"role": "system", "content": "你是一名参与认知心理学实验的被试,请根据实验规则做出反应。"},
    {"role": "user", "content": context},
]

input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=10, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```
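Because the prompt ends with "你按<<" ("You press <<"), the model's continuation should begin with the predicted key followed by ">>" (e.g. "J>>。"). A small helper for extracting that keypress from the decoded output; the function name is illustrative, not part of this repository:

```python
import re

def parse_keypress(generated_text):
    """Extract the predicted key (F or J) from the model's <<KEY>> continuation."""
    match = re.search(r"([FJ])>>", generated_text)
    return match.group(1) if match else None

print(parse_keypress("J>>。"))  # → J
```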
## Citation

```bibtex
@misc{ef-llm-2025,
  title={Fine-tuning Large Language Models on Human Executive Function Behavioral Data},
  author={Zhang, Tongyi},
  year={2025},
}
```