
PD Cognition Llama 3 Fine Tuned Adapters

This repository contains QLoRA fine-tuned adapters for Meta Llama 3 8B Instruct, trained to extract cognitive-process narrative categories from first-person reports of individuals with Parkinson's disease.

Model Details

Base model: Meta Llama 3 8B Instruct
Adapter type: QLoRA
Task: cognitive process category extraction
Language: English

Categories

Location
Time
Sensory
Action
Thought
Emotion
Social Interaction

Usage

These adapters must be loaded on top of the gated base model, so you need access to Meta Llama 3 8B Instruct on Hugging Face.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "nilaybhatt/PD_cognition_Llama_finetuned"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)

# Load the gated base model, then attach the fine-tuned QLoRA adapters.
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto"
)

model = PeftModel.from_pretrained(base_model, adapter_id)
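Because the base model is instruction-tuned, inputs should follow the Llama 3 Instruct chat format, which `tokenizer.apply_chat_template` normally produces for you. The sketch below builds such a prompt by hand; the instruction wording is an assumption for illustration, not the prompt used during fine-tuning.

```python
# Hypothetical prompt builder following the Llama 3 Instruct chat format.
# In practice, prefer tokenizer.apply_chat_template; the instruction text
# below is an assumption, not the one used to train these adapters.
def build_prompt(report: str) -> str:
    instruction = (
        "Identify text spans in the following first-person report that "
        "correspond to these cognitive process categories: Location, Time, "
        "Sensory, Action, Thought, Emotion, Social Interaction."
    )
    return (
        "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{instruction}\n\n{report}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```

The resulting string can be tokenized and passed to `model.generate` as usual.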

Output

The model identifies spans or text segments in narrative reports that correspond to the cognitive process categories listed above.
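The exact output format of the fine-tuned model is not documented here. As a hedged sketch, assuming the model emits one "Category: text span" pair per line, post-processing could look like this:

```python
# Hypothetical post-processing sketch. Assumes the generated response
# contains one "Category: text span" pair per line; adapt to the actual
# output format of the fine-tuned model.
CATEGORIES = {
    "Location", "Time", "Sensory", "Action",
    "Thought", "Emotion", "Social Interaction",
}

def parse_spans(generated_text: str) -> list[tuple[str, str]]:
    """Collect (category, span) pairs from a line-based model response."""
    spans = []
    for line in generated_text.splitlines():
        category, _, span = line.partition(":")
        category, span = category.strip(), span.strip()
        if category in CATEGORIES and span:
            spans.append((category, span))
    return spans
```

Lines whose label is not one of the seven categories are simply skipped, which makes the parser tolerant of extra commentary in the generation.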

License

These adapters are provided for research use. Use of the base model is governed by the Meta Llama 3 Community License.

Citation

@article{khanna2025cognitive,
  title={Toward Automated Cognitive Assessment in Parkinson’s Disease Using Pretrained Language Models},
  author={Khanna, Varada and Bhatt, Nilay and Shin, Ikgyu and Rosso, Mattia and Tinaz, Sule and Ren, Yang and Xu, Hua and Keloth, Vipina K},
  journal={arXiv preprint arXiv:2511.08806},
  year={2025}
}