---
license: mit
datasets:
  - sxiong/SWAP
language:
  - en
base_model:
  - meta-llama/Meta-Llama-3-8B-Instruct
---

# Model Card for SWAP_LLM

SWAP_LLM is a suite of fine-tuned models for multi-step reasoning with large language models (LLMs). The SWAP framework comprises two primary components: a generator and a discriminator.

## Model Details

### Generator

- **Base Model:** meta-llama/Meta-Llama-3-8B-Instruct
- **LoRA Configuration:**
  - `lora_alpha`: 32
  - `r`: 16
  - `target_modules`: `["q_proj", "k_proj", "v_proj", "o_proj"]`
  - `bias`: `"none"`

### Discriminator

- **Base Model:** meta-llama/Meta-Llama-3-8B-Instruct
- **LoRA Configuration:**
  - `lora_alpha`: 32
  - `r`: 16
  - `target_modules`: `["q_proj", "k_proj", "v_proj", "o_proj"]`
  - `bias`: `"none"`
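
To give a sense of the adapter's scale, the hyperparameters above can be expressed as a plain dictionary together with a rough trainable-parameter estimate. This is a sketch, not part of the release: the per-module dimensions below come from the published Meta-Llama-3-8B architecture (hidden size 4096, 32 decoder layers, grouped-query attention with 1024-dim k/v projections) and should be treated as assumptions here.

```python
# LoRA setup from the card above, as a plain dict.
lora_config = {
    "lora_alpha": 32,
    "r": 16,
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],
    "bias": "none",
}

# (in_features, out_features) of each targeted projection in one decoder
# layer, assuming the Meta-Llama-3-8B architecture.
module_shapes = {
    "q_proj": (4096, 4096),
    "k_proj": (4096, 1024),  # GQA: 8 kv heads x 128 head_dim
    "v_proj": (4096, 1024),
    "o_proj": (4096, 4096),
}
num_layers = 32

def lora_params(r: int) -> int:
    """Each adapted weight W (in x out) gains A (in x r) and B (r x out)."""
    per_layer = sum(
        r * (d_in + d_out)
        for name, (d_in, d_out) in module_shapes.items()
        if name in lora_config["target_modules"]
    )
    return per_layer * num_layers

print(f"trainable LoRA parameters: {lora_params(lora_config['r']):,}")
# -> trainable LoRA parameters: 13,631,488
```

Under these assumptions each adapter trains roughly 13.6M parameters, a small fraction of the 8B-parameter frozen base.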

For additional information and implementation details, please refer to the SWAP GitHub repository.
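
As a minimal usage sketch, LoRA adapters of this kind are typically attached to the frozen base model with the `peft` library. The repo id and adapter layout below are assumptions; check the files in this repository and the SWAP GitHub repository for the actual structure.

```python
# Sketch: load the base model and attach a LoRA adapter with peft.
# Assumes `transformers` and `peft` are installed and that this repo
# (id assumed to be "sxiong/SWAP_LLM") hosts PEFT adapter weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)

# Attach the fine-tuned LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, "sxiong/SWAP_LLM")
model.eval()
```

Loading requires access to the gated Meta-Llama-3 base weights, so this snippet is illustrative rather than directly runnable in a restricted environment.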

## Citation

```bibtex
@inproceedings{xiong2025deliberate,
  title={Deliberate reasoning in language models as structure-aware planning with an accurate world model},
  author={Xiong, Siheng and Payani, Ali and Yang, Yuan and Fekri, Faramarz},
  booktitle={Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  pages={31900--31931},
  year={2025}
}
```