# Model Card for SWAP_LLM
SWAP_LLM is a suite of fine-tuned models developed for multi-step reasoning with large language models (LLMs). The framework comprises two primary components: a generator and a discriminator.
## Model Details

### Generator

**Base Model:** meta-llama/Meta-Llama-3-8B-Instruct

**LoRA Configuration:**
- `lora_alpha`: 32
- `r`: 16
- `target_modules`: `["q_proj", "k_proj", "v_proj", "o_proj"]`
- `bias`: `"none"`
### Discriminator

**Base Model:** meta-llama/Meta-Llama-3-8B-Instruct

**LoRA Configuration:**
- `lora_alpha`: 32
- `r`: 16
- `target_modules`: `["q_proj", "k_proj", "v_proj", "o_proj"]`
- `bias`: `"none"`
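The LoRA settings above, which are identical for the generator and the discriminator, could be expressed with Hugging Face `peft` roughly as follows. This is a sketch, not the authors' training script; in particular, `task_type` is an assumption (causal-LM fine-tuning) not stated in the card.

```python
from peft import LoraConfig

# LoRA hyperparameters as listed in the card; task_type is an assumption.
lora_config = LoraConfig(
    r=16,                    # rank of the low-rank update matrices
    lora_alpha=32,           # scaling factor applied to the LoRA update
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    bias="none",             # bias parameters remain frozen
    task_type="CAUSAL_LM",   # assumed: causal language modeling
)
```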
For additional information and implementation details, please refer to the SWAP GitHub repository.
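As a rough loading sketch (assuming the adapters are published under the `sxiong/SWAP_LLM` repository; the exact subfolder name for the generator adapter is hypothetical, so check the repository layout before use):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned LoRA adapter; the "generator" subfolder is an assumption.
model = PeftModel.from_pretrained(base_model, "sxiong/SWAP_LLM", subfolder="generator")
```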
## Citation

```bibtex
@inproceedings{xiong2025deliberate,
  title={Deliberate reasoning in language models as structure-aware planning with an accurate world model},
  author={Xiong, Siheng and Payani, Ali and Yang, Yuan and Fekri, Faramarz},
  booktitle={Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  pages={31900--31931},
  year={2025}
}
```