---
base_model: meta-llama/Llama-3.1-70B-Instruct
library_name: peft
model_name: output
tags:
- base_model:adapter:meta-llama/Llama-3.1-70B-Instruct
- lora
- sft
- transformers
- trl
licence: license
pipeline_tag: text-generation
---

# Model Card for output

This model is a LoRA fine-tuned version of [meta-llama/Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Placeholder: replace with this adapter's Hub repository ID once published
model_id = "your-username/output"

generator = pipeline("text-generation", model=model_id, device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

Because this repository holds a LoRA adapter rather than merged weights, a sketch of loading it explicitly with PEFT is given at the end of this card.

## Training procedure

[Visualize in Weights & Biases](https://wandb.ai/dv347/alignment-theater/runs/bk8e6fnf)

This model was trained with SFT (supervised fine-tuning). An illustrative sketch of such a training setup is given at the end of this card.

### Framework versions

- PEFT: 0.18.1
- TRL: 0.27.2
- Transformers: 5.1.0
- PyTorch: 2.8.0
- Datasets: 4.5.0
- Tokenizers: 0.22.2

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
	title        = {{TRL: Transformer Reinforcement Learning}},
	author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
	year         = 2020,
	journal      = {GitHub repository},
	publisher    = {GitHub},
	howpublished = {\url{https://github.com/huggingface/trl}}
}
```
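
## Loading the adapter with PEFT

As referenced in the Quick start above, this repository contains a PEFT LoRA adapter rather than full model weights, so it can also be loaded explicitly on top of the base model. The sketch below is a minimal, illustrative example rather than a verified recipe: the adapter ID `your-username/output` is a placeholder for this repository's actual Hub ID, and 4-bit quantization via bitsandbytes is an assumption made so the 70B base model fits on a single large GPU.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Llama-3.1-70B-Instruct"
adapter_id = "your-username/output"  # placeholder: this adapter's actual Hub repository ID

# 4-bit quantization (an assumption) so the 70B base model fits in memory
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the LoRA adapter weights on top of the frozen base model
model = PeftModel.from_pretrained(base_model, adapter_id)

messages = [{"role": "user", "content": "Explain LoRA in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If the adapter should be served without PEFT at inference time, `model.merge_and_unload()` folds the LoRA weights into the base model; note that merging is typically not supported on a 4-bit quantized base, so it would require loading the base model in full or half precision first.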
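
## Illustrative training sketch

The Training procedure section above records only that SFT was used; the actual script, dataset, and hyperparameters are not part of this card. The following is a rough, hypothetical reconstruction of how a LoRA SFT run like this could be set up with TRL's `SFTTrainer`, using a placeholder dataset and assumed hyperparameters throughout:

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: the actual training data is not documented in this card
dataset = load_dataset("trl-lib/Capybara", split="train")

# Assumed LoRA hyperparameters; the values actually used are not recorded here
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

training_args = SFTConfig(
    output_dir="output",            # matches the model_name in the card metadata
    per_device_train_batch_size=1,  # assumed: a 70B base model leaves little headroom
    gradient_accumulation_steps=8,  # assumed
    learning_rate=2e-4,             # assumed: a common LoRA SFT default
    num_train_epochs=1,             # assumed
    bf16=True,
    report_to="wandb",              # the card links a Weights & Biases run
)

trainer = SFTTrainer(
    model="meta-llama/Llama-3.1-70B-Instruct",  # the documented base model
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```

Passing a `peft_config` makes `SFTTrainer` wrap the base model with LoRA adapters and train only those, which is consistent with this repository containing adapter weights rather than a full checkpoint.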