Uploaded finetuned model

  • Developed by: Varadrajan
  • License: apache-2.0
  • Finetuned from model: unsloth/meta-llama-3.1-8b

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

📖 Overview / Purpose

This model rewrites informal, casual English sentences as polished, formal prose.
The adapter, originally trained on LLaMA 3.1-8B, has been merged into the base model at 16-bit precision, so the model ships as a single artifact with no separate adapter files.

🚀 How to Use

You can use the model in two modes:

  • Direct merged inference: Load this merged 16-bit model directly; no separate adapter is required.
  • Quantized loading: Load it in 8-bit or 4-bit mode (where your hardware and libraries support it) to reduce the memory and storage footprint, depending on your resource constraints.
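Both modes above can be sketched with the standard `transformers` and `bitsandbytes` APIs. The snippet below is a minimal illustration, not an official recipe: the Alpaca-style prompt template and the `formalize` helper are assumptions inferred from the model's name, not confirmed by this card.

```python
MODEL_ID = "Varadrajan/llama-3.1-8b-alpaca-finetuned_16bit_merged"


def build_prompt(informal_text: str) -> str:
    """Wrap casual input in an Alpaca-style instruction prompt (assumed template)."""
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
        "### Instruction:\nRewrite the following text in formal English.\n\n"
        f"### Input:\n{informal_text}\n\n"
        "### Response:\n"
    )


def load_model(quantize_4bit: bool = False):
    """Load the merged 16-bit weights directly, or in 4-bit via bitsandbytes."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    kwargs = {"device_map": "auto"}
    if quantize_4bit:
        # Quantized loading: trades a little quality for a much smaller footprint.
        from transformers import BitsAndBytesConfig

        kwargs["quantization_config"] = BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_compute_dtype=torch.bfloat16,
        )
    else:
        # Direct merged inference at the card's native BF16 precision.
        kwargs["torch_dtype"] = torch.bfloat16

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, **kwargs)
    return tokenizer, model


def formalize(text: str, quantize_4bit: bool = False) -> str:
    """Generate a formal rewrite of `text` (hypothetical convenience wrapper)."""
    tokenizer, model = load_model(quantize_4bit)
    inputs = tokenizer(build_prompt(text), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

For example, `formalize("hey, can u fix this asap?")` would load the model and return a formal rewrite; pass `quantize_4bit=True` to use the 4-bit path instead.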

📌 Intended Applications & Use Cases

  • Customer support and chat interfaces: rewriting user text in a more professional register

  • Formalizing user-generated text (e.g. chats, emails)

  • Content polishing and maintaining style consistency

  • Internal documentation tone standardization across teams

  • Assisting writers or non-native English speakers in producing more formal prose

Model size: 8B params · Tensor type: BF16 · Format: Safetensors

Model tree for Varadrajan/llama-3.1-8b-alpaca-finetuned_16bit_merged
