# Sunyx.ai - Fine-tuned Llama-3.3-70B for ERP

This is a LoRA fine-tune of Llama-3.3-70B-Instruct, specialized for enterprise resource planning (ERP) operations.

## Model Details
- Base Model: meta-llama/Llama-3.3-70B-Instruct
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Training Data: 3,961 ERP examples
- Training Time: 4 hours 19 minutes
- Final Loss: 0.0615
- LoRA Rank: 16
- Trainable Parameters: 207M (0.57% of base model)
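As a sanity check on the reported figures: assuming the adapters target all seven linear projections in each of Llama-3.3-70B's 80 decoder layers (the layer dimensions below come from the public model config; the target-module list is an assumption, since it is not stated above), a rank-16 LoRA adds two low-rank matrices per projection, which works out to roughly the 207M trainable parameters reported:

```python
# Rough LoRA parameter count for rank-16 adapters on Llama-3.3-70B.
# Dimensions are from the public Llama-3.3-70B config; adapting all
# seven linear projections per layer is an assumption.
rank = 16
num_layers = 80
hidden = 8192         # hidden_size
kv_dim = 1024         # 8 key/value heads * head_dim 128 (grouped-query attention)
intermediate = 28672  # intermediate_size (MLP width)

# An adapted projection of shape (d_in, d_out) adds rank * (d_in + d_out)
# parameters: an A matrix (d_in, rank) plus a B matrix (rank, d_out).
projections = [
    (hidden, hidden),        # q_proj
    (hidden, kv_dim),        # k_proj
    (hidden, kv_dim),        # v_proj
    (hidden, hidden),        # o_proj
    (hidden, intermediate),  # gate_proj
    (hidden, intermediate),  # up_proj
    (intermediate, hidden),  # down_proj
]
per_layer = sum(rank * (d_in + d_out) for d_in, d_out in projections)
total = per_layer * num_layers
print(f"{total:,}")  # ~207M, matching the figure above
```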
## Capabilities
This model is fine-tuned for:
- Policy questions and approvals
- Budget queries and analysis
- Purchase order workflows
- Expense reimbursement
- Vendor management
- Real-time ERP data queries
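"Real-time ERP data queries" presupposes that live ERP figures reach the model somehow; one simple pattern is to inject them into the system prompt at request time. The helper below is purely illustrative (`build_system_prompt` and its fields are hypothetical, not part of this repository):

```python
# Hypothetical helper: fold live ERP figures into the system prompt so the
# model can answer budget/approval questions against current data.
def build_system_prompt(budget_remaining: float, approval_limit: float) -> str:
    return (
        "You are Sunyx.ai, an ERP assistant.\n"
        f"Remaining department budget: ${budget_remaining:,.2f}\n"
        f"Requester's approval limit: ${approval_limit:,.2f}"
    )

print(build_system_prompt(12_500.00, 5_000.00))
```

The resulting string would then be used as the `system` message in the Usage example below, refreshed on every request.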
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the base model
base_model = "meta-llama/Llama-3.3-70B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Load the LoRA adapters on top of it
model = PeftModel.from_pretrained(model, "eliazulai/sunyx-ai-llama-3.3-70b-lora")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("eliazulai/sunyx-ai-llama-3.3-70b-lora")

# Build the chat prompt and generate
messages = [
    {"role": "system", "content": "You are Sunyx.ai, an ERP assistant."},
    {"role": "user", "content": "Can I approve a purchase order for $5,000?"},
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
# do_sample=True is needed for temperature to take effect (greedy decoding otherwise)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, not the echoed prompt
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```
## Training Details
- Epochs: 1
- Batch Size: 4 per device
- Gradient Accumulation: 2 steps
- Learning Rate: 2e-5
- Hardware: 8x NVIDIA H100 80GB
- Training Framework: Transformers + PEFT
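For reproducibility, the hyperparameters above imply an effective batch size of 4 × 2 × 8 = 64 sequences per optimizer step, and therefore about 62 optimizer steps over the 3,961-example epoch (assuming plain data parallelism across the 8 GPUs, which is an assumption, not stated above):

```python
import math

# Values from the Training Details list above
per_device_batch = 4
grad_accum_steps = 2
num_gpus = 8          # 8x NVIDIA H100 80GB
num_examples = 3961

effective_batch = per_device_batch * grad_accum_steps * num_gpus
steps_per_epoch = math.ceil(num_examples / effective_batch)
print(effective_batch, steps_per_epoch)  # 64 62
```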
## Limitations
- The adapter is not standalone: it requires access to the gated meta-llama/Llama-3.3-70B-Instruct base model.
- Optimized for ERP/business contexts; it may underperform on general tasks outside the training domain.
## License

This model inherits the Llama 3.3 license from the base model.