Tags: Text Generation · Transformers · Safetensors · English · llama · text-generation-inference · unsloth · ml-intern · conversational
## How to use from Unsloth Studio

### Install Unsloth Studio (Windows)

```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for tritesh/Erato-V1-Foundation to start chatting
```

### Using Hugging Face Spaces for Unsloth

No setup required:

```
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for tritesh/Erato-V1-Foundation to start chatting
```

### Load model with FastModel

```python
# pip install unsloth
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="tritesh/Erato-V1-Foundation",
    max_seq_length=2048,
)
```
## Uploaded finetuned model

- Developed by: raazkumar
- License: apache-2.0
- Finetuned from model: unsloth/llama-3-8b-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
## Generated by ML Intern
This model repository was generated by ML Intern, an agent for machine learning research and development on the Hugging Face Hub.
- Try ML Intern: https://smolagents-ml-intern.hf.space
- Source code: https://github.com/huggingface/ml-intern
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tritesh/Erato-V1-Foundation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```
For non-causal architectures, replace `AutoModelForCausalLM` with the appropriate `AutoModel` class.
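After loading, the model can generate text with `model.generate`. A minimal sketch follows; the sampling settings (`temperature`, `max_new_tokens`) and the float16/`device_map` choices are illustrative assumptions, not recommendations from this model card.

```python
# Sketch: generate a completion with the loaded checkpoint.
# Sampling settings and dtype/device choices here are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tritesh/Erato-V1-Foundation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halves memory vs. float32
    device_map="auto",          # needs `pip install accelerate`
)

prompt = "Write a haiku about the sea."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
)
# Strip the prompt tokens and decode only the completion
completion = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[-1]:],
    skip_special_tokens=True,
)
print(completion)
```

Since the base model is an 8B Llama 3, expect roughly 16 GB of weights in float16; quantized loading (e.g. the bnb-4bit variant listed below) reduces this substantially.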
## Model tree for tritesh/Erato-V1-Foundation

- Base model: meta-llama/Meta-Llama-3-8B
- Quantized: unsloth/llama-3-8b-bnb-4bit
### Install Unsloth Studio (macOS, Linux, WSL)

```shell
# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888
# Then open http://localhost:8888 in your browser
# Search for tritesh/Erato-V1-Foundation to start chatting
```