Tags: Text Generation, Transformers, Safetensors, English, llama, text-generation-inference, unsloth, ml-intern, conversational
Uploaded finetuned model
- Developed by: raazkumar
- License: apache-2.0
- Finetuned from model: unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
Generated by ML Intern
This model repository was generated by ML Intern, an agent for machine learning research and development on the Hugging Face Hub.
- Try ML Intern: https://smolagents-ml-intern.hf.space
- Source code: https://github.com/huggingface/ml-intern
Usage
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = 'tritesh/Erato-V1-Foundation'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
For non-causal architectures, replace AutoModelForCausalLM with the appropriate AutoModel class.
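Since the model is tagged as conversational, inference typically goes through the tokenizer's chat template. The following is a minimal generation sketch; the `torch_dtype`, `device_map` (which requires the `accelerate` package), and `max_new_tokens` settings are assumptions to adjust for your hardware, not values prescribed by this repository:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tritesh/Erato-V1-Foundation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 suits your hardware
    device_map="auto",           # assumption: accelerate is installed
)

# Format the user turn with the model's chat template.
messages = [{"role": "user", "content": "What is the capital of France?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, not the prompt.
reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(reply)
```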
Model tree for tritesh/Erato-V1-Foundation
- Base model: meta-llama/Meta-Llama-3-8B
- Quantized: unsloth/llama-3-8b-bnb-4bit
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "tritesh/Erato-V1-Foundation"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "tritesh/Erato-V1-Foundation",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
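The same OpenAI-compatible request can be issued from Python. A minimal sketch using only the standard library; the `build_chat_request` helper is a hypothetical name introduced here for illustration, and the send step is left commented out since it assumes the vLLM server from the previous step is already listening on localhost:8000:

```python
import json
from urllib import request

def build_chat_request(model, messages):
    """Build an OpenAI-compatible chat completion payload (hypothetical helper)."""
    return {"model": model, "messages": messages}

payload = build_chat_request(
    "tritesh/Erato-V1-Foundation",
    [{"role": "user", "content": "What is the capital of France?"}],
)
body = json.dumps(payload).encode("utf-8")
print(body.decode("utf-8"))

# Uncomment once the vLLM server is running:
# req = request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```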