# RAG Fine-tuned Model: `gemma-3-270m-it-RAG-finetuned-202510190155`
This model was fine-tuned using the CPU RAG Fine-Tuner Space.
## Model Details
- Base Model: `google/gemma-3-270m-it`
- Fine-Tuning Method: Retrieval-Augmented Generation (RAG) fine-tuning on CPU; the model was trained to answer questions based on retrieved context.
## Training Data
- Dataset: `openai/gsm8k`
- Dataset Configuration: `main`
- Data Slice: Rows `0` to `500` were used.
- Question Column: `question`
- Answer Column: `answer`
## Training Hyperparameters
- Learning Rate: `2e-05`
- Epochs: `1`
- Batch Size: `1`
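For reference, the hyperparameters above map onto a Hugging Face `TrainingArguments` configuration roughly as follows. This is a hypothetical reconstruction, assuming the Space uses the standard `transformers` `Trainer`; the Space's actual training script is not shown here and may differ.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the training configuration;
# the output_dir name is an illustration, not the Space's actual path.
training_args = TrainingArguments(
    output_dir="gemma-3-270m-it-RAG-finetuned",
    learning_rate=2e-5,              # Learning Rate from the card
    num_train_epochs=1,              # Epochs
    per_device_train_batch_size=1,   # Batch Size
    use_cpu=True,                    # training ran on CPU
)
```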
## How to Use
This model expects prompts to be formatted in a specific RAG chat structure. The context should be retrieved from a knowledge base built from the training data.
### Prompt Template
```
Use the following context to answer the question.

Context:
---
[Retrieved Document 1]
---
[Retrieved Document 2]
---
...

Question:
[Your Question]
```
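The template above can be assembled programmatically. A minimal sketch; the function name `build_rag_prompt` is an illustration, not part of the Space's API:

```python
def build_rag_prompt(documents, question):
    """Format retrieved documents and a question into the RAG prompt
    structure this model was fine-tuned on."""
    context = "\n---\n".join(documents)
    return (
        "Use the following context to answer the question.\n\n"
        f"Context:\n---\n{context}\n---\n\n"
        f"Question:\n{question}"
    )

prompt = build_rag_prompt(
    ["Doc one text.", "Doc two text."],
    "What does doc one say?",
)
```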
### Inference
You can use this model in a new RAG Inference Space by pasting the model repository ID: `broadfield-dev/gemma-3-270m-it-RAG-finetuned-202510190155`.
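Outside a Space, the retrieval step can be approximated locally. The sketch below uses simple word-overlap scoring as a stand-in retriever (the Space itself likely uses embedding similarity, which is not shown here); the documents are illustrative examples in the style of the `gsm8k` training data:

```python
def retrieve(knowledge_base, query, top_k=2):
    """Rank documents by word overlap with the query and return the
    top_k best matches. A toy retriever for illustration only."""
    query_words = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

knowledge_base = [
    "Natalia sold clips to 48 of her friends in April.",
    "Weng earns $12 an hour for babysitting.",
    "A robe takes 2 bolts of blue fiber.",
]
docs = retrieve(knowledge_base, "How many clips did Natalia sell in April?")
```

The retrieved documents can then be formatted into the prompt template above before being sent to the model.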