---
language: en
tags:
- unsloth
- fine-tuned
license: mit
---
# test1

This model was fine-tuned using Unsloth and Aifisu.
## Model Details

- Base Model: google/gemma-2b
- Number of Training Examples: 99
- Created: 2025-03-14

## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "fususu/test1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Build an instruction-style prompt and generate a response
inputs = tokenizer("### Instruction: Tell me about AI fine-tuning\n\n### Response:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
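Since the model expects the `### Instruction:` / `### Response:` prompt template shown above, small helpers can keep the formatting consistent and strip the prompt from the decoded output. This is a sketch: `build_prompt` and `extract_response` are illustrative names, not part of this repository.

```python
# Helpers for the Alpaca-style prompt template used in the Usage example.
# These names are hypothetical conveniences, not part of the model's API.

def build_prompt(instruction: str) -> str:
    """Wrap an instruction in the template the model was fine-tuned on."""
    return f"### Instruction: {instruction}\n\n### Response:"

def extract_response(decoded: str) -> str:
    """Return only the text after the '### Response:' marker."""
    return decoded.split("### Response:", 1)[-1].strip()

prompt = build_prompt("Tell me about AI fine-tuning")
# After generation, pass tokenizer.decode(...) output through extract_response.
```

Keeping the template in one place avoids subtle mismatches (extra spaces, missing newlines) between training-time and inference-time prompts.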
Created using Aifisu