# gsm8k-deepseek-llm-7b-chat-rajat-seed-42-G-16_merged

A merged model fine-tuned from deepseek-ai/deepseek-llm-7b-chat on the GSM8K math word-problem dataset using GRPO (Group Relative Policy Optimization).
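For context, GRPO samples a group of completions per prompt (here G = 16, per the model name), scores each with a reward, and normalizes rewards within the group to form advantages. The following is a minimal illustrative sketch of that group-relative normalization, not the actual training code; the function name and the binary correctness reward are assumptions for illustration.

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards, eps=1e-6):
    """Normalize each reward against its own group's mean and std
    (the group-relative advantage used in GRPO). Illustrative only."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# One group of G = 16 sampled answers to a GSM8K problem,
# rewarded 1.0 if the final answer is correct, else 0.0 (assumed reward).
rewards = [1.0] * 4 + [0.0] * 12
advantages = group_relative_advantages(rewards)
```

Correct completions receive positive advantages and incorrect ones negative, and the advantages sum to (approximately) zero within each group, so the policy update is driven by relative quality inside the group rather than absolute reward scale.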

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the merged model and tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained(
    "rghosh8/gsm8k-deepseek-llm-7b-chat-rajat-seed-42-G-16_merged",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(
    "rghosh8/gsm8k-deepseek-llm-7b-chat-rajat-seed-42-G-16_merged"
)
```
## Model details

- Format: Safetensors
- Model size: 7B parameters
- Tensor type: F32