Model Card for Mistral-7B Fine-Tuned on Rust Code

Mistral 7B parameter model fine-tuned on a Rust code dataset.

Model Details

The model is fine-tuned primarily on axum and async-graphql code.

Model Description

This model takes the Mistral 7B parameter base model and fine-tunes it on Rust code consisting mainly of axum and async-graphql code. It should be useful for assisting with Rust backend code; a sketch of the kind of code it targets follows below.

  • Developed by: Plawan Rath
  • Finetuned from model: Mistral-7B
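
For reference, here is a minimal sketch of the kind of axum + async-graphql backend code the model is intended to assist with. This example is not taken from the training data; it assumes recent crate APIs (axum 0.7+, async-graphql 7.x with the async-graphql-axum integration), and the route and resolver names are illustrative only:

use async_graphql::{EmptyMutation, EmptySubscription, Object, Schema};
use async_graphql_axum::GraphQL;
use axum::{routing::post_service, Router};

struct Query;

#[Object]
impl Query {
    // A trivial resolver returning a greeting string.
    async fn hello(&self) -> &'static str {
        "Hello from axum + async-graphql"
    }
}

#[tokio::main]
async fn main() {
    // Build a GraphQL schema with only a Query root.
    let schema = Schema::build(Query, EmptyMutation, EmptySubscription).finish();

    // Serve the GraphQL endpoint at POST /graphql.
    let app = Router::new().route("/graphql", post_service(GraphQL::new(schema)));

    let listener = tokio::net::TcpListener::bind("0.0.0.0:8000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}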

Downstream Use

This model can be used locally with a GUI such as LM Studio. To use it with LM Studio, follow these steps:

Converting to GGUF for LM Studio

  1. Install llama.cpp
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt
  2. Run convert-hf-to-gguf.py
python ./convert-hf-to-gguf.py \
   <path-to-merged-model> \
   --outfile ./mistral-lora-f16.gguf \
   --outtype f16
  3. Put it in the LM Studio folder structure. LM Studio doesn't just scan every file directly inside ~/.lmstudio/models/. For each model it expects two nested folders:
~/.lmstudio/models/
└── <publisher-name>/
    └── <model-name>/
        └── model-file.gguf
  4. Refresh "My Models" in LM Studio to see the model.

You should now be able to chat with this model for code generation.
