# Model Card

Mistral 7B parameter model fine-tuned on a Rust code dataset.
## Model Details

The fine-tuning data consists mainly of axum and async-graphql code.
### Model Description

This model was created by fine-tuning the Mistral 7B base model on Rust code consisting mainly of axum and async-graphql code. It is intended to assist with writing Rust backend code.

- **Developed by:** Plawan Rath
- **Finetuned from model:** Mistral-7B
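If you want to try the merged model directly with Hugging Face `transformers` before converting it, a minimal sketch is shown below. The repository id is a placeholder, not the real repo name; substitute the actual model repository or a local path to the merged weights.

```python
# Minimal sketch: load the fine-tuned model with transformers and generate
# Rust code. "plawanrath/mistral-7b-rust" is a placeholder id -- point it
# at the actual merged model repo or a local path instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "plawanrath/mistral-7b-rust"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = "// An axum handler that returns a JSON greeting\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```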
## Downstream Use

The model can be run locally through a GUI such as LM Studio. To use it with LM Studio, follow these steps:

### Converting to GGUF for LM Studio
- Install llama.cpp:

  ```bash
  git clone https://github.com/ggerganov/llama.cpp
  cd llama.cpp
  pip install -r requirements.txt
  ```
- Run convert-hf-to-gguf.py:

  ```bash
  python ./convert-hf-to-gguf.py \
    <path-to-merged-model> \
    --outfile ./mistral-lora-f16.gguf \
    --outtype f16
  ```
- Put the file in the LM Studio folder structure. LM Studio doesn't just scan every file directly inside `~/.lmstudio/models/`; for each model it expects two nested folders (see the script after this list):

  ```
  ~/.lmstudio/models/
  └── <publisher-name>/
      └── <model-name>/
          └── model-file.gguf
  ```
- Now refresh "My Models" in LM Studio to see this model.
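If you prefer to script the copy step above, here is a minimal sketch using Python's standard library; the publisher and model folder names are hypothetical placeholders.

```python
# Sketch: copy the converted GGUF into the nested layout LM Studio expects.
# "plawanrath" and "mistral-7b-rust" are hypothetical folder names; LM Studio
# only cares that the two levels of nesting exist.
import shutil
from pathlib import Path

src = Path("./mistral-lora-f16.gguf")
dest_dir = Path.home() / ".lmstudio" / "models" / "plawanrath" / "mistral-7b-rust"
dest_dir.mkdir(parents=True, exist_ok=True)
shutil.copy2(src, dest_dir / src.name)
print(f"copied to {dest_dir / src.name}")
```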
You should now be able to chat with this model for code generation.
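Beyond the chat UI, LM Studio can also expose the loaded model through its OpenAI-compatible local server. The sketch below assumes the server is running on LM Studio's default port (1234) and that the model identifier matches what LM Studio shows for the loaded GGUF; both are assumptions to adjust for your setup.

```python
# Sketch: query the model through LM Studio's OpenAI-compatible local server.
# The base_url assumes LM Studio's default port; "mistral-lora-f16" is a
# placeholder for the identifier LM Studio assigns to the loaded model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
response = client.chat.completions.create(
    model="mistral-lora-f16",  # placeholder identifier
    messages=[
        {"role": "user", "content": "Write an axum handler that returns JSON."},
    ],
)
print(response.choices[0].message.content)
```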