Quantizations of https://huggingface.co/Undi95/BigL-7B

From original readme

This repo contains fp16 files of BigL-7B.

The model was created by merging the base Mistral-0.2-7B, a full RP finetune of it (Undi95/LewdMistral-7B-0.2), and some other top Mistral models I enjoyed (Toppy, ...)

It's a demonstration that merging Mistral 0.1 and 0.2 models is possible if done correctly, without needing to retrain from scratch (dataset lost, repo deleted, etc...)
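Merges like this are commonly produced with a tool such as mergekit. The config below is a hypothetical sketch only: the merge method, model names, and weights are illustrative assumptions, not the actual BigL-7B recipe.

```yaml
# Hypothetical mergekit config -- illustrative only, not the actual BigL-7B recipe.
merge_method: linear          # weighted average of parameter tensors
models:
  - model: mistralai/Mistral-7B-Instruct-v0.2   # assumed 0.2 base
    parameters:
      weight: 0.5
  - model: Undi95/LewdMistral-7B-0.2            # RP finetune mentioned above
    parameters:
      weight: 0.5
dtype: float16
```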

Prompt template: Alpaca

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{system prompt}

### Input:
{prompt}

### Response:
{output}
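For programmatic use, the template above can be rendered with a small helper. This is a sketch; the function name is mine, not part of the model card:

```python
def alpaca_prompt(system_prompt: str, user_input: str) -> str:
    """Render the Alpaca prompt template shown above.

    The model's reply is generated after the trailing "### Response:" line.
    """
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{system_prompt}\n\n"
        f"### Input:\n{user_input}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("You are a helpful assistant.", "Say hello."))
```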
Downloads last month: 111
Format: GGUF
Model size: 7B params
Architecture: llama

Available quantizations: 1-bit, 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
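A rough rule of thumb for choosing among these quantizations: a GGUF file needs about (params × bits) / 8 bytes, plus some overhead for quantization scales and metadata. A back-of-the-envelope estimate for a 7B model:

```python
# Back-of-the-envelope GGUF size estimate: params * bits / 8 bytes.
# Real files are somewhat larger (quantization scales, metadata).
PARAMS = 7_000_000_000

def approx_size_gb(bits: int, params: int = PARAMS) -> float:
    """Approximate on-disk size in GB for a given bit-width."""
    return params * bits / 8 / 1e9

for bits in (1, 2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.1f} GB")  # e.g. 4-bit: ~3.5 GB
```

Lower bit-widths trade quality for memory: 4-bit and 5-bit variants are a common middle ground for consumer hardware.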
