Uploaded model

  • Compute sponsored by: Nvidia and Arrow ECS Denmark through Danish Data Science Community
  • Developed by: ThatsGroes
  • License: apache-2.0
  • Finetuned from model: AI-Sweden-Models/Llama-3-8B-instruct

Fine-tuned for 1 epoch.

Training peaked at 65.62 GB of GPU memory (82.92% of capacity), of which 49.89 GB (63.04%) was used for LoRA.
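As a quick consistency check, both reported percentages should imply the same total GPU memory capacity. The sketch below is a back-of-envelope calculation using only the figures above (the "80 GB-class GPU" remark is an inference, not stated in the card):

```python
# Back-of-envelope check: each (GB, %) pair implies a total capacity.
peak_gb, peak_pct = 65.62, 82.92   # peak usage and its share of capacity
lora_gb, lora_pct = 49.89, 63.04   # LoRA usage and its share of capacity

total_from_peak = peak_gb / (peak_pct / 100)   # ~79.14 GB
total_from_lora = lora_gb / (lora_pct / 100)   # ~79.14 GB

# Both work out to roughly 79 GB of usable memory, consistent with
# an 80 GB-class GPU (an assumption; the card does not name the hardware).
print(round(total_from_peak, 2), round(total_from_lora, 2))
```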

[codecarbon INFO @ 21:31:34] Energy consumed for RAM : 0.404226 kWh. RAM Power : 188.78840446472168 W
[codecarbon INFO @ 21:31:34] Energy consumed for all GPUs : 0.625855 kWh. Total GPU Power : 82.8216447468557 W
[codecarbon INFO @ 21:31:34] Energy consumed for all CPUs : 0.091042 kWh. Total CPU Power : 42.5 W
[codecarbon INFO @ 21:31:34] 1.121123 kWh of electricity used since the beginning.
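As a sanity check on the log above, the per-component codecarbon readings should sum to the reported total:

```python
import math

# Component energy readings from the codecarbon log above.
ram_kwh, gpu_kwh, cpu_kwh = 0.404226, 0.625855, 0.091042
total_kwh = ram_kwh + gpu_kwh + cpu_kwh

# Matches the logged total of 1.121123 kWh.
assert math.isclose(total_kwh, 1.121123, rel_tol=1e-6)
print(f"{total_kwh:.6f} kWh")
```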

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

GGUF

  • Model size: 8B params
  • Architecture: llama
  • Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit


Model tree for ThatsGroes/Llama-3-8B-instruct-AI-Sweden-SkoleGPT-GGUF

Dataset used to train ThatsGroes/Llama-3-8B-instruct-AI-Sweden-SkoleGPT-GGUF