---
language:
- code
pipeline_tag: text-generation
tags:
- llama-2
- TensorBlock
- GGUF
license: llama2
base_model: codellama/CodeLlama-13b-Python-hf
---

<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>

## codellama/CodeLlama-13b-Python-hf - GGUF

This repo contains GGUF format model files for [codellama/CodeLlama-13b-Python-hf](https://huggingface.co/codellama/CodeLlama-13b-Python-hf).

The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).

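As a quick local sanity check (a hedged sketch, not part of the original card: it assumes a llama.cpp build at or after the commit above and a file downloaded as described below), a quantized file can be run for Python code completion with `llama-cli`:

```shell
# Complete raw Python code with a quantized file (llama.cpp b4011 or later).
# -ngl sets how many layers to offload to the GPU; use 0 for CPU-only inference.
./llama-cli -m ./CodeLlama-13b-Python-hf-Q4_K_M.gguf \
  -p "def fibonacci(n):" \
  -n 256 --temp 0.2 -ngl 35
```
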
| <div style="text-align: left; margin: 20px 0;"> |
| <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;"> |
| Run them on the TensorBlock client using your local machine ↗ |
| </a> |
| </div> |
| |
## Prompt template

This is a base code-completion model, so no chat prompt template is defined; prompt it with the raw code you want completed (as in the `llama-cli` example above).

## Model file specification

| | Filename | Quant type | File Size | Description | |
| | -------- | ---------- | --------- | ----------- | |
| | [CodeLlama-13b-Python-hf-Q2_K.gguf](https://huggingface.co/tensorblock/CodeLlama-13b-Python-hf-GGUF/blob/main/CodeLlama-13b-Python-hf-Q2_K.gguf) | Q2_K | 4.521 GB | smallest, significant quality loss - not recommended for most purposes | |
| | [CodeLlama-13b-Python-hf-Q3_K_S.gguf](https://huggingface.co/tensorblock/CodeLlama-13b-Python-hf-GGUF/blob/main/CodeLlama-13b-Python-hf-Q3_K_S.gguf) | Q3_K_S | 5.270 GB | very small, high quality loss | |
| | [CodeLlama-13b-Python-hf-Q3_K_M.gguf](https://huggingface.co/tensorblock/CodeLlama-13b-Python-hf-GGUF/blob/main/CodeLlama-13b-Python-hf-Q3_K_M.gguf) | Q3_K_M | 5.903 GB | very small, high quality loss | |
| | [CodeLlama-13b-Python-hf-Q3_K_L.gguf](https://huggingface.co/tensorblock/CodeLlama-13b-Python-hf-GGUF/blob/main/CodeLlama-13b-Python-hf-Q3_K_L.gguf) | Q3_K_L | 6.454 GB | small, substantial quality loss | |
| | [CodeLlama-13b-Python-hf-Q4_0.gguf](https://huggingface.co/tensorblock/CodeLlama-13b-Python-hf-GGUF/blob/main/CodeLlama-13b-Python-hf-Q4_0.gguf) | Q4_0 | 6.860 GB | legacy; small, very high quality loss - prefer using Q3_K_M | |
| | [CodeLlama-13b-Python-hf-Q4_K_S.gguf](https://huggingface.co/tensorblock/CodeLlama-13b-Python-hf-GGUF/blob/main/CodeLlama-13b-Python-hf-Q4_K_S.gguf) | Q4_K_S | 6.913 GB | small, greater quality loss | |
| | [CodeLlama-13b-Python-hf-Q4_K_M.gguf](https://huggingface.co/tensorblock/CodeLlama-13b-Python-hf-GGUF/blob/main/CodeLlama-13b-Python-hf-Q4_K_M.gguf) | Q4_K_M | 7.326 GB | medium, balanced quality - recommended | |
| | [CodeLlama-13b-Python-hf-Q5_0.gguf](https://huggingface.co/tensorblock/CodeLlama-13b-Python-hf-GGUF/blob/main/CodeLlama-13b-Python-hf-Q5_0.gguf) | Q5_0 | 8.356 GB | legacy; medium, balanced quality - prefer using Q4_K_M | |
| | [CodeLlama-13b-Python-hf-Q5_K_S.gguf](https://huggingface.co/tensorblock/CodeLlama-13b-Python-hf-GGUF/blob/main/CodeLlama-13b-Python-hf-Q5_K_S.gguf) | Q5_K_S | 8.356 GB | large, low quality loss - recommended | |
| | [CodeLlama-13b-Python-hf-Q5_K_M.gguf](https://huggingface.co/tensorblock/CodeLlama-13b-Python-hf-GGUF/blob/main/CodeLlama-13b-Python-hf-Q5_K_M.gguf) | Q5_K_M | 8.596 GB | large, very low quality loss - recommended | |
| | [CodeLlama-13b-Python-hf-Q6_K.gguf](https://huggingface.co/tensorblock/CodeLlama-13b-Python-hf-GGUF/blob/main/CodeLlama-13b-Python-hf-Q6_K.gguf) | Q6_K | 9.946 GB | very large, extremely low quality loss | |
| | [CodeLlama-13b-Python-hf-Q8_0.gguf](https://huggingface.co/tensorblock/CodeLlama-13b-Python-hf-GGUF/blob/main/CodeLlama-13b-Python-hf-Q8_0.gguf) | Q8_0 | 12.881 GB | very large, extremely low quality loss - not recommended | |
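As a rough rule of thumb for choosing a quant (an added estimate, not from the original card), peak memory use is approximately the file size plus the KV cache. For this 13B Llama-2 architecture (40 layers, hidden size 5120, f16 cache, no GQA), the cache costs about 0.8 MB per context token:

$$
\text{KV cache} \approx 2 \times n_\text{layers} \times d_\text{model} \times 2\,\text{bytes} = 2 \times 40 \times 5120 \times 2 \approx 0.82\ \text{MB/token}
$$

A full 4096-token context therefore adds roughly 3.4 GB on top of the file size (ignoring compute buffers), putting Q4_K_M at about 10.7 GB total.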

## Downloading instructions

### Command line

First, install the Hugging Face CLI:

```shell
pip install -U "huggingface_hub[cli]"
```

Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/CodeLlama-13b-Python-hf-GGUF --include "CodeLlama-13b-Python-hf-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```

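Optionally (an addition to the original instructions; it assumes the separate `hf_transfer` package), large single-file downloads can be accelerated with the Rust-based transfer backend:

```shell
# Install the optional high-speed transfer backend and enable it per command.
pip install hf_transfer
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download tensorblock/CodeLlama-13b-Python-hf-GGUF \
  --include "CodeLlama-13b-Python-hf-Q4_K_M.gguf" --local-dir MY_LOCAL_DIR
```
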
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can run:

```shell
huggingface-cli download tensorblock/CodeLlama-13b-Python-hf-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```