---
license: apache-2.0
---

## Overview
|
|
The Mixtral 8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts (SMoE). Mixtral 8x7B outperforms Llama 2 70B on most benchmarks we tested.
|
|
## Variants
|
|
| No. | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [7x8b-gguf](https://huggingface.co/cortexhub/mixtral/tree/7x8b-gguf) | `cortex run mixtral:7x8b-gguf` |
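To fetch the weights ahead of time instead of on first run, the Cortex CLI can download a variant explicitly. A minimal sketch, assuming the `cortex pull` subcommand is available in your Cortex version:
```
# Download the GGUF variant up front (assumes `cortex pull` is available),
# then start an interactive session with it.
cortex pull mixtral:7x8b-gguf
cortex run mixtral:7x8b-gguf
```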
|
|
## Use it with Jan (UI)
|
|
1. Install **Jan** using the [Quickstart](https://jan.ai/docs/quickstart)
2. In the Jan Model Hub, search for this model ID:
```
cortexhub/mixtral
```

## Use it with Cortex (CLI)
|
|
1. Install **Cortex** using the [Quickstart](https://cortex.jan.ai/docs/quickstart)
2. Run the model with this command:
```
cortex run mixtral
```
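Once the model is running, Cortex serves it over an OpenAI-compatible HTTP API, so you can also chat with it programmatically. A minimal sketch, assuming the default local server address `http://127.0.0.1:39281` (the host, port, and model ID below may differ in your setup; check the Cortex docs):
```
# Send a chat request to the running model via the OpenAI-compatible endpoint.
# The address 127.0.0.1:39281 and the model ID are assumptions; adjust to your setup.
curl http://127.0.0.1:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "mixtral:7x8b-gguf",
        "messages": [{"role": "user", "content": "Hello, Mixtral!"}]
      }'
```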

## Credits
|
|
- **Author:** Mistral AI
- **Converter:** [Homebrew](https://www.homebrew.ltd/)