vexp-devmind v1 (GGUF, Q4_K_M)
Local LLM used by the vexp code-intelligence pipeline. Runs entirely on the user's machine.
Available as:

- npm: vexp-cli
- VS Code: Vexp extension
|  |  |
| --- | --- |
| Quantization | Q4_K_M (llama.cpp / GGUF) |
| File | vexp-devmind-v1-Q4_K_M.gguf |
| Min RAM | 4 GB |
| License | MIT |
Intended use
This model is consumed exclusively by the vexp runtime. It is not a general-purpose chat model; it produces terse, structured outputs that are only useful when interpreted by the vexp runtime.
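The runtime handles loading internally, but the GGUF file can be opened with any llama.cpp-compatible loader if you want to inspect it outside vexp. Below is a minimal sketch using llama-cpp-python (an assumption on our part; it is not a vexp dependency), with an illustrative local path:

```python
# Minimal sketch: load the Q4_K_M GGUF directly with llama-cpp-python.
# Assumptions: llama-cpp-python is installed (pip install llama-cpp-python)
# and the model file has already been downloaded; the path below is illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/vexp-devmind-v1-Q4_K_M.gguf",  # file name from the table above
    n_ctx=2048,       # context window; adjust to the task
    n_gpu_layers=0,   # CPU-only; set > 0 to offload layers if a GPU is available
)

# The model emits terse, structured output intended for the vexp runtime,
# so a raw completion is mostly useful as a smoke test of the download.
out = llm("def fibonacci(n):", max_tokens=64)
print(out["choices"][0]["text"])
```

Because the model targets the vexp pipeline, a raw completion like this mainly confirms that the file loads and decodes; day-to-day usage goes through the CLI or extension described below.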
Install via either entry point:
CLI (npm):

    npm install -g vexp-cli
    vexp setup-llm --install
VS Code extension: Install Vexp from the marketplace, then click "Install LLM" in the vexp sidebar (or accept the first-run prompt).
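To confirm a download is intact without going through vexp, note that every GGUF file starts with the ASCII magic bytes `GGUF`. A quick check in Python (the path is illustrative; vexp manages where the file actually lives):

```python
# Sanity check: a valid GGUF container begins with the magic bytes b"GGUF".
# The path below is illustrative; vexp manages the actual download location.
def looks_like_gguf(path: str) -> bool:
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

print(looks_like_gguf("./models/vexp-devmind-v1-Q4_K_M.gguf"))
```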
Out-of-scope
Conversational chat, long-form generation, general reasoning. Use a general-purpose model for those tasks.
License
MIT.