Works great with your PR 22493!
Thanks AesSedai! I've been waiting to test this model as I really like the speed and quality of V2. Your GGUFs are working perfectly.
Any chance you will be able to work on the multimodal capabilities next?
Can someone point me to an easy way to use PR 22493? I usually just install the specific prebuilt versions of llama.cpp that I need across my devices (e.g., Metal, CUDA, etc.). I'm not seeing any easy way to do that in the PR -- it just offers a zip file with lots of files, but no binaries that I can see.
I can confirm that Q4 does not work on a recent build of llama.cpp -- I get complete gibberish. Furthermore, it will not load on CUDA at all, though it did seem to work on Vulkan and Metal.
Any tips are appreciated, as I'd love to check out Q4! And THANK YOU for creating this so quickly.
You can just ask an LLM for the steps :-)
If you already have C:\llama.cpp cloned:

```
cd C:\llama.cpp
git fetch origin
git switch master
git pull --ff-only origin master
git fetch origin pull/22493/head:pr-22493
git switch -c master-plus-pr-22493 master
git merge --no-ff pr-22493
```
Then build llama.cpp from source as you typically would. Bob's your uncle.
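For the build step, the usual CMake flow from the llama.cpp docs looks roughly like the sketch below; pick the backend flag for your hardware (Metal is enabled by default on macOS, so no extra flag is needed there):

```
cmake -B build -DGGML_CUDA=ON            # or -DGGML_VULKAN=ON, etc., per your backend
cmake --build build --config Release -j
```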
Thanks! I downloaded the zip manually and I'm attempting to build now.
I have several different rigs, so building across all of them will be an endeavor. Using prebuilt binaries is much easier, but I appreciate the learning experience at any rate. Hopefully this model will be good! :)
You're welcome, brotha. Yeah, I think you'll like it. I've been running the model for over an hour now across various prompts, and so far so good.
@x-polyglot-x llama.cpp only publishes prebuilt binaries for the master branch; glad you were able to get it compiling.
@Mdubbya I'll try to give the multimodal vision part a whack in the next few days, no promises though.
I keep hitting compilation issues on Windows. The build either cannot detect the CUDA toolkit, or, when it does, it repeatedly emits warnings like `warning #221-D: floating-point value does not fit in required floating-point type -- return -((float)(1e+300));` and many others. It just gets caught in the same error loops (or "warning" loops, I should say).
It compiled quickly and seemingly with no issues on Apple Silicon.
EDIT: It compiled fine for Apple Silicon and CPU-only builds. I'm currently running the model successfully via rpc-server across both types of machines / devices. I have no idea why the CUDA build fails. I'm new to compiling from scratch, but I installed the CUDA toolkit + Visual Studio and ran the standard commands, AFAIK. It could very well be an issue on my end, but I'm not sure what it would be atm. I'm happy it at least works on this setup for now!
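For anyone else trying the same kind of split, a rough sketch of the rpc-server layout is below; the addresses and port are placeholders for your own network, and note that rpc-server is only built when the GGML_RPC CMake option is enabled:

```
# On each worker machine (build with -DGGML_RPC=ON to get rpc-server):
./build/bin/rpc-server --host 0.0.0.0 --port 50052

# On the machine driving inference, list the workers (placeholder IPs):
./build/bin/llama-cli -m model.gguf --rpc 192.168.1.10:50052,192.168.1.11:50052 -p "Hello"
```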
I did dig into the flash attention (FA) issue a bit and got a pretty good speedup, tested on the Q8_0 of the non-Pro version:
Before:
| PP | TG | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s |
|-------|--------|--------|----------|----------|----------|----------|
| 8192 | 2048 | 0 | 14.860 | 551.27 | 59.324 | 34.52 |
| 8192 | 2048 | 8192 | 27.317 | 299.89 | 198.618 | 10.31 |
| 8192 | 2048 | 16384 | 40.400 | 202.77 | 220.156 | 9.30 |
| 8192 | 2048 | 24576 | 53.788 | 152.30 | 240.882 | 8.50 |
After:
| PP | TG | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s |
|-------|--------|--------|----------|----------|----------|----------|
| 8192 | 2048 | 0 | 2.646 | 3096.50 | 26.495 | 77.30 |
| 8192 | 2048 | 8192 | 2.849 | 2875.61 | 28.700 | 71.36 |
| 8192 | 2048 | 16384 | 3.035 | 2698.98 | 28.985 | 70.66 |
| 8192 | 2048 | 24576 | 3.224 | 2541.33 | 29.247 | 70.02 |
and PPL is still sane: `Final estimate: PPL = 5.1331 +/- 0.03025`
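If you want to re-run the PPL sanity check yourself, the stock llama-perplexity tool is enough; something along these lines, with the model and dataset paths swapped for your own (the file names here are just examples):

```
./build/bin/llama-perplexity -m model-Q8_0.gguf -f wikitext-2-raw/wiki.test.raw
```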
I've pushed it to a new branch, based on the branch from this PR: https://github.com/AesSedai/llama.cpp/tree/mimo-v2.5-fattn
I tested this on my 6000 Pros, but I think it would require more work for older arches / Vulkan / etc. For newish arches + CUDA it should be fine, though?
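If you want to try it out, pulling the branch from my fork is just a fetch and switch (the remote name here is arbitrary):

```
git remote add aessedai https://github.com/AesSedai/llama.cpp.git
git fetch aessedai mimo-v2.5-fattn
git switch -c mimo-v2.5-fattn aessedai/mimo-v2.5-fattn
```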
Indeed, the model is now behaving nicely, and the t/s is tolerable on 8x RTX 3090 + 256 GB ECC DDR4-3200.
Thanks again for your effort to tame the model's FA integration into llama.cpp!
| PP | TG | B | N_KV | T_PP s | S_PP t/s | T_TG s | S_TG t/s | T s | S t/s |
|---|---|---|---|---|---|---|---|---|---|
| 8192 | 2048 | 1 | 10240 | 10.422 | 786.05 | 37.574 | 54.51 | 47.996 | 213.35 |
| 8192 | 2048 | 2 | 20480 | 20.762 | 789.12 | 55.171 | 74.24 | 75.934 | 269.71 |
| 8192 | 2048 | 3 | 30720 | 31.132 | 789.43 | 74.468 | 82.51 | 105.599 | 290.91 |
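For context, the table above follows llama-batched-bench's column layout; an invocation roughly like the one below reproduces that structure (the model path and context size are placeholders, and -npl drives the B column):

```
./build/bin/llama-batched-bench -m model.gguf -c 32768 -npp 8192 -ntg 2048 -npl 1,2,3
```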
P.S. What command line (llama-bench or llama-batched-bench) are you using, exactly, for your benchmark? I wanted to get the same measurement structure but couldn't quite reproduce it...
This looks great! Thanks AesSedai!
I got vision working and uploaded the mmproj files. It's on this branch, based off of the fattn branch: https://github.com/AesSedai/llama.cpp/tree/mimo-v2.5-vision
(Once the basic support PR gets merged, I'll put up the PRs for vision and fattn and work on getting those in.)
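To kick the tires on vision once that branch is built, the mtmd CLI should do it; something like the line below, with the model and mmproj file names swapped for whatever you downloaded (the names here are placeholders):

```
./build/bin/llama-mtmd-cli -m model-Q8_0.gguf --mmproj mmproj-F16.gguf --image test.png -p "Describe this image."
```

(llama-server should also accept the projector via --mmproj if you'd rather test through the web UI.)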