boapro committed on
Commit d941989 · verified · 1 Parent(s): 1cf5655

Update README.md

Files changed (1)
  1. README.md +4 -3
README.md CHANGED
@@ -10,6 +10,9 @@ tags:
 - qwen-coder
 - finetune
 quantized_by: boapro
+datasets:
+- boapro/W1
+- boapro/W2
 ---
 
 ## Llamacpp imatrix Quantizations of Qwen/Qwen2.5-7B
@@ -65,6 +68,4 @@ But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia)
 
 These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
 
-The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
-
-
+The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have an AMD card double check if you're using the rocBLAS build or the Vulcan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
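
For reference, the metadata hunk above adds a `datasets` field to the model card's YAML front matter. The resulting block would presumably read as sketched below; only the lines visible in the diff context are shown, and any earlier tags or fields outside the hunk are elided as comments:

```yaml
tags:
# ... (tags before line 10 are outside the diff context and not shown)
- qwen-coder
- finetune
quantized_by: boapro
datasets:        # added in this commit
- boapro/W1
- boapro/W2
```

On the Hugging Face Hub, a `datasets` list in the front matter links the model card to those dataset repositories, which is a common way to record what a fine-tune or imatrix calibration used.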