Deepdive404 and danielhanchen committed
Commit 0d1195e · 0 parent(s)

Duplicate from unsloth/Kimi-K2.6-GGUF

Co-authored-by: Daniel (Unsloth) <danielhanchen@users.noreply.huggingface.co>

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full file list.
Files changed (50)
  1. .gitattributes +120 -0
  2. BF16/Kimi-K2.6-BF16-00001-of-00046.gguf +3 -0
  3. BF16/Kimi-K2.6-BF16-00002-of-00046.gguf +3 -0
  4. BF16/Kimi-K2.6-BF16-00003-of-00046.gguf +3 -0
  5. BF16/Kimi-K2.6-BF16-00004-of-00046.gguf +3 -0
  6. BF16/Kimi-K2.6-BF16-00005-of-00046.gguf +3 -0
  7. BF16/Kimi-K2.6-BF16-00006-of-00046.gguf +3 -0
  8. BF16/Kimi-K2.6-BF16-00007-of-00046.gguf +3 -0
  9. BF16/Kimi-K2.6-BF16-00008-of-00046.gguf +3 -0
  10. BF16/Kimi-K2.6-BF16-00009-of-00046.gguf +3 -0
  11. BF16/Kimi-K2.6-BF16-00010-of-00046.gguf +3 -0
  12. BF16/Kimi-K2.6-BF16-00011-of-00046.gguf +3 -0
  13. BF16/Kimi-K2.6-BF16-00012-of-00046.gguf +3 -0
  14. BF16/Kimi-K2.6-BF16-00013-of-00046.gguf +3 -0
  15. BF16/Kimi-K2.6-BF16-00014-of-00046.gguf +3 -0
  16. BF16/Kimi-K2.6-BF16-00015-of-00046.gguf +3 -0
  17. BF16/Kimi-K2.6-BF16-00016-of-00046.gguf +3 -0
  18. BF16/Kimi-K2.6-BF16-00017-of-00046.gguf +3 -0
  19. BF16/Kimi-K2.6-BF16-00018-of-00046.gguf +3 -0
  20. BF16/Kimi-K2.6-BF16-00019-of-00046.gguf +3 -0
  21. BF16/Kimi-K2.6-BF16-00020-of-00046.gguf +3 -0
  22. BF16/Kimi-K2.6-BF16-00021-of-00046.gguf +3 -0
  23. BF16/Kimi-K2.6-BF16-00022-of-00046.gguf +3 -0
  24. BF16/Kimi-K2.6-BF16-00023-of-00046.gguf +3 -0
  25. BF16/Kimi-K2.6-BF16-00024-of-00046.gguf +3 -0
  26. BF16/Kimi-K2.6-BF16-00025-of-00046.gguf +3 -0
  27. BF16/Kimi-K2.6-BF16-00026-of-00046.gguf +3 -0
  28. BF16/Kimi-K2.6-BF16-00027-of-00046.gguf +3 -0
  29. BF16/Kimi-K2.6-BF16-00028-of-00046.gguf +3 -0
  30. BF16/Kimi-K2.6-BF16-00029-of-00046.gguf +3 -0
  31. BF16/Kimi-K2.6-BF16-00030-of-00046.gguf +3 -0
  32. BF16/Kimi-K2.6-BF16-00031-of-00046.gguf +3 -0
  33. BF16/Kimi-K2.6-BF16-00032-of-00046.gguf +3 -0
  34. BF16/Kimi-K2.6-BF16-00033-of-00046.gguf +3 -0
  35. BF16/Kimi-K2.6-BF16-00034-of-00046.gguf +3 -0
  36. BF16/Kimi-K2.6-BF16-00035-of-00046.gguf +3 -0
  37. BF16/Kimi-K2.6-BF16-00036-of-00046.gguf +3 -0
  38. BF16/Kimi-K2.6-BF16-00037-of-00046.gguf +3 -0
  39. BF16/Kimi-K2.6-BF16-00038-of-00046.gguf +3 -0
  40. BF16/Kimi-K2.6-BF16-00039-of-00046.gguf +3 -0
  41. BF16/Kimi-K2.6-BF16-00040-of-00046.gguf +3 -0
  42. BF16/Kimi-K2.6-BF16-00041-of-00046.gguf +3 -0
  43. BF16/Kimi-K2.6-BF16-00042-of-00046.gguf +3 -0
  44. BF16/Kimi-K2.6-BF16-00043-of-00046.gguf +3 -0
  45. BF16/Kimi-K2.6-BF16-00044-of-00046.gguf +3 -0
  46. BF16/Kimi-K2.6-BF16-00045-of-00046.gguf +3 -0
  47. BF16/Kimi-K2.6-BF16-00046-of-00046.gguf +3 -0
  48. README.md +628 -0
  49. UD-Q2_K_XL/Kimi-K2.6-UD-Q2_K_XL-00001-of-00008.gguf +3 -0
  50. UD-Q2_K_XL/Kimi-K2.6-UD-Q2_K_XL-00002-of-00008.gguf +3 -0
.gitattributes ADDED
@@ -0,0 +1,120 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00046-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00017-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00027-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00013-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00037-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00035-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00023-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00045-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00033-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00015-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00034-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00006-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00007-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00022-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00038-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00030-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00005-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00025-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00008-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00019-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00039-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00036-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00031-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00024-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00020-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00026-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00040-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00016-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00021-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00012-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00041-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00011-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00009-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00032-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00002-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00043-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00044-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00028-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00003-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00010-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00004-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00014-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00018-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00029-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00042-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ BF16/Kimi-K2.6-BF16-00001-of-00046.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q8_K_XL/Kimi-K2.6-UD-Q8_K_XL-00001-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q8_K_XL/Kimi-K2.6-UD-Q8_K_XL-00002-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q8_K_XL/Kimi-K2.6-UD-Q8_K_XL-00003-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q8_K_XL/Kimi-K2.6-UD-Q8_K_XL-00004-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q8_K_XL/Kimi-K2.6-UD-Q8_K_XL-00005-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q8_K_XL/Kimi-K2.6-UD-Q8_K_XL-00006-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q8_K_XL/Kimi-K2.6-UD-Q8_K_XL-00007-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q8_K_XL/Kimi-K2.6-UD-Q8_K_XL-00008-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q8_K_XL/Kimi-K2.6-UD-Q8_K_XL-00009-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q8_K_XL/Kimi-K2.6-UD-Q8_K_XL-00010-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q8_K_XL/Kimi-K2.6-UD-Q8_K_XL-00011-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q8_K_XL/Kimi-K2.6-UD-Q8_K_XL-00012-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q8_K_XL/Kimi-K2.6-UD-Q8_K_XL-00013-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q8_K_XL/Kimi-K2.6-UD-Q8_K_XL-00014-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q4_K_XL/Kimi-K2.6-UD-Q4_K_XL-00001-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q4_K_XL/Kimi-K2.6-UD-Q4_K_XL-00002-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q4_K_XL/Kimi-K2.6-UD-Q4_K_XL-00003-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q4_K_XL/Kimi-K2.6-UD-Q4_K_XL-00004-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q4_K_XL/Kimi-K2.6-UD-Q4_K_XL-00005-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q4_K_XL/Kimi-K2.6-UD-Q4_K_XL-00006-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q4_K_XL/Kimi-K2.6-UD-Q4_K_XL-00007-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q4_K_XL/Kimi-K2.6-UD-Q4_K_XL-00008-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q4_K_XL/Kimi-K2.6-UD-Q4_K_XL-00009-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q4_K_XL/Kimi-K2.6-UD-Q4_K_XL-00010-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q4_K_XL/Kimi-K2.6-UD-Q4_K_XL-00011-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q4_K_XL/Kimi-K2.6-UD-Q4_K_XL-00012-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q4_K_XL/Kimi-K2.6-UD-Q4_K_XL-00013-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q4_K_XL/Kimi-K2.6-UD-Q4_K_XL-00014-of-00014.gguf filter=lfs diff=lfs merge=lfs -text
+ mmproj-BF16.gguf filter=lfs diff=lfs merge=lfs -text
+ mmproj-F16.gguf filter=lfs diff=lfs merge=lfs -text
+ mmproj-F32.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q2_K_XL/Kimi-K2.6-UD-Q2_K_XL-00001-of-00008.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q2_K_XL/Kimi-K2.6-UD-Q2_K_XL-00002-of-00008.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q2_K_XL/Kimi-K2.6-UD-Q2_K_XL-00003-of-00008.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q2_K_XL/Kimi-K2.6-UD-Q2_K_XL-00004-of-00008.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q2_K_XL/Kimi-K2.6-UD-Q2_K_XL-00005-of-00008.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q2_K_XL/Kimi-K2.6-UD-Q2_K_XL-00006-of-00008.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q2_K_XL/Kimi-K2.6-UD-Q2_K_XL-00007-of-00008.gguf filter=lfs diff=lfs merge=lfs -text
+ UD-Q2_K_XL/Kimi-K2.6-UD-Q2_K_XL-00008-of-00008.gguf filter=lfs diff=lfs merge=lfs -text
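The `.gguf` shards tracked above are stored via Git LFS, so what actually lives in the repository is a small pointer file per shard: three `key value` lines (`version`, `oid`, `size`), as shown in the file entries that follow. As a minimal sketch (a hypothetical helper, not part of this repo and not an official git-lfs API), such a pointer can be parsed like this:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file (version / oid / size lines) into a dict.

    Minimal illustrative sketch of the pointer format used by the shard
    entries in this commit; not an official git-lfs API.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # "oid" is "sha256:<hex digest>"; "size" is the payload size in bytes.
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "hash_algo": algo,
        "digest": digest,
        "size_bytes": int(fields["size"]),
    }

# Pointer contents copied from the first BF16 shard below.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:0223592ed821cd6526a9f8ea816c3000c6d67879fd2bb6f5ab282e624714af78
size 46332327264
"""
info = parse_lfs_pointer(pointer)
print(info["hash_algo"], info["digest"][:8], info["size_bytes"])
# → sha256 0223592e 46332327264
```

The `size` field is the size of the real binary payload, so the pointer files alone are enough to compute download totals without fetching any shard.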
BF16/Kimi-K2.6-BF16-00001-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0223592ed821cd6526a9f8ea816c3000c6d67879fd2bb6f5ab282e624714af78
+ size 46332327264
BF16/Kimi-K2.6-BF16-00002-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:35b46fddc1fe40477a19db07981417f76ab9c7da5555450692f93edc7c6a9c94
+ size 45097157024
BF16/Kimi-K2.6-BF16-00003-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d4a1c1134e4e0c2e53b43d37b05a103ced7dbca3382f9b309e8d2962a4eeb50
+ size 45097157024
BF16/Kimi-K2.6-BF16-00004-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:447a8509088be5cc84f95929c65082db52eb82e90f0f861c222e0250d3e6198a
+ size 45097157024
BF16/Kimi-K2.6-BF16-00005-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:eaeee8f56883fe4838f2bb78865c9405cf9cc5d652a44e0ab106072826861b13
+ size 45097157024
BF16/Kimi-K2.6-BF16-00006-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:129179de44bfed2ff4435ffef1e5fb113d25283f6031f87495ffb5cfd77dd09a
+ size 45097157024
BF16/Kimi-K2.6-BF16-00007-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:43cf830fd102d24d107f6a8d7d8ac46515edd454c7e16bdd3e63499f6ba88c40
+ size 45097157024
BF16/Kimi-K2.6-BF16-00008-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:57ed8c1d26aa3a2a325fcf83ac8be676e07e10754a8bbc1c2f5daf3feabf4177
+ size 45097157024
BF16/Kimi-K2.6-BF16-00009-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f97a0d70b40ef6c3975fdb76b89541b0420b16816f261abf11b4ea62d9e4c920
+ size 45097157024
BF16/Kimi-K2.6-BF16-00010-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79730dd4d740aefbed110abcac869455cec326ee93e6cc1d38b85186db94efab
+ size 45097157024
BF16/Kimi-K2.6-BF16-00011-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:99665113223a781760b8664c02d3e2c4ed7e8218d79f0aa87fb386dbf20acabf
+ size 45097157024
BF16/Kimi-K2.6-BF16-00012-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4969b9935806763e24936bc6b14e6a8c25dda188379a5a70afc709fe982b8371
+ size 45097157024
BF16/Kimi-K2.6-BF16-00013-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d23111cbc770a72ce3a81caccd5d7c35a4ed8f2b970b592e2213a45df60e2921
+ size 45097157024
BF16/Kimi-K2.6-BF16-00014-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6082eea9c658bdd159c6b3a856268e248c57d97d409ecb84442d1e5cec84232
+ size 45097157024
BF16/Kimi-K2.6-BF16-00015-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:021857a52979b4aa0f6d1be70758fbdbbf4533fc518ac14cd59d322882d60102
+ size 45097157024
BF16/Kimi-K2.6-BF16-00016-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ff549c73290b8bfcc138645a5cfe25de5726f972e99d8f6240db72d34a9c9f0f
+ size 45097157024
BF16/Kimi-K2.6-BF16-00017-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9670ea4e702784217b7cf982876ba6db1b3dcc6ebbfd6408ab46a9b6a81ac3a8
+ size 45097157024
BF16/Kimi-K2.6-BF16-00018-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:976c100437338df540707a4f299e417138768bdde029e1916a6307421efca716
+ size 45097157024
BF16/Kimi-K2.6-BF16-00019-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f925f355d657a41a9e8a398ec22e6d1f9ca08d37f7367f990aafe226fa5175cc
+ size 45097157024
BF16/Kimi-K2.6-BF16-00020-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b791a30379ffa74de52e92b28239e3af989f9ed7a0dafb1578e96e1ed331f5ee
+ size 45097157024
BF16/Kimi-K2.6-BF16-00021-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fbf48f255efd8447d861f81c5fea2b34eeeabc2ee6e01939ca177ed144bef3d5
+ size 45097157024
BF16/Kimi-K2.6-BF16-00022-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f335bb0b6d2f7ab99910498a096523e1ab8cb4870b1b602e6db85f8585fb80f1
+ size 45097157024
BF16/Kimi-K2.6-BF16-00023-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d6d8b85cc75e67cf5a04cd3aa3678697014175fae5e55afff4cab20c634a31c
+ size 45097157024
BF16/Kimi-K2.6-BF16-00024-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:706fde43a78f52717a0554934fac845598f7e3750b8ad3bf351a8dca34f11a5a
+ size 45097157024
BF16/Kimi-K2.6-BF16-00025-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:03bde09400ab831d2b0c599b081cfccdf3849496bb767e21fd4d03b1c706e057
+ size 45097157024
BF16/Kimi-K2.6-BF16-00026-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:770046f1217c836f285b21f7294d191857faee182e27593baec42bcbcb7c5728
+ size 45097157024
BF16/Kimi-K2.6-BF16-00027-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:27a0eec6295d51109f5cf89765e0227c4b6739ff87b3709042aad7167201a9ee
+ size 45097157024
BF16/Kimi-K2.6-BF16-00028-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:73e473248a310d6398e2be47fd910d1059fd1167241e6f843180a2601c429a25
+ size 45097157024
BF16/Kimi-K2.6-BF16-00029-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df3966422ea4e5548fe57a702264c3515b0ec440ab8046807d3146a9af75bd9f
+ size 45097157024
BF16/Kimi-K2.6-BF16-00030-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1770bbdd6217f931916a033d2659b6c89c17701e20f926651a2118f3dee66683
+ size 45097157024
BF16/Kimi-K2.6-BF16-00031-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5067c65090e853cafda5cd057b26a1c2c1ce57a4d7b4456ed5bb1f290c321d16
+ size 45097157024
BF16/Kimi-K2.6-BF16-00032-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ea96280b4c2640b9c30c9ae367329870e896e22378f874a0f872b6e3df0357fb
+ size 45097157024
BF16/Kimi-K2.6-BF16-00033-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ef970d5025d53a9d20c5f73cb7f7a76487d8510f7824352dbeba6f0bb6939f2
+ size 45097157024
BF16/Kimi-K2.6-BF16-00034-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ed9ce448779318ab61293cabdacf639497a4b168426dd725c6bdaeb028fa9dd4
+ size 45097157024
BF16/Kimi-K2.6-BF16-00035-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:755725d0d21f59d45a2bc3d875c5ec011bb43f95745c7cb99583dbe7be0caa78
+ size 45097157024
BF16/Kimi-K2.6-BF16-00036-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f74761d9c520e76e27c564429138b9ab835380a58c28c37498aad0368f5cf8d6
+ size 45097157024
BF16/Kimi-K2.6-BF16-00037-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:74ebd349638110bfa0045784c7dedbba074c64c24b9a0f87e36a2b90a907d2b9
+ size 45097157024
BF16/Kimi-K2.6-BF16-00038-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:92f062456aefb8801aed48c8fe0025fa78417659f0bea18e38a95744915b2897
+ size 45097157024
BF16/Kimi-K2.6-BF16-00039-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b5084287206cefbb8dc81c827092707ff34d688dd69061e9d5b1bcc8e0963a6
+ size 45097157024
BF16/Kimi-K2.6-BF16-00040-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:05dc29b194d4f8c357430bf1c67b360f8829d2dff1561bb93b1f39168a37d0b3
+ size 45097157024
BF16/Kimi-K2.6-BF16-00041-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6f4da380888a4acc6e69a95ed818af0cdc05dde280e52c1610f821a54dc71ffc
+ size 45097157024
BF16/Kimi-K2.6-BF16-00042-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dc249a9288ca9e9b4bd0310f64d87de8452b8a14b5429b5033caa08a6c98d23e
+ size 45097157024
BF16/Kimi-K2.6-BF16-00043-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:516c5f078d9cffc175ebafdf100581a99f4ceb8b96464071170956aaab87f8cd
+ size 45097157024
BF16/Kimi-K2.6-BF16-00044-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:846cea56d4ca541ab4ab3ab6e560a858e7bd64c17ab664295c24d16504ac8196
+ size 45097157024
BF16/Kimi-K2.6-BF16-00045-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:491d7be9c5d4c25320a4f407b76b3d71c30221132dd3ee15c1bf75ec3e5f2172
+ size 45097157024
BF16/Kimi-K2.6-BF16-00046-of-00046.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d50e7d0c7d10e0f8058c70c874a3795a07797117a0e8db6f1f3d3c3d58dc04e3
+ size 22548578560
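The `size` fields of the 46 BF16 pointer files above are enough to compute the full-precision download size: one 46.3 GB shard, 44 shards of ~45.1 GB each, and a final 22.5 GB shard. A quick check (sizes copied from the pointer files above):

```python
# Shard sizes in bytes, taken from the 46 BF16 LFS pointer files above:
# shard 1, then shards 2-45 (all identical), then the final shard 46.
shard_sizes = [46332327264] + [45097157024] * 44 + [22548578560]

total_bytes = sum(shard_sizes)
print(len(shard_sizes), "shards,", round(total_bytes / 1e12, 3), "TB")
# → 46 shards, 2.053 TB
```

About 2.05 TB total, which is consistent with a 1T-parameter model stored at 2 bytes per parameter in BF16.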
README.md ADDED
@@ -0,0 +1,628 @@
+ ---
+ base_model:
+ - moonshotai/Kimi-K2.6
+ tags:
+ - compressed-tensors
+ - unsloth
+ - kimi_k25
+ license: other
+ license_name: modified-mit
+ library_name: transformers
+ pipeline_tag: image-text-to-text
+ ---
+ # Read our [How to Run Kimi K2.6 Guide!](https://unsloth.ai/docs/models/kimi-k2.6)
+ <div>
+ <p style="margin: 0 0 0px 0; margin-top: 0px;">
+ <em>See <a href="https://unsloth.ai/docs/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0 GGUFs</a> for our quantization benchmarks.</em>
+ </p>
+ <div style="display: flex; gap: 5px; align-items: center; margin-bottom: 0px;">
+ <a href="https://github.com/unslothai/unsloth/">
+ <img src="https://github.com/unslothai/unsloth/raw/main/images/unsloth%20new%20logo.png" width="133">
+ </a>
+ <a href="https://discord.gg/unsloth">
+ <img src="https://github.com/unslothai/unsloth/raw/main/images/Discord%20button.png" width="173">
+ </a>
+ <a href="https://unsloth.ai/docs/models/kimi-k2.6">
+ <img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="143">
+ </a>
+ </div>
+ <ul style="margin: 0;">
+ <li>To run Kimi K2.6 losslessly at full precision, use Q8 (UD-Q8_K_XL), which is 595GB and only 10GB bigger than Q4 (UD-Q4_K_XL).</li>
+ <li>See our <a href="https://unsloth.ai/docs/models/kimi-k2.6">Kimi K2.6 guide</a> for quantization analysis and instructions.</li>
+ </ul>
+ </div>
+
+ <br>
+
+ # Kimi-K2.6
+
+ ![kimi k2.6](https://unsloth.ai/docs/~gitbook/image?url=https%3A%2F%2F3215535692-files.gitbook.io%2F%7E%2Ffiles%2Fv0%2Fb%2Fgitbook-x-prod.appspot.com%2Fo%2Fspaces%252FxhOjnexMCB3dmuQFQ2Zq%252Fuploads%252FdBDLDaRXybr9JMCs33bC%252Fkimibench.jpg%3Falt%3Dmedia%26token%3D040ea87d-09e8-452c-bfb2-4231305a20d2&width=768&dpr=3&quality=100&sign=fb360710&sv=2)
+
+ Kimi K2.6 is an open-source, native multimodal agentic model that advances practical capabilities in long-horizon coding, coding-driven design, proactive autonomous execution, and swarm-based task orchestration.
+
+ ### Key Features
+ - **Long-Horizon Coding**: K2.6 achieves significant improvements on complex, end-to-end coding tasks, generalizing robustly across programming languages (Rust, Go, Python) and domains spanning front-end, DevOps, and performance optimization.
+ - **Coding-Driven Design**: K2.6 can transform simple prompts and visual inputs into production-ready interfaces and lightweight full-stack workflows, generating structured layouts, interactive elements, and rich animations with deliberate aesthetic precision.
+ - **Elevated Agent Swarm**: Scaling horizontally to 300 sub-agents executing 4,000 coordinated steps, K2.6 can dynamically decompose tasks into parallel, domain-specialized subtasks, delivering end-to-end outputs from documents to websites to spreadsheets in a single autonomous run.
+ - **Proactive & Open Orchestration**: K2.6 performs strongly at powering persistent, 24/7 background agents that proactively manage schedules, execute code, and orchestrate cross-platform operations without human oversight.
+
+ ## 2. Model Summary
+
+ <div align="center">
+
+
+ | | |
+ |:---:|:---:|
+ | **Architecture** | Mixture-of-Experts (MoE) |
+ | **Total Parameters** | 1T |
+ | **Activated Parameters** | 32B |
+ | **Number of Layers** (Dense layer included) | 61 |
+ | **Number of Dense Layers** | 1 |
+ | **Attention Hidden Dimension** | 7168 |
+ | **MoE Hidden Dimension** (per Expert) | 2048 |
+ | **Number of Attention Heads** | 64 |
+ | **Number of Experts** | 384 |
+ | **Selected Experts per Token** | 8 |
+ | **Number of Shared Experts** | 1 |
+ | **Vocabulary Size** | 160K |
+ | **Context Length** | 256K |
+ | **Attention Mechanism** | MLA |
+ | **Activation Function** | SwiGLU |
+ | **Vision Encoder** | MoonViT |
+ | **Parameters of Vision Encoder** | 400M |
+ </div>
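As a rough back-of-envelope from the Model Summary table (an illustration from the table's numbers, not the official parameter accounting): each token routes through 8 selected experts plus 1 shared expert out of 384 total, so only a small fraction of the expert weights is active per token, which is why just 32B of the 1T parameters are activated.

```python
# Values copied from the Model Summary table above.
total_params = 1_000_000_000_000   # 1T total parameters
active_params = 32_000_000_000     # 32B activated per token
experts_total = 384
experts_selected = 8
experts_shared = 1

expert_fraction = (experts_selected + experts_shared) / experts_total
print(f"experts active per token:    {expert_fraction:.2%}")
print(f"parameters active per token: {active_params / total_params:.2%}")
# The activated-parameter share (3.2%) is a bit above the expert share (~2.3%)
# because attention, embeddings and the single dense layer are always active.
```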
+
+ ## 3. Evaluation Results
+
+ <div align="center">
+ <table>
+ <thead>
+ <tr>
+ <th align="center">Benchmark</th>
+ <th align="center"><sup>Kimi K2.6</sup></th>
+ <th align="center"><sup>GPT-5.4 <br><sup>(xhigh)</sup></sup></th>
+ <th align="center"><sup>Claude Opus 4.6 <br><sup>(max effort)</sup></sup></th>
+ <th align="center"><sup>Gemini 3.1 Pro<br><sup>(thinking high)</sup></sup></th>
+ <th align="center"><sup>Kimi K2.5</sup></th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td align="center" colspan=6><strong>Agentic</strong></td>
+ </tr>
+ <tr>
+ <td align="center" style="vertical-align: middle">HLE-Full<br>(w/ tools)</td>
+ <td align="center" style="vertical-align: middle">54.0</td>
+ <td align="center" style="vertical-align: middle">52.1</td>
+ <td align="center" style="vertical-align: middle">53.0</td>
+ <td align="center" style="vertical-align: middle">51.4</td>
+ <td align="center" style="vertical-align: middle">50.2</td>
+ </tr>
+ <tr>
+ <td align="center" style="vertical-align: middle">BrowseComp</td>
+ <td align="center" style="vertical-align: middle">83.2</td>
+ <td align="center" style="vertical-align: middle" rowspan="2">82.7</td>
+ <td align="center" style="vertical-align: middle" rowspan="2">83.7</td>
+ <td align="center" style="vertical-align: middle" rowspan="2">85.9</td>
+ <td align="center" style="vertical-align: middle">74.9</td>
+ </tr>
+ <tr>
+ <td align="center" style="vertical-align: middle">BrowseComp<br>(Agent Swarm)</td>
+ <td align="center" style="vertical-align: middle">86.3</td>
+ <td align="center" style="vertical-align: middle">78.4</td>
+ </tr>
+ <tr>
+ <td align="center" style="vertical-align: middle">DeepSearchQA<br>(f1-score)</td>
+ <td align="center" style="vertical-align: middle">92.5</td>
+ <td align="center" style="vertical-align: middle">78.6</td>
+ <td align="center" style="vertical-align: middle">91.3</td>
+ <td align="center" style="vertical-align: middle">81.9</td>
+ <td align="center" style="vertical-align: middle">89.0</td>
+ </tr>
+ <tr>
+ <td align="center" style="vertical-align: middle">DeepSearchQA<br>(accuracy)</td>
+ <td align="center" style="vertical-align: middle">83.0</td>
+ <td align="center" style="vertical-align: middle">63.7</td>
+ <td align="center" style="vertical-align: middle">80.6</td>
+ <td align="center" style="vertical-align: middle">60.2</td>
+ <td align="center" style="vertical-align: middle">77.1</td>
+ </tr>
+ <tr>
+ <td align="center" style="vertical-align: middle">WideSearch<br> (item-f1)</td>
+ <td align="center" style="vertical-align: middle">80.8</td>
+ <td align="center" style="vertical-align: middle">-</td>
+ <td align="center" style="vertical-align: middle">-</td>
+ <td align="center" style="vertical-align: middle">-</td>
+ <td align="center" style="vertical-align: middle">72.7</td>
+ </tr>
+ <tr>
+ <td align="center" style="vertical-align: middle">Toolathlon</td>
+ <td align="center" style="vertical-align: middle">50.0</td>
+ <td align="center" style="vertical-align: middle">54.6</td>
+ <td align="center" style="vertical-align: middle">47.2</td>
+ <td align="center" style="vertical-align: middle">48.8</td>
+ <td align="center" style="vertical-align: middle">27.8</td>
+ </tr>
+ <tr>
+ <td align="center" style="vertical-align: middle">MCPMark</td>
+ <td align="center" style="vertical-align: middle">55.9</td>
+ <td align="center" style="vertical-align: middle">62.5*</td>
+ <td align="center" style="vertical-align: middle">56.7*</td>
+ <td align="center" style="vertical-align: middle">55.9*</td>
+ <td align="center" style="vertical-align: middle">29.5</td>
+ </tr>
+ <tr>
+ <td align="center" style="vertical-align: middle">Claw Eval (pass^3)</td>
+ <td align="center" style="vertical-align: middle">62.3</td>
+ <td align="center" style="vertical-align: middle">60.3</td>
+ <td align="center" style="vertical-align: middle">70.4</td>
+ <td align="center" style="vertical-align: middle">57.8</td>
+ <td align="center" style="vertical-align: middle">52.3</td>
+ </tr>
+ <tr>
+ <td align="center" style="vertical-align: middle">Claw Eval (pass@3)</td>
+ <td align="center" style="vertical-align: middle">80.9</td>
+ <td align="center" style="vertical-align: middle">78.4</td>
+ <td align="center" style="vertical-align: middle">82.4</td>
+ <td align="center" style="vertical-align: middle">82.9</td>
+ <td align="center" style="vertical-align: middle">75.4</td>
+ </tr>
+ <tr>
+ <td align="center" style="vertical-align: middle">APEX-Agents</td>
+ <td align="center" style="vertical-align: middle">27.9</td>
+ <td align="center" style="vertical-align: middle">33.3</td>
+ <td align="center" style="vertical-align: middle">33.0</td>
+ <td align="center" style="vertical-align: middle">32.0</td>
+ <td align="center" style="vertical-align: middle">11.5</td>
+ </tr>
+ <tr>
+ <td align="center" style="vertical-align: middle">OSWorld-Verified</td>
+ <td align="center" style="vertical-align: middle">73.1</td>
+ <td align="center" style="vertical-align: middle">75.0</td>
+ <td align="center" style="vertical-align: middle">72.7</td>
+ <td align="center" style="vertical-align: middle">-</td>
+ <td align="center" style="vertical-align: middle">63.3</td>
+ </tr>
+ <tr>
+ <td align="center" colspan=6><strong>Coding</strong></td>
+ </tr>
+ <tr>
+ <td align="center" style="vertical-align: middle">Terminal-Bench 2.0<br>(Terminus-2)</td>
+ <td align="center" style="vertical-align: middle">66.7</td>
+ <td align="center" style="vertical-align: middle">65.4*</td>
+ <td align="center" style="vertical-align: middle">65.4</td>
+ <td align="center" style="vertical-align: middle">68.5</td>
+ <td align="center" style="vertical-align: middle">50.8</td>
+ </tr>
+ <tr>
+ <td align="center" style="vertical-align: middle">SWE-Bench Pro</td>
+ <td align="center" style="vertical-align: middle">58.6</td>
+ <td align="center" style="vertical-align: middle">57.7</td>
+ <td align="center" style="vertical-align: middle">53.4</td>
+ <td align="center" style="vertical-align: middle">54.2</td>
203
+ <td align="center" style="vertical-align: middle">50.7</td>
204
+ </tr>
205
+ <tr>
206
+ <td align="center" style="vertical-align: middle">SWE-Bench Multilingual</td>
207
+ <td align="center" style="vertical-align: middle">76.7</td>
208
+ <td align="center" style="vertical-align: middle">-</td>
209
+ <td align="center" style="vertical-align: middle">77.8</td>
210
+ <td align="center" style="vertical-align: middle">76.9*</td>
211
+ <td align="center" style="vertical-align: middle">73.0</td>
212
+ </tr>
213
+ <tr>
214
+ <td align="center" style="vertical-align: middle">SWE-Bench Verified</td>
215
+ <td align="center" style="vertical-align: middle">80.2</td>
216
+ <td align="center" style="vertical-align: middle">-</td>
217
+ <td align="center" style="vertical-align: middle">80.8</td>
218
+ <td align="center" style="vertical-align: middle">80.6</td>
219
+ <td align="center" style="vertical-align: middle">76.8</td>
220
+ </tr>
221
+ <tr>
222
+ <td align="center" style="vertical-align: middle">SciCode</td>
223
+ <td align="center" style="vertical-align: middle">52.2</td>
224
+ <td align="center" style="vertical-align: middle">56.6</td>
225
+ <td align="center" style="vertical-align: middle">51.9</td>
226
+ <td align="center" style="vertical-align: middle">58.9</td>
227
+ <td align="center" style="vertical-align: middle">48.7</td>
228
+ </tr>
229
+ <tr>
230
+ <td align="center" style="vertical-align: middle">OJBench (python)</td>
231
+ <td align="center" style="vertical-align: middle">60.6</td>
232
+ <td align="center" style="vertical-align: middle">-</td>
233
+ <td align="center" style="vertical-align: middle">60.3</td>
234
+ <td align="center" style="vertical-align: middle">70.7</td>
235
+ <td align="center" style="vertical-align: middle">54.7</td>
236
+ </tr>
237
+ <tr>
238
+ <td align="center" style="vertical-align: middle">LiveCodeBench (v6)</td>
239
+ <td align="center" style="vertical-align: middle">89.6</td>
240
+ <td align="center" style="vertical-align: middle">-</td>
241
+ <td align="center" style="vertical-align: middle">88.8</td>
242
+ <td align="center" style="vertical-align: middle">91.7</td>
243
+ <td align="center" style="vertical-align: middle">85.0</td>
244
+ </tr>
245
+ <tr>
246
+ <td align="center" colspan=6><strong>Reasoning &amp; Knowledge</strong></td>
247
+ </tr>
248
+ <tr>
249
+ <td align="center" style="vertical-align: middle">HLE-Full</td>
250
+ <td align="center" style="vertical-align: middle">34.7</td>
251
+ <td align="center" style="vertical-align: middle">39.8</td>
252
+ <td align="center" style="vertical-align: middle">40.0</td>
253
+ <td align="center" style="vertical-align: middle">44.4</td>
254
+ <td align="center" style="vertical-align: middle">30.1</td>
255
+ </tr>
256
+ <tr>
257
+ <td align="center" style="vertical-align: middle">AIME 2026</td>
258
+ <td align="center" style="vertical-align: middle">96.4</td>
259
+ <td align="center" style="vertical-align: middle">99.2</td>
260
+ <td align="center" style="vertical-align: middle">96.7</td>
261
+ <td align="center" style="vertical-align: middle">98.3</td>
262
+ <td align="center" style="vertical-align: middle">95.8</td>
263
+ </tr>
264
+ <tr>
265
+ <td align="center" style="vertical-align: middle">HMMT 2026 (Feb)</td>
266
+ <td align="center" style="vertical-align: middle">92.7</td>
267
+ <td align="center" style="vertical-align: middle">97.7</td>
268
+ <td align="center" style="vertical-align: middle">96.2</td>
269
+ <td align="center" style="vertical-align: middle">94.7</td>
270
+ <td align="center" style="vertical-align: middle">87.1</td>
271
+ </tr>
272
+ <tr>
273
+ <td align="center" style="vertical-align: middle">IMO-AnswerBench</td>
274
+ <td align="center" style="vertical-align: middle">86.0</td>
275
+ <td align="center" style="vertical-align: middle">91.4</td>
276
+ <td align="center" style="vertical-align: middle">75.3</td>
277
+ <td align="center" style="vertical-align: middle">91.0*</td>
278
+ <td align="center" style="vertical-align: middle">81.8</td>
279
+ </tr>
280
+ <tr>
281
+ <td align="center" style="vertical-align: middle">GPQA-Diamond</td>
282
+ <td align="center" style="vertical-align: middle">90.5</td>
283
+ <td align="center" style="vertical-align: middle">92.8</td>
284
+ <td align="center" style="vertical-align: middle">91.3</td>
285
+ <td align="center" style="vertical-align: middle">94.3</td>
286
+ <td align="center" style="vertical-align: middle">87.6</td>
287
+ </tr>
288
+ <tr>
289
+ <td align="center" colspan=6><strong>Vision</strong></td>
290
+ </tr>
291
+ <tr>
292
+ <td align="center" style="vertical-align: middle">MMMU-Pro</td>
293
+ <td align="center" style="vertical-align: middle">79.4</td>
294
+ <td align="center" style="vertical-align: middle">81.2</td>
295
+ <td align="center" style="vertical-align: middle">73.9</td>
296
+ <td align="center" style="vertical-align: middle">83.0*</td>
297
+ <td align="center" style="vertical-align: middle">78.5</td>
298
+ </tr>
299
+ <tr>
300
+ <td align="center" style="vertical-align: middle">MMMU-Pro (w/ python)</td>
301
+ <td align="center" style="vertical-align: middle">80.1</td>
302
+ <td align="center" style="vertical-align: middle">82.1</td>
303
+ <td align="center" style="vertical-align: middle">77.3</td>
304
+ <td align="center" style="vertical-align: middle">85.3*</td>
305
+ <td align="center" style="vertical-align: middle">77.7</td>
306
+ </tr>
307
+ <tr>
308
+ <td align="center" style="vertical-align: middle">CharXiv (RQ)</td>
309
+ <td align="center" style="vertical-align: middle">80.4</td>
310
+ <td align="center" style="vertical-align: middle">82.8*</td>
311
+ <td align="center" style="vertical-align: middle">69.1</td>
312
+ <td align="center" style="vertical-align: middle">80.2*</td>
313
+ <td align="center" style="vertical-align: middle">77.5</td>
314
+ </tr>
315
+ <tr>
316
+ <td align="center" style="vertical-align: middle">CharXiv (RQ) (w/ python)</td>
317
+ <td align="center" style="vertical-align: middle">86.7</td>
318
+ <td align="center" style="vertical-align: middle">90.0*</td>
319
+ <td align="center" style="vertical-align: middle">84.7</td>
320
+ <td align="center" style="vertical-align: middle">89.9*</td>
321
+ <td align="center" style="vertical-align: middle">78.7</td>
322
+ </tr>
323
+ <tr>
324
+ <td align="center" style="vertical-align: middle">MathVision</td>
325
+ <td align="center" style="vertical-align: middle">87.4</td>
326
+ <td align="center" style="vertical-align: middle">92.0*</td>
327
+ <td align="center" style="vertical-align: middle">71.2*</td>
328
+ <td align="center" style="vertical-align: middle">89.8*</td>
329
+ <td align="center" style="vertical-align: middle">84.2</td>
330
+ </tr>
331
+ <tr>
332
+ <td align="center" style="vertical-align: middle">MathVision (w/ python)</td>
333
+ <td align="center" style="vertical-align: middle">93.2</td>
334
+ <td align="center" style="vertical-align: middle">96.1*</td>
335
+ <td align="center" style="vertical-align: middle">84.6*</td>
336
+ <td align="center" style="vertical-align: middle">95.7*</td>
337
+ <td align="center" style="vertical-align: middle">85.0</td>
338
+ </tr>
339
+ <tr>
340
+ <td align="center" style="vertical-align: middle">BabyVision</td>
341
+ <td align="center" style="vertical-align: middle">39.8</td>
342
+ <td align="center" style="vertical-align: middle">49.7</td>
343
+ <td align="center" style="vertical-align: middle">14.8</td>
344
+ <td align="center" style="vertical-align: middle">51.6</td>
345
+ <td align="center" style="vertical-align: middle">36.5</td>
346
+ </tr>
347
+ <tr>
348
+ <td align="center" style="vertical-align: middle">BabyVision (w/ python)</td>
349
+ <td align="center" style="vertical-align: middle">68.5</td>
350
+ <td align="center" style="vertical-align: middle">80.2*</td>
351
+ <td align="center" style="vertical-align: middle">38.4*</td>
352
+ <td align="center" style="vertical-align: middle">68.3*</td>
353
+ <td align="center" style="vertical-align: middle">40.5</td>
354
+ </tr>
355
+ <tr>
356
+ <td align="center" style="vertical-align: middle">V* (w/ python)</td>
357
+ <td align="center" style="vertical-align: middle">96.9</td>
358
+ <td align="center" style="vertical-align: middle">98.4*</td>
359
+ <td align="center" style="vertical-align: middle">86.4*</td>
360
+ <td align="center" style="vertical-align: middle">96.9*</td>
361
+ <td align="center" style="vertical-align: middle">86.9</td>
362
+ </tr>
363
+ </tbody>
364
+ </table>
365
+ </div>
366
+
367
<details>
<summary><b>Footnotes</b></summary>

1. **General Testing Details**
   - We report results for Kimi K2.6 and Kimi K2.5 with thinking mode enabled, Claude Opus 4.6 with max effort, GPT-5.4 with xhigh reasoning effort, and Gemini 3.1 Pro with a high thinking level.
   - Unless otherwise specified, all Kimi K2.6 experiments were conducted with temperature = 1.0, top-p = 1.0, and a context length of 262,144 tokens.
   - Benchmarks without publicly available scores were re-evaluated under the same conditions used for Kimi K2.6 and are marked with an asterisk (`*`). Except where noted with an asterisk, all other results are cited from official reports.
2. **Reasoning Benchmarks**
   - IMO-AnswerBench scores for GPT-5.4 and Claude 4.6 were obtained from [z.ai/blog/glm-5.1](https://z.ai/blog/glm-5.1).
   - Humanity's Last Exam (HLE) and other reasoning tasks were evaluated with a maximum generation length of 98,304 tokens. By default, we report results on the HLE full set. On the text-only subset, Kimi K2.6 achieves 36.4% accuracy without tools and 55.5% with tools.
3. **Tool-Augmented / Agentic Tasks**
   - Kimi K2.6 was equipped with search, code-interpreter, and web-browsing tools for HLE with tools, BrowseComp, DeepSearchQA, and WideSearch.
   - For HLE-Full with tools, the maximum generation length is 262,144 tokens with a per-step limit of 49,152 tokens. We employ a simple context-management strategy: once the context window exceeds the threshold, only the most recent round of tool-related messages is retained.
   - For BrowseComp, we report scores obtained with context management using the same discard-all strategy as Kimi K2.5 and DeepSeek-V3.2.
   - For DeepSearchQA, no context management was applied in the Kimi K2.6 tests, and tasks exceeding the supported context length were counted as failed. Scores for Claude Opus 4.6, GPT-5.4, and Gemini 3.1 Pro on DeepSearchQA are cited from the [Claude Opus 4.7 System Card](https://cdn.sanity.io/files/4zrzovbb/website/037f06850df7fbe871e206dad004c3db5fd50340.pdf).
   - For WideSearch, we report results under the "hide tool result" context-management setting: once the context window exceeds the threshold, only the most recent round of tool-related messages is retained.
   - The test system prompts are identical to those used in the [Kimi K2.5 technical report](https://arxiv.org/pdf/2602.02276).
   - Claw Eval was conducted using version 1.1 with max-tokens-per-step = 16384.
   - For APEX-Agents, we evaluate 452 tasks from the public 480-task release, as done by [Artificial Analysis](https://artificialanalysis.ai/evaluations/apex-agents-aa), excluding Investment Banking Worlds 244 and 246, which have external runtime dependencies.
4. **Coding Tasks**
   - Terminal-Bench 2.0 scores were obtained with the default agent framework (Terminus-2) and the provided JSON parser, operating in preserve-thinking mode.
   - For the SWE-Bench series of evaluations (Verified, Multilingual, and Pro), we used an in-house evaluation framework adapted from SWE-agent. This framework includes a minimal set of tools: bash, createfile, insert, view, strreplace, and submit.
   - All reported scores for coding tasks are averaged over 10 independent runs.
5. **Vision Benchmarks**
   - Max-tokens = 98,304, averaged over three runs (avg@3).
   - Settings with the Python tool use max-tokens-per-step = 65,536 and max-steps = 50 for multi-step reasoning.
   - MMMU-Pro follows the official protocol, preserving input order and prepending images.

</details>
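The context-management strategy described in the footnotes (once the context window exceeds a threshold, keep only the most recent round of tool-related messages) can be sketched roughly as follows. This is an illustrative sketch, not Moonshot's implementation; the message shapes and the token counter are stand-ins:

```python
def prune_context(messages, max_tokens, count_tokens):
    """Drop all but the most recent round of tool-related messages
    once the conversation exceeds the token threshold.

    A "round" here is the last assistant message that issued tool
    calls, plus the tool results that follow it; earlier tool
    traffic is discarded, while system/user/plain-assistant
    messages are always kept.
    """
    total = sum(count_tokens(m) for m in messages)
    if total <= max_tokens:
        return messages

    # Index of the last assistant message that issued tool calls.
    tool_call_turns = [i for i, m in enumerate(messages)
                       if m['role'] == 'assistant' and m.get('tool_calls')]
    if not tool_call_turns:
        return messages
    last_round_start = tool_call_turns[-1]

    kept = []
    for i, m in enumerate(messages):
        is_tool_traffic = m['role'] == 'tool' or (
            m['role'] == 'assistant' and m.get('tool_calls'))
        # Keep every non-tool message, plus tool traffic from the
        # most recent round only.
        if not is_tool_traffic or i >= last_round_start:
            kept.append(m)
    return kept
```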


## 4. Native INT4 Quantization
Kimi-K2.6 adopts the same native INT4 quantization method as [Kimi-K2-Thinking](https://huggingface.co/moonshotai/Kimi-K2-Thinking#4-native-int4-quantization).
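For intuition: blockwise symmetric INT4 quantization stores, for each block of weights, one floating-point scale plus 4-bit integer codes in [-8, 7]. The sketch below is a generic illustration under that assumption, not the actual scheme, block size, or kernels used by the model:

```python
def quantize_int4_block(weights):
    """Quantize one block of weights to symmetric int4.
    The whole block shares a single scale; codes lie in [-8, 7]."""
    scale = max(abs(w) for w in weights) / 7.0
    if scale == 0.0:
        scale = 1.0  # all-zero block: any scale works
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize_block(codes, scale):
    """Recover approximate weights from int4 codes and the block scale."""
    return [c * scale for c in codes]
```

The rounding error per weight is bounded by half the block scale, which is why outlier-heavy blocks (large scale) lose more precision than well-conditioned ones.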

## 5. Deployment

> [!Note]
> You can access the Kimi-K2.6 API at https://platform.moonshot.ai, where we provide OpenAI- and Anthropic-compatible APIs. To verify that a deployment is correct, we also provide the [Kimi Vendor Verifier](https://kimi.com/blog/kimi-vendor-verifier.html).

We currently recommend running Kimi-K2.6 on the following inference engines:
* vLLM
* SGLang
* KTransformers

Kimi-K2.6 has the same architecture as Kimi-K2.5, so the same deployment method can be reused.

The version requirement for `transformers` is `>=4.57.1, <5.0.0`.
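A quick, self-contained way to check a version string against this range (a minimal sketch: it compares only the numeric release segment and ignores pre-release suffixes; in practice you may prefer the third-party `packaging` library):

```python
def release_tuple(version):
    """Numeric dotted prefix of a version string,
    e.g. '4.57.1.dev0' -> (4, 57, 1)."""
    parts = []
    for piece in version.split('.'):
        if not piece.isdigit():
            break
        parts.append(int(piece))
    return tuple(parts)

def transformers_version_ok(version):
    # Requirement from above: >=4.57.1, <5.0.0
    return release_tuple('4.57.1') <= release_tuple(version) < release_tuple('5.0.0')
```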

Deployment examples can be found in the [Model Deployment Guide](docs/deploy_guidance.md).


---
## 6. Model Usage

The demos below show how to call our official API.

For third-party APIs deployed with vLLM or SGLang, please note the following:
> [!Note]
> - Chat with video content is an experimental feature and is currently supported only in our official API.
>
> - The recommended `temperature` is `1.0` for Thinking mode and `0.6` for Instant mode.
>
> - The recommended `top_p` is `0.95`.
>
> - To use Instant mode, pass `{'chat_template_kwargs': {"thinking": False}}` in `extra_body`.

### Chat Completion

This is a simple chat completion script that shows how to call the K2.6 API in Thinking and Instant modes.

```python
import openai

def simple_chat(client: openai.OpenAI, model_name: str):
    messages = [
        {'role': 'system', 'content': 'You are Kimi, an AI assistant created by Moonshot AI.'},
        {
            'role': 'user',
            'content': [
                {'type': 'text', 'text': 'which one is bigger, 9.11 or 9.9? think carefully.'}
            ],
        },
    ]
    response = client.chat.completions.create(
        model=model_name, messages=messages, stream=False, max_tokens=4096
    )
    print('====== Below is reasoning content in Thinking Mode ======')
    print(f'reasoning content: {response.choices[0].message.reasoning}')
    print('====== Below is response in Thinking Mode ======')
    print(f'response: {response.choices[0].message.content}')

    # To use Instant mode, pass {'thinking': {'type': 'disabled'}}
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        max_tokens=4096,
        extra_body={'thinking': {'type': 'disabled'}},  # this is for the official API
        # extra_body={'chat_template_kwargs': {'thinking': False}},  # this is for vLLM/SGLang
    )
    print('====== Below is response in Instant Mode ======')
    print(f'response: {response.choices[0].message.content}')
```


### Chat Completion with visual content

K2.6 supports image and video input.

The following example demonstrates how to call the K2.6 API with image input:

```python
import openai
import base64
import requests

def chat_with_image(client: openai.OpenAI, model_name: str):
    url = 'https://huggingface.co/moonshotai/Kimi-K2.6/resolve/main/figures/kimi-logo.png'
    image_base64 = base64.b64encode(requests.get(url).content).decode()
    messages = [
        {
            'role': 'user',
            'content': [
                {'type': 'text', 'text': 'Describe this image in detail.'},
                {
                    'type': 'image_url',
                    'image_url': {'url': f'data:image/png;base64,{image_base64}'},
                },
            ],
        }
    ]

    response = client.chat.completions.create(
        model=model_name, messages=messages, stream=False, max_tokens=8192
    )
    print('====== Below is reasoning content in Thinking Mode ======')
    print(f'reasoning content: {response.choices[0].message.reasoning}')
    print('====== Below is response in Thinking Mode ======')
    print(f'response: {response.choices[0].message.content}')

    # Instant mode is also supported if you pass {'thinking': {'type': 'disabled'}}
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        max_tokens=4096,
        extra_body={'thinking': {'type': 'disabled'}},  # this is for the official API
        # extra_body={'chat_template_kwargs': {'thinking': False}},  # this is for vLLM/SGLang
    )
    print('====== Below is response in Instant Mode ======')
    print(f'response: {response.choices[0].message.content}')

    return response.choices[0].message.content
```

The following example demonstrates how to call the K2.6 API with video input:

```python
import openai
import base64
import requests

def chat_with_video(client: openai.OpenAI, model_name: str):
    url = 'https://huggingface.co/moonshotai/Kimi-K2.6/resolve/main/figures/demo_video.mp4'
    video_base64 = base64.b64encode(requests.get(url).content).decode()
    messages = [
        {
            'role': 'user',
            'content': [
                {'type': 'text', 'text': 'Describe the video in detail.'},
                {
                    'type': 'video_url',
                    'video_url': {'url': f'data:video/mp4;base64,{video_base64}'},
                },
            ],
        }
    ]

    response = client.chat.completions.create(model=model_name, messages=messages)
    print('====== Below is reasoning content in Thinking Mode ======')
    print(f'reasoning content: {response.choices[0].message.reasoning}')
    print('====== Below is response in Thinking Mode ======')
    print(f'response: {response.choices[0].message.content}')

    # Instant mode is also supported if you pass {'thinking': {'type': 'disabled'}}
    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        max_tokens=4096,
        extra_body={'thinking': {'type': 'disabled'}},  # this is for the official API
        # extra_body={'chat_template_kwargs': {'thinking': False}},  # this is for vLLM/SGLang
    )
    print('====== Below is response in Instant Mode ======')
    print(f'response: {response.choices[0].message.content}')
    return response.choices[0].message.content
```

### Preserve Thinking
Kimi K2.6 supports `preserve_thinking` mode, which retains full reasoning content across multi-turn interactions and enhances performance in coding agent scenarios.

This feature is disabled by default. The following example demonstrates how to call the K2.6 API in `preserve_thinking` mode:

```python
def chat_with_preserve_thinking(client: openai.OpenAI, model_name: str):
    messages = [
        {
            'role': 'user',
            'content': 'Tell me three random numbers.'
        },
        {
            'role': 'assistant',
            'reasoning_content': "I'll start by listing five numbers: 473, 921, 235, 215, 222, and I'll tell you the first three.",
            'content': '473, 921, 235'
        },
        {
            'role': 'user',
            'content': 'What are the other two numbers you have in mind?'
        }
    ]

    response = client.chat.completions.create(
        model=model_name,
        messages=messages,
        stream=False,
        max_tokens=4096,
        extra_body={'thinking': {'type': 'enabled', 'keep': 'all'}},  # this is for the official API
        # extra_body={'chat_template_kwargs': {'thinking': True, 'preserve_thinking': True}},  # this is for vLLM/SGLang
        # We recommend enabling preserve_thinking only in Thinking mode.
    )
    # The assistant should mention 215 and 222, which appear in the prior reasoning content.
    print(f'response: {response.choices[0].message.content}')
    return response.choices[0].message.content
```

### Interleaved Thinking and Multi-Step Tool Call

K2.6 shares the same interleaved-thinking and multi-step tool-call design as K2 Thinking. For a usage example, please refer to the [K2 Thinking documentation](https://platform.moonshot.ai/docs/guide/use-kimi-k2-thinking-model#complete-example).
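At its core, multi-step tool calling is a loop: send the conversation, execute any tool calls the model returns, append the results as `tool` messages, and repeat until the model answers without requesting a tool. Below is a minimal sketch of that loop against an OpenAI-compatible client; the `run_tool` callback and `max_steps` cap are illustrative assumptions, not an official SDK helper:

```python
import json

def agent_loop(client, model_name, messages, tools, run_tool, max_steps=10):
    """Send messages, execute returned tool calls, feed results back,
    and stop once the model replies without requesting a tool."""
    for _ in range(max_steps):
        response = client.chat.completions.create(
            model=model_name, messages=messages, tools=tools
        )
        msg = response.choices[0].message
        # Append the assistant turn (including any reasoning/tool calls)
        # so the next request sees the full interleaved history.
        messages.append(msg)
        if not getattr(msg, 'tool_calls', None):
            return msg.content  # final answer, no more tool use
        for call in msg.tool_calls:
            result = run_tool(call.function.name,
                              json.loads(call.function.arguments))
            messages.append({
                'role': 'tool',
                'tool_call_id': call.id,
                'content': json.dumps(result),
            })
    raise RuntimeError('tool loop did not converge')
```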

### Coding Agent Framework

Kimi K2.6 works best with Kimi Code CLI as its agent framework; give it a try at https://www.kimi.com/code.


---

## 7. License

Both the code repository and the model weights are released under the [Modified MIT License](LICENSE).

---

## 8. Third Party Notices

See [THIRD PARTY NOTICES](THIRD_PARTY_NOTICES.md).

---

## 9. Contact Us

If you have any questions, please reach out at [support@moonshot.ai](mailto:support@moonshot.ai).
UD-Q2_K_XL/Kimi-K2.6-UD-Q2_K_XL-00001-of-00008.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6218da3cc528de6a7fb48f8c7fc4ad45f7dbeeba244b3df07c4c98b20b35121c
+ size 6913152
UD-Q2_K_XL/Kimi-K2.6-UD-Q2_K_XL-00002-of-00008.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b2c125f7bd8526b1c74a748bba1afc3fc8f0ea3a869cb6ee67821e3b8940d250
+ size 48644655008