quantize

quantize = ["Q3_K_M", "Q4_K_M", "Q5_K_M", "Q6_K", "Q8_0", "F16", "BF16"]
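To pick a quant, a rough size estimate helps. The bits-per-weight figures below are approximate llama.cpp values (assumptions, not measured from these files), applied to the ~1.7B parameters of the base model:

```python
# Rough on-disk size estimate for each released quant of a ~1.7B-param
# model. Bits-per-weight values are approximate llama.cpp figures
# (assumptions), so treat the results as ballpark numbers only.
BITS_PER_WEIGHT = {
    "Q3_K_M": 3.9, "Q4_K_M": 4.8, "Q5_K_M": 5.7,
    "Q6_K": 6.6, "Q8_0": 8.5, "F16": 16.0, "BF16": 16.0,
}

def approx_size_gb(params: float, quant: str) -> float:
    """Approximate GGUF file size in GB for `params` weights."""
    return params * BITS_PER_WEIGHT[quant] / 8 / 1e9

for q in BITS_PER_WEIGHT:
    print(f"{q}: ~{approx_size_gb(1.7e9, q):.2f} GB")
```

Lower quants trade accuracy for size; Q4_K_M is a common middle ground.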

train

640,000 samples (40,000 × 2 × 8), distributed across AI-Lua-Dec-0.jsonl.gz, AI-Lua-Dec-1.jsonl.gz, and AI-Lua-Dec-3.jsonl.gz
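Reading the gzipped JSONL shards can be sketched with the standard library; the shard filenames come from the card, but the field names inside each record are not documented, so this only iterates raw JSON objects:

```python
import gzip
import json

# The card's arithmetic: 40,000 scripts x 2 x 8 = 640,000 samples.
assert 40_000 * 2 * 8 == 640_000

def iter_samples(shard_path: str):
    """Yield one JSON object per line from a .jsonl.gz shard
    (e.g. AI-Lua-Dec-0.jsonl.gz). The record schema is not
    documented on the card, so no fields are assumed here."""
    with gzip.open(shard_path, "rt", encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)
```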

Covers Lua 5.1, 5.2, 5.3, and 5.4 bytecode (lua51/lua52/lua53/lua54)

input

Use `luac -l <file>` to generate the bytecode listing that serves as the model's input.
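A minimal sketch of wrapping that step in Python, assuming a `luac` binary (any of 5.1–5.4, matching the script's target version) is on the PATH:

```python
import shutil
import subprocess

def luac_cmd(path: str) -> list[str]:
    """Command line for the bytecode listing the model expects."""
    return ["luac", "-l", path]

def bytecode_listing(path: str) -> str:
    """Run `luac -l <file>` and return its stdout.
    Requires a Lua toolchain; which 5.x version's luac you use
    should match the bytecode you are decompiling."""
    if shutil.which("luac") is None:
        raise RuntimeError("luac not found on PATH")
    return subprocess.run(
        luac_cmd(path), capture_output=True, text=True, check=True
    ).stdout
```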

think

During its reasoning step, the model tries to guess constants, locals, and upvalues from the listing.

output

The output is most likely not directly usable; it may or may not be valid Lua code.
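Since the output is not guaranteed to be valid Lua, a cheap sanity check is to run it through `luac -p` (parse only). This is a sketch assuming a local `luac` is available; passing the check says nothing about semantic correctness:

```python
import subprocess
import tempfile

def parses_as_lua(src: str) -> bool:
    """Return True if `src` at least parses with `luac -p`
    (syntax check only; no guarantee the code is correct)."""
    with tempfile.NamedTemporaryFile(
        "w", suffix=".lua", delete=False
    ) as f:
        f.write(src)
        path = f.name
    result = subprocess.run(["luac", "-p", path], capture_output=True)
    return result.returncode == 0
```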

device

Renting an online GPU is expensive!

| Category | Configuration |
| --- | --- |
| GPU | RTX 4090 (24 GB) × 1 |
| CPU | 16 vCPU Intel(R) Xeon(R) Platinum 8352V @ 2.10 GHz |
| RAM | 120 GB |
| Disk | 30 GB + 50 GB |
| Duration | 1 day |
GGUF · Model size: 2B params · Architecture: qwen3


Model tree for nwdxlgzs/XL-AiLuaDec-1.7B-FFT-GGUF: finetuned from Qwen/Qwen3-1.7B; this model is its quantized release.

Dataset used to train nwdxlgzs/XL-AiLuaDec-1.7B-FFT-GGUF