name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8iv58c | Qwen instruct models have always been oddballs. Even the earlier Qwen bases, with the exception of maybe Qwen 7B from waaay back in the day, very clearly had some instruct in their data mix. | 1 | 0 | 2026-03-04T02:17:21 | Electroboots | false | null | 0 | o8iv58c | false | /r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8iv58c/ | false | 1 |
t1_o8iv4n3 | Hey, I came across this and was curious about trying it out; do you still have this up? | 1 | 0 | 2026-03-04T02:17:15 | Meowliketh | false | null | 0 | o8iv4n3 | false | /r/LocalLLaMA/comments/1gjfajq/i_got_laid_off_so_i_have_to_start_applying_to_as/o8iv4n3/ | false | 1 |
t1_o8iv1kw | Why? I don't get it. It seems to me the first table is evidence that the naive strategy really works well: just get the biggest Unsloth quant that fits you (they are increasingly better and seem the most reliable quants).
But what would you do with the efficiency score? It is likely dataset specific, so OP did wel... | 1 | 0 | 2026-03-04T02:16:45 | erubim | false | null | 0 | o8iv1kw | false | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8iv1kw/ | false | 1 |
t1_o8iv1g4 | I can run this machine headless 99% of the time. | 1 | 0 | 2026-03-04T02:16:44 | AdCreative8703 | false | null | 0 | o8iv1g4 | false | /r/LocalLLaMA/comments/1rk90zw/what_to_pair_with_3080ti_for_qwen_35_27b/o8iv1g4/ | false | 1 |
t1_o8iuzwl | For sure; I forget exactly, but the Mac M4 Max has ~500GB/s memory bandwidth; I believe the 3090 is something like twice that.
so MoE makes the most sense for macs with unified memory, but dense model (smaller) makes more sense for discrete graphics.
Curious what t/s you get with 27b on 3090?
m4 max 128GB gets ~15t/s for the ... | 1 | 0 | 2026-03-04T02:16:29 | slypheed | false | null | 0 | o8iuzwl | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8iuzwl/ | false | 1 |
t1_o8iul35 | I switched over to llama-server on Linux, and I'm now getting 75 TPS. I guess AMD just keeps improving the Linux drivers.
https://preview.redd.it/41bfyyxctxmg1.png?width=762&format=png&auto=webp&s=1d176d2f0b71a1be79325afe0b5d3fa485923ee1
| 1 | 0 | 2026-03-04T02:14:03 | JackTheif52 | false | null | 0 | o8iul35 | false | /r/LocalLLaMA/comments/1rfrsr6/rx_7900_xtx_24g_rocm_72_with_r1_32b_awq_vs_gptq/o8iul35/ | false | 1 |
t1_o8iub0x | Yeah dude. Local llms are the future. Fuck the Anthropic and OpenAI techno feudalism! | 1 | 0 | 2026-03-04T02:12:23 | CalvaoDaMassa | false | null | 0 | o8iub0x | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8iub0x/ | false | 1 |
t1_o8iu952 | Every dang day. The new Qwen3-Coder-Next beats Sonnet 3.5 and Sonnet 3.7 in my personal benchmarks (just bug fixing my code, developing new features). I'm about to dive into Qwen3.5-122B-A10B this week to see if I can just use one model for both coding & chat... | 1 | 0 | 2026-03-04T02:12:04 | txgsync | false | null | 0 | o8iu952 | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8iu952/ | false | 1 |
t1_o8iu5s8 | Bro bro, in 2026 no human is reading your cover letters and savoring your flow and actual voice. | 1 | 0 | 2026-03-04T02:11:30 | 1-800-methdyke | false | null | 0 | o8iu5s8 | false | /r/LocalLLaMA/comments/1rjsf7f/i_spent_6_hours_last_night_failing_to_finetune/o8iu5s8/ | false | 1 |
t1_o8iu2oq | unlikely, manufacturers for complete systems like apple should have years of parts warehoused. | 1 | 0 | 2026-03-04T02:10:59 | J0kooo | false | null | 0 | o8iu2oq | false | /r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8iu2oq/ | false | 1 |
t1_o8ittsz | Would love to chat more more. Using Claude for PE and other related work, but trying to migrate more to local. | 1 | 0 | 2026-03-04T02:09:33 | trailsman | false | null | 0 | o8ittsz | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8ittsz/ | false | 1 |
t1_o8itpny | I just deployed both 35B and 122B to some production servers this week, and for the folks who use LLMs for recall on stored information, there is a large difference between the two.
I guess if you are just using it for agentic loops, calling tools, etc, the difference may not be worth it. | 1 | 0 | 2026-03-04T02:08:53 | xfalcox | false | null | 0 | o8itpny | false | /r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8itpny/ | false | 1 |
t1_o8itdvz | Are these models fully uncensored? I'm trying to figure out their use case. | 1 | 0 | 2026-03-04T02:06:58 | PromiseMePls | false | null | 0 | o8itdvz | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8itdvz/ | false | 1 |
t1_o8it9xx | GGUF?
| 1 | 0 | 2026-03-04T02:06:19 | sunshinecheung | false | null | 0 | o8it9xx | false | /r/LocalLLaMA/comments/1rk8knf/qwen3518breapa3bcoding_50_expertpruned/o8it9xx/ | false | 1 |
t1_o8it89f | RemindMe! 11 days 13 hours 46 minutes | 1 | 0 | 2026-03-04T02:06:03 | pot_sniffer | false | null | 0 | o8it89f | false | /r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/o8it89f/ | false | 1 |
t1_o8it78k | Yes | 1 | 0 | 2026-03-04T02:05:53 | PromiseMePls | false | null | 0 | o8it78k | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8it78k/ | false | 1 |
t1_o8isv09 | my meat brain can not do this advanced meth stuff. | 1 | 0 | 2026-03-04T02:03:54 | Succubus-Empress | false | null | 0 | o8isv09 | false | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8isv09/ | false | 1 |
t1_o8islr8 | I can't wait to release the software I'm working on that will put those nice LLMs like Llama 70b Mixtral 8x7b right on your simple GPU. I can't say anything. It's not vaporware. I'm working on it now. I'll post to this forum when I can spill all the beans and offer you the freedom from frontiers.....
https://preview.... | 1 | 0 | 2026-03-04T02:02:24 | Tough_Frame4022 | false | null | 0 | o8islr8 | false | /r/LocalLLaMA/comments/1rk45ko/is_anyone_else_just_blown_away_that_this_local/o8islr8/ | false | 1 |
t1_o8isjt1 | [deleted] | 1 | 0 | 2026-03-04T02:02:05 | [deleted] | true | null | 0 | o8isjt1 | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8isjt1/ | false | 1 |
t1_o8isgyh | How much improvement did it make in code generation quality tho? | 1 | 0 | 2026-03-04T02:01:37 | segmond | false | null | 0 | o8isgyh | false | /r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8isgyh/ | false | 1 |
t1_o8isgtl | Except ROCm doesn't support my 7600. RIP. | 1 | 0 | 2026-03-04T02:01:36 | National_Meeting_749 | false | null | 0 | o8isgtl | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8isgtl/ | false | 1 |
t1_o8isfgb | On that note, then, I stand corrected; my difficulty may have been caused by an older version, considering ROCm (to my knowledge) was a project started by AMD before AI became a big thing, but it never got much traction, so it got open sourced, and only recently, due to the AI boom, did someone start updating it... | 1 | 0 | 2026-03-04T02:01:23 | Ill-Oil-2027 | false | null | 0 | o8isfgb | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8isfgb/ | false | 1 |
t1_o8isbc2 | $3,599 is the starting price for the Max, not the full configuration, the post did originally say "Starting price." Anyway, just edited to make it clearer. | 1 | 0 | 2026-03-04T02:00:42 | luke_pacman | false | null | 0 | o8isbc2 | false | /r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8isbc2/ | false | 1 |
t1_o8is49d | I have LM Studio and I’m trying to connect it over localhost to OpenCode. But the API calls mismatch for some reason? | 1 | 0 | 2026-03-04T01:59:33 | 16GB_of_ram | false | null | 0 | o8is49d | false | /r/LocalLLaMA/comments/1rjve9e/possible_to_run_local_model_for_opencode_with_m3/o8is49d/ | false | 1 |
t1_o8irw4c | Thanks for all the help, I managed to install it in the desktop folder but yeah, I installed two models and my disk space is almost fully occupied. Lets see if I can make it work like that | 1 | 0 | 2026-03-04T01:58:14 | lubezki | false | null | 0 | o8irw4c | false | /r/LocalLLaMA/comments/1mitsok/which_ai_image_generators_are_good_for/o8irw4c/ | false | 1 |
t1_o8iruwq | ComfyUI supports ROCm for image generation and MMAudio for both Linux and Windows. It is very stable and fast. | 1 | 0 | 2026-03-04T01:58:02 | AbsolutelyStateless | false | null | 0 | o8iruwq | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8iruwq/ | false | 1 |
t1_o8irogy | Neither polyphonic nor doesn't play 😅 but cool clickdummy | 1 | 0 | 2026-03-04T01:57:00 | kyr0x0 | false | null | 0 | o8irogy | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8irogy/ | false | 1 |
t1_o8irk5x | That does not surprise me at all, lol! I've given up on trying to convince colleagues to use AI in ministry. I think I understand the risks (allowing the AI to dictate/ decide the bottom line of your message/ application being one of the most dangerous) but if you take responsibility for prayerful study and knowing whe... | 1 | 0 | 2026-03-04T01:56:18 | FigZestyclose7787 | false | null | 0 | o8irk5x | false | /r/LocalLLaMA/comments/1rk7un4/im_running_a_graph_workflow_with_multiple/o8irk5x/ | false | 1 |
t1_o8iriqr | You see this character: .
The . is used to end sentences. It’s called a full stop or a period. See how I ended that last sentence with one? It makes your paragraphs easier to understand for the reader.
Please try using them.
Thank you,
People reading your shit. | 1 | 0 | 2026-03-04T01:56:04 | __JockY__ | false | null | 0 | o8iriqr | false | /r/LocalLLaMA/comments/1rk2pll/step_flash_35_toolcall_and_thinking_godforsaken/o8iriqr/ | false | 1 |
t1_o8irdjd | I also uploaded a 25B (30% pruned) version which I have not tested yet: https://huggingface.co/Flagstone8878/Qwen3.5-25B-REAP-A3B-Coding | 1 | 0 | 2026-03-04T01:55:13 | 17hoehbr | false | null | 0 | o8irdjd | false | /r/LocalLLaMA/comments/1rk8knf/qwen3518breapa3bcoding_50_expertpruned/o8irdjd/ | false | 1 |
t1_o8irbz9 | Ok. You mentioned the -nkvo flag. First time I've heard of it. What does it do and how do you use it? One last question: someone said to use headless mode to save 1-2 GB. Are you talking about VRAM or normal RAM savings? | 1 | 0 | 2026-03-04T01:54:58 | wisepal_app | false | null | 0 | o8irbz9 | false | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8irbz9/ | false | 1 |
t1_o8irbf1 | > ROCm imo is to much of a hassle to get functioning
I've been using ROCm for both LLMs and diffusion and the only time I had any difficulty was when the 9000-series models were brand new. It works great. | 1 | 0 | 2026-03-04T01:54:53 | AbsolutelyStateless | false | null | 0 | o8irbf1 | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8irbf1/ | false | 1 |
t1_o8ir6dc | Yeah I’m excited to throw some of these slimmer quants at my current task set. Hopefully ik will fix the current mmproj issues with 3.5 I wanna come home dude haha. | 1 | 0 | 2026-03-04T01:54:03 | dinerburgeryum | false | null | 0 | o8ir6dc | false | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ir6dc/ | false | 1 |
t1_o8ir15d | No 🤣 And never ask for permission | 1 | 0 | 2026-03-04T01:53:12 | kyr0x0 | false | null | 0 | o8ir15d | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8ir15d/ | false | 1 |
t1_o8iqveg | They're focused on telling others how good they are instead of actually building anything at all. Check social media and you'll see 🙈. | 1 | 0 | 2026-03-04T01:52:14 | kbderrr | false | null | 0 | o8iqveg | false | /r/LocalLLaMA/comments/1pfad2a/why_india_is_far_behind_in_ai_research/o8iqveg/ | false | 1 |
t1_o8iqtl3 | My understanding is Macs have been limited by TOPS for prompt processing, so long prompts can take a very long time. | 1 | 0 | 2026-03-04T01:51:56 | BrianJThomas | false | null | 0 | o8iqtl3 | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8iqtl3/ | false | 1 |
t1_o8iqrki | I have an M1 Max 64GB that I bought end of 2021 so this is the first model that feels upgrade-worthy, but I'd need to get the 128GB machine to be worth bothering with the upgrade at all. | 1 | 0 | 2026-03-04T01:51:37 | iansltx_ | false | null | 0 | o8iqrki | false | /r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8iqrki/ | false | 1 |
t1_o8iqr0b | Can 122B run on 96GB unified RAM on an M3 Ultra? | 1 | 0 | 2026-03-04T01:51:31 | beau_pi | false | null | 0 | o8iqr0b | false | /r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8iqr0b/ | false | 1 |
t1_o8iqn1o | Thank you. This is definitely helpful.
Gives some clear pointers on when this might be a good fit.
Well...the price points definitely makes me pause....a lot 😉 | 1 | 0 | 2026-03-04T01:50:52 | Aprocastrinator | false | null | 0 | o8iqn1o | false | /r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o8iqn1o/ | false | 1 |
t1_o8iqm65 | Hallucinated. | 1 | 0 | 2026-03-04T01:50:43 | iansltx_ | false | null | 0 | o8iqm65 | false | /r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8iqm65/ | false | 1 |
t1_o8iqjg5 | Interesting.
I've mainly just used free cloud compute because what I'm doing is child's play for those systems. I've got a bunch more inputs and outputs than that, but I'm desperately trying to avoid making it vision based.
But I've got a much more modern processor than that, so I'm gonna at least see how much perform... | 1 | 0 | 2026-03-04T01:50:16 | National_Meeting_749 | false | null | 0 | o8iqjg5 | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8iqjg5/ | false | 1 |
t1_o8iq9q5 | 1) fully textured
2) cleaned
3) rigged
If you can get the first two, you’d have something good. Get the third, it’s great. | 1 | 0 | 2026-03-04T01:48:40 | youre__ | false | null | 0 | o8iq9q5 | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8iq9q5/ | false | 1 |
t1_o8iq6dw | 😂 | 1 | 0 | 2026-03-04T01:48:07 | jeff_actuate | false | null | 0 | o8iq6dw | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8iq6dw/ | false | 1 |
t1_o8iq5f5 | .edu pricing is where it's at with Macs.
14" MBP:
$2,409 for binned M5 Pro/48GB/1TB
$2,599 for M5 Pro/48GB/1TB
$2,779 for M5 Pro/64GB/1TB
$3,929 for M5 Max/64GB/2TB
$4,649 for M5 Max/128GB/2TB
It used to be a bigger premium on the Pro + RAM upgrades. The sweet spot now appears to be the non-binned M5 Pro wit... | 1 | 0 | 2026-03-04T01:47:58 | MrPecunius | false | null | 0 | o8iq5f5 | false | /r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8iq5f5/ | false | 1 |
t1_o8iq3rj | I think it's just there as a frame of reference for their harness. Typically they use their own harness but if you are skeptical about that you would want to compare it against Claude models using baseline Claude Code harness. | 1 | 0 | 2026-03-04T01:47:42 | nullmove | false | null | 0 | o8iq3rj | false | /r/LocalLLaMA/comments/1rk5qzz/qwen3codernext_scored_40_on_latest_swerebench/o8iq3rj/ | false | 1 |
t1_o8iq3pf | Will the recent DRAM shortage also hit Mac's product line? | 1 | 0 | 2026-03-04T01:47:41 | Ralph_mao | false | null | 0 | o8iq3pf | false | /r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8iq3pf/ | false | 1 |
t1_o8iq08g | You know what they say, when one door closes, another door opens... on the plane to your next job at 30000 feet with you on board... | 1 | 0 | 2026-03-04T01:47:07 | Cool-Chemical-5629 | false | null | 0 | o8iq08g | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8iq08g/ | false | 1 |
t1_o8iplun | A MacBook Air? And it is effective? What are you running on it? I mean this sincerely. I just didn't think they could handle it.... | 1 | 0 | 2026-03-04T01:44:43 | QuantumFrothLatte | false | null | 0 | o8iplun | false | /r/LocalLLaMA/comments/1ap1l81/how_to_host_your_llms_for_free/o8iplun/ | false | 1 |
t1_o8ipjh7 | Does this support FIM? I tried Qwen3.5 4B and it seems to break on FIM and only provides empty responses. | 1 | 0 | 2026-03-04T01:44:19 | PANIC_EXCEPTION | false | null | 0 | o8ipjh7 | false | /r/LocalLLaMA/comments/1rim0b3/jancode4b_a_small_codetuned_model_of_janv3/o8ipjh7/ | false | 1 |
t1_o8ipfr8 | The smallest Q4 I guess. Idk if Q3 is viable considering the number of parameters (27B). | 1 | 0 | 2026-03-04T01:43:42 | TitwitMuffbiscuit | false | null | 0 | o8ipfr8 | false | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ipfr8/ | false | 1 |
t1_o8ip578 | Not that I know of. | 1 | 0 | 2026-03-04T01:41:57 | TitwitMuffbiscuit | false | null | 0 | o8ip578 | false | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ip578/ | false | 1 |
t1_o8ip1vt | Alright so this week then? | 1 | 0 | 2026-03-04T01:41:25 | No_Afternoon_4260 | false | null | 0 | o8ip1vt | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o8ip1vt/ | false | 1 |
t1_o8ip0qh | Alrighty then. | 0 | 0 | 2026-03-04T01:41:13 | 3spky5u-oss | false | null | 0 | o8ip0qh | false | /r/LocalLLaMA/comments/1rk6rro/super_35_4b/o8ip0qh/ | false | 0 |
t1_o8ioxf8 | update:
I managed to run a Qwen3.5 0.8B model on a V100 with vLLM using the \`--skip-mm-profiling\`, \`--enforce-eager\`, and \`--gpu-memory-utilization 0.8\` arguments, but a weird thing happens:
1. The memory consumption is **absurd**. A very simple prompt of "who are you" eats up 6GB of free memory for pp, actually it should ... | 1 | 0 | 2026-03-04T01:40:40 | Substantial_Log_1707 | false | null | 0 | o8ioxf8 | false | /r/LocalLLaMA/comments/1rjjvqo/vllm_on_v100_for_qwen_newer_models/o8ioxf8/ | false | 1 |
t1_o8iowka | Oh crap.
But awesome. | 1 | 0 | 2026-03-04T01:40:32 | J_GUMBAINIA | false | null | 0 | o8iowka | false | /r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/o8iowka/ | false | 1 |
t1_o8iotja | I just re read it. Sounds like some hippy witchcraft incantations.
Welcome to a world where a website called Hugging Face hosts large language models. | 1 | 0 | 2026-03-04T01:40:01 | Badger-Purple | false | null | 0 | o8iotja | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8iotja/ | false | 1 |
t1_o8iot2j | Every time someone says they use qwen2.5, either they are a bot or took advice from one with absolutely 0 research or brain power put in. | 1 | 0 | 2026-03-04T01:39:56 | Emotional-Baker-490 | false | null | 0 | o8iot2j | false | /r/LocalLLaMA/comments/1rk17h6/i_stopped_vibechecking_my_llms_and_started_using/o8iot2j/ | false | 1 |
t1_o8ionnv | It is a rabbit hole and it's worse with benchmarks. Like, what's the one that is not completely saturated by recent models and representative of the type of tasks I run, is it qualitative or is there bad/vague questions on the dataset, what's the latest, the quickest to run. Eval is hard. | 1 | 0 | 2026-03-04T01:39:03 | TitwitMuffbiscuit | false | null | 0 | o8ionnv | false | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ionnv/ | false | 1 |
t1_o8iomh1 | A few feature wish lists:
1. Native quad-remeshing. Triangle meshes are a nightmare to sculpt or animate.
2. No baking shadows into the texture.
3. Model generation with a basic skeleton and decent skin weights. | 1 | 0 | 2026-03-04T01:38:51 | aiyakisoba | false | null | 0 | o8iomh1 | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8iomh1/ | false | 1 |
t1_o8ioh6p | I have 16 GB VRAM and 96 GB DDR5 RAM. Which quant do you suggest, and with which flags? | 1 | 0 | 2026-03-04T01:38:00 | wisepal_app | false | null | 0 | o8ioh6p | false | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ioh6p/ | false | 1 |
t1_o8ioex4 | Thanks, downloading to check | 1 | 0 | 2026-03-04T01:37:36 | Legitimate-ChosenOne | false | null | 0 | o8ioex4 | false | /r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/o8ioex4/ | false | 1 |
t1_o8inu96 | Significantly more parameters = gooder. | 1 | 0 | 2026-03-04T01:34:13 | TheRealMasonMac | false | null | 0 | o8inu96 | false | /r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8inu96/ | false | 1 |
t1_o8inr5v | !remindme 48h | 1 | 0 | 2026-03-04T01:33:42 | FusionCow | false | null | 0 | o8inr5v | false | /r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/o8inr5v/ | false | 1 |
t1_o8inqa4 | WTH?! This is ridiculous... | 1 | 0 | 2026-03-04T01:33:34 | Fun_Smoke4792 | false | null | 0 | o8inqa4 | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8inqa4/ | false | 1 |
t1_o8ino5o | Is there a list of what topics do these 0/100 refusals cover? | 1 | 0 | 2026-03-04T01:33:13 | Complex-Maybe3123 | false | null | 0 | o8ino5o | false | /r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8ino5o/ | false | 1 |
t1_o8innm1 | I was like, wait a minute...
Anyway, thanks for experimenting. | 1 | 0 | 2026-03-04T01:33:08 | TitwitMuffbiscuit | false | null | 0 | o8innm1 | false | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8innm1/ | false | 1 |
t1_o8innif | I have almost weekly debates with a friend who is in ministry about the use of AI. I can confidently say he would hate this.
I am really hopeful the tech can be put to good use in that area, so I'd love to hear more about what you are doing. | 1 | 0 | 2026-03-04T01:33:07 | ryanp102694 | false | null | 0 | o8innif | false | /r/LocalLLaMA/comments/1rk7un4/im_running_a_graph_workflow_with_multiple/o8innif/ | false | 1 |
t1_o8inmwo | I will be messaging you in 7 days on [**2026-03-11 01:32:05 UTC**](http://www.wolframalpha.com/input/?i=2026-03-11%2001:32:05%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8inh6v/?context=3)
[**CLICK THIS... | 1 | 0 | 2026-03-04T01:33:01 | RemindMeBot | false | null | 0 | o8inmwo | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8inmwo/ | false | 1 |
t1_o8inkf2 | [removed] | 1 | 0 | 2026-03-04T01:32:37 | [deleted] | true | null | 0 | o8inkf2 | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8inkf2/ | false | 1 |
t1_o8injeg | Or, if launched, just suck unlike Qwen3.5 which has pushed the frontiers of small / medium LLMs. | 1 | 0 | 2026-03-04T01:32:27 | MrRandom04 | false | null | 0 | o8injeg | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8injeg/ | false | 1 |
t1_o8inh6v | RemindMe! 1 Week "Check the repo" | 1 | 0 | 2026-03-04T01:32:05 | shikima | false | null | 0 | o8inh6v | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8inh6v/ | false | 1 |
t1_o8ingjf | What are your use-cases? | 1 | 0 | 2026-03-04T01:31:59 | overand | false | null | 0 | o8ingjf | false | /r/LocalLLaMA/comments/1rk6rro/super_35_4b/o8ingjf/ | false | 1 |
t1_o8in7dj | Love these analyses. Did AesSedai not quant a 27B? I recall his IQ4 being the best for the 35B model. | 1 | 0 | 2026-03-04T01:30:28 | metigue | false | null | 0 | o8in7dj | false | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8in7dj/ | false | 1 |
t1_o8in6tn | >30gb if im not mistaken
That's not nearly enough. As soon as you start downloading models, disk space will evaporate fast. SD1.5 models are about 4 GB in size, per model file. SDXL and Pony models can be 6 GB to 12 GB in size. Flux.1 and Z-Image models can be anywhere between 8 GB to 24+ GB per file.
My own insta... | 1 | 0 | 2026-03-04T01:30:23 | scorp123_CH | false | null | 0 | o8in6tn | false | /r/LocalLLaMA/comments/1mitsok/which_ai_image_generators_are_good_for/o8in6tn/ | false | 1 |
t1_o8in09j | Any luck at getting all layers to GPU for an RTX5080? | 1 | 0 | 2026-03-04T01:29:17 | InternationalNebula7 | false | null | 0 | o8in09j | false | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8in09j/ | false | 1 |
t1_o8imzmp | No, I think Anthropic, OpenAI, and Google will go that way in the future. | 1 | 0 | 2026-03-04T01:29:11 | Guilty_Nothing_2858 | false | null | 0 | o8imzmp | false | /r/LocalLLaMA/comments/1rk6ulw/prediction_nextgen_frontier_llms_will_be/o8imzmp/ | false | 1 |
t1_o8imta3 | I'm running headless, thankfully. | 1 | 0 | 2026-03-04T01:28:08 | InternationalNebula7 | false | null | 0 | o8imta3 | false | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8imta3/ | false | 1 |
t1_o8imsjj | My args:
./llama-server \
--model "$MODEL" \
--fit on \
--tensor-split 0.45,0.55 \
--no-mmap \
--flash-attn on \
--checkpoint-every-nb 3 \
--ctx-size 253952 \
--parallel 1 \
--threads 12 \
--threads-batch 12 \
--cache-ram -1 \
--batch-size 204... | 1 | 0 | 2026-03-04T01:28:01 | Xp_12 | false | null | 0 | o8imsjj | false | /r/LocalLLaMA/comments/1rjxmvo/qwen35_checkpointing_fix_pr_testing/o8imsjj/ | false | 1 |
t1_o8imrc7 | The 27B was able to one shot a breakout game in HTML in my testing, in fact it managed to one shot Tetris and Snake as well… Doom well it tried. I think the MoE model is great for something like a chatbot where fast responses trump precision, but for coding or structured outputs the dense models (or a large parameter M... | 1 | 0 | 2026-03-04T01:27:49 | Idarubicin | false | null | 0 | o8imrc7 | false | /r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8imrc7/ | false | 1 |
t1_o8imnxr | Yea guilty. I kept the attention, output and embedding tensors in Q8 (and ssm_out in bf16) since I’m on a 24+16G build and often do long horizon work. Still, I’ll experiment with mradermacher’s Q4 based on your efficiency chart. Thanks as always for putting this together! | 1 | 0 | 2026-03-04T01:27:16 | dinerburgeryum | false | null | 0 | o8imnxr | false | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8imnxr/ | false | 1 |
t1_o8iml0c | I download model. I copy paste vllm command from model card, everything works. | 1 | 0 | 2026-03-04T01:26:48 | laterbreh | false | null | 0 | o8iml0c | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8iml0c/ | false | 1 |
t1_o8imihg | How can you guys run 4B models on your Android phone? I know Google has the AI Edge Gallery app, which can run Gemma 3 4B very well, but it doesn't support GGUF models, and the app is so beta that it doesn't even have chat history.
Qwen3.5 4b on pockets would be a game changer. | 1 | 0 | 2026-03-04T01:26:23 | JinPing89 | false | null | 0 | o8imihg | false | /r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8imihg/ | false | 1 |
t1_o8imihx | Apple should now focus on building their machines for AI/LLMs. Everyone knows you can run Premiere Pro on a MacBook Air; that's not rocket science by today’s standards. | 1 | 0 | 2026-03-04T01:26:23 | Whyme-__- | false | null | 0 | o8imihx | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8imihx/ | false | 1 |
t1_o8im8mk | He can get a job at any place. Zuck is probably sending out a small army of recruiters. | 1 | 0 | 2026-03-04T01:24:47 | sunflowerapp | false | null | 0 | o8im8mk | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8im8mk/ | false | 1 |
t1_o8im7ss | This is a bit of slop.
For one, hitting the max RAM bandwidth requires getting the full-fat Max chip, which will run you $4199. The standard Max chip is slower and doesn't let you hit 128GB of RAM.
Still a great machine. But basically you're buying something with 2.5x the cost of a Strix Halo for 2.5x the speed of a Str... | 1 | 0 | 2026-03-04T01:24:39 | iansltx_ | false | null | 0 | o8im7ss | false | /r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8im7ss/ | false | 1 |
t1_o8im739 | For an RTX 5080, is there any reason to go full NVFP4? | 1 | 0 | 2026-03-04T01:24:32 | InternationalNebula7 | false | null | 0 | o8im739 | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8im739/ | false | 1 |
t1_o8im51y | [removed] | 1 | 0 | 2026-03-04T01:24:11 | [deleted] | true | null | 0 | o8im51y | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8im51y/ | false | 1 |
t1_o8im2bh | I am very interested in this, but not as a game developer, but as a player. It would be interesting for me if the model could fully generate environment and basic physics. In general, give a large language model to create a world and rules of this world, to fill the virtual world with content. Perhaps make this proces... | 1 | 0 | 2026-03-04T01:23:43 | Penis-Thicc-9586 | false | null | 0 | o8im2bh | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8im2bh/ | false | 1 |
t1_o8im1pw | You publish your distilled intent for llms to train on? lol | 1 | 0 | 2026-03-04T01:23:37 | numberwitch | false | null | 0 | o8im1pw | false | /r/LocalLLaMA/comments/1rk6ulw/prediction_nextgen_frontier_llms_will_be/o8im1pw/ | false | 1 |
t1_o8ilx0y | I will be messaging you in 12 hours on [**2026-03-04 13:21:58 UTC**](http://www.wolframalpha.com/input/?i=2026-03-04%2013:21:58%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/o8ilrhz/?context=3)
[**CLICK THIS ... | 1 | 0 | 2026-03-04T01:22:52 | RemindMeBot | false | null | 0 | o8ilx0y | false | /r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/o8ilx0y/ | false | 1 |
t1_o8ilrhz | !remindme 12h | 1 | 0 | 2026-03-04T01:21:58 | Charming_Skirt3363 | false | null | 0 | o8ilrhz | false | /r/LocalLLaMA/comments/1rk74ap/qwen359b_uncensored_aggressive_release_gguf/o8ilrhz/ | false | 1 |
t1_o8ilnk8 | $3599 for a machine with a state-of-the-art CPU and 128 GB RAM: is that a real price it's going on sale at somewhere, or just hallucinated by the LLM you used to write this post? | 1 | 0 | 2026-03-04T01:21:20 | Marshall_Lawson | false | null | 0 | o8ilnk8 | false | /r/LocalLLaMA/comments/1rk7n3u/apple_m5_pro_m5_max_just_announced_heres_what_it/o8ilnk8/ | false | 1 |
t1_o8ilgr5 | Stefanie | 1 | 0 | 2026-03-04T01:20:13 | somatt | false | null | 0 | o8ilgr5 | false | /r/LocalLLaMA/comments/1rk4gh5/sad_day_for_open_source_gwens_boss_has_left/o8ilgr5/ | false | 1 |
t1_o8il0xd | As an engine developer who wants to create some proof-of-concept games to validate my work but isn't really a creative type, YES I am absolutely interested in this.
I would probably use this for adding custom background assets and such, especially if it could handle texturing as well. I'd love to see it be able to tak... | 1 | 0 | 2026-03-04T01:17:39 | nullptr777 | false | null | 0 | o8il0xd | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8il0xd/ | false | 1 |
t1_o8il0tg | It's a chat template kwarg.
`chat_template_kwargs = { "enable_thinking": false }` needs to make it to your LLM server, either as a launch argument or via the API. Personally, I use the API; here's how you set that up in open-webui:
https://preview.redd.it/0dszvuhgjxmg1.png?width=1080&format=png&auto=webp&s=e7d344550a... | 1 | 0 | 2026-03-04T01:17:38 | Thunderstarer | false | null | 0 | o8il0tg | false | /r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/o8il0tg/ | false | 1 |
t1_o8iky27 | It should be possible to run it via CPU processing as long as the game doesn't use too much CPU on its own. I tried a training test using the CPU-only branch of PyTorch (only ~500MB of packages instead of 3-4GB of packages); I was training a fresh model to play 2048, 16 inputs, and then 4 outputs (model parameters are s... | 1 | 0 | 2026-03-04T01:17:11 | Ill-Oil-2027 | false | null | 0 | o8iky27 | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8iky27/ | false | 1 |
t1_o8ikmtf | Rather buy 4 blackwell 6000 pros, put it a system with DDR5 and keep the rest of the change. | 1 | 0 | 2026-03-04T01:15:21 | segmond | false | null | 0 | o8ikmtf | false | /r/LocalLLaMA/comments/1rjwjf3/hardware_usaca_8gpu_a100_40gb_sxm4_cluster_2x/o8ikmtf/ | false | 1 |
t1_o8ikfzz | Building now. Will report back. | 1 | 0 | 2026-03-04T01:14:16 | Xp_12 | false | null | 0 | o8ikfzz | false | /r/LocalLLaMA/comments/1rjxmvo/qwen35_checkpointing_fix_pr_testing/o8ikfzz/ | false | 1 |
t1_o8ikf4e | 30gb if im not mistaken. Yeah I am supposed to be able to install there. I will create a folder on my desktop and try installing there | 1 | 0 | 2026-03-04T01:14:08 | lubezki | false | null | 0 | o8ikf4e | false | /r/LocalLLaMA/comments/1mitsok/which_ai_image_generators_are_good_for/o8ikf4e/ | false | 1 |
t1_o8ikddw | You're a gem, mate. Some of us really need to see stuff like this. Thanks.
This might be just the post I needed to jump-start me back into figuring out how to run similar comparative tests. I started looking into this casually several months back but got distracted and never went back to it. What I'd love to be ab... | 1 | 0 | 2026-03-04T01:13:51 | munkiemagik | false | null | 0 | o8ikddw | false | /r/LocalLLaMA/comments/1rk5qmr/qwen3527b_q4_quantization_comparison/o8ikddw/ | false | 1 |