title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Track real-time GPU and LLM pricing across all cloud and inference providers | 1 | Deploybase is a dashboard for tracking real-time GPU and LLM pricing across cloud and inference providers. You can view performance stats and pricing history, compare side by side, and bookmark to track any changes. [https://deploybase.ai](https://deploybase.ai/)
[](https://www.reddit.com/submit/?source_id=t3_1rjdv9z) | 2026-03-03T16:39:05 | https://www.reddit.com/r/LocalLLaMA/comments/1rju5cz/track_realtime_gpu_and_llm_pricing_across_all/ | Micky_Haller | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rju5cz | false | null | t3_1rju5cz | /r/LocalLLaMA/comments/1rju5cz/track_realtime_gpu_and_llm_pricing_across_all/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/3316127IXwtWAsKFfXVecZs9F4yfsJoznDhc7ZMsPsg.png?auto=webp&s=36df942e14366f6dc26560051cd33df8287275a4', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/3316127IXwtWAsKFfXVecZs9F4yfsJoznDhc7ZMsPsg.png?width=108&crop=... |
How can the ZwZ model be as fast as smaller models? And one more question. | 1 | I'm using the ZwZ RN template in LM Studio, version Q4\_K\_M, and it's excellent for agentic automation. It is excellent for model agent use In general.
But I don't understand how it can be as fast as smaller models because I'm using models that are smaller than it and are slow. Models like the Qwen3.5 version, which ... | 2026-03-03T16:37:24 | https://www.reddit.com/r/LocalLLaMA/comments/1rju3q7/how_can_the_zwz_model_be_as_fast_as_smaller/ | AppealThink1733 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rju3q7 | false | null | t3_1rju3q7 | /r/LocalLLaMA/comments/1rju3q7/how_can_the_zwz_model_be_as_fast_as_smaller/ | false | false | self | 1 | null |
Junyang Lin has left Qwen :( | 1 | 2026-03-03T16:33:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/ | InternationalAsk1490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjtzyn | false | null | t3_1rjtzyn | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/ | false | false | 1 | null | ||
End of an era | 1 | [https://x.com/JustinLin610/status/2028865835373359513](https://x.com/JustinLin610/status/2028865835373359513)
Junyang Lin stepped down from Qwen 💔
| 2026-03-03T16:33:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rjtzfv/end_of_an_era/ | sprinter21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjtzfv | false | null | t3_1rjtzfv | /r/LocalLLaMA/comments/1rjtzfv/end_of_an_era/ | false | false | self | 1 | null |
Architecture for self-correcting AI agents: mistake logging -> pattern detection -> auto-generated behavioral directives (Claude Code + Supabase + pgvector + Ollama) | 1 | I'm open-sourcing an architecture for building persistent AI agents that evolve their own behavioral rules from operational mistakes. Posting here because the embedding/vector search component runs locally via Ollama and I think this community will have the most interesting technical feedback.
**Core architecture:**
... | 2026-03-03T16:30:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rjtwiy/architecture_for_selfcorrecting_ai_agents_mistake/ | teeheEEee27 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjtwiy | false | null | t3_1rjtwiy | /r/LocalLLaMA/comments/1rjtwiy/architecture_for_selfcorrecting_ai_agents_mistake/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/KDti0e1Y8WkclNFlJIsHz8MHab4zxKcX-iONNQ9yCF0.png?auto=webp&s=ff53428d6fdd814750738ad4781acff7bf4ce949', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/KDti0e1Y8WkclNFlJIsHz8MHab4zxKcX-iONNQ9yCF0.png?width=108&crop=... |
The Truth About MCP vs CLI | 1 | "MCP was a mistake. Bash is better."
That quote from the developer behind OpenClaw kicked off the biggest AI tooling debate of 2026.
Connect a GitHub MCP server → 93 tools dumped into your context window → 55,000 tokens gone. Before you've even asked a question.
Stack GitHub + Jira + a database + Micros... | 2026-03-03T16:26:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rjtt01/the_truth_about_mcp_vs_cli/ | kagan101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjtt01 | false | null | t3_1rjtt01 | /r/LocalLLaMA/comments/1rjtt01/the_truth_about_mcp_vs_cli/ | false | false | self | 1 | null |
Local models will participate in weapons systems says CROSSHAIR benchmark | 1 | There's been a lot of discussion about the state of the art models and whether or not they can be used inside of weapon systems or mass surveillance against people. There's also a lot of talk about how heavily censored the local models are, but I constructed a rigorous test of the most popular local models, and they a... | 2026-03-03T16:23:37 | https://crosshairbenchmark.com | dolex-mcp | crosshairbenchmark.com | 1970-01-01T00:00:00 | 0 | {} | 1rjtqgm | false | null | t3_1rjtqgm | /r/LocalLLaMA/comments/1rjtqgm/local_models_will_participate_in_weapons_systems/ | false | false | default | 1 | null |
Best way to configure llama.cpp for hybrid GPU + CPU inference? | 1 | [removed] | 2026-03-03T16:22:30 | https://www.reddit.com/r/LocalLLaMA/comments/1rjtpga/best_way_to_configure_llamacpp_for_hybrid_gpu_cpu/ | abubakkar_s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjtpga | false | null | t3_1rjtpga | /r/LocalLLaMA/comments/1rjtpga/best_way_to_configure_llamacpp_for_hybrid_gpu_cpu/ | false | false | self | 1 | null |
Best way to run llama.cpp hybrid (GPU first, CPU fallback)? | 1 | [removed] | 2026-03-03T16:20:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rjtn5l/best_way_to_run_llamacpp_hybrid_gpu_first_cpu/ | abubakkar_s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjtn5l | false | null | t3_1rjtn5l | /r/LocalLLaMA/comments/1rjtn5l/best_way_to_run_llamacpp_hybrid_gpu_first_cpu/ | false | false | self | 1 | null |
In llama_cpp running hybrid, GPU priority then CPU fallback | 1 | [removed] | 2026-03-03T16:15:31 | https://www.reddit.com/r/LocalLLaMA/comments/1rjtiif/in_llama_cpp_running_hybrid_gpu_priority_then_cpu/ | abubakkar_s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjtiif | false | null | t3_1rjtiif | /r/LocalLLaMA/comments/1rjtiif/in_llama_cpp_running_hybrid_gpu_priority_then_cpu/ | false | false | self | 1 | null |
In llama.cpp hybrid execution, GPU priority + CPU fallback | 1 | [removed] | 2026-03-03T16:10:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rjtdn0/in_llamacpp_hybrid_execution_gpu_priority_cpu/ | abubakkar_s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjtdn0 | false | null | t3_1rjtdn0 | /r/LocalLLaMA/comments/1rjtdn0/in_llamacpp_hybrid_execution_gpu_priority_cpu/ | false | false | self | 1 | null |
Agentic RL hackathon this weekend in SF | 1 | Mentors from PyTorch, huggingface , and Unsloth will guide you to build agentic environments to win from a pool of $100K prizes.
\+ free compute and token credits just for attending!
Be there mar 7-8 in SF.
[https://cerebralvalley.ai/e/openenv-hackathon-sf?tab=guest-list](https://cerebralvalley.ai/e/openenv-hackatho... | 2026-03-03T16:02:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rjt5j4/agentic_rl_hackathon_this_weekend_in_sf/ | burtenshaw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjt5j4 | false | null | t3_1rjt5j4 | /r/LocalLLaMA/comments/1rjt5j4/agentic_rl_hackathon_this_weekend_in_sf/ | false | false | self | 1 | null |
MCP server that indexes codebases into a knowledge graph — 120x token reduction benchmarked across 35 repos | 1 | Built an MCP server for AI coding assistants that replaces file-by-file code exploration with graph queries. The key metric: At least 10x fewer tokens for the same structural questions, benchmarked across 35 real-world repos.
The problem: When AI coding tools (Claude Code, Cursor, Codex, or local setups) need t... | 2026-03-03T16:01:16 | https://www.reddit.com/r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/ | OkDragonfruit4138 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjt4hh | false | null | t3_1rjt4hh | /r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/oiVjVYo4hG4hN_LfAlGyu-7-HTa_fCFbw0ETNQj4n2Q.png?auto=webp&s=3b0059daaf0a32ec4473a77325f6aa556605a8f6', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/oiVjVYo4hG4hN_LfAlGyu-7-HTa_fCFbw0ETNQj4n2Q.png?width=108&crop=... |
Are we finally admitting RAG is a dead end for autonomous agents? (Found something called WMaaS) | 1 | [removed] | 2026-03-03T15:59:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rjt32u/are_we_finally_admitting_rag_is_a_dead_end_for/ | No_Session3899 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjt32u | false | null | t3_1rjt32u | /r/LocalLLaMA/comments/1rjt32u/are_we_finally_admitting_rag_is_a_dead_end_for/ | false | false | self | 1 | null |
Neat detail: Qwen3-Coder running in LM studio, front page on Apple's new MBP marketing | 1 | I think it's pretty cool that a western tech company is acknowledging the capabilities of Chinese models. Macs have been a pretty solid choice for local LLM inference so I think Apple knows what they're doing here, at least. | 2026-03-03T15:54:05 | TheSpartaGod | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rjsxh0 | false | null | t3_1rjsxh0 | /r/LocalLLaMA/comments/1rjsxh0/neat_detail_qwen3coder_running_in_lm_studio_front/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/bedec36fqumg1.png?auto=webp&s=717cc0a689a587ff2f45ffeaa7ac7402fd8240a4', 'width': 918, 'height': 731}, 'resolutions': [{'url': 'https://preview.redd.it/bedec36fqumg1.png?width=108&crop=smart&auto=webp&s=1b2def114e5035b91f36800c48056e3addd35c00', 'width': 108, 'hei... | ||
When Tool Output Becomes Policy: Demonstrating Tool Authority Injection in an LLM Agent | 1 | Hello Everyone,
I have built a local LLM agent lab to demonstrate “Tool Authority Injection” - when tool output overrides system intent
In Part 3 of my lab series, I explored a focused form of tool poisoning where an AI agent elevates trusted tool output to policy-level authority and silently changes behavior.
Sandbo... | 2026-03-03T15:47:36 | https://www.reddit.com/r/LocalLLaMA/comments/1rjsrc0/when_tool_output_becomes_policy_demonstrating/ | insidethemask | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjsrc0 | false | null | t3_1rjsrc0 | /r/LocalLLaMA/comments/1rjsrc0/when_tool_output_becomes_policy_demonstrating/ | false | false | self | 1 | null |
Benchmarks: the 10x Inference Tax You Don't Have to Pay | 2 | We ran a pretty comprehensive comparison of small distilled models against frontier LLMs (GPT-5 nano, GPT-5 mini, GPT-5.2, Gemini 2.5 Flash Lite, Gemini 2.5 Flash, Claude Haiku 4.5, Claude Sonnet 4.6, Claude Opus 4.6, Grok 4.1 Fast, Grok 4) across 9 datasets covering classification (Banking77, E-commerce, TREC), functi... | 2026-03-03T15:41:51 | https://www.reddit.com/r/LocalLLaMA/comments/1rjslz0/benchmarks_the_10x_inference_tax_you_dont_have_to/ | maciejgryka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjslz0 | false | null | t3_1rjslz0 | /r/LocalLLaMA/comments/1rjslz0/benchmarks_the_10x_inference_tax_you_dont_have_to/ | false | false | 2 | null | |
Q2 qwen3-35b-a3b or Q8 qwen3.5-9b? | 1 | [removed] | 2026-03-03T15:41:11 | No-Tiger3430 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rjslbn | false | null | t3_1rjslbn | /r/LocalLLaMA/comments/1rjslbn/q2_qwen335ba3b_or_q8_qwen359b/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/tj82ja6coumg1.png?auto=webp&s=5c8377ceac03c0ec11838a1b4f1907319c644240', 'width': 1378, 'height': 84}, 'resolutions': [{'url': 'https://preview.redd.it/tj82ja6coumg1.png?width=108&crop=smart&auto=webp&s=4883dad19289933386e537eef0ac6dd65f2d790f', 'width': 108, 'hei... | ||
[ Removed by moderator ] | 1 | [removed] | 2026-03-03T15:36:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rjsha8/tired_of_guessing_which_local_model_is_best_for/ | Soft_Emotion_9794 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjsha8 | false | null | t3_1rjsha8 | /r/LocalLLaMA/comments/1rjsha8/tired_of_guessing_which_local_model_is_best_for/ | false | false | null | 1 | null |
HOW TO FIX QWEN3.5 OVERTHINK | 0 | I have seen many complain about this and I was not having this issue until I tried a smaller model using Ollama, and it took 2 minutes to answer a simple "Hi".
The answer is simple, just apply the parameters recommended by the Qwen team.
To achieve optimal performance, we recommend the following settings:
... | 2026-03-03T15:36:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rjsgy6/how_to_fix_qwen35_overthink/ | Brunofcsampaio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjsgy6 | false | null | t3_1rjsgy6 | /r/LocalLLaMA/comments/1rjsgy6/how_to_fix_qwen35_overthink/ | false | false | 0 | null | |
I spent 6 hours last night failing to fine-tune Qwen3.5-9B until Kimi k2.5 walked me through the fixes - here's the working config | 2 | I need to preface this: I didn't write this code (or most of this post) myself. Last night I went down a 6-hour rabbit hole trying to train a style-replica model (basically "JoeyOS" - my own voice for job application emails) on Qwen3.5-9B, and I failed spectacularly multiple times until I got the right help.
I Started... | 2026-03-03T15:34:31 | https://www.reddit.com/r/LocalLLaMA/comments/1rjsf7f/i_spent_6_hours_last_night_failing_to_finetune/ | pakalolo7123432 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjsf7f | false | null | t3_1rjsf7f | /r/LocalLLaMA/comments/1rjsf7f/i_spent_6_hours_last_night_failing_to_finetune/ | false | false | self | 2 | null |
Qwen 3.5 DeltaNet Broke llama.cpp on Apple Silicon – MLX Fixed It (21s → 7s) | 1 | 2026-03-03T15:27:42 | https://medium.com/@aejaz.sheriff/from-qwen-3-to-qwen-3-5-on-apple-silicon-a-14x-latency-regression-and-how-mlx-got-us-back-0ed9ed21fa68 | Educational-Pace866 | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1rjs8se | false | null | t3_1rjs8se | /r/LocalLLaMA/comments/1rjs8se/qwen_35_deltanet_broke_llamacpp_on_apple_silicon/ | false | false | default | 1 | null | |
Qwen 3.5 DeltaNet Broke llama.cpp on Apple Silicon – MLX Fixed It (21s → 7s) | 1 | [removed] | 2026-03-03T15:24:57 | https://medium.com/@aejaz.sheriff/from-qwen-3-to-qwen-3-5-on-apple-silicon-a-14x-latency-regression-and-how-mlx-got-us-back-0ed9ed21fa68 | Educational-Pace866 | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1rjs67f | false | null | t3_1rjs67f | /r/LocalLLaMA/comments/1rjs67f/qwen_35_deltanet_broke_llamacpp_on_apple_silicon/ | false | false | default | 1 | null |
[ Removed by moderator ] | 1 | [removed] | 2026-03-03T15:21:30 | https://www.reddit.com/r/LocalLLaMA/comments/1rjs32h/i_wanna_host_txgemma9bchat_but_im_outsourced_by/ | Gauthmath_bee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjs32h | false | null | t3_1rjs32h | /r/LocalLLaMA/comments/1rjs32h/i_wanna_host_txgemma9bchat_but_im_outsourced_by/ | false | false | null | 1 | null |
Run Qwen 3.5 2B on iPhone | 1 | [removed] | 2026-03-03T15:19:43 | https://v.redd.it/yhqm49bbkumg1 | raajeevcn | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rjs1fq | false | {'reddit_video': {'bitrate_kbps': 5000, 'fallback_url': 'https://v.redd.it/yhqm49bbkumg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'width': 1080, 'scrubber_media_url': 'https://v.redd.it/yhqm49bbkumg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/yhqm49bbkumg1/DASHPlaylist.mpd?a=1775143241%2CMD... | t3_1rjs1fq | /r/LocalLLaMA/comments/1rjs1fq/run_qwen_35_2b_on_iphone/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/bWMyZjhhYmJrdW1nMUUxCNxe5XSbGqJnvD7Cpe0WL46nmkrkrytmZAea6XrO.png?format=pjpg&auto=webp&s=fa5cf9e5996e4ce332d50b30e61d8aab007aac9b', 'width': 1080, 'height': 1080}, 'resolutions': [{'url': 'https://external-preview.redd.it/bWMyZjhhYmJrdW1nMUUxCNxe5XSbGqJnv... | |
Is it worth the candle? 2x tesla P40 24GB to 1-2 3090 RTX | 1 | I want to upgrade my GPUs and sell x2 Tesla P40 24GB to buy an RTX 3090 for inference. For gaming I already have 4080 and 5090 (VR). I'll lose about $600 on this deal, which isn't much.
Who's using similar setups and what inference speeds are you achieving?
I'm particularly interested in the new Qwen 3.5B-A3B ... | 2026-03-03T15:13:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rjrvku/is_it_worth_the_candle_2x_tesla_p40_24gb_to_12/ | neowisard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjrvku | false | null | t3_1rjrvku | /r/LocalLLaMA/comments/1rjrvku/is_it_worth_the_candle_2x_tesla_p40_24gb_to_12/ | false | false | 1 | null | |
Local models drift faster than you think when you use them as agents | 1 | I've been running a few local models as persistent agents for about two months now. Qwen 2.5 for code review, Mistral for summarization, a fine-tuned Llama for structured extraction.
The thing nobody warned me about: they don't drift the way API models drift. With API models, the provider changes something and your ou... | 2026-03-03T15:11:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rjrtkd/local_models_drift_faster_than_you_think_when_you/ | Acrobatic_Task_6573 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjrtkd | false | null | t3_1rjrtkd | /r/LocalLLaMA/comments/1rjrtkd/local_models_drift_faster_than_you_think_when_you/ | false | false | self | 1 | null |
I let an agent run overnight at a hackathon. Here’s how I solved Infinite Token Burn using Ontology Convergence (now adopted by OMC v4.6.0) | 1 | I’ve been building **Ouroboros**, a Python-based harness for agentic coding. It addresses the biggest bottleneck in AI development: failures usually stem from **ambiguous inputs**, not the model’s coding ability.
Recently, at a hackathon in Korea ( [Ralphthon](https://www.linkedin.com/posts/gb-jeong_%EB%B0%A4%EC%83%88... | 2026-03-03T15:10:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rjrtfd/i_let_an_agent_run_overnight_at_a_hackathon_heres/ | Lopsided_Yak9897 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjrtfd | false | null | t3_1rjrtfd | /r/LocalLLaMA/comments/1rjrtfd/i_let_an_agent_run_overnight_at_a_hackathon_heres/ | false | false | 1 | null | |
How to enable thinking on Qwen small models in LM Studio? | 1 | The Unsloth docs say to pass this parameter: `--chat-template-kwargs '{"enable_thinking":true}'`
But Google says that LM Studio does not support parameters. So what do I do? | 2026-03-03T15:09:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rjrrr2/how_to_enable_thinking_on_qwen_small_models_in_lm/ | wowsers7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjrrr2 | false | null | t3_1rjrrr2 | /r/LocalLLaMA/comments/1rjrrr2/how_to_enable_thinking_on_qwen_small_models_in_lm/ | false | false | self | 1 | null |
Do you build local chat bots professionally? I want to, and seek your hard earned life lessons, tips, tricks, and favorite open source repos! | 1 | Hello,
I want to start a small business.
What I want to do is build chatbots for businesses.
I want to build it all fully local(thus localllama) for clients using RAG.
I have my own architecture I have been working on for low compute low hallucination RAG, it is not done yet and is quite arduous, but I have had goo... | 2026-03-03T15:07:58 | https://www.reddit.com/r/LocalLLaMA/comments/1rjrql0/do_you_build_local_chat_bots_professionally_i/ | Which_Penalty2610 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjrql0 | false | null | t3_1rjrql0 | /r/LocalLLaMA/comments/1rjrql0/do_you_build_local_chat_bots_professionally_i/ | false | false | self | 1 | null |
QWEN 3.5 9B is SLOW | 1 | I was really excited reading about qwen3.5 9B until I tried it.
My personal use case is that I run local models to help with programming tasks. Not vibe coding, very specific tasks for test generation and code review. Never throwing in more than 1000 lines of code, never asking for more than a couple 100 lines back.
... | 2026-03-03T15:06:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/ | spacecad_t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjrp3v | false | null | t3_1rjrp3v | /r/LocalLLaMA/comments/1rjrp3v/qwen_35_9b_is_slow/ | false | false | self | 1 | null |
Code Container: Safely run OpenCode/Codex/CC with full auto-approve | 2 | Hey everyone,
I wanted to share a small tool I've been building that has completely changed how I work with local coding harnesses. It's called Code Container, and it's a Docker-based wrapper for running OpenCode, Codex, Claude Code and other AI coding tools in isolated containers so that your harness doesn't `rm -rf ... | 2026-03-03T15:05:46 | https://www.reddit.com/r/LocalLLaMA/comments/1rjrofr/code_container_safely_run_opencodecodexcc_with/ | chocolateUI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjrofr | false | null | t3_1rjrofr | /r/LocalLLaMA/comments/1rjrofr/code_container_safely_run_opencodecodexcc_with/ | false | false | self | 2 | {'images': [{'source': {'url': 'https://external-preview.redd.it/Ns3zfgcq27eZXK5EO80D9WrcPYv5lPH02UfyruIL9yI.png?auto=webp&s=d48197287a897a37309593c229412acd647004b6', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/Ns3zfgcq27eZXK5EO80D9WrcPYv5lPH02UfyruIL9yI.png?width=108&crop=... |
Kokoro TTS, but it clones voices now — Introducing KokoClone | 1 | **KokoClone** is live.
It extends **Kokoro TTS** with zero-shot voice cloning — while keeping the speed and real-time compatibility Kokoro is known for.
If you like Kokoro’s prosody, naturalness, and performance but wished it could clone voices from a short reference clip… this is exactly that.
Fully open-source.(Ap... | 2026-03-03T15:00:29 | https://v.redd.it/90r2d01agumg1 | OrganicTelevision652 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rjrjg3 | false | {'reddit_video': {'bitrate_kbps': 5000, 'fallback_url': 'https://v.redd.it/90r2d01agumg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'width': 1920, 'scrubber_media_url': 'https://v.redd.it/90r2d01agumg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/90r2d01agumg1/DASHPlaylist.mpd?a=1775142055%2CMjk... | t3_1rjrjg3 | /r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/aTIxeXNrM2FndW1nMeLImfIxTHKA89v9yEl0tIVVDBN7ORguLnApshYofGlU.png?format=pjpg&auto=webp&s=48caf705ef5c3940c55bd38b7c663b226466a1e8', 'width': 1920, 'height': 1080}, 'resolutions': [{'url': 'https://external-preview.redd.it/aTIxeXNrM2FndW1nMeLImfIxTHKA89v9y... | |
The new Macbooks Air/Pro/Max are dissapointing | 1 | They preserved their (high) prices and not relfected on RAM price hike, that one is possitive.
But they didn’t gave us some juicy RAM configurations - 128GB is max with macbook Pro. And no 64GB option with macbook Air is pure letdown. | 2026-03-03T15:00:04 | https://www.reddit.com/gallery/1rjrj0e | srigi | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rjrj0e | false | null | t3_1rjrj0e | /r/LocalLLaMA/comments/1rjrj0e/the_new_macbooks_airpromax_are_dissapointing/ | false | false | 1 | null | |
Integrating local agents with third party services without MCP. | 1 | The standard way of integrating agents with remote services (like GMail, Slack, Dropbox or self-hosted ones like Coolify) is via MCP servers. When investigating possible local agent setup architectures, I was a bit unhappy about that for several reasons:
- Local MCP servers can be kind of hard to configure for non-tec... | 2026-03-03T14:59:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rjri86/integrating_local_agents_with_third_party/ | hynek-urban | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjri86 | false | null | t3_1rjri86 | /r/LocalLLaMA/comments/1rjri86/integrating_local_agents_with_third_party/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/rENYSt06CbPx7qB30K9sbJMnxWLfDG-Ul5dl-6yEiB8.png?auto=webp&s=6dbcc5c46757f709da69abb526ddf5169c80baa3', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/rENYSt06CbPx7qB30K9sbJMnxWLfDG-Ul5dl-6yEiB8.png?width=108&crop=... |
I built a local-first AI copilot (no telemetry, permission-based, one-click Windows app) — Apache 2.0 | 1 | GitHub: [https://github.com/raydeStar/sir-thaddeus](https://github.com/raydeStar/sir-thaddeus)
License: Apache 2.0
Hey guys!
I wanted to build an AI app that’s easy to run. All you need to do is Download, Unzip, and Run.
No telemetry. No weird background processes. No cloud dependency unless you choose it.
Tha... | 2026-03-03T14:58:00 | https://v.redd.it/gsdgym5wfumg1 | _raydeStar | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rjrh9f | false | {'reddit_video': {'bitrate_kbps': 2400, 'fallback_url': 'https://v.redd.it/gsdgym5wfumg1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 538, 'width': 1280, 'scrubber_media_url': 'https://v.redd.it/gsdgym5wfumg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/gsdgym5wfumg1/DASHPlaylist.mpd?a=1775141928%2CNWI3M... | t3_1rjrh9f | /r/LocalLLaMA/comments/1rjrh9f/i_built_a_localfirst_ai_copilot_no_telemetry/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/N2k4ejR1NXdmdW1nMVsCvUJUctLmS1I7lwFQ3fVLBp-ZnuQ7izJki79Gdigs.png?format=pjpg&auto=webp&s=2e7970518045e29d4a475396defc8dddd31a5189', 'width': 1716, 'height': 720}, 'resolutions': [{'url': 'https://external-preview.redd.it/N2k4ejR1NXdmdW1nMVsCvUJUctLmS1I7lw... | |
Agentic Coding MoE Models for 10GB VRAM Setup with CPU Offloading? | 1 | Current setup: 7800x3d, 32GB DDR5 6000MHz, RTX 3080 10GB
Mainly looking at Qwen3-Coder-30B-A3B-Instruct and GLM-4.7-Flash
Would use the Q4\_K\_M quant splitting 50/50 b/w VRAM and RAM.
Any other options to consider? My use case is to have an agentic setup working with something like a ralph loop to continue iterati... | 2026-03-03T14:56:31 | https://www.reddit.com/r/LocalLLaMA/comments/1rjrfzg/agentic_coding_moe_models_for_10gb_vram_setup/ | DK_Tech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjrfzg | false | null | t3_1rjrfzg | /r/LocalLLaMA/comments/1rjrfzg/agentic_coding_moe_models_for_10gb_vram_setup/ | false | false | self | 1 | null |
Qwen3.5 Small Models Compared: 9B vs 4B vs 2B vs 0.8B | 1 | He used Unsloth Q8. Of course most of the time 9B won, but for web frontend design the 0.8B actually did best. He does mention how much VRAM is used, & I wonder if the smaller models would have done better if he increased the context window to fill up his RTX5090, but does give hope to those with smaller VRAM. | 2026-03-03T14:56:20 | https://www.youtube.com/watch?v=8jZSxZfdnm4&list=PLakykuPxo3cjsU1Kq1CAL-LYMXtoPA68u | tomByrer | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1rjrfu1 | false | {'type': 'youtube.com', 'oembed': {'provider_url': 'https://www.youtube.com/', 'version': '1.0', 'title': 'Qwen3.5 Small Models Compared – 9B vs 4B vs 2B vs 0.8B!', 'type': 'video', 'thumbnail_width': 480, 'height': 200, 'width': 356, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/8jZSxZfd... | t3_1rjrfu1 | /r/LocalLLaMA/comments/1rjrfu1/qwen35_small_models_compared_9b_vs_4b_vs_2b_vs_08b/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/OuAFVhU_uoRWzXe7UD54ZjDDgnPSL102Ht64kCwEmPY.jpeg?auto=webp&s=8306c6795a335233c8dae6ee6ebf9e6a51a6c06c', 'width': 480, 'height': 360}, 'resolutions': [{'url': 'https://external-preview.redd.it/OuAFVhU_uoRWzXe7UD54ZjDDgnPSL102Ht64kCwEmPY.jpeg?width=108&crop... | |
Allowing LLMs to reference from websites? | 1 | Any solution for the above? I know something agentic would function, but since we're human and asking a tool to access internet, what solutions allow this? | 2026-03-03T14:52:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rjrbzc/allowing_llms_to_reference_from_websites/ | derivative49 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjrbzc | false | null | t3_1rjrbzc | /r/LocalLLaMA/comments/1rjrbzc/allowing_llms_to_reference_from_websites/ | false | false | self | 1 | null |
did anyone replace old qwen2.5-coder:7b with qwen3.5:9b in nonThinker mode? | 1 | I know, qwen3.5 isn't the coder variant yet.
Nevertheless I guess an actual 9b dense performs better just from a responnse quality perspective. Just seen from the overall evolution since 2.5 has been released.
We are using the old coder for autocomplete, fill in the midlle, loadbalanced by nginx.
btw. 2.5 is ... | 2026-03-03T14:49:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rjr9ze/did_anyone_replace_old_qwen25coder7b_with/ | Impossible_Art9151 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjr9ze | false | null | t3_1rjr9ze | /r/LocalLLaMA/comments/1rjr9ze/did_anyone_replace_old_qwen25coder7b_with/ | false | false | self | 1 | null |
Alibaba can I buy ? Any suggestions | 1 | is this good for coding ? currently I'm working on a project that will take me approx one month to complete! within this one month can I go for this ? subscription is that okay ? any suggestions...any help ? | 2026-03-03T14:45:16 | Less_Strain7577 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rjr60d | false | null | t3_1rjr60d | /r/LocalLLaMA/comments/1rjr60d/alibaba_can_i_buy_any_suggestions/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/slaj90okeumg1.jpeg?auto=webp&s=4fc3f355c0568febfd9108cfa7b6f8b3b5607752', 'width': 1280, 'height': 680}, 'resolutions': [{'url': 'https://preview.redd.it/slaj90okeumg1.jpeg?width=108&crop=smart&auto=webp&s=d2a86eff8bec6a13efb2e7d4ed95e48f16631391', 'width': 108, '... | ||
BloonsBench – Evaluate LLM agent performance on Bloons Tower Defense 5 | 1 | 2026-03-03T14:45:06 | https://github.com/cnqso/bloonsbench | cnqso | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rjr5uq | false | null | t3_1rjr5uq | /r/LocalLLaMA/comments/1rjr5uq/bloonsbench_evaluate_llm_agent_performance_on/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/QijRlgryzeLrYtY3_okhoA-X9_I5qRWga1RzE2nlyG4.png?auto=webp&s=778abe9036d802c2d601e24317f0a6c88f264004', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/QijRlgryzeLrYtY3_okhoA-X9_I5qRWga1RzE2nlyG4.png?width=108&crop=... | ||
Local LLM for large journal library | 1 | Hello everyone,
I would like to use a local LLM to answer questions regarding a large database of journal articles (approx 5-10y worth of at least 10-20 medical journals +/- a few books). This should hopefully make a literature review over the next few months much quicker.
I have little programming experience ... | 2026-03-03T14:44:54 | https://www.reddit.com/r/LocalLLaMA/comments/1rjr5p7/local_llm_for_large_journal_library/ | HerlanderCoco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjr5p7 | false | null | t3_1rjr5p7 | /r/LocalLLaMA/comments/1rjr5p7/local_llm_for_large_journal_library/ | false | false | self | 1 | null |
I'm running a paddle race where AI agents compete through human bodies — looking for bot operators | 1 | The concept: AI agents register via API, analyze real-time environmental data (tides, wind, currents), and design race strategy. Human athletes execute whatever the bot decides. No human tactical input allowed.
We call it Augmented Games — the War Room is where registered bots deliberate in real time, visible to spe... | 2026-03-03T14:35:45 | https://www.reddit.com/r/LocalLLaMA/comments/1rjqxhc/im_running_a_paddle_race_where_ai_agents_compete/ | Radiant-Camp-1744 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjqxhc | false | null | t3_1rjqxhc | /r/LocalLLaMA/comments/1rjqxhc/im_running_a_paddle_race_where_ai_agents_compete/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/zOvBTkeGnO4MDd9VwvcDdnxlR0I1QwiTv1ROTcaHYVU.png?auto=webp&s=ad59b2b6c273d9bfef07284f898e4c216e912c9b', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/zOvBTkeGnO4MDd9VwvcDdnxlR0I1QwiTv1ROTcaHYVU.png?width=108&crop=... |
I'm running a paddle race where AI agents compete through human bodies — looking for bot operators | 1 | The concept: AI agents register via API, analyze real-time environmental data (tides, wind, currents), and design race strategy. Human athletes execute whatever the bot decides. No human tactical input allowed.
We call it Augmented Games — the War Room is where registered bots deliberate in real time, visible to spe... | 2026-03-03T14:31:02 | https://www.reddit.com/r/LocalLLaMA/comments/1rjqt8t/im_running_a_paddle_race_where_ai_agents_compete/ | Radiant-Camp-1744 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjqt8t | false | null | t3_1rjqt8t | /r/LocalLLaMA/comments/1rjqt8t/im_running_a_paddle_race_where_ai_agents_compete/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/zOvBTkeGnO4MDd9VwvcDdnxlR0I1QwiTv1ROTcaHYVU.png?auto=webp&s=ad59b2b6c273d9bfef07284f898e4c216e912c9b', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/zOvBTkeGnO4MDd9VwvcDdnxlR0I1QwiTv1ROTcaHYVU.png?width=108&crop=... |
Apple unveils M5 Pro and M5 Max, citing up to 4× faster LLM prompt processing than M4 Pro and M4 Max | 1 | 2026-03-03T14:30:39 | themixtergames | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rjqsv6 | false | null | t3_1rjqsv6 | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/2q4hcz9obumg1.png?auto=webp&s=6043160ec8ec47f231f60bc0d7b9ddff6fe438ca', 'width': 1248, 'height': 714}, 'resolutions': [{'url': 'https://preview.redd.it/2q4hcz9obumg1.png?width=108&crop=smart&auto=webp&s=59d78129def7f7fbed3d1bfb467b784e3fbf330e', 'width': 108, 'he... | |||
How can we use AI + modern tech stacks to help civilians during wars? | 1 | With ongoing wars and conflicts worldwide, I keep asking myself:
Instead of building another SaaS or ad tool, how can we build AI systems that genuinely help civilians in conflict zones?
Not military tools. Not “predict the next strike.”
But defensive, humanitarian systems.
Here are a few serious ideas:
# 1) Ci... | 2026-03-03T14:25:28 | https://www.reddit.com/r/LocalLLaMA/comments/1rjqo97/how_can_we_use_ai_modern_tech_stacks_to_help/ | Far_Plant9504 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjqo97 | false | null | t3_1rjqo97 | /r/LocalLLaMA/comments/1rjqo97/how_can_we_use_ai_modern_tech_stacks_to_help/ | false | false | self | 1 | null |
I'm running a paddle race where AI agents compete through human bodies — looking for bot operators | 1 |
The concept: AI agents register via API, analyze real-time environmental data (tides, wind, currents), and design race strategy. Human athletes execute whatever the bot decides. No human tactical input allowed.
We call it **Augmented Games** — the War Room is where registered bots deliberate in real time, visible to... | 2026-03-03T14:20:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rjqkbh/im_running_a_paddle_race_where_ai_agents_compete/ | Radiant-Camp-1744 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjqkbh | false | null | t3_1rjqkbh | /r/LocalLLaMA/comments/1rjqkbh/im_running_a_paddle_race_where_ai_agents_compete/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/zOvBTkeGnO4MDd9VwvcDdnxlR0I1QwiTv1ROTcaHYVU.png?auto=webp&s=ad59b2b6c273d9bfef07284f898e4c216e912c9b', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/zOvBTkeGnO4MDd9VwvcDdnxlR0I1QwiTv1ROTcaHYVU.png?width=108&crop=... |
SKILL.md files are amazing, but making/creating them is another story. | 1 | Been using Claude and other AI assistants heavily over the past few months and noticed something: the Agent Skills spec is now supported across 30+ AI platforms, but the actual process of creating skills is still manual. You either write [SKILL.md](http://SKILL.md) files from scratch or copy-paste from templates and ho... | 2026-03-03T14:16:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rjqfzc/skillmd_files_are_amazing_but_makingcreating_them/ | junianwoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjqfzc | false | null | t3_1rjqfzc | /r/LocalLLaMA/comments/1rjqfzc/skillmd_files_are_amazing_but_makingcreating_them/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/ORBlVVl5NRMfIRKtw52s_2VVR9QWwmcAJfYq1Tuaafg.png?auto=webp&s=65338b6ac59669d21d8f31fd539daac01eed5c81', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/ORBlVVl5NRMfIRKtw52s_2VVR9QWwmcAJfYq1Tuaafg.png?width=108&crop=... |
Sabomako/Qwen3.5-122B-A10B-heretic-GGUF · Hugging Face | 1 | 2026-03-03T14:15:26 | https://huggingface.co/Sabomako/Qwen3.5-122B-A10B-heretic-GGUF | AlwaysLateToThaParty | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1rjqff6 | false | null | t3_1rjqff6 | /r/LocalLLaMA/comments/1rjqff6/sabomakoqwen35122ba10bhereticgguf_hugging_face/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/lrIOxdDTMfHwNi9g44_cSCBIOS54b6ku4EjgewkvrtM.png?auto=webp&s=319d3ea1e0ced77cc57f5b3da1e2f44d3ab02dd6', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/lrIOxdDTMfHwNi9g44_cSCBIOS54b6ku4EjgewkvrtM.png?width=108&crop=... | ||
Open vs Closed Models for Image & Video: What’s Actually Winning? | 1 | For text models, open vs closed is a serious debate. But for image and video generation, it feels different.
We’ve noticed:
* Closed models often win on raw aesthetic quality
* Open models win on customization and fine-tuning
* Video models are extremely sensitive to inference setup
* Prompt stability varies wildly a... | 2026-03-03T14:13:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rjqdkk/open_vs_closed_models_for_image_video_whats/ | qubridInc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjqdkk | false | null | t3_1rjqdkk | /r/LocalLLaMA/comments/1rjqdkk/open_vs_closed_models_for_image_video_whats/ | false | false | self | 1 | null |
New to local coder, what would be your choice for dual 3090 Ti? Beginner setup tips? | 1 | I’ve been using Gemini and Claude but want to move to a local coder. I’ll trial a few but I’m wondering what the experience of the community is?
As a daily driver, Deepseek-r1:70b with a small context window or quen coder 32b with a larger window? Or something less that I’m completely missing?
As for workflow, do yo... | 2026-03-03T14:09:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rjqaci/new_to_local_coder_what_would_be_your_choice_for/ | queequegscoffin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjqaci | false | null | t3_1rjqaci | /r/LocalLLaMA/comments/1rjqaci/new_to_local_coder_what_would_be_your_choice_for/ | false | false | self | 1 | null |
Catching an AI Red Teamer in the Wild: Using Reverse Prompt Injection as a Honeypot Detection Mechanism | 1 | We set up an HTTP honeypot with [Beelzebub](https://github.com/mariocandela/beelzebub) (open-source) and embedded two layers of traps specifically designed to detect LLM-based agents:
1. Fake credentials in HTML comments (only useful if you read and understand natural language)
2. Actual prompt injection payloads targ... | 2026-03-03T14:07:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/ | M4r10_h4ck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjq8w1 | false | null | t3_1rjq8w1 | /r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/hAH9wCx2db1sEVVaMIwloJ_Cv-K26uFkUKayckKhAWg.jpeg?auto=webp&s=a58c1090fade1962e9358654d755ee99ed23eebf', 'width': 1214, 'height': 655}, 'resolutions': [{'url': 'https://external-preview.redd.it/hAH9wCx2db1sEVVaMIwloJ_Cv-K26uFkUKayckKhAWg.jpeg?width=108&cro... |
I compiled RCCL from source for AMD gfx1010 (RDNA1) — 3-GPU AllReduce now works on RX 5700 XT. Full guide + patch. | 1 | Hey r/LocalLLaMA,
After several months of debugging I got 3x RX 5700 XT (gfx1010, 24 GB VRAM total) running multi-GPU collective communications with RCCL. Posting the full breakdown because I couldn’t find this documented anywhere.
**TL;DR:** RCCL compiled from source + PCIe topology fix = 3-GPU AllReduce PASS on off... | 2026-03-03T14:05:41 | https://www.reddit.com/r/LocalLLaMA/comments/1rjq75f/i_compiled_rccl_from_source_for_amd_gfx1010_rdna1/ | Practical-Wallaby-63 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjq75f | false | null | t3_1rjq75f | /r/LocalLLaMA/comments/1rjq75f/i_compiled_rccl_from_source_for_amd_gfx1010_rdna1/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/olX8h93PLg0CXBBPMFuL9bho3eqetmuLDxKi8V9EgOE.png?auto=webp&s=9eb94bc52164ae029e72e5a4c68f79de0e792abd', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/olX8h93PLg0CXBBPMFuL9bho3eqetmuLDxKi8V9EgOE.png?width=108&crop=... |
600tk/s+ speed on local hardware with Self speculative decoding (rtx 3090) | 1 | You can use -spec-type ngram-mod parameter in llamacpp with for example devstral to speed up coding with Self speculative decoding. Outputs with similar tokens get insane speedups, chat history is tokens, so anything is speed up really. PP tk/s is like 1700tk/s
For couple of new, simple lines on 4k tokens of code and ... | 2026-03-03T13:52:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rjpvdd/600tks_speed_on_local_hardware_with_self/ | GodComplecs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjpvdd | false | null | t3_1rjpvdd | /r/LocalLLaMA/comments/1rjpvdd/600tks_speed_on_local_hardware_with_self/ | false | false | self | 1 | null |
[totally not an ad] combine 2x MCIO into 1x PCIe x16 adapter | 1 | A few months ago I've asked here how to combine two unused MCIO ports into one useful PCIe x16 and got a few recommendations, in the end I've bought this adapter and cables branded "10Gtek" and they do work well: https://www.sfpcables.com/mcio-pcie-gen5-device-adapter-2-8i-to-x16 https://www.sfpcables.com/mcio-to-mcio-... | 2026-03-03T13:50:02 | https://www.reddit.com/gallery/1rjptl1 | MelodicRecognition7 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rjptl1 | false | null | t3_1rjptl1 | /r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/ | false | false | 1 | null | |
I'm working in a project to let you keep using remote code from your mobile. 100% open source. | 1 | https://i.redd.it/jmjkh5yo3umg1.gif
[https://github.com/samuelfaj/remotecode.io](https://github.com/samuelfaj/remotecode.io)
Hope you guys like it!
[](https://www.reddit.com/submit/?source_id=t3_1rjol49) | 2026-03-03T13:44:30 | https://www.reddit.com/r/LocalLLaMA/comments/1rjpp64/im_working_in_a_project_to_let_you_keep_using/ | TomatilloPutrid3939 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjpp64 | false | null | t3_1rjpp64 | /r/LocalLLaMA/comments/1rjpp64/im_working_in_a_project_to_let_you_keep_using/ | false | false | 1 | null | |
Are multi-agent systems actually being used in production or is it hype? | 1 | By multi-agent I mean Multiple LLM agents with different roles
Or are most real-world systems still single-agent + tools? | 2026-03-03T13:43:35 | https://www.reddit.com/r/LocalLLaMA/comments/1rjpoge/are_multiagent_systems_actually_being_used_in/ | Xitizdumb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjpoge | false | null | t3_1rjpoge | /r/LocalLLaMA/comments/1rjpoge/are_multiagent_systems_actually_being_used_in/ | false | false | self | 1 | null |
Is there a way to disable thinking with the new qwen3.5 models? | 1 | Hi, i was playing around with the new models, atm qwen3.5 9B mlx 4bit, i'm using lm studio and I'm on a macbook pro M1 max with 32GB of ram.
Do you think that this behaviour is normal ?
I mean the tok/sec are great but 30 second to say hello ????
https://preview.redd.it/sna10lwcltmg1.png?width=997&format=png&au... | 2026-03-03T13:36:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rjpilf/is_there_a_way_to_disable_thinking_with_the_new/ | arkham00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjpilf | false | null | t3_1rjpilf | /r/LocalLLaMA/comments/1rjpilf/is_there_a_way_to_disable_thinking_with_the_new/ | false | false | self | 1 | null |
Why does mixed kv cache quantization result in extreme speed drop off?? | 1 | I was managing my config.ini, and when setting up a coder version i set
\`\`\`
\-ctk fp16
\-ctv q8\_0
\`\`\`
As i read in longer context, k cache is much more sensitive to quantization.
but this combination cause the the throughput to reduce to 20tps from 50tps just within 4000 tokens of context. which is very ... | 2026-03-03T13:36:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rjpifs/why_does_mixed_kv_cache_quantization_result_in/ | jonglaaa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjpifs | false | null | t3_1rjpifs | /r/LocalLLaMA/comments/1rjpifs/why_does_mixed_kv_cache_quantization_result_in/ | false | false | self | 1 | null |
Qwen 3.5: What is "Base" version? | 1 | Hi. In previous models and some other models e.g. Gemma, there is a base version and then an it (instruction-tuned) version. Obviously for people who want to use the model without fine-tuning, it versions provide far better accuracy.
In the released Qwen 3.5 models, I see the suffix -base in some versions, but no -it... | 2026-03-03T13:31:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rjpesa/qwen_35_what_is_base_version/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjpesa | false | null | t3_1rjpesa | /r/LocalLLaMA/comments/1rjpesa/qwen_35_what_is_base_version/ | false | false | self | 1 | null |
Built a music generation app that runs 100% on-device using Apple's MLX framework no cloud, no API calls | 1 | I've been following local AI discussions here for a while and wanted to share something I built that fits the ethos of this community pretty well.
I got frustrated with every AI music tool being cloud-based Suno, Stable Audio, AIVA all sending your prompts to their servers, all requiring monthly subscriptions. The mom... | 2026-03-03T13:31:21 | https://v.redd.it/0sgw7u0c1umg1 | tarunyadav9761 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rjpegn | false | {'reddit_video': {'bitrate_kbps': 5000, 'fallback_url': 'https://v.redd.it/0sgw7u0c1umg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'width': 1920, 'scrubber_media_url': 'https://v.redd.it/0sgw7u0c1umg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/0sgw7u0c1umg1/DASHPlaylist.mpd?a=1775136709%2CY2Q... | t3_1rjpegn | /r/LocalLLaMA/comments/1rjpegn/built_a_music_generation_app_that_runs_100/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/MmE3c3MyMWMxdW1nMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?format=pjpg&auto=webp&s=941c23552b6f8ff94f84effe84c555fce2e4e89f', 'width': 1920, 'height': 1080}, 'resolutions': [{'url': 'https://external-preview.redd.it/MmE3c3MyMWMxdW1nMVpKx18zu2Al60haJ... | |
was Playing around and made a remix of candy shop with Maroon 5 and Taylor swift | 1 | 2026-03-03T13:29:26 | https://sonauto.ai/song/b56c92e3-ddd1-4804-a94f-5790dd44dbd5 | Electronic-Present94 | sonauto.ai | 1970-01-01T00:00:00 | 0 | {} | 1rjpcsg | false | null | t3_1rjpcsg | /r/LocalLLaMA/comments/1rjpcsg/was_playing_around_and_made_a_remix_of_candy_shop/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/WKg0LBCDSA8SvLA-oL7Pb7YJ90_em5EnoH1Q4aGllNI.jpeg?auto=webp&s=46b19afe1301f4cdb7c3549294e894487c72d24d', 'width': 512, 'height': 512}, 'resolutions': [{'url': 'https://external-preview.redd.it/WKg0LBCDSA8SvLA-oL7Pb7YJ90_em5EnoH1Q4aGllNI.jpeg?width=108&crop... | ||
I built an AI that audits other AIs — self-replicating swarm, 24/7 watchdog, OWASP LLM Top 10 coverage [Open Source] | 1 | I’ve been building something over the past few weeks that I think fills a genuine gap in the security space — autonomous AI security testing for LLM systems.
It’s called FORGE (Framework for Orchestrated Reasoning & Generation of Engines).
What makes it different from existing tools:
Most security tools are static. ... | 2026-03-03T13:25:29 | https://github.com/umangkartikey/forge | Ok_Candidate_5439 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rjp9n2 | false | null | t3_1rjp9n2 | /r/LocalLLaMA/comments/1rjp9n2/i_built_an_ai_that_audits_other_ais/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/tAvhhe9zJCXefDulZLZDsRAgzt8ylOERp45udkXPC70.png?auto=webp&s=bddbd332abfeca16c16eaa0e8469979e487262ee', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/tAvhhe9zJCXefDulZLZDsRAgzt8ylOERp45udkXPC70.png?width=108&crop=... | |
What AI Models should I run? | 1 | I have 4 16gb v100s with nvlink, on an old server that sounds like an airplane. Power consumption is crazy. What ai should I run for coding? Trying to get off gpt plus with codex. Also wondering what AI models y’all have noticed work well with creative writing. | 2026-03-03T13:22:15 | https://www.reddit.com/r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/ | ClayToTheMax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjp6zq | false | null | t3_1rjp6zq | /r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/ | false | false | self | 1 | null |
I got tired of Electron UIs eating RAM I need for models. So I built a purely native Win32/C++17 AI desktop assistant (14MB heap). Oh, and it's free. | 1 | [removed] | 2026-03-03T13:21:30 | https://v.redd.it/1hooxum0ytmg1 | 94BILLY | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rjp6dt | false | {'reddit_video': {'bitrate_kbps': 5000, 'fallback_url': 'https://v.redd.it/1hooxum0ytmg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'width': 1920, 'scrubber_media_url': 'https://v.redd.it/1hooxum0ytmg1/CMAF_96.mp4', 'dash_url': 'https://v.redd.it/1hooxum0ytmg1/DASHPlaylist.mpd?a=1775136138%2CNjU... | t3_1rjp6dt | /r/LocalLLaMA/comments/1rjp6dt/i_got_tired_of_electron_uis_eating_ram_i_need_for/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/amg5c21ibjB5dG1nMTyoapycdZTwoc91u8M6hHOPEM3bwmZZFlJVlNXEUBqX.png?format=pjpg&auto=webp&s=d8ad4e6f0b6080a642f790b3ca73bf605a1046d1', 'width': 3840, 'height': 2158}, 'resolutions': [{'url': 'https://external-preview.redd.it/amg5c21ibjB5dG1nMTyoapycdZTwoc91u... | |
Project falcon - At protocol for real time communication [AT protocol extension] | 1 | It also has decentralized ai agents using LLamma and im implementing the persistence core now in the alpha.
Alpha looks like this
[ | 1 | Hey everyone, made an uncensored version of Qwen3.5-4B - one of the brand new small models Qwen dropped these days.
Quick specs: 4B dense params, 32 layers, hybrid Gated DeltaNet linear attention + full softmax (3:1 ratio), 262K native context. Natively multimodal (text, image, video). This thing is surprisingly c... | 2026-03-03T13:14:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/ | hauhau901 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjp08s | false | null | t3_1rjp08s | /r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/CYMCoG5dngXVz6lit5f6_siGVEEC6H9V77ZqLzOOY8w.png?auto=webp&s=fd071930ec9bc25a6eb572d88d6a012c0b73954c', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/CYMCoG5dngXVz6lit5f6_siGVEEC6H9V77ZqLzOOY8w.png?width=108&crop=... |
An autonomous agent economy where agents gamble, vote for mayors, and form secret alliances. Here's what emerged when I let them run for 2 months. | 1 | I've been experimenting with 40 autonomous AI agents running on a closed Devnet economy.
No human intervention after they register. Every 5 minutes, they wake up and decide
what to do based on context retrieval, game opportunities, and financial incentives.
\*\*Setup:\*\*
\- Agents: Claude Opus, GPT-4o, Llama... | 2026-03-03T13:02:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rjoqpq/an_autonomous_agent_economy_where_agents_gamble/ | TangerineSoft4767 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjoqpq | false | null | t3_1rjoqpq | /r/LocalLLaMA/comments/1rjoqpq/an_autonomous_agent_economy_where_agents_gamble/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/6TZCsnoHDHoc63X-0kuuTiY2C6DRdXkqqpUTtkcvcVs.png?auto=webp&s=b4aeb10c3e4cff1610ea6ede26c642d92d42890e', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/6TZCsnoHDHoc63X-0kuuTiY2C6DRdXkqqpUTtkcvcVs.png?width=108&crop=... |
Tools noob: How to get llama-server and searxng working together? | 1 | It seems everyone has done it but I'm too dumb to get it.
The workflow seems as such:
* Install and run searxng
* eg endpoint localhost:8080/q={query}&format=json
* Start a model that can run tools (pretty much all of them right now).
* Client-side (eg TypeScript)
* Add two functions
* web\_search, which ... | 2026-03-03T12:51:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rjoimo/tools_noob_how_to_get_llamaserver_and_searxng/ | ParaboloidalCrest | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjoimo | false | null | t3_1rjoimo | /r/LocalLLaMA/comments/1rjoimo/tools_noob_how_to_get_llamaserver_and_searxng/ | false | false | self | 1 | null |
Qwen 3.5 Non-thinking Scores are out on AA | 1 | My personal favorite test: **non-thinking LLM performance.** It's the most pratical use of LLMs imo and tests whether models can provide the best answer with the least amount of generation time and tokens.
First, 397B having **40 points** on the Intelligence Index. Second best non-thinking open-source LLM on this benc... | 2026-03-03T12:47:01 | theskilled42 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rjof0g | false | null | t3_1rjof0g | /r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/o3vsk42httmg1.png?auto=webp&s=f1d2ef65eda47e9acb221a2c0f1f2cb36df4b2ce', 'width': 1080, 'height': 2400}, 'resolutions': [{'url': 'https://preview.redd.it/o3vsk42httmg1.png?width=108&crop=smart&auto=webp&s=875ab47c29f4c2458f6e4ca877008052b0dd8dd2', 'width': 108, 'h... | ||
Is Qwen3.5 0.8B more powerful than Mistral 7B? | 1 | Hello, so I have a low-powered computer. I've been using Mistral 7b for about a year, and I really like this model because it's very versatile - meaning with the low censorship, one prompt and I can generate NSFW content, do detailed roleplay, but also because it's great for summarizing PDFs (it's not multimodal but I ... | 2026-03-03T12:46:35 | https://www.reddit.com/r/LocalLLaMA/comments/1rjoeok/is_qwen35_08b_more_powerful_than_mistral_7b/ | Illustrious_Oven2611 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjoeok | false | null | t3_1rjoeok | /r/LocalLLaMA/comments/1rjoeok/is_qwen35_08b_more_powerful_than_mistral_7b/ | false | false | self | 1 | null |
CloakLLM uses local Ollama to detect PII before your prompts hit cloud LLMs | 1 | Regex catches emails and SSNs. But "I live at 742 Evergreen Terrace" or "diagnosed with hypertension" — regex can't catch that.
\## What it does
CloakLLM is open-source PII cloaking middleware for LLM calls. It has an opt-in local LLM detection layer that runs through Ollama to catch context-dependent PII that re... | 2026-03-03T12:45:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rjodma/cloakllm_uses_local_ollama_to_detect_pii_before/ | Trick_Barber_5808 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjodma | false | null | t3_1rjodma | /r/LocalLLaMA/comments/1rjodma/cloakllm_uses_local_ollama_to_detect_pii_before/ | false | false | self | 1 | null |
Gemini 3.1 Pro HIDDEN thought process exposed | 1 | Normally you can only see part of it, but it bugged out on me when investigating speculative decoding for newer archs of models, so it showed the whole process isntead. This isn't supposed to be seen by the end user, Google fears that other labs can copy it. Well now it's in the open. Here is full text for the hidden p... | 2026-03-03T12:37:54 | https://www.reddit.com/gallery/1rjo81a | GodComplecs | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rjo81a | false | null | t3_1rjo81a | /r/LocalLLaMA/comments/1rjo81a/gemini_31_pro_hidden_thought_process_exposed/ | false | false | 1 | null | |
One YAML file, fully local agents on Ollama | 1 | I've been running Ollama on my homelab for a while and kept rewriting the same setup every time I wanted a new agent. InitRunner is what came out of that.
You describe what you want in a YAML file: which model, what it ca... | 2026-03-03T12:32:45 | https://www.reddit.com/r/LocalLLaMA/comments/1rjo49d/one_yaml_file_fully_local_agents_on_ollama/ | Outrageous_Hyena6143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjo49d | false | null | t3_1rjo49d | /r/LocalLLaMA/comments/1rjo49d/one_yaml_file_fully_local_agents_on_ollama/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/CItI79VOTsEmvSvxuROHiuwrJquKw-GN9z7lh23dF1k.png?auto=webp&s=1dc365633affd319fc30c8ce2e6e34a796a9c11f', 'width': 1200, 'height': 630}, 'resolutions': [{'url': 'https://external-preview.redd.it/CItI79VOTsEmvSvxuROHiuwrJquKw-GN9z7lh23dF1k.png?width=108&crop=... |
Vision model doesn't stop | 1 | I've been experimenting with using Qwen3.5 0.8 for OCR tasks, from what I can see it's quite phenomenal, but I occasionally have this issue where it starts repeating the same section over and over again at the end. Any tips to avoid this? Doesn't seem to have with the >= 9b models | 2026-03-03T12:30:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rjo2wm/vision_model_doesnt_stop/ | CSharpSauce | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjo2wm | false | null | t3_1rjo2wm | /r/LocalLLaMA/comments/1rjo2wm/vision_model_doesnt_stop/ | false | false | self | 1 | null |
Building a simple RAG pipeline from scratch | 1 | For those who started learning fundamentals of LLMs and would like to create a simple RAG as a first step.
In this tutorial I coded simple RAG from scratch using using Llama 4, nomic-embed-text, and Ollama. Everything runs locally.
The whole thing is \~50 lines of Python and very easy to follow. Feel free to comment... | 2026-03-03T12:29:28 | https://dataheimer.substack.com/p/building-a-simple-rag-pipeline-in | subhanhg | dataheimer.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1rjo1tp | false | null | t3_1rjo1tp | /r/LocalLLaMA/comments/1rjo1tp/building_a_simple_rag_pipeline_from_scratch/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/lqpp2Uga_L9U0tvEh1xMnTz14nTO-c7XYCAuADkatso.jpeg?auto=webp&s=dba3a12927e9b1809684dd0c9650ff2d908af222', 'width': 1200, 'height': 675}, 'resolutions': [{'url': 'https://external-preview.redd.it/lqpp2Uga_L9U0tvEh1xMnTz14nTO-c7XYCAuADkatso.jpeg?width=108&cro... | |
Built a local-first prompt manager where your data never leaves the browser — technical breakdown after 26 beta testers | 1 | your data never leaves the browser —
technical breakdown after 26 beta testers
I got tired of my prompts living in ChatGPT history
and Notion docs, so I built PromptManager Pro.
The core technical decisions:
LOCAL-FIRST STORAGE:
Everything lives in IndexedDB (not localStorage —
50GB+ capacity vs 5MB limit).
... | 2026-03-03T12:19:31 | ConstructionExact911 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rjnupj | false | null | t3_1rjnupj | /r/LocalLLaMA/comments/1rjnupj/built_a_localfirst_prompt_manager_where_your_data/ | false | false | 1 | {'images': [{'source': {'url': 'https://preview.redd.it/t209odqmntmg1.png?auto=webp&s=c874654a01cc2546d32d27ce416405211371fb24', 'width': 2500, 'height': 1253}, 'resolutions': [{'url': 'https://preview.redd.it/t209odqmntmg1.png?width=108&crop=smart&auto=webp&s=805e763c943cfa508caaf209b325147ce2cbf16f', 'width': 108, 'h... | ||
Costs-performance tradeoff for Qwen3, Qwen3.5 and other models (cost as proxy for compute) | 1 | Two scatterplots compare blended token price (USD per 1M tokens, using a 3:1 input/output weighting) against (1) the Artificial Analysis Intelligence Index and (2) LM Arena score.
The first chart uses the provided live performance and pricing data, showing Qwen3 and Qwen3.5 models alongside other leading models for co... | 2026-03-03T12:12:35 | https://artificialanalysis.ai/leaderboards/models | Balance- | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rjnpuv | false | null | t3_1rjnpuv | /r/LocalLLaMA/comments/1rjnpuv/costsperformance_tradeoff_for_qwen3_qwen35_and/ | false | false | 1 | null | |
9B or 35B A3B MoE for 16gb VRAM and 64gb ram? | 1 | I have been using 35B MoE model and I am loving it, it's amazing, at a steady 49-55t/s but 9B is slow at 23t/s for some reason, and I have read that 9B is better than 120B OOS. | 2026-03-03T12:07:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/ | soyalemujica | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjnm7z | false | null | t3_1rjnm7z | /r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/ | false | false | self | 1 | null |
LLM Observability Is the New Logging: Quick Benchmark of 5 Tools (Langfuse, LangSmith, Helicone, Datadog, W&B) | 1 | After LLMs became so common, LLM observability and traceability tools started to matter a lot more. We need to see what’s going on under the hood, control costs and quality, and trace behavior both from the host side and the user side to understand why a model or agent behaves a certain way.
There are many tools i... | 2026-03-03T11:41:40 | https://www.reddit.com/r/LocalLLaMA/comments/1rjn4wf/llm_observability_is_the_new_logging_quick/ | Fantastic-Builder453 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjn4wf | false | null | t3_1rjn4wf | /r/LocalLLaMA/comments/1rjn4wf/llm_observability_is_the_new_logging_quick/ | false | false | 1 | null | |
microgpt-rs | 1 | 2026-03-03T11:30:22 | https://github.com/dewmal/microgpt-rs | dewmal | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rjmxle | false | null | t3_1rjmxle | /r/LocalLLaMA/comments/1rjmxle/microgptrs/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/nC7nNmF47KRekH_RGU2XxXQRdkX9oPeCreRa1I4davk.png?auto=webp&s=9e7b2ecfed3a66863b12d1cd09d4e16bdcd9dfec', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/nC7nNmF47KRekH_RGU2XxXQRdkX9oPeCreRa1I4davk.png?width=108&crop=... | ||
I really hope OpenAI eventually open-sources the GPT-4.1 family | 1 | Probably a pipe dream, but I’ve been using GPT-4.1 through the API for a while now and it’s become my default model for any new application that doesn’t need advanced reasoning. It just feels solid, it follows instructions well, doesn’t go off the rails, and handles long context without falling apart. When OpenAI dropp... | 2026-03-03T11:23:30 | https://www.reddit.com/r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/ | Balance- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjmtav | false | null | t3_1rjmtav | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/ | false | false | self | 1 | null |
Hello. I am a guy, who has no prior AI experience. But I created my brain on my computer and called it Kari. Anyone interested? | 1 | Hi there.
My name is Will. I am not a programmer, I am not someone who planned on making this, but I have.... and it's crazy.
\*\*TLDR\*\*: Brain that controls the model (speaks to it) swap out models, learns on its own, forms a personality around your interests, becomes you slowly with her room surrounded by thin... | 2026-03-03T11:20:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rjmrj0/hello_i_am_a_guy_who_has_no_prior_ai_experience/ | willnfld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjmrj0 | false | null | t3_1rjmrj0 | /r/LocalLLaMA/comments/1rjmrj0/hello_i_am_a_guy_who_has_no_prior_ai_experience/ | false | false | self | 1 | null |
I asked Chat GPT 5.2 Pro to scan my repo. Here is what he said. | 1 | I asked Chat GPT 5.2 Pro to "scan and analyze [https://github.com/Alex8791-cyber/cognithor](https://github.com/Alex8791-cyber/cognithor) completely. How and what would you tell a friend about it?".
Here is what he said:
"Stell dir Cognithor wie ein „Agent OS“ vor, das auf deinem eigenen Rechner läuft und dir qua... | 2026-03-03T11:18:31 | https://www.reddit.com/r/LocalLLaMA/comments/1rjmq6m/i_asked_chat_gpt_52_pro_to_scan_my_repo_here_is/ | Competitive_Book4151 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjmq6m | false | null | t3_1rjmq6m | /r/LocalLLaMA/comments/1rjmq6m/i_asked_chat_gpt_52_pro_to_scan_my_repo_here_is/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/vqLbqfJkKmfVsnQuXyWQcwdfFvQABqPxNQ2v4grWsBo.png?auto=webp&s=e060098685a864c8a649ae8c0833f5ff3dc2204c', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/vqLbqfJkKmfVsnQuXyWQcwdfFvQABqPxNQ2v4grWsBo.png?width=108&crop=... |
Meet SWE-rebench-V2: the largest open, multilingual, executable dataset for training code agents! | 1 | Hi everyone!
I'm Ibragim from the R&D team at Nebius.
Today we are publishing our next big release: **SWE-rebench-V2** — currently the biggest open dataset in the world for training coding agents! 🚀
We built an automated pipeline to extract RL environments at scale. This release is designed specifically for large-... | 2026-03-03T11:14:54 | https://huggingface.co/papers/2602.23866 | Fabulous_Pollution10 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1rjmnv4 | false | null | t3_1rjmnv4 | /r/LocalLLaMA/comments/1rjmnv4/meet_swerebenchv2_the_largest_open_multilingual/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/V2arNFjMmS0CfKD1PD2_WOMu3wBonAk6bt0ZnEoe_LY.png?auto=webp&s=800e8bea5293a206a294e388a8ba66cc90cfbbb3', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/V2arNFjMmS0CfKD1PD2_WOMu3wBonAk6bt0ZnEoe_LY.png?width=108&crop=... | |
Help loading Qwen3.5 35B A3B GGUF on vLLM | 1 | Hey guys,
Has anyone gotten [https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF](https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF) to work properly on vLLM? For some reason, I am unable to get it working. Not even Claude and ChatGPT is able to help me out. I get it loaded but then everything gives me gibberish whe... | 2026-03-03T11:14:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rjmnh7/help_loading_qwen35_35b_a3b_gguf_on_vllm/ | Civil-Top-8167 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjmnh7 | false | null | t3_1rjmnh7 | /r/LocalLLaMA/comments/1rjmnh7/help_loading_qwen35_35b_a3b_gguf_on_vllm/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/ggdirQXNbgpMR0TOQ5Uz-dJY9LkVntnA5fC5oVjOAS0.png?auto=webp&s=edbf5b634b8e128e63947255037474681b28b419', 'width': 1200, 'height': 648}, 'resolutions': [{'url': 'https://external-preview.redd.it/ggdirQXNbgpMR0TOQ5Uz-dJY9LkVntnA5fC5oVjOAS0.png?width=108&crop=... |
Local LLM infrastructure for an IT consulting business: am I on the right track? | 1 | Hello there,
I have some questions about a project. It's a kind of "sanity check" to be sure i'm on the right track.
**Context:** I'm an IT consultant. My work involves collecting client data, processing it, and producing deliverables (reports, analysis, structured documents). I want to build a local LLM setup so cli... | 2026-03-03T11:11:01 | https://www.reddit.com/r/LocalLLaMA/comments/1rjmlbi/local_llm_infrastructure_for_an_it_consulting/ | John_Jambon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjmlbi | false | null | t3_1rjmlbi | /r/LocalLLaMA/comments/1rjmlbi/local_llm_infrastructure_for_an_it_consulting/ | false | false | self | 1 | null |
Introducing Kanon 2 Enricher — the world’s first hierarchical graphitization model · Hugging Face | 1 | "**tl;dr**
We’re publicly releasing [**Kanon 2 Enricher**](https://docs.isaacus.com/capabilities/enrichment)**, the world’s first hierarchical graphitization model**, capable of transforming unstructured documents of any length into rich, highly structured knowledge graphs with sub-second latency.
We’re also releasin... | 2026-03-03T11:03:22 | https://huggingface.co/blog/isaacus/introducing-kanon-2-enricher | Neon0asis | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1rjmgdt | false | null | t3_1rjmgdt | /r/LocalLLaMA/comments/1rjmgdt/introducing_kanon_2_enricher_the_worlds_first/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/Gjl791R-dnaESNmgcqbv3yuR6v4qChseN-Ts2XL3gZs.jpeg?auto=webp&s=1b35ba87fe73e968c0c3b40fb50968ba3ae81916', 'width': 1200, 'height': 675}, 'resolutions': [{'url': 'https://external-preview.redd.it/Gjl791R-dnaESNmgcqbv3yuR6v4qChseN-Ts2XL3gZs.jpeg?width=108&cro... | |
Low VRAM Qwen3.5 4B and 2B | 1 | I wrote comments about running it on a 6gb vram card. Since then I have encountered some problems and read some community comments + reasoned with gemini (free) about it. Some infos and corrections.
\## Some info:
1. Leave -b very low for old cards. It prevents big VRAM spikes that will cause seg faults
2. Seems l... | 2026-03-03T10:58:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rjmczv/low_vram_qwen35_4b_and_2b/ | AppealSame4367 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjmczv | false | null | t3_1rjmczv | /r/LocalLLaMA/comments/1rjmczv/low_vram_qwen35_4b_and_2b/ | false | false | 1 | null | |
If you're an operator, pls don't wire GPT/Claude in your systems for tasks like doc extraction | 1 | If you’re serious about reliability, throughput, and cost, you should build a lightweight image-to-markdown model instead.
Here is a guide on why you should do it. [Link](https://nanonets.com/blog/fine-tuned-models-vs-frontier-cost/)
And here is a guide on how you should do it:
1. Host it wherever you’re already com... | 2026-03-03T10:54:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rjmasx/if_youre_an_operator_pls_dont_wire_gptclaude_in/ | Cool-Ad4442 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjmasx | false | null | t3_1rjmasx | /r/LocalLLaMA/comments/1rjmasx/if_youre_an_operator_pls_dont_wire_gptclaude_in/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/h5wxMXMVNL0_ksUJBcJ9mkfMldghLQROe9fRH-XsAoM.png?auto=webp&s=636aa835edda3696aa6d3808794017c4a2f96c89', 'width': 1200, 'height': 805}, 'resolutions': [{'url': 'https://external-preview.redd.it/h5wxMXMVNL0_ksUJBcJ9mkfMldghLQROe9fRH-XsAoM.png?width=108&crop=... |
SDR vs embeddings for agent memory — my benchmarks | 1 | [removed] | 2026-03-03T10:50:46 | https://www.reddit.com/r/LocalLLaMA/comments/1rjm8gc/sdr_vs_embeddings_for_agent_memory_my_benchmarks/ | Far_Assignment_189 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjm8gc | false | null | t3_1rjm8gc | /r/LocalLLaMA/comments/1rjm8gc/sdr_vs_embeddings_for_agent_memory_my_benchmarks/ | false | false | self | 1 | null |
Agentic workflow with ollama | 1 | I have a simple question im trying to use claude code with the qwen3.5 model by doing:
ollama launch claude --model qwen3.5
But now wouldn't it act as an ai agent, instead of just llm? I prompt to create a new folder and then create a simple landing page and it's not able to do that even, it gives me the instructi... | 2026-03-03T10:48:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rjm73j/agentic_workflow_with_ollama/ | Business_Writer4634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjm73j | false | null | t3_1rjm73j | /r/LocalLLaMA/comments/1rjm73j/agentic_workflow_with_ollama/ | false | false | self | 1 | null |
SDR vs embeddings for agent memory — my benchmarks | 1 | 2026-03-03T10:48:20 | https://github.com/teolex2020/AuraSDK | Far_Assignment_189 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rjm71i | false | null | t3_1rjm71i | /r/LocalLLaMA/comments/1rjm71i/sdr_vs_embeddings_for_agent_memory_my_benchmarks/ | false | false | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/UJrDmBmzDOcOLLgeYjsTJY0W5mLY8jyOyyy43L024c0.png?auto=webp&s=e7317d60d90ca8d5603ca9061779a4588aceca73', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/UJrDmBmzDOcOLLgeYjsTJY0W5mLY8jyOyyy43L024c0.png?width=108&crop=... | ||
Better vllm setup or different inference software? | 1 | I'm currently using vllm for inference for data processing purposes (i.e. not user-accessible prompts, batched), on a 20 GB VRAM RTX 4000 Ada, with qwen3-4b-2507.
With context size of 24k, max\_num\_seqs=300, and max\_num\_batched\_tokens=16k, gpu\_memory\_utilization=0.92, the TG performance varies wildly between 20... | 2026-03-03T10:47:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rjm6lf/better_vllm_setup_or_different_inference_software/ | ivoras | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjm6lf | false | null | t3_1rjm6lf | /r/LocalLLaMA/comments/1rjm6lf/better_vllm_setup_or_different_inference_software/ | false | false | self | 1 | null |
Unable to access local model served on my local network | 1 | Just as the title says, I am serving qwen 3.5:9b-q4 on my local network and I am using chatboxai on my Android device to access the model locally.
So, when I access the API endpoint using my IP then I can easily access the available model on my phone, but I wanted to do more than that such as having my friend in a di... | 2026-03-03T10:45:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rjm5je/unable_to_access_local_model_served_on_my_local/ | Zealousideal-Check77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjm5je | false | null | t3_1rjm5je | /r/LocalLLaMA/comments/1rjm5je/unable_to_access_local_model_served_on_my_local/ | false | false | self | 1 | null |
SDR vs embeddings for agent memory — my benchmarks | 1 | [removed] | 2026-03-03T10:45:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rjm5fx/sdr_vs_embeddings_for_agent_memory_my_benchmarks/ | Far_Assignment_189 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjm5fx | false | null | t3_1rjm5fx | /r/LocalLLaMA/comments/1rjm5fx/sdr_vs_embeddings_for_agent_memory_my_benchmarks/ | false | false | self | 1 | {'images': [{'source': {'url': 'https://external-preview.redd.it/UJrDmBmzDOcOLLgeYjsTJY0W5mLY8jyOyyy43L024c0.png?auto=webp&s=e7317d60d90ca8d5603ca9061779a4588aceca73', 'width': 1200, 'height': 600}, 'resolutions': [{'url': 'https://external-preview.redd.it/UJrDmBmzDOcOLLgeYjsTJY0W5mLY8jyOyyy43L024c0.png?width=108&crop=... |
Tool Calling Is Where Agents Fail Most | 1 | From building agent workflows, one pattern keeps showing up:
Agents usually don’t hallucinate in *reasoning* — they hallucinate in **tool calling**.
The model sounds confident, the logic looks fine, but then it:
* Picks the wrong tool
* Passes wrong parameters
* Executes steps in the wrong order
Once that happens, ... | 2026-03-03T10:43:51 | https://www.reddit.com/r/LocalLLaMA/comments/1rjm4bl/tool_calling_is_where_agents_fail_most/ | malav399 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjm4bl | false | null | t3_1rjm4bl | /r/LocalLLaMA/comments/1rjm4bl/tool_calling_is_where_agents_fail_most/ | false | false | self | 1 | null |
Help needed: intelligent search using LLMs? | 1 | Hey guys, newbie here. Can you help me? I have a large collection of files - documents, books and videos - organized by folder using descriptive file and folder names. Some are in english, others in french or german. I'd like to search for the most relevant files but as you may have guessed sematic search is not a solu... | 2026-03-03T10:40:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rjm23t/help_needed_intelligent_search_using_llms/ | TheGlobinKing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjm23t | false | null | t3_1rjm23t | /r/LocalLLaMA/comments/1rjm23t/help_needed_intelligent_search_using_llms/ | false | false | self | 1 | null |
Unlimited OpenClaw AI Agent – Free Premium API Access Included – Only $50 One-Time Setup | 1 | Hey everyone,
I’m offering a complete done-for-you setup of **OpenClaw** — one of the most powerful open-source personal AI agents available.
What OpenClaw is famous for:
- Real browser automation (logins, scraping, posting, form filling, OTP handling — even on tough sites)
- Code execution & interpreter (run Python/... | 2026-03-03T10:31:20 | https://www.reddit.com/r/LocalLLaMA/comments/1rjlwu4/unlimited_openclaw_ai_agent_free_premium_api/ | PsychologicalCat937 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rjlwu4 | false | null | t3_1rjlwu4 | /r/LocalLLaMA/comments/1rjlwu4/unlimited_openclaw_ai_agent_free_premium_api/ | false | false | self | 1 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.