| title (string, len 1-300) | score (int64, 0-8.54k) | selftext (string, len 0-41.5k) | created (timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14, nullable) | url (string, len 0-878) | author (string, len 3-20) | domain (string, len 0-82) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53) | gilded (int64, 0-2) | gildings (string, 7 classes) | id (string, len 7) | locked (bool, 2 classes) | media (string, len 646-1.8k, nullable) | name (string, len 10) | permalink (string, len 33-82) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, len 4-213, nullable) | ups (int64, 0-8.54k) | preview (string, len 301-5.01k, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Using llama.cpp, how to access API? | 6 | Is there an api we can access when using alpaca.cpp? I want to be able to access it and run commands using another program. | null | https://www.reddit.com/r/LocalLLaMA/comments/123e02i/using_llamacpp_how_to_access_api/ | GeneProfessional2164 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 123e02i | false | null | t3_123e02i | /r/LocalLLaMA/comments/123e02i/using_llamacpp_how_to_access_api/ | false | false | self | 6 | null |
LLMs have me 100% Convinced that Predictive Coding is Not a Theory | 25 | *It all makes sense* to me now. The absolute fluency with which LLMs pass the Turing test has redefined [my view of my very self](https://en.wikipedia.org/wiki/Predictive_coding). [I'm certain I'm not the first](https://www.nature.com/articles/s41562-022-01516-2) to state this, but I don't believe the epiphany is wid... | null | https://www.reddit.com/r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/ | friedrichvonschiller | self.LocalLLaMA | 2023-03-29T02:44:53 | 0 | {} | 123i2t5 | false | null | t3_123i2t5 | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'Po4Yjbm-YpAdDn2Fr4t5YK3QEvOnEWwDbmz7CVTRXHo', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/OEchZnWvbrOQFEoeI8OqU_TrjLpea3sWs85JHOHj_7k.jpg?width=108&crop=smart&auto=webp&v=enabled&s=1b2ac5cdfb890d132bf6217a86989c989b4ce94a', 'width': 108}... |
Comparing LLaMA and Alpaca models deterministically | 67 | ***Update 2023-03-28: Added answers using a ChatGPT-like persona and some new questions! Removed generation stats to make room for that.***
After spending a whole day comparing different versions of the LLaMA and Alpaca models, I thought that maybe that's of use to someone else as well, even if incomplete - so I'm sha... | null | https://www.reddit.com/r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/ | WolframRavenwolf | self.LocalLLaMA | 2023-03-28T17:13:19 | 0 | {} | 123ktm7 | false | null | t3_123ktm7 | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/ | false | false | self | 67 | {'enabled': False, 'images': [{'id': 'B_YBpIqUU29HWaBHtLTMbzOnMTCDvbR2_Hw2y_jhlj4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/K8fmp6gTv7ZIlyVDf8nqp2Hifs9wCPxNjDKBB5WehDM.jpg?width=108&crop=smart&auto=webp&v=enabled&s=beccd1b7914f3fae5b12cd1e7b7ca4f575edbd2f', 'width': 108},... |
r/LocalLLaMA Subreddit Statistics | 1 | null | https://subredditstatistics.com/r/LocalLLaMA | neefs | subredditstatistics.com | 1970-01-01T00:00:00 | 0 | {} | 123m80g | false | null | t3_123m80g | /r/LocalLLaMA/comments/123m80g/rlocalllama_subreddit_statistics/ | false | false | default | 1 | null | |
Well...frick... | 18 | You all are right, things run much faster in WSL...I will make a video installation guide hopefully today.
I'll include all the linux commands you need, as well as how to make symbolic links to your models folder so you don't need to keep them inside your wsl, you can have them in your normal windows os anywhere you w... | null | https://www.reddit.com/r/LocalLLaMA/comments/123pp3k/wellfrick/ | Inevitable-Start-653 | self.LocalLLaMA | 2023-03-28T20:54:51 | 0 | {} | 123pp3k | false | null | t3_123pp3k | /r/LocalLLaMA/comments/123pp3k/wellfrick/ | false | false | self | 18 | null |
A simple voice to text python script using vosk (runs offline) | 8 | import json
import queue
import sys
import threading
import time
import pyaudio
import vosk
from pynput import keyboard
# disable vosk logging
from vosk import SetLogLevel
SetLogLevel(-1)
# Flag to indicate if a key has been pressed
key_pressed = False
... | null | https://www.reddit.com/r/LocalLLaMA/comments/123pxny/a_simple_voice_to_text_python_script_using_vosk/ | MoneyPowerNexis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 123pxny | false | null | t3_123pxny | /r/LocalLLaMA/comments/123pxny/a_simple_voice_to_text_python_script_using_vosk/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'wPi7XWQB6ffa19nq_tx3OmYB8fG1Zsn2FTIQ-7r1Hr4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/3DqxVg8WKHUa1ksCtIpA7d-GXsK80g1FNK1KL77amhE.jpg?width=108&crop=smart&auto=webp&v=enabled&s=5b83987ce11310cbca82094db116a09eb0247bec', 'width': 108},... |
LLAMA Experience so far | 14 | **Setup:**
Laptop with RTX2060 (6 GB VRAM) and 32 GB RAM + ~32GB of additional space (used mostly when loading Llama 13b on Windows)
 
**Used for:** some questioning, but mostly chat and roleplay (might do a more structured questioning of it when things are more settled for me, whenever that may be- I just ... | null | https://www.reddit.com/r/LocalLLaMA/comments/123yp41/llama_experience_so_far/ | reduserGf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 123yp41 | false | null | t3_123yp41 | /r/LocalLLaMA/comments/123yp41/llama_experience_so_far/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'T0K27a62kQxG2wMxHSyhBAmYRrs3-3G_rphNQnrwoXE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/A-vb0Xq41438jaXpplQle-qHAuPoLzPfO7bkuE7XB3I.jpg?width=108&crop=smart&auto=webp&v=enabled&s=4906bea19d1095f32b7bce5297371dc10f7f0775', 'width': 108},... |
Increasing beams leads to repetitive output? | 3 | I've been playing around with 13B.
Just tried increasing the number of beams from 1 to 2, and I noticed a significant decrease in performance.
With one beam it produces coherent text as output, with two it starts repeating itself.
Anyone else experience this? | null | https://www.reddit.com/r/LocalLLaMA/comments/1242cbr/increasing_beams_leads_to_repetitive_output/ | MentesInquisitivas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1242cbr | false | null | t3_1242cbr | /r/LocalLLaMA/comments/1242cbr/increasing_beams_leads_to_repetitive_output/ | false | false | self | 3 | null |
Factuality of LLaMa-13B output | 5 | null | MentesInquisitivas | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1243vst | false | null | t3_1243vst | /r/LocalLLaMA/comments/1243vst/factuality_of_llama13b_output/ | false | false | 5 | {'enabled': True, 'images': [{'id': 'w_Sfn-aHPDDPMcx_n8PbLFyRRXaIkAN6fPj7sPwD3Jk', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/4i85lxglkeqa1.jpg?width=108&crop=smart&auto=webp&v=enabled&s=150bb27e3e2911ba37280416f8d532466b2cfea0', 'width': 108}, {'height': 152, 'url': 'https://preview... | |||
I am currently quantizing LLaMA-65B, 30B and 13B | logs and benchmarks | thinking about sharing models | 110 | Hey there fellow LLaMA enthusiasts!
I've been playing around with the [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) GitHub repo by *qwopqwop200* and decided to give quantizing LLaMA models a shot. The idea is to create multiple versions of LLaMA-65b, 30b, and 13b *[edit: also 7b]* models, each with d... | null | https://www.reddit.com/r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/ | Blacky372 | self.LocalLLaMA | 2023-03-30T09:57:26 | 0 | {} | 1248183 | false | null | t3_1248183 | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/ | false | false | self | 110 | {'enabled': False, 'images': [{'id': '1txjrFr2q5403CETB9soXOkqJLLTAmbxpTsX4YT9B8A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gGN8Ty_c9qEVT7XLK3YSaHtqMsf8sK8IMCql8fQPUtc.jpg?width=108&crop=smart&auto=webp&v=enabled&s=122865178d39b82d00f57c191bb81dbba8d01df2', 'width': 108},... |
Oobabooga WSL on Windows 10 Standard, 8bit, and 4bit plus LLaMA conversion instructions | 2 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1248mhu/oobabooga_wsl_on_windows_10_standard_8bit_and/ | Inevitable-Start-653 | self.LocalLLaMA | 2023-03-28T02:00:07 | 0 | {} | 1248mhu | false | null | t3_1248mhu | /r/LocalLLaMA/comments/1248mhu/oobabooga_wsl_on_windows_10_standard_8bit_and/ | false | false | default | 2 | null |
7B Alpaca model (4-bit ggml) explains why it sometimes responds with "### Instruction:" | 1 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1248umn | false | null | t3_1248umn | /r/LocalLLaMA/comments/1248umn/7b_alpaca_model_4bit_ggml_explains_why_it/ | false | false | default | 1 | null | ||
Don't Buy an AMD 7000 Series for LLaMA Yet | 32 | I hate monopolies, and AMD hooked me with the VRAM and specs at a reasonable price. I'm here building llama.cpp with a 7900 XTX as a result.
There is [no support](https://docs.amd.com/bundle/Hardware_and_Software_Reference_Guide/page/Hardware_and_Software_Support.html) for the cards (not just unsupported, literally d... | null | https://www.reddit.com/r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/ | friedrichvonschiller | self.LocalLLaMA | 2023-04-05T00:50:49 | 0 | {} | 124dc7i | false | null | t3_124dc7i | /r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/ | false | false | self | 32 | null |
We made a mobile app using llama.cpp | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/124g54l/we_made_a_mobile_app_using_llamacpp/ | Reasonable_Day_9300 | self.LocalLLaMA | 2023-03-28T06:59:32 | 0 | {} | 124g54l | false | null | t3_124g54l | /r/LocalLLaMA/comments/124g54l/we_made_a_mobile_app_using_llamacpp/ | false | false | default | 1 | null |
Help installing alpaca 13B on steam deck | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/124jp7n/help_installing_alpaca_13b_on_steam_deck/ | -2b2t- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 124jp7n | false | null | t3_124jp7n | /r/LocalLLaMA/comments/124jp7n/help_installing_alpaca_13b_on_steam_deck/ | false | false | default | 1 | null |
We made a mobile app using llama.cpp ! | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/124jtt3/we_made_a_mobile_app_using_llamacpp/ | Reasonable_Day_9300 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 124jtt3 | false | null | t3_124jtt3 | /r/LocalLLaMA/comments/124jtt3/we_made_a_mobile_app_using_llamacpp/ | false | false | default | 1 | null |
CodeAlpaca | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/124ntjd/codealpaca/ | ihaag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 124ntjd | false | null | t3_124ntjd | /r/LocalLLaMA/comments/124ntjd/codealpaca/ | false | false | default | 1 | null |
llama.cpp interactive/dialog how to get longer responses and less repetition of my words | 1 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 124ox4c | false | null | t3_124ox4c | /r/LocalLLaMA/comments/124ox4c/llamacpp_interactivedialog_how_to_get_longer/ | false | false | default | 1 | null | ||
Where can I find characters that were made by other people? | 8 | null | https://www.reddit.com/r/LocalLLaMA/comments/124w1fi/where_can_i_find_characters_that_were_made_by/ | Famberlight | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 124w1fi | false | null | t3_124w1fi | /r/LocalLLaMA/comments/124w1fi/where_can_i_find_characters_that_were_made_by/ | false | false | self | 8 | null | |
Oobabooga WSL on Windows 10 Standard, 8bit, and 4bit plus LLaMA conversion instructions, video instructions | 1 | null | https://www.reddit.com/r/Oobabooga/comments/1248me4/oobabooga_wsl_on_windows_10_standard_8bit_and/ | Inevitable-Start-653 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 124ya37 | false | null | t3_124ya37 | /r/LocalLLaMA/comments/124ya37/oobabooga_wsl_on_windows_10_standard_8bit_and/ | false | false | default | 1 | null | |
Alpaca-30B-4bit-128g does this?! | 1 | [removed] | null | 9cent0 | i.imgur.com | 1970-01-01T00:00:00 | 0 | {} | 124yt99 | false | null | t3_124yt99 | /r/LocalLLaMA/comments/124yt99/alpaca30b4bit128g_does_this/ | false | false | default | 1 | null | |
Why do my installation instructions keep getting taken down? | 3 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/124z3tv/why_do_my_installation_instructions_keep_getting/ | Inevitable-Start-653 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 124z3tv | false | null | t3_124z3tv | /r/LocalLLaMA/comments/124z3tv/why_do_my_installation_instructions_keep_getting/ | false | false | default | 3 | null |
Has anyone tried the 65B model with Alpaca.cpp on an M2 MacBook Pro? | 9 | Hi, I recently discovered Alpaca.cpp and have been enjoying it a lot. I run it on an M1 MacBook Air that has 16GB of RAM. The 13B model does run well on my computer but there are much better models available like the 30B and 65B. I have tried to run the 30B on my computer but it runs too slowly to be usable. One questio... | null | https://www.reddit.com/r/LocalLLaMA/comments/12517ab/has_anyone_tried_the_65b_model_with_alpacacpp_on/ | ma-2022 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12517ab | false | null | t3_12517ab | /r/LocalLLaMA/comments/12517ab/has_anyone_tried_the_65b_model_with_alpacacpp_on/ | false | false | self | 9 | null |
New LLM just dropped. Cerebras-GPT. Apache 2.0 license. Huggingface link included. | 1 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1253v5b | false | null | t3_1253v5b | /r/LocalLLaMA/comments/1253v5b/new_llm_just_dropped_cerebrasgpt_apache_20/ | false | false | default | 1 | null | ||
My first chat with llama-7b about catching mice was fun! | 15 | User: Tell me ways to catch a mouse loose in my house.
Bob: There are many ways to catch a mouse. In fact, mice are very good at evading humans. I can give you some suggestions on how to catch a mouse.
User: please do so
Bob: The first thing you can do is check the house for possible entry points.
User: go on
Bob: A ... | null | https://www.reddit.com/r/LocalLLaMA/comments/1254n2v/my_first_chat_with_llama7b_about_catching_mice/ | nykfank | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1254n2v | false | null | t3_1254n2v | /r/LocalLLaMA/comments/1254n2v/my_first_chat_with_llama7b_about_catching_mice/ | false | false | self | 15 | null |
Best settings for Alpaca.cpp according to Alpaca.cpp | 1 | [removed] | null | [deleted] | 2023-03-29T00:55:51 | 0 | {} | 1254y9v | false | null | t3_1254y9v | /r/LocalLLaMA/comments/1254y9v/best_settings_for_alpacacpp_according_to_alpacacpp/ | false | false | default | 1 | null | ||
Free anonymous Oobabooga install instructions download: WSL and non-WSL for Windows, no account - no YouTube - no ads - no ego. You own what you download. | 25 | ***Update Do this instead***
Things move so fast that the instructions are already outdated. Mr. Oobabooga has updated his repo with a one-click installer... and it works!! omg it works so well too :3
https://github.com/oobabooga/text-generation-webui#installation
***Update Do this instead***
There are no links to a spe... | null | https://www.reddit.com/r/LocalLLaMA/comments/1255jsd/free_anonymous_oobabooga_install_instructions/ | Inevitable-Start-653 | self.LocalLLaMA | 2023-03-30T01:26:39 | 0 | {} | 1255jsd | false | null | t3_1255jsd | /r/LocalLLaMA/comments/1255jsd/free_anonymous_oobabooga_install_instructions/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': '4n4rb3aJD7QQb0YJLRqT_iVmsBzEqx1qgd5NkR84Wx0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/biONX7-EvUgwnWCKqHBs4dSa1vsu9SUCeyhSgJBV2zQ.jpg?width=108&crop=smart&auto=webp&v=enabled&s=98798ef4c280e1856e38dcf9c5cba1e828e96c3c', 'width': 108},... |
GPT4All, LLaMA 7B LoRA finetuned on ~400k GPT-3.5-Turbo prompt/generation pairs | 95 | null | https://twitter.com/andriy_mulyar/status/1640836003194630144 | itsreallyreallytrue | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 12562xt | false | {'oembed': {'author_name': 'AndriyMulyar', 'author_url': 'https://twitter.com/andriy_mulyar', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">I&#39;m excited to announce the release of GPT4All, a 7B param language model finetuned from a curated ... | t3_12562xt | /r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/ | false | false | 95 | {'enabled': False, 'images': [{'id': 'oAYQNordIRBu5jB7zj0BJ_yEa7uobrR96i2F1DLKFIE', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/tPeG_1LJSWGk3jn0GalI4D_SQgGkPt-u378hqXWzbdY.jpg?width=108&crop=smart&auto=webp&v=enabled&s=8f5893d8389362a7e76bdc54d513ec672f3bdbc5', 'width': 108}]... | ||
Dirty data sets and LLaMA/ALPACA... | 9 | Hey everybody - been experimenting with LLaMA recently (running 13b on my 3080ti).
Inspired by how well LLaMA works, I decided to try my hand at using the Alpaca data to make a module for Euterpe inside NovelAI (it's based on fairseq 13b, an older Facebook model, not a LLaMA). In the process, I had to hand-clean Alpa... | null | https://www.reddit.com/r/LocalLLaMA/comments/12587y5/dirty_data_sets_and_llamaalpaca/ | deepinterstate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12587y5 | false | null | t3_12587y5 | /r/LocalLLaMA/comments/12587y5/dirty_data_sets_and_llamaalpaca/ | false | false | self | 9 | null |
LLaMA-Adapter: Efficient Fine-tuning of LLaMA | 13 | I found this.
This repo proposes LLaMA-Adapter, a lightweight adaption method for fine-tuning instruction-following LLaMA models 🔥, using 52K data provided by Stanford Alpaca. | null | https://github.com/ZrrSkywalker/LLaMA-Adapter | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1259yxg | false | null | t3_1259yxg | /r/LocalLLaMA/comments/1259yxg/llamaadapter_efficient_finetuning_of_llama/ | false | false | 13 | {'enabled': False, 'images': [{'id': 'QQvQeaMSJF8TI20ZcAVtIpS8RXEKcALtYpGzC0LpS9I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Fg0rnY4M8-3TbE_eAIUfxaueahbHFzdiYMNYjCqcPnw.jpg?width=108&crop=smart&auto=webp&v=enabled&s=5790282444d1785385f07f84a01f0687fa55d030', 'width': 108},... |
There's a dolly-ggml repo on Hugging Face | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/125a3sj/theres_a_dollyggml_repo_on_hugging_face/ | _wsgeorge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 125a3sj | false | null | t3_125a3sj | /r/LocalLLaMA/comments/125a3sj/theres_a_dollyggml_repo_on_hugging_face/ | false | false | default | 1 | null |
The sound of AI hallucinating | 1 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 125aycl | false | null | t3_125aycl | /r/LocalLLaMA/comments/125aycl/the_sound_of_ai_hallucinating/ | false | false | default | 1 | null | ||
Poor LLaMA Results? Use Prompt Design | 31 | The performance LLaMA will achieve with [guided prompting](https://www.mihaileric.com/posts/a-complete-introduction-to-prompt-engineering/) is optimal because you have set a pattern for the model. It will try to mimic your example, and it will do so unencumbered by any baggage imported after the victorious weights won... | null | https://www.reddit.com/r/LocalLLaMA/comments/125ccve/poor_llama_results_use_prompt_design/ | friedrichvonschiller | self.LocalLLaMA | 2023-03-30T18:06:01 | 0 | {} | 125ccve | false | null | t3_125ccve | /r/LocalLLaMA/comments/125ccve/poor_llama_results_use_prompt_design/ | false | false | self | 31 | null |
Cerebras-GPT: New Open Source Language Models from 111M to 13B Parameters Just Released! | 24 | null | https://www.cerebras.net/blog/cerebras-gpt-a-family-of-open-compute-efficient-large-language-models/ | Blacky372 | cerebras.net | 1970-01-01T00:00:00 | 0 | {} | 125cml9 | false | null | t3_125cml9 | /r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/ | false | false | 24 | {'enabled': False, 'images': [{'id': 'mS-VzyJMA4J-1dD9vbtGPfDWlQqrjWdRa_hBpRRnt4A', 'resolutions': [{'height': 82, 'url': 'https://external-preview.redd.it/XZxREBxetzAOxufuUCZyo8V-LWjpD87DGGuAAM258xQ.jpg?width=108&crop=smart&auto=webp&v=enabled&s=769a1260717a5da1d6091f95c04384b67a6e3e84', 'width': 108},... | ||
What local LLM models are now accessible to the internet and able to read images? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/125ei8x/what_local_llm_models_are_now_accessible_to_the/ | nillouise | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 125ei8x | false | null | t3_125ei8x | /r/LocalLLaMA/comments/125ei8x/what_local_llm_models_are_now_accessible_to_the/ | false | false | default | 1 | null |
Can you run 4bit models on 2000 series cards? | 1 | Subj. I can set up an 8-bit model, but not a 4-bit one, and I'm really close to pulling my hair out.
Can it be a hardware limitation? It is not mentioned anywhere!
I have 2060 12gb and win 10, tried wsl too... | null | https://www.reddit.com/r/LocalLLaMA/comments/125hnko/can_you_run_4bit_models_on_2000_series_cards/ | BalorNG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 125hnko | false | null | t3_125hnko | /r/LocalLLaMA/comments/125hnko/can_you_run_4bit_models_on_2000_series_cards/ | false | false | self | 1 | null |
Are llama 2bit quantized models publicly available | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/125k1s9/are_llama_2bit_quantized_models_publicly_availible/ | pkuba208 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 125k1s9 | false | null | t3_125k1s9 | /r/LocalLLaMA/comments/125k1s9/are_llama_2bit_quantized_models_publicly_availible/ | false | false | default | 1 | null |
The Windows one-click installer has been updated (4-bit and 8-bit should work out of the box) | 22 | null | https://www.reddit.com/r/Oobabooga/comments/125e6it/the_windows_oneclick_installer_has_been_updated/ | Inevitable-Start-653 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 125m1q5 | false | null | t3_125m1q5 | /r/LocalLLaMA/comments/125m1q5/the_windows_oneclick_installer_has_been_updated/ | false | false | default | 22 | {'enabled': False, 'images': [{'id': 'M46eIN1cjQO9j5ikRy7VeISsga69BWjtI0eBkIJrZgI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Rrca-S42cWLRc4VXWUeW0liIw_PEcyeDnvSIiyWEhZA.jpg?width=108&crop=smart&auto=webp&v=enabled&s=6ea3ab8ca3395afcd41bdbf73d5b0c120b104b7e', 'width': 108},... | |
Are you accepting donations? | 16 | While big tech wring their hands dumbfounded on how to solve the alignment problem, people in this thread are already doing it. Everyone has a brain. Not pretty but much better than situations where only some have a brain or control the brain we use. Not the most utopian solution to the alignment problem. But I believe... | null | https://www.reddit.com/r/LocalLLaMA/comments/125q5zt/are_you_accepting_donations/ | gransee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 125q5zt | false | null | t3_125q5zt | /r/LocalLLaMA/comments/125q5zt/are_you_accepting_donations/ | false | false | self | 16 | null |
Issue using HuggingFace weights with llama.cpp | 3 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/125tygp/issue_using_huggingface_weights_with_llamacpp/ | Unusual_Guidance2095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 125tygp | false | null | t3_125tygp | /r/LocalLLaMA/comments/125tygp/issue_using_huggingface_weights_with_llamacpp/ | false | false | default | 3 | null |
ColossalChat | 36 | an open-source solution for cloning ChatGPT with a complete RLHF pipeline.
https://github.com/hpcaitech/ColossalAI/tree/main/applications/Chat | null | CodOtherwise | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 125u56p | false | null | t3_125u56p | /r/LocalLLaMA/comments/125u56p/colossalchat/ | false | false | 36 | {'enabled': True, 'images': [{'id': 'DyKExv9K0B8nXwYU0E07xQFFh1WXgSa35F-sxHiKxjs', 'resolutions': [{'height': 175, 'url': 'https://preview.redd.it/zvukcut65rqa1.jpg?width=108&crop=smart&auto=webp&v=enabled&s=57b33aa00936af82cfb0a7c16ecc7600e30415ea', 'width': 108}, {'height': 351, 'url': 'https://previe... | ||
Alpaca 13b settings? | 6 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/125y2vw/alpaca_13b_settings/ | -2b2t- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 125y2vw | false | null | t3_125y2vw | /r/LocalLLaMA/comments/125y2vw/alpaca_13b_settings/ | false | false | default | 6 | null |
Summarizing Short Stories with LLaMA 13B | 31 | null | https://defenestrationism.net/angels-and-blueberries/ | friedrichvonschiller | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1260d5i | false | null | t3_1260d5i | /r/LocalLLaMA/comments/1260d5i/summarizing_short_stories_with_llama_13b/ | false | false | 31 | null | ||
Has an offline AI in some ways taught you just as much about yourself as the AI, based on the things you can say to it? | 8 | It's interesting saying the darkest imaginable thing just to see how the AI reacts, and at some point with continued prompts it may damage your soul, but I'm sure the novelty wears off before it becomes an issue.
I bet people are seeing repressed fetishes they did not realize they had. | null | https://www.reddit.com/r/LocalLLaMA/comments/1261dau/has_an_offline_ai_in_some_ways_taught_you_just_as/ | ThePseudoMcCoy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1261dau | false | null | t3_1261dau | /r/LocalLLaMA/comments/1261dau/has_an_offline_ai_in_some_ways_taught_you_just_as/ | false | false | self | 8 | null |
Native finetuning on dual rtx3090 | 18 | Can 7B or 13B be natively trained (i.e. no LoRA) on dual rtx3090's? Training time is not an issue, but fitting it on the given vram is.
Does anyone know? | null | https://www.reddit.com/r/LocalLLaMA/comments/1262kko/native_finetuning_on_dual_rtx3090/ | 2muchnet42day | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1262kko | false | null | t3_1262kko | /r/LocalLLaMA/comments/1262kko/native_finetuning_on_dual_rtx3090/ | false | false | self | 18 | null |
Wtf am I doing wrong on the install? Pulling my hair out to get it to work. | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/12633kx/wtf_am_i_doing_wrong_on_the_install_pulling_my/ | nero10578 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12633kx | false | null | t3_12633kx | /r/LocalLLaMA/comments/12633kx/wtf_am_i_doing_wrong_on_the_install_pulling_my/ | false | false | default | 1 | null |
This is probably the easiest way to install it locally. | 0 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1265pps/this_is_probably_the_easiest_way_to_install_it/ | zeroninezerotow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1265pps | false | null | t3_1265pps | /r/LocalLLaMA/comments/1265pps/this_is_probably_the_easiest_way_to_install_it/ | false | false | default | 0 | null |
Chat Emojis from a Character Card! Click for JSON! | 4 | null | friedrichvonschiller | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1269knq | false | null | t3_1269knq | /r/LocalLLaMA/comments/1269knq/chat_emojis_from_a_character_card_click_for_json/ | false | false | 4 | {'enabled': True, 'images': [{'id': 'HANVtBnexbdGzHjFcZwysVu_JBMNQfFEXh_kzuyfwHU', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/4l6qfw9ulsqa1.png?width=108&crop=smart&auto=webp&v=enabled&s=e5aa1e9c2e76256083df5fcd1bb4dc03cc0b477e', 'width': 108}, {'height': 120, 'url': 'https://preview... | |||
Anyone else have llama.cpp (7B 4-bit ggml) change personalities mid-conversation? | 1 | null | _wsgeorge | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 126c1ca | false | null | t3_126c1ca | /r/LocalLLaMA/comments/126c1ca/anyone_else_have_llamacpp_7b_4bit_ggml_change/ | false | false | default | 1 | null | ||
Can I feed all documents related to a specific program to LLaMA and use it as an assistant? Also, can it be set to answer only specific program-related questions? | 1 | null | https://www.reddit.com/r/LocalLLaMA/comments/126cfqj/can_i_feed_all_documents_related_to_a_specific/ | plsdontargue | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126cfqj | false | null | t3_126cfqj | /r/LocalLLaMA/comments/126cfqj/can_i_feed_all_documents_related_to_a_specific/ | false | false | default | 1 | null | |
LoRA training rank selection guideline? | 5 | I read the LoRA paper and blogs/articles talking about it. Yet there seem to be no discussions or guidelines on how to choose an appropriate rank.
If you have experience training with LoRA, please pitch in and share your views or findings. Basically, how to choose an appropriate rank for training/finetuning, gi... | null | https://www.reddit.com/r/LocalLLaMA/comments/126d1qw/lora_training_rank_selection_guideline/ | Raise_Fickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126d1qw | false | null | t3_126d1qw | /r/LocalLLaMA/comments/126d1qw/lora_training_rank_selection_guideline/ | false | false | self | 5 | null |
My 3090 is a troll : why? | 74 | null | https://www.reddit.com/gallery/126g792 | aerilyn235 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 126g792 | false | null | t3_126g792 | /r/LocalLLaMA/comments/126g792/my_3090_is_a_troll_why/ | false | false | 74 | null | ||
Using twitter data | 7 | Hi Local Llama
I've been lurking for a while trying to get to grips with this all.
I have my twitter data and wanted to know how best to train a model on this data. I'm coming at this pretty green so please go easy on me. | null | https://www.reddit.com/r/LocalLLaMA/comments/126k42q/using_twitter_data/ | SupernovaTheGrey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126k42q | false | null | t3_126k42q | /r/LocalLLaMA/comments/126k42q/using_twitter_data/ | false | false | self | 7 | null |
Having trouble installing Alpaca! | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/126oaxb/having_trouble_installing_alpaca/ | LickedLollies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126oaxb | false | null | t3_126oaxb | /r/LocalLLaMA/comments/126oaxb/having_trouble_installing_alpaca/ | false | false | default | 1 | null |
So I'm a bit confused... | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/126p159/so_im_a_bit_confused/ | LickedLollies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126p159 | false | null | t3_126p159 | /r/LocalLLaMA/comments/126p159/so_im_a_bit_confused/ | false | false | default | 1 | null |
Increasing maximum context length? | 8 | Let's say I have RAM to spare and want to go above 2000 tokens.
Is it something you can change in settings, or is it some hard "architectural" limit?
After all, GPT-4 has 8k and even 32k token limits, and I don't think this has to do with the number of parameters? | null | https://www.reddit.com/r/LocalLLaMA/comments/126przf/increasing_maximum_context_length/ | BalorNG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126przf | false | null | t3_126przf | /r/LocalLLaMA/comments/126przf/increasing_maximum_context_length/ | false | false | self | 8 | null |
Why is it hit and miss? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/126rqge/why_is_it_hit_and_miss/ | LickedLollies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126rqge | false | null | t3_126rqge | /r/LocalLLaMA/comments/126rqge/why_is_it_hit_and_miss/ | false | false | default | 1 | null |
Fine-tune a LLaMA into a history wiz by feeding him history books | 12 | Hi all,
I am a student with no particular skills in AI. For a school project, I would like to train a LLaMA model to turn it into a history wiz. I recently discovered PEFT and found a few GitHub repos that seem to obtain amazing results using this technique to teach a (human) language to LLaMA.
Basically, what ... | null | https://www.reddit.com/r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/ | Ok-Access-7091 | self.LocalLLaMA | 2023-03-30T16:24:12 | 0 | {} | 126rs1n | false | null | t3_126rs1n | /r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/ | false | false | self | 12 | null |
Why is it an uphill battle? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/126v83w/why_is_it_an_uphill_battle/ | LickedLollies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126v83w | false | null | t3_126v83w | /r/LocalLLaMA/comments/126v83w/why_is_it_an_uphill_battle/ | false | false | default | 1 | null |
Where is Alpaca 30B? | 20 | Maybe we should have a sticky with a list of all the top projects and latest iterations?
Things move so fast I can't wrap my head around what is even going on anymore. Everyone is talking about Alpaca 7B, but 7B sucks compared to 30B or even 13B. I thought the Alpaca technique was easily transferrable to the large... | null | https://www.reddit.com/r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/ | SmithMano | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126x4ii | false | null | t3_126x4ii | /r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/ | false | false | self | 20 | null |
Adapting local Alpaca 7B install for 13B? | 3 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/126ycmq/adapting_local_alpaca_7b_install_for_13b/ | lucas-lejeune | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126ycmq | false | null | t3_126ycmq | /r/LocalLLaMA/comments/126ycmq/adapting_local_alpaca_7b_install_for_13b/ | false | false | default | 3 | null |
GitHub - TonyNazzal/GPTQ-for-LLaMa at load_safetensors_direct_to_gpu | 1 | null | https://github.com/TonyNazzal/GPTQ-for-LLaMa/tree/load_safetensors_direct_to_gpu | mentosorangemint | github.com | 1970-01-01T00:00:00 | 0 | {} | 1272bo1 | false | null | t3_1272bo1 | /r/LocalLLaMA/comments/1272bo1/github_tonynazzalgptqforllama_at_load_safetensors/ | false | false | default | 1 | null | |
Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality | 47 | null | https://vicuna.lmsys.org/ | monkmartinez | vicuna.lmsys.org | 1970-01-01T00:00:00 | 0 | {} | 12738yl | false | null | t3_12738yl | /r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/ | false | false | default | 47 | null | |
Introducing the OIG Dataset: A Massive Open Source Instruction Dataset with ~43M Instructions! | 70 | Recently, LAION and other members of the open source community released a chatbot dataset named OIG to promote equal access to chatbot technology. The dataset was made available for anyone to use and contribute improvements to. It was a great initiative that showcased the collaborative spirit of the community!
[ht... | null | https://www.reddit.com/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/ | Lorenzo9196 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1278p6v | false | null | t3_1278p6v | /r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/ | false | false | self | 70 | {'enabled': False, 'images': [{'id': 'HL6mrP_zaWwGfulYnTAIdSkfemEyz7MqHoMPhEvdcLg', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/fSLUgkI7axE57z9eCu_SFtvgUb3Zrs01wYTM_i0uuNc.jpg?width=108&crop=smart&auto=webp&v=enabled&s=109d4c89faf0e022c0600a62760858bb198f7d0c', 'width': 108},... |
considering this hardware, what d'you think i could comfortably run if i optimized as much as i could? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/127buld/considering_this_hardware_what_dyou_think_i_could/ | FairArkExperience | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 127buld | false | null | t3_127buld | /r/LocalLLaMA/comments/127buld/considering_this_hardware_what_dyou_think_i_could/ | false | false | default | 1 | null |
Lora vs Native Finetuning? | 8 | Has anyone got any stats on the difference between finetuning with a lora vs natively?
* By what percent is one faster than the other?
* Accuracy difference?
* Time and resources required for training?
* Filesize difference?
Thanks! | null | https://www.reddit.com/r/LocalLLaMA/comments/127dfap/lora_vs_native_finetuning/ | Pan000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 127dfap | false | null | t3_127dfap | /r/LocalLLaMA/comments/127dfap/lora_vs_native_finetuning/ | false | false | self | 8 | null |
Problem with running alpaca.cpp on windows 10 | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/127duh3/problem_with_running_alpacacpp_on_windows_10/ | Sumoleon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 127duh3 | false | null | t3_127duh3 | /r/LocalLLaMA/comments/127duh3/problem_with_running_alpacacpp_on_windows_10/ | false | false | default | 1 | null |
Training/finetuning and other basic questions | 11 | This is what I've gathered so far and where my understanding is lacking. Normally I ask GPT4 these things, but it's a bit new :)
- The training data for finetuning is essentially just text, it doesn't come in an input and output form, but we can add tags like 'INPUT' and 'OUTPUT' and then for inference ... | null | https://www.reddit.com/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/ | Pan000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 127eb6v | false | null | t3_127eb6v | /r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/ | false | false | self | 11 | null |
Only giving me gibberish | 1 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 127gqzh | false | null | t3_127gqzh | /r/LocalLLaMA/comments/127gqzh/only_giving_me_gibberish/ | false | false | default | 1 | null | ||
Extremely slow performance with 8bit 30b, and complete nonsense with 4bit 65b | 1 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 127ldgw | false | null | t3_127ldgw | /r/LocalLLaMA/comments/127ldgw/extremely_slow_performance_with_8bit_30b_and/ | false | false | default | 1 | null | ||
Would it be possible to finetune a llama-7b model and use the adapter_model.bin for a bigger model to save computing power? | 2 | Would it be possible to finetune a llama-7b model and use the adapter_model.bin for a bigger model to save computing power?
Would be awesome if that was possible. I have no deep understanding of how a LoRA model is applied to the llama model, so I'm just wondering if one could "upscale" the LoRA model with not much... | null | https://www.reddit.com/r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/ | Ok-Scarcity-7875 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 127or7n | false | null | t3_127or7n | /r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/ | false | false | self | 2 | null |
Can hobbyists engage in a meaningful way? | 27 | I am just a dude (albeit with an EE degree) who has been blown away by my interactions with ChatGPT 4. I understand that the complexity of that model cannot be remotely approached with home installations. However, I'm very curious to get my feet wet with a local LLaMA installation. Is there much sense in purchasing ... | null | https://www.reddit.com/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/ | Old_Court9173 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 127pg1y | false | null | t3_127pg1y | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/ | false | false | self | 27 | null |
Long term memory extension | 23 | So I don't think this has been shared here yet, but someone made an extension to the oobabooga webui which stores text in a database and recalls it when relevant so you can have long term memory without having to worry about increasing the context window
[GitHub - wawawario2/text-generation-webui: A gradio web UI fo... | null | https://www.reddit.com/r/LocalLLaMA/comments/127pmko/long_term_memory_extension/ | NDV-Twist-5283 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 127pmko | false | null | t3_127pmko | /r/LocalLLaMA/comments/127pmko/long_term_memory_extension/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'ntpagC7uzBP50psQb3W4RfFZp6AQz8fcisBsGVh4K3Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3iyO2wkL_8ctBCGHvF7w95Ut5qni0E1uA_L7r6SghkA.jpg?width=108&crop=smart&auto=webp&v=enabled&s=60bbad54a26b2336cfa2f439ced296bd317657ff', 'width': 108},... |
Can't generate random stuff ? | 2 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/127r2ze/cant_generate_random_stuff/ | Direct-Ad676 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 127r2ze | false | null | t3_127r2ze | /r/LocalLLaMA/comments/127r2ze/cant_generate_random_stuff/ | false | false | default | 2 | null |
Tweaking LLaMA Results using Fixed Seeds | 6 | I don't understand why this works. While fixing the seed, prompt, and generation settings yields identical output, it's possible to tweak the prompt and generation settings for variations.
**Curiously**, a requested change that is semantically understood by the model yields nearly identical results, but with the chan... | null | https://www.reddit.com/r/LocalLLaMA/comments/127xqpt/tweaking_llama_results_using_fixed_seeds/ | friedrichvonschiller | self.LocalLLaMA | 2023-03-31T21:06:06 | 0 | {} | 127xqpt | false | null | t3_127xqpt | /r/LocalLLaMA/comments/127xqpt/tweaking_llama_results_using_fixed_seeds/ | false | false | self | 6 | null |
Electric Barbarella: AI Voice Chat and shell tools | 1 | null | https://www.youtube.com/watch?v=q8Cl2fZTyOs&t=924 | sswam | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 127yuwv | false | {'oembed': {'author_name': 'Sam Watkins', 'author_url': 'https://www.youtube.com/@ssw4m', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/q8Cl2fZTyOs?start=924&feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-... | t3_127yuwv | /r/LocalLLaMA/comments/127yuwv/electric_barbarella_ai_voice_chat_and_shell_tools/ | false | false | default | 1 | null | |
What is the point of 128 group size? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1280omq/what_is_the_point_of_128_group_size/ | Ghurganov | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1280omq | false | null | t3_1280omq | /r/LocalLLaMA/comments/1280omq/what_is_the_point_of_128_group_size/ | false | false | default | 1 | null |
I made a web GUI for alpaca.cpp | 2 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1281i81/i_made_a_web_gui_for_alpacacpp/ | MediocreProgrammer99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1281i81 | false | null | t3_1281i81 | /r/LocalLLaMA/comments/1281i81/i_made_a_web_gui_for_alpacacpp/ | false | false | default | 2 | null |
Best online cloud GPU provider for 32gb vram to finetune 13B? | 15 | null | https://www.reddit.com/r/LocalLLaMA/comments/1281nk5/best_online_cloud_gpu_provider_for_32gb_vram_to/ | IonizedRay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1281nk5 | false | null | t3_1281nk5 | /r/LocalLLaMA/comments/1281nk5/best_online_cloud_gpu_provider_for_32gb_vram_to/ | false | false | self | 15 | null |
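Since the table above is a dump of a typed Hugging Face dataset, the same rows can be queried with the `datasets` library instead of being parsed out of the markdown. A minimal sketch, assuming the table is published on the Hub; the repo ID below is a hypothetical placeholder, since the dump does not name the repository:

```python
# A minimal sketch, not the dataset's official loader: it assumes the table
# above is published on the Hugging Face Hub. The repo ID is a hypothetical
# placeholder, since the dump does not name the repository.
from datasets import load_dataset

ds = load_dataset("someuser/localllama-posts", split="train")  # hypothetical repo ID

# Keep self-posts scoring at least 10, using the typed columns from the header row.
popular = ds.filter(lambda row: row["score"] >= 10 and row["domain"] == "self.LocalLLaMA")

# Show a few of the matching rows.
for row in popular.select(range(min(5, len(popular)))):
    print(row["score"], row["title"], row["permalink"])
```

Because `score` and `domain` are typed columns (int64 and string per the header row), the filter runs without any text parsing.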
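The voice-to-text row above (the vosk script) is cut off by the dump, so here is a short sketch of how its pieces (vosk plus pyaudio) typically fit together. The model path, buffer sizes, and the omission of the post's pynput keyboard toggle are all assumptions, not the poster's actual script:

```python
# A minimal sketch of the truncated vosk voice-to-text idea, not the poster's
# full script (their pynput key-toggle logic is omitted). Assumes a vosk model
# has been unpacked into a local "model" directory and a microphone is present.
import json

import pyaudio
import vosk

vosk.SetLogLevel(-1)  # disable vosk logging, as in the post

model = vosk.Model("model")                # path to the unpacked model (assumption)
rec = vosk.KaldiRecognizer(model, 16000)   # recognizer at a 16 kHz sample rate

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=16000,
                 input=True, frames_per_buffer=8000)

while True:
    data = stream.read(4000, exception_on_overflow=False)
    if rec.AcceptWaveform(data):           # True when a full utterance is decoded
        print(json.loads(rec.Result()).get("text", ""))
```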