organization       string
repo_name          string
base_commit        string
iss_html_url       string
iss_label          string
title              string
body               string
code               null
pr_html_url        string
commit_html_url    string
file_loc           string
own_code_loc       list
ass_file_loc       list
other_rep_loc      list
analysis           dict
loctype            dict
iss_has_pr         int64
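Read as name/dtype pairs, the schema above describes one issue-localization record. Below is a minimal sketch of such a record in plain Python, with field values copied from the transformers sample row further down; the abbreviated `file_loc` string and the parsing step are illustrative assumptions, not part of any dataset tooling.

```python
import json

# One record under the schema above. Values come from the sample rows;
# "body" and "file_loc" are abbreviated to their recoverable parts.
record = {
    "organization": "huggingface",
    "repo_name": "transformers",
    "base_commit": "34f28b2a1342fd72c2e4d4e5613855bfb9f35d34",
    "iss_html_url": "https://github.com/huggingface/transformers/issues/1225",
    "iss_label": "wontfix",                    # may be absent in some rows
    "title": "Bert output last hidden state",
    "body": "...",                             # full issue text, truncated here
    "code": None,
    "pr_html_url": None,
    "commit_html_url": None,
    # file_loc is stored as a JSON *string*, not a nested object:
    "file_loc": '{"base_commit": "34f28b2a1342fd72c2e4d4e5613855bfb9f35d34", '
                '"files": [{"path": "src/transformers/models/bert/modeling_bert.py", '
                '"status": "modified"}]}',
    "own_code_loc": [],
    "ass_file_loc": [],
    "other_rep_loc": [],
    "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment",
                 "loc_scope": "0", "info_type": "Code"},
    "loctype": {"code": ["src/transformers/models/bert/modeling_bert.py"],
                "doc": [], "test": [], "config": [], "asset": []},
    "iss_has_pr": None,
}

# Because file_loc is serialized as a string, it must be parsed per record:
file_loc = json.loads(record["file_loc"])
paths = [f["path"] for f in file_loc["files"]]
print(paths)  # ['src/transformers/models/bert/modeling_bert.py']
```

Note the asymmetry: `analysis` and `loctype` are stored as real dicts, while `file_loc` is a JSON string that consumers have to `json.loads` themselves.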
THUDM
ChatGLM-6B
8633db1503fc3b0edc1d035f64aa35dce5d97969
https://github.com/THUDM/ChatGLM-6B/issues/622
[BUG/Help] During ptuning with PRE_SEQ_LEN=512, answers are still cut off at around a hundred characters after training; how should this be adjusted?
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior The training parameters are as follows: PRE_SEQ_LEN=512 LR=2e-2 CUDA_VISIBLE_DEVICES=0 python3 main.py \ --do_train \ --train_file ./data/gwddc.json \ --validation_file ./data/gwddc_test.json \ --prompt_column instructio...
null
null
null
{"base_commit":"8633db1503fc3b0edc1d035f64aa35dce5d97969","files":[{"path":"ptuning\/README.md","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":n...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "4", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc" }
{ "code": [ "ptuning/arguments.py" ], "doc": [ "ptuning/README.md" ], "test": [], "config": [], "asset": [] }
null
huggingface
transformers
34f28b2a1342fd72c2e4d4e5613855bfb9f35d34
https://github.com/huggingface/transformers/issues/1225
wontfix
Bert output last hidden state
## ❓ Questions & Help Hi, Suppose we have an utterance of length 24 (considering special tokens) and we right-pad it with 0 to a max length of 64. If we use the Bert pretrained model to get the last hidden states, the output would be of size [1, 64, 768]. Can we use just the first 24 as the hidden states of the utter...
null
null
null
{"base_commit":"34f28b2a1342fd72c2e4d4e5613855bfb9f35d34","files":[{"path":"src\/transformers\/models\/bert\/modeling_bert.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,...
[]
[]
[]
{ "iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "src/transformers/models/bert/modeling_bert.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
huggingface
transformers
0e82f0cbc28b41b3d87a5e4069dc0e20bacc2494
https://github.com/huggingface/transformers/issues/12081
GPT2 Flax "TypeError: JAX only supports number and bool dtypes, got dtype object in array"
On GPU ``` >>> from transformers import AutoTokenizer, FlaxAutoModelForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("gpt2-medium") >>> model = FlaxAutoModelForCausalLM.from_pretrained("gpt2-medium") >>> input_context = "The dog" >>> # encode input context >>> input_ids = tokenizer(input_context, re...
null
null
null
{"base_commit":"0e82f0cbc28b41b3d87a5e4069dc0e20bacc2494","files":[{"path":"src\/transformers\/models\/gpt2\/modeling_flax_gpt2.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":...
[ { "Loc": { "(None, None, None)": { "mod": [ 6, 7 ] } }, "path": null } ]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code" }
{ "code": [ null, "src/transformers/models/gpt2/tokenization_gpt2_fast.py", "src/transformers/models/gpt2/modeling_flax_gpt2.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
huggingface
transformers
322037e842e5e89080918c824998c17722df6f19
https://github.com/huggingface/transformers/issues/10079
Unclear error "NotImplementedError" while saving tokenizer. How to fix it?
Here is my tokenizer code and how I save it to a json file "/content/bert-datas7.json" ```` from tokenizers import normalizers from tokenizers.normalizers import Lowercase, NFD, StripAccents bert_tokenizer.pre_tokenizer = Whitespace() from tokenizers.processors import TemplateProcessing bert_tokenizer.pos...
null
null
null
{"base_commit":"322037e842e5e89080918c824998c17722df6f19","files":[{"path":"src\/transformers\/tokenization_utils_fast.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(No...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "src/transformers/tokenization_utils_fast.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
huggingface
transformers
1a688709b34b10bd372e3e0860c8d39d170ebf53
https://github.com/huggingface/transformers/issues/17201
a memory leak in qqp prediction using bart
### System Info ```shell - `transformers` version: 4.19.0.dev0 - Platform: Linux-5.11.0-43-generic-x86_64-with-glibc2.17 - Python version: 3.8.10 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.10.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed...
null
null
null
{"base_commit":"1a688709b34b10bd372e3e0860c8d39d170ebf53","files":[{"path":"src\/transformers\/trainer.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kines...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2\nOr\n5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "src/transformers/trainer.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
huggingface
transformers
cef2e40e0f8eaad13b8d32817a48fdddc32eb2a5
https://github.com/huggingface/transformers/issues/28435
Skip some weights for load_in_8bit and keep them as fp16/32?
### Feature request Hello, I am looking for a way to load a checkpoint where I only load some of the weights in 8 bit and keep others in 16/32 bit. ### Motivation My motivation is for vision-language models like Llava or BLIP2 where I want to load the LLM part in 8 bit but the image encoder should stay in 1...
null
null
null
{"base_commit":"cef2e40e0f8eaad13b8d32817a48fdddc32eb2a5","files":[{"path":"src\/transformers\/modeling_utils.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'star...
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "src/transformers/modeling_utils.py", "src/transformers/utils/quantization_config.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
huggingface
transformers
45d21502f0b67eb8a5ad244d469dcc0dfb7517a7
https://github.com/huggingface/transformers/issues/653
Different Results from version 0.4.0 to version 0.5.0
Hi, I found the results after training are different from version 0.4.0 to version 0.5.0. I have fixed all initialization to reproduce the results. And I also tested versions 0.2.0 and 0.3.0: the results are the same as version 0.4.0, but from version 0.5.0 onward the results are different. I am wondering, have you trained ...
null
null
null
{"base_commit":"45d21502f0b67eb8a5ad244d469dcc0dfb7517a7","files":[{"path":"pytorch_pretrained_bert\/modeling.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'star...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "pytorch_pretrained_bert/modeling.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
huggingface
transformers
1c8c2d9ab34b8c8d326db9e0608f8e54cfccb885
https://github.com/huggingface/transformers/issues/10202
Fast Tokenizers instantiated via vocab/merge files do not respect skip_special_tokens=True
## Environment info - `transformers` version: 4.3.2 - Platform: macOS-11.2.1-x86_64-i386-64bit - Python version: 3.9.1 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## Information See ...
null
null
null
{"base_commit":"1c8c2d9ab34b8c8d326db9e0608f8e54cfccb885","files":[{"path":"src\/transformers\/tokenization_utils_base.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(No...
[ { "Loc": { "(None, None, None)": { "mod": [ 33 ] } }, "path": null } ]
[]
[]
{ "iss_type": "2", "iss_reason": "5", "loc_way": "Cment指出用户代码问题,给出需要使用的API\n自己代码的问题 另一个issue中指出cmit\nI think this is happening because when you load it from the vocab and merge files, it doesn't know <|endoftext|> is a special token. For the skip_special_tokens to work, I believe it would be necessary to add them...
{ "code": [ "src/transformers/tokenization_utils_base.py", null ], "doc": [], "test": [], "config": [], "asset": [] }
null
huggingface
transformers
5bcbdff15922b1d0eeb035879630ca61c292122a
https://github.com/huggingface/transformers/issues/32661
bug
RoBERTa config defaults are inconsistent with fairseq implementation
### System Info python 3.12, transformers 4.14, latest mac os ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give detail...
null
null
null
{"base_commit":"5bcbdff15922b1d0eeb035879630ca61c292122a","files":[{"path":"src\/transformers\/models\/roberta\/configuration_roberta.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', ...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "src/transformers/models/roberta/configuration_roberta.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
geekan
MetaGPT
be56351e000a0f08562820fb04f6fdbe34d9e655
https://github.com/geekan/MetaGPT/issues/205
Rate Limited error
openai.error.RateLimitError: Rate limit reached for 10KTPM-200RPM in organization org-fK5bb25UFhVbebfBtfCejGc4 on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues. Maybe a way to resume so all the runtime isn't just lost...
null
null
null
{"base_commit":"be56351e000a0f08562820fb04f6fdbe34d9e655","files":[{"path":"metagpt\/provider\/openai_api.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_ki...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "metagpt/provider/openai_api.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
geekan
MetaGPT
8d98ce34e54eb6250f1f2cf60f5d4dd66d462a5d
https://github.com/geekan/MetaGPT/issues/1115
The following error appears on every run
![image](https://github.com/geekan/MetaGPT/assets/115678682/1fb58e0b-47a7-4e1f-a7b7-924ea9adedb0) 2024-03-27 11:15:59.019 | ERROR | metagpt.utils.common:wrapper:631 - Exception occurs, start to serialize the project, exp: Traceback (most recent call last): File "D:\andconda\envs\metagpt\lib\site-packages\tena...
null
null
null
{"base_commit":"8d98ce34e54eb6250f1f2cf60f5d4dd66d462a5d","files":[{"path":"metagpt\/strategy\/planner.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kines...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "metagpt/strategy/planner.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
geekan
MetaGPT
bdf9d224b5a05228897553a29214adc074fbc465
https://github.com/geekan/MetaGPT/issues/754
SubscriptionRunner
import asyncio from metagpt.subscription import SubscriptionRunner from metagpt.roles import Searcher from metagpt.schema import Message async def trigger(): while True: yield Message("the latest news about OpenAI") await asyncio.sleep(1) async def callback(msg: Message): print(ms...
null
null
null
{"base_commit":"bdf9d224b5a05228897553a29214adc074fbc465","files":[{"path":"metagpt\/environment.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 1...
[ { "Loc": { "(None, None, None)": { "mod": [ 21 ] } }, "path": null } ]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code" }
{ "code": [ null, "metagpt/environment.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
geekan
MetaGPT
f88fa9e2df09c28f867bda54ec24fa25b50be830
https://github.com/geekan/MetaGPT/issues/178
Specify Directory of pdf documents as Knowledge Base
Hi, how can we specify any folder of pdf documents as a knowledge base and create a new Role of Document Controller to extract specific information from the documents in the KB? Any help would be highly appreciated. Thanks!
null
null
null
{"base_commit":"f88fa9e2df09c28f867bda54ec24fa25b50be830","files":[{"path":"metagpt\/document_store","status":null,"Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":nu...
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "examples/search_kb.py" ], "doc": [ "metagpt/document_store", "tests/metagpt/document_store" ], "test": [], "config": [], "asset": [] }
null
fastapi
fastapi
c6aa28bea2f751a91078bd8d845133ff83f352bf
https://github.com/fastapi/fastapi/issues/5425
question answered question-migrate
Error while opening swagger docs while uploading file in APIRouter
### First Check - [X] I added a very descriptive title to this issue. - [X] I used the GitHub search to find a similar issue and didn't find it. - [X] I searched the FastAPI documentation, with the integrated search. - [X] I already searched in Google "How to X in FastAPI" and didn't find any information. - [X] ...
null
null
null
{"base_commit":"c6aa28bea2f751a91078bd8d845133ff83f352bf","files":[{"path":"fastapi\/routing.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "fastapi/routing.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
fastapi
fastapi
1760da0efa55585c19835d81afa8ca386036c325
https://github.com/fastapi/fastapi/issues/3882
question question-migrate
Doing work after the HTTP response has been sent
### First Check - [X] I added a very descriptive title to this issue. - [X] I used the GitHub search to find a similar issue and didn't find it. - [X] I searched the FastAPI documentation, with the integrated search. - [X] I already searched in Google "How to X in FastAPI" and didn't find any information. - [X] I alre...
null
null
null
{"base_commit":"1760da0efa55585c19835d81afa8ca386036c325","files":[{"path":"fastapi\/background.py","status":null,"Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":nul...
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "fastapi/background.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
fastapi
fastapi
78b07cb809e97f400e196ff3d89862b9d5bd5dc2
https://github.com/fastapi/fastapi/issues/4587
question question-migrate
Use the raw response in Response classes
### First Check - [X] I added a very descriptive title to this issue. - [X] I used the GitHub search to find a similar issue and didn't find it. - [X] I searched the FastAPI documentation, with the integrated search. - [X] I already searched in Google "How to X in FastAPI" and didn't find any information. - [X] ...
null
null
null
{"base_commit":"78b07cb809e97f400e196ff3d89862b9d5bd5dc2","files":[{"path":"fastapi\/routing.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":...
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "fastapi/routing.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
oobabooga
text-generation-webui
8962bb173e9bdc36eb9cf28fe9e1952b2976e781
https://github.com/oobabooga/text-generation-webui/issues/5337
bug
Generation slows at max context, even when truncated
### Describe the bug ### Issue Summary When generating, if the context is near the maximum set via n_ctx (and the truncate value in Parameters is set to match it), generation will be quite slow. This does not occur if the context is more than approximately 300-500 below the set value. It still occurs even if the n_...
null
null
null
{"base_commit":"8962bb173e9bdc36eb9cf28fe9e1952b2976e781","files":[{"path":"modules\/ui_model_menu.py","status":null,"Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "modules/ui_model_menu.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
oobabooga
text-generation-webui
07510a24149cbd6fd33df0c4a440d60b9783a18e
https://github.com/oobabooga/text-generation-webui/issues/2171
enhancement stale
support for fastest-inference-4bit branch of GPTQ-for-LLaMa
**Description** There is a new branch of GPTQ-for-LLaMa - fastest-inference-4bit - that combines triton and cuda, and people say it's much faster. It would be nice if it was supported here. I tried to compile it myself, but it doesn't work with this webui because there is no llama_inference_offload.py in the new branch. ...
null
null
null
{"base_commit":"07510a24149cbd6fd33df0c4a440d60b9783a18e","files":[{"path":"modules\/GPTQ_loader.py","status":null,"Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":nu...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "modules/GPTQ_loader.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
oobabooga
text-generation-webui
7ddf6147accfb5b95e7dbbd7f1822cf976054a2a
https://github.com/oobabooga/text-generation-webui/issues/446
bug
Factual answer: ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇
### Describe the bug I get factual answers in ?? like this Factual answer: ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ### Is there an existing issue for this? - [X] I have searched the existing issues ### Reproduction Common sense questions and answers Question: Hi Factual answer: ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ### ...
null
null
null
{"base_commit":"7ddf6147accfb5b95e7dbbd7f1822cf976054a2a","files":[{"path":"download-model.py","status":null,"Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":null,"('...
[]
[]
[]
{ "iss_type": "2\n结果奇怪", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "download-model.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
oobabooga
text-generation-webui
3609ea69e4c4461a4f998bd12cc559d5a016f328
https://github.com/oobabooga/text-generation-webui/issues/5761
bug
api broke: AttributeError: 'NoneType' object has no attribute 'replace'
### Describe the bug api calls result in AttributeError: 'NoneType' object has no attribute 'replace' ### Is there an existing issue for this? - [X] I have searched the existing issues ### Reproduction install no requirements and llama-cpp-python by source then try to run curl curl http://192.168.3.17:5000/v1/...
null
null
null
{"base_commit":"3609ea69e4c4461a4f998bd12cc559d5a016f328","files":[{"path":"modules\/chat.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":nul...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "modules/chat.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
hacksider
Deep-Live-Cam
69d863b44ab5c7dad6eea04b7e3563f491c714a4
https://github.com/hacksider/Deep-Live-Cam/issues/376
Unable to select camera device through UI
It would be nice to have a way to select which camera to use. I am on Ubuntu 22.04 with a Linux laptop. Since I use an external camera and keep my laptop closed, the program is defaulting to the on-board camera. I was unable to find a quick/easy way to change the default camera in Ubuntu, so it would be nice if the ...
null
null
null
{"base_commit":"69d863b44ab5c7dad6eea04b7e3563f491c714a4","files":[{"path":"modules\/ui.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":null,...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "modules/ui.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
hacksider
Deep-Live-Cam
080d6f5110d2e185e8ce4e10451ac96313079be2
https://github.com/hacksider/Deep-Live-Cam/issues/315
How to select the correct camera?
How to select the correct camera ? Is there any method to improve the output resolution of the camera?
null
null
null
{"base_commit":"080d6f5110d2e185e8ce4e10451ac96313079be2","files":[{"path":"modules\/ui.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":null,...
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "modules/ui.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
hacksider
Deep-Live-Cam
6b0cc749574d7307b2f7deedfa2a0dbb363329da
https://github.com/hacksider/Deep-Live-Cam/issues/243
[experimental] doesn't show the camera I want..
I'm using the `experimental` branch so I could choose the camera I wanted (OBS Virtual Camera) which is (2) but it only shows "Camera 0", so I made a test script and I was able to pull my OBS Virtual Camera using 'matplotlib', ``` (venv) (base) PS E:\deep-live-cam> python list.py [ WARN:0@10.769] global cap_msmf.c...
null
null
null
{"base_commit":"6b0cc749574d7307b2f7deedfa2a0dbb363329da","files":[{"path":"modules\/ui.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":null,...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "modules/ui.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
hacksider
Deep-Live-Cam
513e41395687921d589fc10bbaf2f72ed579c84a
https://github.com/hacksider/Deep-Live-Cam/issues/915
Subject: Missing ui.py file in modules directory - preventing project execution
Hi, I'm trying to run the Deep-Live-Cam project, but I'm encountering a problem. The ui.py file is missing from the modules directory. I've tried the following: * Cloning the repository using git clone: `git clone https://github.com/hacksider/Deep-Live-Cam.git` * Cloning the repository using GitHub Desktop. * D...
null
null
null
{"base_commit":"513e41395687921d589fc10bbaf2f72ed579c84a","files":[{"path":"modules\/ui.py","status":null,"Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":null,"('Pre...
[]
[]
[]
{ "iss_type": "3", "iss_reason": "4", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "modules/ui.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
hacksider
Deep-Live-Cam
eab5ba7027db1a4d0ec97883aa7a61b55fb81dfa
https://github.com/hacksider/Deep-Live-Cam/issues/345
Program crashes when processing with DirectML
I am using an AMD RX 6600 XT GPU with the latest drivers and attempting to run the program with DirectML. The program's UI turns white and then crashes. It works fine with CPU execution but fails with DirectML. I already tried to reinstall onnxruntime-directml with no effect. Terminal: (myenv) E:\Edesktop\deep-liv...
null
null
null
{"base_commit":"eab5ba7027db1a4d0ec97883aa7a61b55fb81dfa","files":[{"path":"modules\/ui.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":null,...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "modules/ui.py", "modules/core.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
Textualize
rich
7e1928efee53da1ac7d156912df04aef83eefea5
https://github.com/Textualize/rich/issues/1247
Needs triage
[REQUEST] Extra caching for `get_character_cell_size`
**How would you improve Rich?** Add a small `lru_cache` to https://github.com/willmcgugan/rich/blob/master/rich/cells.py#L28 , similar to cache one layer down for https://github.com/willmcgugan/rich/blob/master/rich/cells.py#L46 Size `4096` was plenty for what I describe below. **What problem does it solved fo...
null
null
null
{"base_commit":"7e1928efee53da1ac7d156912df04aef83eefea5","files":[{"path":"rich\/cells.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":null,...
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "rich/cells.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
Textualize
rich
5c9161d0c48254fb579827249a9ee7d88f4589b7
https://github.com/Textualize/rich/issues/1489
Needs triage
[REQUEST] current item of a progress
When creating progress bars for logical items (that are then supported with additional progress bars), I would consider it helpful if it was possible to add a name/renderable for the current item, and to push those in updates. I'm not yet sure how this is best expressed/implemented.
null
null
null
{"base_commit":"5c9161d0c48254fb579827249a9ee7d88f4589b7","files":[{"path":"rich\/progress.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":nu...
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "rich/progress.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
Textualize
rich
0aa85606ad9a7ca6b28a5ae376e433b8e59f6e80
https://github.com/Textualize/rich/issues/2457
bug
[BUG] Console(no_color=True) does not work on Windows 10
You may find a solution to your problem in the [docs](https://rich.readthedocs.io/en/latest/introduction.html) or [issues](https://github.com/willmcgugan/rich/issues). **Describe the bug** The "no_color=True" Console parameter does not seem to do anything on Windows 10. I tested on both Cmder and native cmd.exe t...
null
null
null
{"base_commit":"0aa85606ad9a7ca6b28a5ae376e433b8e59f6e80","files":[{"path":"rich\/console.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":nul...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "rich/console.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ytdl-org
youtube-dl
427cc215310804127b55744fcc3664ede38a4a0d
https://github.com/ytdl-org/youtube-dl/issues/21363
question
How does youtube-dl detect advertisements?
<!-- ###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE ###################################################################### --> ## Checklist <!-- Carefully read and work through this check lis...
null
null
null
{"base_commit":"427cc215310804127b55744fcc3664ede38a4a0d","files":[{"path":"youtube_dl\/downloader\/hls.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kine...
[]
[]
[]
{ "iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "youtube_dl/downloader/hls.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ytdl-org
youtube-dl
8b7340a45eb0e3aeaa996896ff8690b6c3a32af6
https://github.com/ytdl-org/youtube-dl/issues/15955
Use youtube-dl with a cookies file in code, not from the command line
## Please follow the guide below - You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly - Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`) - Use the *Preview* tab to see what your issue will actually look like ...
null
null
null
{"base_commit":"8b7340a45eb0e3aeaa996896ff8690b6c3a32af6","files":[{"path":"youtube_dl\/YoutubeDL.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', ...
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "youtube_dl/YoutubeDL.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ytdl-org
youtube-dl
267d81962a0709f15f82f96b7aadbb5473a06992
https://github.com/ytdl-org/youtube-dl/issues/16870
[bilibili] How can I download the video on page 2?
### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.06.25*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected. - [x] I've **verified**...
null
null
null
{"base_commit":"267d81962a0709f15f82f96b7aadbb5473a06992","files":[{"path":"youtube_dl\/extractor\/bilibili.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_...
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "youtube_dl/extractor/bilibili.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ytdl-org
youtube-dl
eca1f0d115e6a2712ff0d5f6b25e3ded5e52db71
https://github.com/ytdl-org/youtube-dl/issues/16883
[Feature request] Network retry, with configurability
I just ran some large youtube-dl scripts, and noticed a few videos were missing finally. This was probably due to intermittent network downtimes, and apparently youtube-dl doesn't do any network retry at all (I may be wrong). Thus, I suggest adding an option named for example `--network-retry`, related to `--sock...
null
null
null
{"base_commit":"eca1f0d115e6a2712ff0d5f6b25e3ded5e52db71","files":[{"path":"youtube_dl\/options.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "youtube_dl/options.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ytdl-org
youtube-dl
5014bd67c22b421207b2650d4dc874b95b36dda1
https://github.com/ytdl-org/youtube-dl/issues/30539
question
limited download speed
<!-- ###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE ###################################################################### --> ## Checklist <!-- Carefully read and work through this check lis...
null
null
null
{"base_commit":"5014bd67c22b421207b2650d4dc874b95b36dda1","files":[{"path":"youtube_dl\/extractor\/youtube.py","status":null,"Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "youtube_dl/extractor/youtube.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ytdl-org
youtube-dl
e90d175436e61e207e0b0cae7f699494dcf15922
https://github.com/ytdl-org/youtube-dl/issues/9104
Chinese title was missing!
``` root@kangland:/var/www/ydy# youtube-dl -v w0dMz8RBG7g [debug] System config: [] [debug] User config: [] [debug] Command-line args: [u'-v', u'w0dMz8RBG7g'] [debug] Encodings: locale ANSI_X3.4-1968, fs ANSI_X3.4-1968, out ANSI_X3.4-1968, pref ANSI_X3.4-1968 [debug] youtube-dl version 2016.04.01 [debug] Python version...
null
null
null
{"base_commit":"e90d175436e61e207e0b0cae7f699494dcf15922","files":[{"path":"youtube_dl\/options.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "youtube_dl/options.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
OpenInterpreter
open-interpreter
dee41b6932a0d9b5569b1abf9144b7ffd8c3c7ad
https://github.com/OpenInterpreter/open-interpreter/issues/499
Bug
raise Exception("`interpreter.chat()` requires a display. Set `display=True` or pass a message into `interpreter.chat(message)`.")
### Describe the bug Fresh install on ubuntu 22, I'm using interpreter in terminal. After sending a prompt, at some point on the answer the program crashes ``` > Traceback (most recent call last): File "/home/fauxprophet/Documents/Ops/openai/bin/interpreter", line 8, in <module> sys.exit(cli()) File...
null
null
null
{"base_commit":"dee41b6932a0d9b5569b1abf9144b7ffd8c3c7ad","files":[{"path":"interpreter\/core\/core.py","status":null,"Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)"...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "interpreter/core/core.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
OpenInterpreter
open-interpreter
1bb7b19eeb4264f0d7b6410409af6f1cdbf31f3d
https://github.com/OpenInterpreter/open-interpreter/issues/15
Error: cannot import name 'cli' from 'interpreter'
```console ╰─$ uname -a Linux lab 6.2.0-26-generic #26~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Jul 13 16:27:29 UTC 2 x86_64 x86_64 x86_64 GNU/Linux ╰─$ pip --version 1 ↵ pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10) ╰─$ interpreter ...
null
null
null
{"base_commit":"1bb7b19eeb4264f0d7b6410409af6f1cdbf31f3d","files":[{"path":"interpreter\/interpreter.py","status":"modified","Loc":{"(None, None, None)":{"add":null,"mod":[1]},"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(No...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "interpreter/interpreter.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
abi
screenshot-to-code
4e30b207c1ee9ddad05a37c31a11ac5a182490b7
https://github.com/abi/screenshot-to-code/issues/270
Error configuring ANTHROPIC API KEY in .env file
I added "ANTHROPIC_API_KEY=s****" to the .env file "No Anthropic API key found. Please add the environment variable ANTHROPIC_API_KEY to backend/.env"
null
null
null
{"base_commit":"4e30b207c1ee9ddad05a37c31a11ac5a182490b7","files":[{"path":"backend\/config.py","status":"modified","Loc":{"(None, None, None)":{"add":null,"mod":[6]},"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'star...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "", "info_type": "Code" }
{ "code": [ "backend/config.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
abi
screenshot-to-code
1f08d71d4dbc614b6b2eaaddb6f8d5858ca6aa5b
https://github.com/abi/screenshot-to-code/issues/132
Why Connection closed 1006
![image](https://github.com/abi/screenshot-to-code/assets/19514719/e8d6aa4c-e133-475d-bce6-7309082c0cc2) ![image](https://github.com/abi/screenshot-to-code/assets/19514719/9e00d1ef-67e2-4e13-9276-4ea4119c12cc) ![image](https://github.com/abi/screenshot-to-code/assets/19514719/a15e37ce-d0aa-4dfe-896d-3eb0a96a7e63)...
null
null
null
{"base_commit":"1f08d71d4dbc614b6b2eaaddb6f8d5858ca6aa5b","files":[{"path":"backend\/main.py","status":null,"Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":null,"('P...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "backend/main.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
pytorch
pytorch
4622b3395276b37e10141fab43ffea33941ca0c2
https://github.com/pytorch/pytorch/issues/2384
How the grad is transferred between layer
consider a simple example here: ```python import torch from torch.autograd import Variable input = Variable(torch.randn(20, 3, 28, 28), requires_grad=True) m = torch.nn.Conv2d(3, 16, 5) output = m(input) loss = torch.sum(output)# define loss to perform backprop m.zero_grad() loss.backward() print(type(i...
null
null
null
{"base_commit":"4622b3395276b37e10141fab43ffea33941ca0c2","files":[{"path":"torch\/autograd\/variable.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesi...
[]
[]
[]
{ "iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "torch/autograd/variable.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
pytorch
pytorch
2abcafcfd8beb4f6a22e08532d58f9f09c490f0f
https://github.com/pytorch/pytorch/issues/96983
module: binaries triaged module: arm
PyTorch 2.0 aarch64 wheels are missing the mkldnn+acl backend support
### 🐛 Describe the bug PyTorch 2.0 aarch64 wheels are missing the mkldnn+acl backend support, where as PyTorch 1.13.0 had support. Solution: the wheels need to be built with the `--enable-mkldnn` option while building them from the pytorch/builder repo. example command for pytorch wheel builder script: `./...
null
null
null
{"base_commit":"2abcafcfd8beb4f6a22e08532d58f9f09c490f0f","files":[{"path":".ci\/aarch64_linux\/build_aarch64_wheel.py","status":"modified","Loc":{"(None, None, None)":{"add":null,"mod":[8]},"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', ...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ ".ci/aarch64_linux/build_aarch64_wheel.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
xtekky
gpt4free
e8f6013d0349229fd8f7d298952cfe56fc4b8761
https://github.com/xtekky/gpt4free/issues/2070
bug stale
Liaobots and You don't work
Liaobots and You do not work, they give the following errors: ``` Liaobots: ResponseStatusError: Response 500: Error ``` ``` You: ResponseStatusError: Response 401: {"status_code":401,"request_id":"request-id-live-183191e7-adc1-4838-8e29-6e0c5c3ca048","error_type":"endpoint_not_authorized_for_sdk","error_mess...
null
null
null
{"base_commit":"e8f6013d0349229fd8f7d298952cfe56fc4b8761","files":[{"path":"g4f\/Provider\/Liaobots.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis'...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "g4f/Provider/Liaobots.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
xtekky
gpt4free
fa2d608822540c9b73350bfa036e8822ade4e23f
https://github.com/xtekky/gpt4free/issues/2305
stale
ValueError: Unknown model: dall-e-3
``` C:\Users\MAX\Desktop>pip install -U g4f[all] Defaulting to user installation because normal site-packages is not writeable Requirement already satisfied: g4f[all] in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (0.3.3.2) ...
null
null
null
{"base_commit":"fa2d608822540c9b73350bfa036e8822ade4e23f","files":[{"path":"g4f\/models.py","status":null,"Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":null,"('Pre...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "g4f/models.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
xtekky
gpt4free
1ade1d959cbc9aea7cf653bbe5b6c414ba486c97
https://github.com/xtekky/gpt4free/issues/1292
bug stale
RecursionError: maximum recursion depth exceeded while calling a Python object
Ubuntu 22, g4f-0.1.9.0, pip installation method, python3.10 **Bug description** G4F API has these errors after 5-10 requests. I have to restart constantly. It is very uncomfortable. This problem did not exist in the previous version. **Errors** ``` RecursionError: maximum recursion depth exceeded in comparison...
null
null
null
{"base_commit":"1ade1d959cbc9aea7cf653bbe5b6c414ba486c97","files":[{"path":"g4f\/cli.py","status":null,"Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":null,"('PreTra...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "g4f/cli.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
xtekky
gpt4free
c159eebd494b1aef06340429b7b62cdfb84f783d
https://github.com/xtekky/gpt4free/issues/2556
bug
Errors when generating images in the following models:
Hi! errors when generating images in the following models: Response 404: The page could not be found sdxl, playground-v2.5, sd-3 dall-e-3: Missing "_U" cookie midjourney: Cannot connect to host image.pollinations.ai:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate ve...
null
null
null
{"base_commit":"c159eebd494b1aef06340429b7b62cdfb84f783d","files":[{"path":"projects\/windows\/main.py","status":null,"Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)"...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "projects/windows/main.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
xtekky
gpt4free
b7eee50930dbd782d7c068d1d29cd270b97bc741
https://github.com/xtekky/gpt4free/issues/1710
bug stale
AttributeError: module 'g4f' has no attribute 'client'
**Bug description** When trying to run script from Quickstart, i get this error. Traceback (most recent call last): File "C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py", line 3, in <module> engine = g4f.client.Client() AttributeError: module 'g4f' has no attribute 'client' **Environmen...
null
null
null
{"base_commit":"b7eee50930dbd782d7c068d1d29cd270b97bc741","files":[{"path":"g4f\/client\/__init__.py","status":null,"Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":n...
[ { "path": "C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py" } ]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code" }
{ "code": [ "g4f/client/__init__.py" ], "doc": [], "test": [ "C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py" ], "config": [], "asset": [] }
null
xtekky
gpt4free
2a54c36043b9d87b96c4b7699ce194f8523479b8
https://github.com/xtekky/gpt4free/issues/552
bug
Unable to fetch the response, Please try again.
![IMG_20230514_171809.jpg](https://github.com/xtekky/gpt4free/assets/29172927/6263b9db-3362-4c5b-b043-80b62213a61b)
null
null
null
{"base_commit":"2a54c36043b9d87b96c4b7699ce194f8523479b8","files":[{"path":"gpt4free\/you\/__init__.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis'...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "gpt4free/you/__init__.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
xtekky
gpt4free
c29487cdb522a2655ccff45bdfc33895ed4daf84
https://github.com/xtekky/gpt4free/issues/2078
bug
HuggingChat provider is not working - ResponseStatusError: Response 500
### Bug description When I try to use the HuggingChat provider, having added a cookies/har file, I always get the same error: `An error occurred: HuggingChat: ResponseStatusError: Response 500:` ``` Using HuggingChat provider and CohereForAI/c4ai-command-r-plus model INFO:werkzeug:192.168.80.1 - - [22/Jun/2024 ...
null
null
null
{"base_commit":"c29487cdb522a2655ccff45bdfc33895ed4daf84","files":[{"path":"g4f\/Provider\/HuggingChat.py","status":null,"Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 1...
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "g4f/Provider/HuggingChat.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
scikit-learn
scikit-learn
f7026b04f5e5909aa15848b25de2becd675871a9
https://github.com/scikit-learn/scikit-learn/issues/2475
Multinomial Naive Bayes: Scikit and Weka have different results
Hi All, I used the sklearn.naive_bayes.MultinomialNB on a toy example. Comparing the results with WEKA, I've noticed a quite different AUC. Scikit (0.579) - Weka (0.664)
null
null
null
{"base_commit":"f7026b04f5e5909aa15848b25de2becd675871a9","files":[{"path":"sklearn\/cross_validation.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesi...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "sklearn/cross_validation.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
scikit-learn
scikit-learn
0ab5c678bba02888b62b777b4c757e367b3458d5
https://github.com/scikit-learn/scikit-learn/issues/8470
How to let gbdt = GradientBoostingRegressor(), gbdt.fit(X_feature, X_label) know whether the feature of input X is categorical or numerical?
null
null
null
{"base_commit":"0ab5c678bba02888b62b777b4c757e367b3458d5","files":[{"path":"sklearn\/preprocessing\/_encoders.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'star...
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "sklearn/preprocessing/_encoders.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
meta-llama
llama
57b0eb62de0636e75af471e49e2f1862d908d9d8
https://github.com/meta-llama/llama/issues/201
Torchrun distributed running does not work
Running in a distributed manner either returns an error, or with the simplest example, produce obviously incorrect output. The following is the result of running 13B model across two nodes. Node A: `python -m torch.distributed.run --nproc_per_node 1 --nnodes=2 --node_rank=0 --master_addr="gpu3.lan" --master_port...
null
null
null
{"base_commit":"57b0eb62de0636e75af471e49e2f1862d908d9d8","files":[{"path":"example.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":null,"('P...
[]
[]
[]
{ "iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "example.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
meta-llama
llama
ea9f33d6d3ea8ed7d560d270986407fd6c2e52b7
https://github.com/meta-llama/llama/issues/670
Counting tokens for Chat models
Does anyone how to calculate prompt and completion tokens for Llama Chat models for monitoring purposes? Can we add this in responses as many times we don't have libraries to achieve this in languages like java, kotlin, etc. Similar to tiktoken by openai - https://github.com/openai/tiktoken
null
null
null
{"base_commit":"ea9f33d6d3ea8ed7d560d270986407fd6c2e52b7","files":[{"path":"llama\/tokenizer.py","status":"modified","Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)":...
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "llama/tokenizer.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
meta-llama
llama
7565eb6fee2175b2d4fe2cfb45067a61b35d7f5e
https://github.com/meta-llama/llama/issues/751
documentation
Run llama2 on specified GPU
Suppose I have 8 A6000 GPUs, I would like to run separate experiments on separate GPUs, how can I do it? For example, I want to run chat_completion.py on CUDA:0 and run text_completion.py on CUDA:1 simultaneously. Are there any ways to do it? Thank you.
null
null
null
{"base_commit":"7565eb6fee2175b2d4fe2cfb45067a61b35d7f5e","files":[{"path":"example_text_completion.py","status":null,"Loc":{"(None, None, None)":null,"(None, 'find_best_app', 32)":null,"(None, 'call_factory', 82)":null,"(None, 'locate_app', 125)":null,"(None, 'test_locate_app', 148)":null,"(None, 'start_kinesis', 14)"...
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "example_text_completion.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null