Datasets:
| organization string | repo_name string | base_commit string | iss_html_url string | iss_label string | title string | body string | code null | pr_html_url string | commit_html_url string | file_loc string | own_code_loc list | ass_file_loc list | other_rep_loc list | analysis dict | loctype dict | iss_has_pr int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
scikit-learn | scikit-learn | 559609fe98ec2145788133687e64a6e87766bc77 | https://github.com/scikit-learn/scikit-learn/issues/25525 | Bug
module:feature_extraction | Extend SequentialFeatureSelector example to demonstrate how to use negative tol | ### Describe the bug
I utilized the **SequentialFeatureSelector** for feature selection in my code, with the direction set to "backward." The tolerance value is negative and the selection process stops when the decrease in the metric, AUC in this case, is less than the specified tolerance. Generally, increasing the ... | null | https://github.com/scikit-learn/scikit-learn/pull/26205 | null | {'base_commit': '559609fe98ec2145788133687e64a6e87766bc77', 'files': [{'path': 'examples/feature_selection/plot_select_from_model_diabetes.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [145], 'mod': [123, 124, 125]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"examples/feature_selection/plot_select_from_model_diabetes.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
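The `Loc` mapping inside each `analysis` entry keys changed-line lists by a stringified `(class, function, def_start_line)` tuple. A minimal sketch of decoding those keys with the standard library — assuming every key is a valid Python tuple literal, as in the records shown here:

```python
from ast import literal_eval

def decode_loc(loc: dict) -> list:
    """Expand a Loc mapping into flat (class, function, start, op, line) tuples."""
    rows = []
    for key, ops in loc.items():
        # e.g. "(None, None, None)" or "('Response', 'content', 763)"
        cls, func, start = literal_eval(key)
        for op, lines in ops.items():  # op is 'add' or 'mod'
            for line in lines:
                rows.append((cls, func, start, op, line))
    return rows

# Loc of the scikit-learn record above
# (examples/feature_selection/plot_select_from_model_diabetes.py)
loc = {"(None, None, None)": {"add": [145], "mod": [123, 124, 125]}}
print(decode_loc(loc))  # first entry: (None, None, None, 'add', 145)
```

The tuple keys are strings because JSON cannot use tuples as keys; `literal_eval` recovers them safely without `eval`.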
pallets | flask | cb94f4c5d3d4e1797207fd03d20d06c7bc0d05b4 | https://github.com/pallets/flask/issues/2264 | cli | Handle app factory in FLASK_APP | `FLASK_APP=myproject.app:create_app('dev')`
[Gunicorn does this with `eval`](https://github.com/benoitc/gunicorn/blob/fbd151e9841e2c87a18512d71475bcff863a5171/gunicorn/util.py#L364), which I'm not super happy with. Instead, we could use `literal_eval` to allow a simple list of arguments. The line should never be so ... | null | https://github.com/pallets/flask/pull/2326 | null | {'base_commit': 'cb94f4c5d3d4e1797207fd03d20d06c7bc0d05b4', 'files': [{'path': 'flask/cli.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11, 12]}, "(None, 'find_best_app', 32)": {'mod': [58, 62, 69, 71]}, "(None, 'call_factory', 82)": {'mod': [82, 83, 84, 85, 86, 88, 89, 90, 91, 92, 93]}, "(None, 'lo... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"flask/cli.py"
],
"doc": [],
"test": [
"tests/test_cli.py"
],
"config": [],
"asset": []
} | 1 |
localstack | localstack | 737ca72b7bce6e377dd6876eacee63338fa8c30c | https://github.com/localstack/localstack/issues/894 | ERROR:localstack.services.generic_proxy: Error forwarding request: | Starting local dev environment. CTRL-C to quit.
Starting mock API Gateway (http port 4567)...
Starting mock DynamoDB (http port 4569)...
Starting mock SES (http port 4579)...
Starting mock Kinesis (http port 4568)...
Starting mock Redshift (http port 4577)...
Starting mock S3 (http port 4572)...
Starting mock Cl... | null | https://github.com/localstack/localstack/pull/1526 | null | {'base_commit': '737ca72b7bce6e377dd6876eacee63338fa8c30c', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [186]}}}, {'path': 'localstack/config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}}}, {'path': 'localstack/services/kinesis/kinesis_starter.py... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"localstack/config.py",
"localstack/services/kinesis/kinesis_starter.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
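The `loctype` dict buckets every touched path into `code` / `doc` / `test` / `config` / `asset`. A small sketch — using the localstack record above as sample data — of flattening it into per-category counts and one sorted path list:

```python
def summarize_loctype(loctype: dict) -> tuple:
    """Return ({category: file_count}, sorted flat list of all touched paths)."""
    counts = {cat: len(paths) for cat, paths in loctype.items()}
    all_paths = sorted(p for paths in loctype.values() for p in paths)
    return counts, all_paths

# loctype of the localstack record above
loctype = {
    "code": ["localstack/config.py",
             "localstack/services/kinesis/kinesis_starter.py"],
    "doc": ["README.md"],
    "test": [],
    "config": [],
    "asset": [],
}
counts, paths = summarize_loctype(loctype)
print(counts)  # → {'code': 2, 'doc': 1, 'test': 0, 'config': 0, 'asset': 0}
print(paths)
```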
huggingface | transformers | d2871b29754abd0f72cf42c299bb1c041519f7bc | https://github.com/huggingface/transformers/issues/30 | [Feature request] Add example of finetuning the pretrained models on custom corpus | null | https://github.com/huggingface/transformers/pull/25107 | null | {'base_commit': 'd2871b29754abd0f72cf42c299bb1c041519f7bc', 'files': [{'path': 'src/transformers/modeling_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [75, 108]}, "('PreTrainedModel', 'from_pretrained', 1959)": {'add': [2227]}, "(None, 'load_state_dict', 442)": {'mod': [461]}, "('PreTrainedMod... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/transformers/trainer.py",
"src/transformers/modeling_utils.py",
"src/transformers/training_args.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | ||
pandas-dev | pandas | 51a70dcb7133bc7cb8e6bea5da39a2cf58fa8319 | https://github.com/pandas-dev/pandas/issues/11080 | Indexing
Performance | PERF: checking is_monotonic_increasing/decreasing before sorting on an index | We don't keep the sortedness state in an index per-se, but it is rather cheap to check
- `is_monotonic_increasing` or `is_monotonic_decreasing` on a reg-index
- MultiIndex should check `is_lexsorted` (this might be done already)
```
In [8]: df = DataFrame(np.random.randn(1000000,2),columns=list('AB'))
In [9]: %timei... | null | https://github.com/pandas-dev/pandas/pull/11294 | null | {'base_commit': '51a70dcb7133bc7cb8e6bea5da39a2cf58fa8319', 'files': [{'path': 'asv_bench/benchmarks/frame_methods.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [932]}}}, {'path': 'doc/source/whatsnew/v0.17.1.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [54]}}}, {'path': 'pandas/... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/core/frame.py",
"asv_bench/benchmarks/frame_methods.py"
],
"doc": [
"doc/source/whatsnew/v0.17.1.txt"
],
"test": [],
"config": [],
"asset": []
} | 1 |
zylon-ai | private-gpt | fdb45741e521d606b028984dbc2f6ac57755bb88 | https://github.com/zylon-ai/private-gpt/issues/10 | Suggestions for speeding up ingestion? | I presume I must be doing something wrong, as it is taking hours to ingest a 500kbyte text on an i9-12900 with 128GB. In fact it's not even done yet. Using models are recommended.
Help?
Thanks
Some output:
llama_print_timings: load time = 674.34 ms
llama_print_timings: sample time = 0.0... | null | https://github.com/zylon-ai/private-gpt/pull/224 | null | {'base_commit': 'fdb45741e521d606b028984dbc2f6ac57755bb88', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4, 15, 17, 23, 25, 28, 58, 62, 86]}}}, {'path': 'example.env', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4], 'mod': [2]}}}, {'path': 'ingest.py', 's... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"ingest.py",
"privateGPT.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [
"example.env"
],
"asset": []
} | 1 | |
huggingface | transformers | 9fef668338b15e508bac99598dd139546fece00b | https://github.com/huggingface/transformers/issues/9 | Crash at the end of training | Hi, I tried running the Squad model this morning (on a single GPU with gradient accumulation over 3 steps) but after 3 hours of training, my job failed with the following output:
I was running the code, unmodified, from commit 3bfbc21376af691b912f3b6256bbeaf8e0046ba8
Is this an issue you know about?
```
11/08/2... | null | https://github.com/huggingface/transformers/pull/16310 | null | {'base_commit': '9fef668338b15e508bac99598dd139546fece00b', 'files': [{'path': 'tests/big_bird/test_modeling_big_bird.py', 'status': 'modified', 'Loc': {"('BigBirdModelTester', '__init__', 47)": {'mod': [73]}, "('BigBirdModelTest', 'test_fast_integration', 561)": {'mod': [584]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [],
"test": [
"tests/big_bird/test_modeling_big_bird.py"
],
"config": [],
"asset": []
} | 1 | |
psf | requests | ccabcf1fca906bfa6b65a3189c1c41061e6c1042 | https://github.com/psf/requests/issues/3698 | AttributeError: 'NoneType' object has no attribute 'read' | Hello :)
After a recent upgrade for our [coala](https://github.com/coala/coala) project to `requests` 2.12.1 we encounter an exception in our test suites which seems to be caused by `requests`.
Build: https://ci.appveyor.com/project/coala/coala-bears/build/1.0.3537/job/1wm7b4u9yhgkxkgn
Relevant part:
```
===... | null | https://github.com/psf/requests/pull/3718 | null | {'base_commit': 'ccabcf1fca906bfa6b65a3189c1c41061e6c1042', 'files': [{'path': 'requests/models.py', 'status': 'modified', 'Loc': {"('Response', 'content', 763)": {'mod': [772]}}}, {'path': 'tests/test_requests.py', 'status': 'modified', 'Loc': {"('TestRequests', None, 55)": {'add': [1096]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"requests/models.py"
],
"doc": [],
"test": [
"tests/test_requests.py"
],
"config": [],
"asset": []
} | 1 | |
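The `analysis` dict carries categorical codes (`iss_type`, `iss_reason`, `loc_way`); the meanings of the numeric codes are not defined in this excerpt, so the sketch below only tallies the raw values across records without interpreting them:

```python
from collections import Counter

def tally(records: list, field: str) -> Counter:
    """Count the values a categorical `analysis` field takes across records."""
    return Counter(r["analysis"][field] for r in records)

# analysis dicts copied from three of the records above
records = [
    {"analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr"}},
    {"analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr"}},
    {"analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit"}},
]
print(tally(records, "loc_way"))   # → Counter({'pr': 2, 'commit': 1})
print(tally(records, "iss_type"))  # → Counter({'1': 2, '4': 1})
```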
AntonOsika | gpt-engineer | fc805074be7b3b507bc1699e537f9b691c6f91b9 | https://github.com/AntonOsika/gpt-engineer/issues/674 | bug
documentation | ModuleNotFoundError: No module named 'tkinter' | **Bug description**
When running `gpt-engineer --improve` (using the recent version from PyPI), I get the following output:
```
$ gpt-engineer --improve
Traceback (most recent call last):
File "/home/.../.local/bin/gpt-engineer", line 5, in <module>
from gpt_engineer.main import app
File "/home/.../.lo... | null | https://github.com/AntonOsika/gpt-engineer/pull/675 | null | {'base_commit': 'fc805074be7b3b507bc1699e537f9b691c6f91b9', 'files': [{'path': 'docs/installation.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [45]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
"docs/installation.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
pallets | flask | 85dce2c836fe03aefc07b7f4e0aec575e170f1cd | https://github.com/pallets/flask/issues/593 | blueprints | Nestable blueprints | I'd like to be able to register "sub-blueprints" using `Blueprint.register_blueprint(*args, **kwargs)`. This would register the nested blueprints with an app when the "parent" is registered with it. All parameters are preserved, other than `url_prefix`, which is handled similarly to in `add_url_rule`. A naíve implement... | null | https://github.com/pallets/flask/pull/3923 | null | {'base_commit': '85dce2c836fe03aefc07b7f4e0aec575e170f1cd', 'files': [{'path': 'CHANGES.rst', 'status': 'modified', 'Loc': {'(None, None, 71)': {'add': [71]}}}, {'path': 'docs/blueprints.rst', 'status': 'modified', 'Loc': {'(None, None, 122)': {'add': [122]}}}, {'path': 'src/flask/app.py', 'status': 'modified', 'Loc': ... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/flask/blueprints.py",
"src/flask/app.py"
],
"doc": [
"docs/blueprints.rst",
"CHANGES.rst"
],
"test": [
"tests/test_blueprints.py"
],
"config": [],
"asset": []
} | null |
AUTOMATIC1111 | stable-diffusion-webui | f92d61497a426a19818625c3ccdaae9beeb82b31 | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/14263 | bug | [Bug]: KeyError: "do_not_save" when trying to save a prompt | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
When I try to save a prompt, it errors in the console saying
```
File "/home/ciel/stable-diffusion/stable-diffusion-webui/modules/styles.py", line 212, in save_styles
s... | null | https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14276 | null | {'base_commit': 'f92d61497a426a19818625c3ccdaae9beeb82b31', 'files': [{'path': 'modules/styles.py', 'status': 'modified', 'Loc': {"('StyleDatabase', '__init__', 95)": {'mod': [101, 102, 103, 104]}, "('StyleDatabase', None, 94)": {'mod': [158, 159, 160, 161]}, "('StyleDatabase', 'get_style_paths', 158)": {'mod': [175, 1... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"modules/styles.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
home-assistant | core | c3e9c1a7e8fdc949b8e638d79ab476507ff92f18 | https://github.com/home-assistant/core/issues/60067 | integration: environment_canada
by-code-owner | Environment Canada (EC) radar integration slowing Environment Canada servers | ### The problem
The `config_flow` change to the EC integration did not change the way the underlying radar retrieval works, but did enable radar for everyone. As a result the EC servers are getting far too many requests. We (the codeowners) have been working with EC to diagnose this issue and understand their concer... | null | https://github.com/home-assistant/core/pull/60087 | null | {'base_commit': 'c3e9c1a7e8fdc949b8e638d79ab476507ff92f18', 'files': [{'path': 'homeassistant/components/environment_canada/camera.py', 'status': 'modified', 'Loc': {"('ECCamera', '__init__', 49)": {'add': [57]}}}, {'path': 'homeassistant/components/environment_canada/manifest.json', 'status': 'modified', 'Loc': {'(Non... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"homeassistant/components/environment_canada/camera.py",
"homeassistant/components/environment_canada/manifest.json"
],
"doc": [],
"test": [],
"config": [
"requirements_all.txt",
"requirements_test_all.txt"
],
"asset": []
} | 1 |
abi | screenshot-to-code | 939539611f0cad12056f7be78ef6b2128b90b779 | https://github.com/abi/screenshot-to-code/issues/336 | bug
p2 | Handle Nones in chunk.choices[0].delta |
There is a successful request for the openai interface, but it seems that no code is generated.
backend-1 | ERROR: Exception in ASGI application
backend-1 | Traceback (most recent call last):
... | null | https://github.com/abi/screenshot-to-code/pull/341 | null | {'base_commit': '939539611f0cad12056f7be78ef6b2128b90b779', 'files': [{'path': 'backend/llm.py', 'status': 'modified', 'Loc': {"(None, 'stream_openai_response', 32)": {'mod': [62, 63, 64]}}}, {'path': 'frontend/package.json', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [49]}}}, {'path': 'frontend/src/Ap... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"backend/llm.py",
"frontend/src/App.tsx",
"frontend/package.json"
],
"doc": [],
"test": [],
"config": [
"frontend/yarn.lock"
],
"asset": []
} | 1 |
Significant-Gravitas | AutoGPT | bf895eb656dee9084273cd36395828bd06aa231d | https://github.com/Significant-Gravitas/AutoGPT/issues/6 | enhancement
good first issue
API costs | Make Auto-GPT aware of its running cost | Auto-GPT is expensive to run due to GPT-4's API cost.
We could experiment with making it aware of this fact, by tracking tokens as they are used and converting to a dollar cost.
This could also be displayed to the user to help them be more aware of exactly how much they are spending. | null | https://github.com/Significant-Gravitas/AutoGPT/pull/762 | null | {'base_commit': 'bf895eb656dee9084273cd36395828bd06aa231d', 'files': [{'path': 'autogpt/chat.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "(None, 'chat_with_ai', 54)": {'add': [135]}}}, {'path': 'autogpt/config/ai_config.py', 'status': 'modified', 'Loc': {"('AIConfig', None, 21)": {'add': [28]... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"autogpt/chat.py",
"autogpt/prompts/prompt.py",
"autogpt/config/ai_config.py",
"autogpt/memory/base.py",
"autogpt/setup.py",
"autogpt/llm_utils.py"
],
"doc": [],
"test": [
"tests/unit/test_commands.py",
"tests/unit/test_setup.py"
],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 3e01ce744a981d8f19ae77ec695005e7000f4703 | https://github.com/yt-dlp/yt-dlp/issues/5855 | bug | Generic extractor can crash if Brotli is not available | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2022.11.11** ([update instructions](https://github.com/yt-dlp... | null | null | https://github.com/yt-dlp/yt-dlp/commit/3e01ce744a981d8f19ae77ec695005e7000f4703 | {'base_commit': '3e01ce744a981d8f19ae77ec695005e7000f4703', 'files': [{'path': 'yt_dlp/extractor/generic.py', 'status': 'modified', 'Loc': {"('GenericIE', None, 42)": {'add': [2156]}, "('GenericIE', '_real_extract', 2276)": {'mod': [2315]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "commit",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"yt_dlp/extractor/generic.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
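Records are localized either from a linked PR (`loc_way: "pr"`, with `pr_html_url` set) or, as in the yt-dlp record above, directly from a commit (`loc_way: "commit"`, with `commit_html_url` set). A hedged sketch of picking the provenance URL for a record:

```python
def localization_url(record: dict) -> str:
    """Return the URL the file localization was derived from, per loc_way."""
    way = record["analysis"]["loc_way"]
    key = {"pr": "pr_html_url", "commit": "commit_html_url"}[way]
    url = record[key]
    if url is None:
        raise ValueError(f"loc_way={way!r} but {key} is missing")
    return url

# shaped like the yt-dlp record above, which was localized from a commit
rec = {
    "analysis": {"loc_way": "commit"},
    "pr_html_url": None,
    "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/"
                       "3e01ce744a981d8f19ae77ec695005e7000f4703",
}
print(localization_url(rec))
```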
CorentinJ | Real-Time-Voice-Cloning | ded7b37234e229d9bde0a9a506f7c65605803731 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/543 | Lack of pre-compiled results in lost interest | so I know the first thing people are going to say is, this isn't an issue. However, it is. by not having a precompiled version to download over half the people that find their way to this GitHub are going to lose interest. Honestly, I'm one of them. I attempted to compile it but then I saw that I had to track down eac... | null | https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/546 | null | {'base_commit': 'ded7b37234e229d9bde0a9a506f7c65605803731', 'files': [{'path': 'toolbox/ui.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [11]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"toolbox/ui.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scikit-learn | scikit-learn | 96b5814de70ad2435b6db5f49b607b136921f701 | https://github.com/scikit-learn/scikit-learn/issues/26948 | Documentation | The copy button on install copies an extensive command including env activation | ### Describe the issue linked to the documentation
https://scikit-learn.org/stable/install.html
Above link will lead you to the sklearn downlanding for link .
when you link copy link button it will copy
`python3 -m venv sklearn-venvpython -m venv sklearn-venvpython -m venv sklearn-venvsource sklearn-venv/bin/ac... | null | https://github.com/scikit-learn/scikit-learn/pull/27052 | null | {'base_commit': '96b5814de70ad2435b6db5f49b607b136921f701', 'files': [{'path': 'doc/install.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107]}}}, {'path': 'doc/themes/scik... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"doc/themes/scikit-learn-modern/static/css/theme.css"
],
"doc": [
"doc/install.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
keras-team | keras | 49b9682b3570211c7d8f619f8538c08fd5d8bdad | https://github.com/keras-team/keras/issues/10036 | [API DESIGN REVIEW] sample weight in ImageDataGenerator.flow | https://docs.google.com/document/d/14anankKROhliJCpInQH-pITatdjO9UzSN6Iz0MwcDHw/edit?usp=sharing
Makes it easy to use data augmentation when sample weights are available. | null | https://github.com/keras-team/keras/pull/10092 | null | {'base_commit': '49b9682b3570211c7d8f619f8538c08fd5d8bdad', 'files': [{'path': 'keras/preprocessing/image.py', 'status': 'modified', 'Loc': {"('ImageDataGenerator', 'flow', 715)": {'add': [734, 759], 'mod': [754]}, "('NumpyArrayIterator', None, 1188)": {'add': [1201]}, "('NumpyArrayIterator', '__init__', 1216)": {'add'... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"tests/keras/preprocessing/image_test.py",
"keras/preprocessing/image.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scrapy | scrapy | efb53aafdcaae058962c6189ddecb3dc62b02c31 | https://github.com/scrapy/scrapy/issues/6514 | enhancement | Migrate from setup.py to pyproject.toml | We should migrate to the modern declarative setuptools metadata approach as discussed in https://setuptools.pypa.io/en/latest/userguide/quickstart.html and https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html, but only after the 2.12 release. | null | https://github.com/scrapy/scrapy/pull/6547 | null | {'base_commit': 'efb53aafdcaae058962c6189ddecb3dc62b02c31', 'files': [{'path': '.bandit.yml', 'status': 'removed', 'Loc': {}}, {'path': '.bumpversion.cfg', 'status': 'removed', 'Loc': {}}, {'path': '.coveragerc', 'status': 'removed', 'Loc': {}}, {'path': '.isort.cfg', 'status': 'removed', 'Loc': {}}, {'path': '.pre-com... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"tests/test_spiderloader/__init__.py",
".isort.cfg",
".coveragerc",
"setup.cfg",
"setup.py",
".bumpversion.cfg"
],
"doc": [],
"test": [
"tests/test_crawler.py"
],
"config": [
"pytest.ini",
".pre-commit-config.yaml",
"tox.ini",
"pylintrc",
".bandit.... | 1 |
fastapi | fastapi | c6e950dc9cacefd692dbd8987a3acd12a44b506f | https://github.com/fastapi/fastapi/issues/5859 | question
question-migrate | FastAPI==0.89.0 Cannot use `None` as a return type when `status_code` is set to 204 with `from __future__ import annotations` | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I alre... | null | https://github.com/fastapi/fastapi/pull/2246 | null | {'base_commit': 'c6e950dc9cacefd692dbd8987a3acd12a44b506f', 'files': [{'path': '.github/workflows/preview-docs.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [38]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
".github/workflows/preview-docs.yml"
],
"test": [],
"config": [],
"asset": []
} | 1 |
3b1b | manim | 3938f81c1b4a5ee81d5bfc6563c17a225f7e5068 | https://github.com/3b1b/manim/issues/1330 | Error after installing manim | I installed all manim & dependecies, but when I ran `python -m manim example_scenes.py OpeningManimExample`, I got the following error:
`Traceback (most recent call last):
File "c:\users\jm\anaconda3\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\users\jm... | null | https://github.com/3b1b/manim/pull/1343 | null | {'base_commit': '3938f81c1b4a5ee81d5bfc6563c17a225f7e5068', 'files': [{'path': 'manimlib/window.py', 'status': 'modified', 'Loc': {"('Window', None, 10)": {'mod': [15]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"manimlib/window.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
keras-team | keras | 84b283e6200bcb051ed976782fbb2b123bf9b8fc | https://github.com/keras-team/keras/issues/19793 | type:bug/performance | model.keras format much slower to load | Anyone experiencing unreasonably slow load times when loading a keras-format saved model? I have noticed this repeated when working in ipython, where simply instantiating a model via `Model.from_config` then calling `model.load_weights` is much (several factors) faster than loading a `model.keras` file.
My understan... | null | https://github.com/keras-team/keras/pull/19852 | null | {'base_commit': '84b283e6200bcb051ed976782fbb2b123bf9b8fc', 'files': [{'path': 'keras/src/saving/saving_lib.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 34]}, "(None, '_save_model_to_fileobj', 95)": {'mod': [112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 127, 128, 129, 130, 131, 132... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"keras/src/saving/saving_lib_test.py",
"keras/src/saving/saving_lib.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ansible | ansible | 4cdb266dac852859f695b0555cbe49e58343e69a | https://github.com/ansible/ansible/issues/3539 | bug | Bug in Conditional Include | Hi,
I know that when using conditionals on an include, 'All the tasks get evaluated, but the conditional is applied to each and every task'. However this breaks when some of that tasks register variables and other tasks in the group use those variable.
Example:
main.yml:
```
- include: extra.yml
when: do_extra i... | null | https://github.com/ansible/ansible/pull/20158 | null | {'base_commit': '4cdb266dac852859f695b0555cbe49e58343e69a', 'files': [{'path': 'lib/ansible/modules/windows/win_robocopy.ps1', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [25, 26, 27, 28, 73, 76, 93, 94, 95, 114, 115, 167, 168]}}}, {'path': 'lib/ansible/modules/windows/win_robocopy.py', 'status': 'modif... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"lib/ansible/modules/windows/win_robocopy.ps1",
"lib/ansible/modules/windows/win_robocopy.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
psf | requests | f5dacf84468ab7e0631cc61a3f1431a32e3e143c | https://github.com/psf/requests/issues/2654 | Feature Request
Contributor Friendly | utils.get_netrc_auth silently fails when netrc exists but fails to parse | My .netrc contains a line for the github auth, [like this](https://gist.github.com/wikimatze/9790374).
It turns out that `netrc.netrc()` doesn't like that:
```
>>> from netrc import netrc
>>> netrc()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/System/Library/Frameworks/Python.fra... | null | https://github.com/psf/requests/pull/2656 | null | {'base_commit': 'f5dacf84468ab7e0631cc61a3f1431a32e3e143c', 'files': [{'path': 'requests/utils.py', 'status': 'modified', 'Loc': {"(None, 'get_netrc_auth', 70)": {'mod': [70, 108, 109]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"requests/utils.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
oobabooga | text-generation-webui | 0877741b0350d200be7f1e6cca2780a25ee29cd0 | https://github.com/oobabooga/text-generation-webui/issues/5851 | bug | Inference failing using ExLlamav2 version 0.0.18 | ### Describe the bug
Since ExLlamav2 was upgraded to version 0.0.18 in the requirements.txt, inference using it is no longer working and fails with the error in the logs below. Reverting to version 0.0.17 resolves the issue.
### Is there an existing issue for this?
- [X] I have searched the existing issues
... | null | null | https://github.com/oobabooga/text-generation-webui/commit/0877741b0350d200be7f1e6cca2780a25ee29cd0 | {'base_commit': '0877741b0350d200be7f1e6cca2780a25ee29cd0', 'files': [{'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, 59)': {'mod': [59, 60, 61, 62, 63]}}}, {'path': 'requirements_amd.txt', 'status': 'modified', 'Loc': {'(None, None, 45)': {'mod': [45, 46, 47]}}}, {'path': 'requirements_amd_noa... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "commit",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements_apple_silicon.txt",
"requirements_amd_noavx2.txt",
"requirements_apple_intel.txt",
"requirements_amd.txt",
"requirements.txt",
"requirements_noavx2.txt"
],
"asset": []
} | null |
zylon-ai | private-gpt | 89477ea9d3a83181b0222b732a81c71db9edf142 | https://github.com/zylon-ai/private-gpt/issues/2013 | bug | [BUG] Another permissions error when installing with docker-compose | ### Pre-check
- [X] I have searched the existing issues and none cover this bug.
### Description
This looks similar, but not the same as #1876
As for following the instructions, I've not seen any relevant guide to installing with Docker, hence working a bit blind.
Background: I'm trying to run this on an Asus... | null | https://github.com/zylon-ai/private-gpt/pull/2059 | null | {'base_commit': '89477ea9d3a83181b0222b732a81c71db9edf142', 'files': [{'path': 'Dockerfile.llamacpp-cpu', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3, 23, 30]}}}, {'path': 'Dockerfile.ollama', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 13, 20]}}}, {'path': 'docker-compose.yaml', ... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
"docker-compose.yaml"
],
"test": [],
"config": [
"Dockerfile.ollama",
"Dockerfile.llamacpp-cpu"
],
"asset": []
} | 1 |
scikit-learn | scikit-learn | e04b8e70e60df88751af5cd667cafb66dc32b397 | https://github.com/scikit-learn/scikit-learn/issues/26590 | Bug | KNNImputer add_indicator fails to persist where missing data had been present in training | ### Describe the bug
Hello, I've encountered an issue where the KNNImputer fails to record the fields where there were missing data at the time when `.fit` is called, but not recognised if `.transform` is called on a dense matrix. I would have expected it to return a 2x3 matrix rather than 2x2, with `missingindicato... | null | https://github.com/scikit-learn/scikit-learn/pull/26600 | null | {'base_commit': 'e04b8e70e60df88751af5cd667cafb66dc32b397', 'files': [{'path': 'doc/whats_new/v1.3.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}}}, {'path': 'sklearn/impute/_knn.py', 'status': 'modified', 'Loc': {"('KNNImputer', 'transform', 242)": {'mod': [285]}}}, {'path': 'sklearn/impute/te... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
This repository hosts MULocBench, a comprehensive benchmark dataset for issue localization in Python projects.
MULocBench addresses limitations in existing benchmarks by focusing on accurate project localization (e.g., files and functions) for issue resolution, which is a critical first step in software maintenance. It comprises 1,100 issues from 46 popular GitHub Python projects, offering greater diversity in issue types, root causes, location scopes, and file types compared to prior datasets. This dataset provides a more realistic testbed for evaluating and advancing methods for multi-faceted issue resolution.
1. Downloads
Please download the benchmark from https://huggingface.co/datasets/somethingone/MULocBench/blob/main/all_issues_with_pr_commit_comment_all_project_0922.pkl
If you would rather inspect the data directly, download the JSON file instead and open it in your browser.
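If only the pickle file is at hand, it can also be converted to JSON locally for browsing. A minimal sketch; the loading step is left as a comment and replaced with a synthetic one-record stand-in so the snippet runs without the download:

```python
import json
from pathlib import Path

# With the real download in place, build `iss_list` via pickle instead
# (requires `import pickle`):
#     with open("all_issues_with_pr_commit_comment_all_project_0922.pkl", "rb") as f:
#         iss_list = pickle.load(f)
# A synthetic one-record stand-in keeps this sketch runnable on its own:
iss_list = [{"organization": "pallets", "repo_name": "flask", "iss_has_pr": 1}]

# Dump to JSON; `default=str` covers values that are not JSON-native.
out_path = Path("mulocbench.json")
out_path.write_text(json.dumps(iss_list, indent=2, default=str))

# Round-trip check: the written file reloads to the same records.
print(json.loads(out_path.read_text()) == iss_list)  # → True
```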
2. How to Use the Dataset

```python
import pickle

# Path to the downloaded pickle file.
filepath = "all_issues_with_pr_commit_comment_all_project_0922.pkl"
with open(filepath, "rb") as file:
    iss_list = pickle.load(file)

for ind, e_iss in enumerate(iss_list):
    print(f"{ind} issue info is {e_iss['iss_html_url']}")  # print issue HTML URL
    for featurename in e_iss:
        print(f"{featurename}: {e_iss[featurename]}")  # print feature_name: value for each issue
```
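Once loaded, the list works with ordinary Python tooling. A minimal sketch of summarizing the corpus; the three records below are synthetic stand-ins (only a few fields shown) for the entries produced by `pickle.load` above:

```python
from collections import Counter

# Synthetic stand-ins mirroring a few fields of the real records.
iss_list = [
    {"organization": "pallets", "repo_name": "flask", "iss_has_pr": 1},
    {"organization": "pandas-dev", "repo_name": "pandas", "iss_has_pr": 1},
    {"organization": "pallets", "repo_name": "flask", "iss_has_pr": 0},
]

# Issues per repository.
per_repo = Counter(f"{e['organization']}/{e['repo_name']}" for e in iss_list)
print(per_repo.most_common())  # → [('pallets/flask', 2), ('pandas-dev/pandas', 1)]

# Fraction of issues with a linked pull request (`iss_has_pr`).
with_pr = sum(e["iss_has_pr"] for e in iss_list) / len(iss_list)
print(f"{with_pr:.2f}")  # → 0.67
```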
3. Data Instances
Each issue record contains the following fields:
- organization: (str) - Organization name.
- repo_name: (str) - Repository name.
- iss_html_url: (str) - Issue HTML URL.
- title: (str) - Issue title.
- label: (str) - Issue label.
- body: (str) - Issue body.
- pr_html_url: (str) - HTML URL of the pull request linked to the issue.
- commit_html_url: (str) - HTML URL of the commit linked to the issue.
- file_loc: (dict) - Within-Project location information.
- own_code_loc: (list) - User-Authored location information.
- ass_file_loc: (list) - Runtime-file location information.
- other_rep_loc: (list) - Third-Party-file location information.
- analysis: (dict) - Issue type, reason, location scope, and location type.
- base_commit: (str) - Commit hash of the repository HEAD before the issue was resolved.
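For evaluation, the nested Within-Project locations can be flattened into (path, edit-kind, line) triples. A hedged sketch, assuming `file_loc` follows the `{'base_commit': ..., 'files': [{'path': ..., 'status': ..., 'Loc': {...}}]}` shape visible in the dataset's online preview; the exact nesting of `Loc` is inferred from those examples, not a guaranteed schema:

```python
# A synthetic `file_loc` value mirroring the structure seen in the preview.
file_loc = {
    "base_commit": "559609fe98ec2145788133687e64a6e87766bc77",
    "files": [
        {
            "path": "examples/feature_selection/plot_select_from_model_diabetes.py",
            "status": "modified",
            # `Loc` keys appear to be "(class, function, start_line)" strings;
            # values map edit kinds ("add"/"mod") to lists of line numbers.
            "Loc": {"(None, None, None)": {"add": [145], "mod": [123, 124, 125]}},
        }
    ],
}

# Flatten into (path, kind, line) triples covering every edited line.
gold_lines = [
    (f["path"], kind, ln)
    for f in file_loc["files"]
    for edits in f["Loc"].values()
    for kind in ("add", "mod")
    for ln in edits.get(kind, [])
]
print(len(gold_lines))  # → 4
```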