Dataset columns (name: type, observed range):
- repo: string (21 distinct values)
- pull_number: float64 (88 to 192k)
- instance_id: string (length 16 to 34)
- issue_numbers: string (length 6 to 20)
- base_commit: string (length 40)
- patch: string (length 266 to 270k)
- test_patch: string (length 350 to 165k)
- problem_statement: string (length 38 to 24k)
- hints_text: string (length 1 to 33.2k, nullable)
- created_at: date (2016-01-11 17:37:29 to 2024-10-18 14:52:41)
- language: string (4 distinct values)
- Dockerfile: string (length 100 to 3.03k)
- P2P: string (length 2 to 216k)
- F2P: string (length 11 to 10.5k)
- F2F: string (26 distinct values)
- test_command: string (length 27 to 5.49k)
- task_category: string (3 distinct values)
- is_no_nodes: bool
- is_func_only: bool
- is_class_only: bool
- is_mixed: bool
- num_func_changes: int64 (0 to 238)
- num_class_changes: int64 (0 to 70)
- num_nodes: int64 (0 to 264)
- is_single_func: bool
- is_single_class: bool
- modified_nodes: string (length 2 to 42.2k)
huggingface/transformers | 19,219 | huggingface__transformers-19219 | ['19116'] | 2d956958252617a178a68a06582c99b133fe7d3d | diff --git a/src/transformers/hf_argparser.py b/src/transformers/hf_argparser.py
--- a/src/transformers/hf_argparser.py
+++ b/src/transformers/hf_argparser.py
@@ -281,7 +281,9 @@ def parse_json_file(self, json_file: str, allow_extra_keys: bool = False) -> Tup
- the dataclass instances in the same ord... | diff --git a/tests/utils/test_hf_argparser.py b/tests/utils/test_hf_argparser.py
--- a/tests/utils/test_hf_argparser.py
+++ b/tests/utils/test_hf_argparser.py
@@ -13,12 +13,17 @@
# limitations under the License.
import argparse
+import json
+import os
+import tempfile
import unittest
from argparse import Namespac... | HfArgumentParser support yaml parser
### Feature request
HfArgumentParser currently supports parsing dicts and JSON files; would it be possible to also support parsing the widely used YAML files?
### Motivation
I think using yaml is a good way to record arguments.
### Your contribution
Not yet.
| cc @sgugger
If you want to open a PR, please go ahead!
You can just use
`parser.parse_dict(yaml.safe_load(f))`
Which could all go in a `parse_yaml_file` method :-) Doing this and also refactoring the `parse_json_file` to use `parse_dict`, as well as adding small tests would be nice additions that shouldn't be too ... | 2022-09-27 18:49:45+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_basic', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_with_string_literal_annotation', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_with_list', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_parse_dict_extra_key', 'te... | ['tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_parse_json', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_parse_yaml'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/utils/test_hf_argparser.py --junitxml=test-results.xml | Feature | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/hf_argparser.py->module->class_definition:HfArgumentParser->function_definition:parse_yaml_file", "src/transformers/hf_argparser.py->module->class_definition:HfArgumentParser->function_definition:parse_json_file"] |
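The hint above suggests routing both `parse_json_file` and a new `parse_yaml_file` through a shared `parse_dict`. The stand-in below mimics only the shape of that refactor without requiring transformers installed; the helper names and logic are illustrative, not the library's actual implementation (for YAML, `yaml.safe_load(f)` would replace `json.load(f)`).

```python
import dataclasses
import json
import tempfile
from dataclasses import dataclass, field


@dataclass
class TrainingArgs:
    learning_rate: float = field(default=5e-5)
    num_epochs: int = field(default=3)


def parse_dict(dataclass_types, data, allow_extra_keys=False):
    """Instantiate each dataclass from one flat dict of argument values."""
    unused = set(data)
    outputs = []
    for dtype in dataclass_types:
        keys = {f.name for f in dataclasses.fields(dtype) if f.init}
        inputs = {k: v for k, v in data.items() if k in keys}
        unused -= inputs.keys()
        outputs.append(dtype(**inputs))
    if not allow_extra_keys and unused:
        raise ValueError(f"Some keys are not used: {sorted(unused)}")
    return tuple(outputs)


def parse_json_file(dataclass_types, json_file, allow_extra_keys=False):
    # parse_yaml_file would be identical, with yaml.safe_load(f) in place of json.load(f)
    with open(json_file) as f:
        return parse_dict(dataclass_types, json.load(f), allow_extra_keys=allow_extra_keys)


with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"learning_rate": 0.001, "num_epochs": 10}, f)

(args,) = parse_json_file([TrainingArgs], f.name)
print(args)
```

Centralizing the dict logic means every file format is just a loader plus one shared code path, which is the design the reviewer proposes.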
huggingface/transformers | 19,590 | huggingface__transformers-19590 | ['19528'] | 3d320c78c32334f66d72d57ff6322d9e3a7dc00b | diff --git a/src/transformers/models/bert/tokenization_bert_tf.py b/src/transformers/models/bert/tokenization_bert_tf.py
--- a/src/transformers/models/bert/tokenization_bert_tf.py
+++ b/src/transformers/models/bert/tokenization_bert_tf.py
@@ -3,6 +3,7 @@
import tensorflow as tf
+from tensorflow_text import BertTok... | diff --git a/tests/models/bert/test_tokenization_bert_tf.py b/tests/models/bert/test_tokenization_bert_tf.py
--- a/tests/models/bert/test_tokenization_bert_tf.py
+++ b/tests/models/bert/test_tokenization_bert_tf.py
@@ -40,8 +40,15 @@ class BertTokenizationTest(unittest.TestCase):
def setUp(self):
super().... | Allow TFBertTokenizer to use Tensorflow text BertTokenizer (and not FastBertTokenizer) to make it servable by TF Serving
### Feature request
I would like to serve a bundle of Tokenizer + Model on TF Serving, but can't do it because TF Serving still has no support for the TF FastBertTokenizer and FastBertNormalize oper... | null | 2022-10-13 18:00:22+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | [] | ['tests/models/bert/test_tokenization_bert_tf.py:BertTokenizationTest:test_output_equivalence'] | null | pytest -v --tb=short --show-capture=no --junitxml=test-results.xml /testbed/tests/models/bert/test_tokenization_bert_tf.py | Feature | false | false | false | true | 1 | 2 | 3 | false | false | ["src/transformers/models/bert/tokenization_bert_tf.py->module->class_definition:TFBertTokenizer->function_definition:unpaired_tokenize", "src/transformers/models/bert/tokenization_bert_tf.py->module->class_definition:TFBertTokenizer", "src/transformers/models/bert/tokenization_bert_tf.py->module->class_definition:TFBe... |
huggingface/transformers | 20,136 | huggingface__transformers-20136 | ['18748'] | fda125638f53febc059cb67f9d7abce058a8f44f | diff --git a/docs/source/en/model_doc/owlvit.mdx b/docs/source/en/model_doc/owlvit.mdx
--- a/docs/source/en/model_doc/owlvit.mdx
+++ b/docs/source/en/model_doc/owlvit.mdx
@@ -80,6 +80,8 @@ This model was contributed by [adirik](https://huggingface.co/adirik). The origi
[[autodoc]] OwlViTFeatureExtractor
- __cal... | diff --git a/tests/models/owlvit/test_modeling_owlvit.py b/tests/models/owlvit/test_modeling_owlvit.py
--- a/tests/models/owlvit/test_modeling_owlvit.py
+++ b/tests/models/owlvit/test_modeling_owlvit.py
@@ -19,7 +19,6 @@
import os
import tempfile
import unittest
-from typing import Dict, List, Tuple
import numpy ... | Add image-guided object detection support to OWL-ViT
Hi,
The [OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit) model is an open-vocabulary model that can be used for both zero-shot text-guided (supported) and one-shot image-guided (not supported) object detection.
It'd be great to add support ... | I think it would be a great addition, especially as it doesn't seem to be too much work to add. I'm guessing for the processor, and your description, the call signature would look something like this:
`def __call__(self, text=None, query_image=None, images=None, padding="max_length", return_tensors="np", **kwargs):... | 2022-11-09 11:18:55+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/owlvit/test_modeling_owlvit.py:OwlViTVisionModelTest:test_correct_missing_keys', 'tests/models/owlvit/test_modeling_owlvit.py:OwlViTTextModelTest:test_problem_types', 'tests/models/owlvit/test_modeling_owlvit.py:OwlViTModelTest:test_model_main_input_name', 'tests/models/owlvit/test_modeling_owlvit.py:Owl... | ['tests/models/owlvit/test_processor_owlvit.py:OwlViTProcessorTest:test_processor_case2'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test-results.json --report-log=pytest-log.jsonl /testbed/tests/models/owlvit/test_modeling_owlvit.py /testbed/tests/models/owlvit/test_processor_owlvit.py | Feature | false | false | false | true | 31 | 6 | 37 | false | false | ["src/transformers/models/owlvit/processing_owlvit.py->module->class_definition:OwlViTProcessor->function_definition:post_process_image_guided_detection", "src/transformers/models/owlvit/modeling_owlvit.py->module->class_definition:OwlViTModel->function_definition:forward", "src/transformers/models/owlvit/processing_ow... |
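The reviewer's suggested signature above accepts either text queries or image queries. A hypothetical sketch of how such a processor call could dispatch on its inputs (the parameter and key names follow the discussion, not the final merged API):

```python
# Hypothetical dispatch for a processor accepting text and/or image queries.
def processor_call(text=None, query_images=None, images=None):
    if text is None and query_images is None and images is None:
        raise ValueError("You have to specify at least one of text, query_images or images.")
    encoding = {}
    if text is not None:
        encoding["input_ids"] = [f"tokens<{t}>" for t in text]          # placeholder tokenization
    if query_images is not None:
        encoding["query_pixel_values"] = [f"pixels<{q}>" for q in query_images]
    if images is not None:
        encoding["pixel_values"] = [f"pixels<{i}>" for i in images]
    return encoding

out = processor_call(query_images=["query.png"], images=["scene.png"])
print(sorted(out))
```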
huggingface/transformers | 21,345 | huggingface__transformers-21345 | ['21344'] | 92ce53aab859012f7714dae6d6fce7a7d701e75f | diff --git a/src/transformers/activations.py b/src/transformers/activations.py
--- a/src/transformers/activations.py
+++ b/src/transformers/activations.py
@@ -25,6 +25,27 @@
logger = logging.get_logger(__name__)
+class PytorchGELUTanh(nn.Module):
+ """
+ A fast C implementation of the tanh approximation of t... | diff --git a/tests/utils/test_activations.py b/tests/utils/test_activations.py
--- a/tests/utils/test_activations.py
+++ b/tests/utils/test_activations.py
@@ -51,6 +51,7 @@ def test_get_activation(self):
get_activation("gelu_fast")
get_activation("gelu_new")
get_activation("gelu_python")
+ ... | Add the pytorch implementation of the OpenAI GeLU approximation
### Feature request
Add support for the pytorch implementation of OpenAI's approximation of the GeLU function, added in pytorch 1.12. This implementation is equivalent to `gelu_new` or `gelu_fast` but much faster. It can come as a separate activation fu... | null | 2023-01-27 23:00:12+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/utils/test_activations.py:TestActivations:test_gelu_versions', 'tests/utils/test_activations.py:TestActivations:test_activations_are_distinct_objects', 'tests/utils/test_activations.py:TestActivations:test_gelu_10'] | ['tests/utils/test_activations.py:TestActivations:test_get_activation'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/utils/test_activations.py --junitxml=test-results.xml | Feature | false | false | false | true | 1 | 2 | 3 | false | false | ["src/transformers/activations.py->module->class_definition:PytorchGELUTanh", "src/transformers/activations.py->module->class_definition:PytorchGELUTanh->function_definition:forward", "src/transformers/activations.py->module->class_definition:PytorchGELUTanh->function_definition:__init__"] |
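The approximation being discussed is the tanh-based GELU, 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3))), which PyTorch >= 1.12 exposes natively as `torch.nn.functional.gelu(x, approximate="tanh")`. A pure-Python sketch comparing it against the exact erf-based GELU:

```python
import math

# Tanh approximation of GELU, written out in pure Python for illustration.
def gelu_tanh(x: float) -> float:
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x**3)))

# Exact GELU via the Gaussian CDF (erf), for comparison.
def gelu_exact(x: float) -> float:
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(f"x={x:+.1f}  tanh-approx={gelu_tanh(x):+.6f}  exact={gelu_exact(x):+.6f}")
```

The two curves agree to roughly three decimal places, which is why the fast native kernel can stand in for `gelu_new`/`gelu_fast`.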
huggingface/transformers | 21,768 | huggingface__transformers-21768 | ['21689'] | 99ba36e72fe7d1528e2c6572373a425967ee544f | diff --git a/src/transformers/optimization.py b/src/transformers/optimization.py
--- a/src/transformers/optimization.py
+++ b/src/transformers/optimization.py
@@ -16,6 +16,7 @@
import math
import warnings
+from functools import partial
from typing import Callable, Iterable, Optional, Tuple, Union
import torch
@... | diff --git a/tests/optimization/test_optimization.py b/tests/optimization/test_optimization.py
--- a/tests/optimization/test_optimization.py
+++ b/tests/optimization/test_optimization.py
@@ -166,5 +166,21 @@ def test_schedulers(self):
)
scheduler = scheduler_func(self.optimizer, **kwargs)
+ ... | Make schedulers picklable
### Feature request
Change lambda functions passed to `LambdaLR` in `get_constant_schedule`, `get_constant_schedule_with_warmup`, `get_linear_schedule_with_warmup`, `get_cosine_schedule_with_warmup`, `get_cosine_with_hard_restarts_schedule_with_warmup` and `get_polynomial_decay_schedule_with_... | Thanks for explaining your issue in depth, and happy to review a PR! | 2023-02-23 19:13:53+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/optimization/test_optimization.py:OptimizationTest:test_adam_w', 'tests/optimization/test_optimization.py:OptimizationTest:test_adafactor'] | ['tests/optimization/test_optimization.py:ScheduleInitTest:test_schedulers'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/optimization/test_optimization.py --junitxml=test-results.xml | Feature | false | true | false | false | 19 | 0 | 19 | false | false | ["src/transformers/optimization.py->module->function_definition:get_cosine_schedule_with_warmup->function_definition:lr_lambda", "src/transformers/optimization.py->module->function_definition:get_cosine_with_hard_restarts_schedule_with_warmup", "src/transformers/optimization.py->module->function_definition:get_constant... |
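The diff shows `from functools import partial` being added to `optimization.py`; the pattern that makes an LR schedule picklable is a module-level function wrapped in `partial` instead of a `lambda` defined inside the factory. A standalone sketch (hypothetical function name, not the library's exact code):

```python
import pickle
from functools import partial

# A module-level function plus functools.partial: partials over top-level
# functions pickle cleanly, while locally defined lambdas raise PicklingError.
def linear_schedule_lambda(current_step, *, num_warmup_steps, num_training_steps):
    if current_step < num_warmup_steps:
        return current_step / max(1, num_warmup_steps)
    return max(
        0.0,
        (num_training_steps - current_step) / max(1, num_training_steps - num_warmup_steps),
    )

lr_lambda = partial(linear_schedule_lambda, num_warmup_steps=10, num_training_steps=100)

# Round-trip through pickle, as saving a LambdaLR-based scheduler would require.
restored = pickle.loads(pickle.dumps(lr_lambda))
print(restored(5), restored(10), restored(100))
```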
huggingface/transformers | 22,458 | huggingface__transformers-22458 | ['22392'] | cd73b9a8c140fb74cd93187f5c3d380cfc308023 | diff --git a/src/transformers/image_transforms.py b/src/transformers/image_transforms.py
--- a/src/transformers/image_transforms.py
+++ b/src/transformers/image_transforms.py
@@ -118,6 +118,33 @@ def rescale(
return rescaled_image
+def _rescale_for_pil_conversion(image):
+ """
+ Detects whether or not th... | diff --git a/tests/test_image_transforms.py b/tests/test_image_transforms.py
--- a/tests/test_image_transforms.py
+++ b/tests/test_image_transforms.py
@@ -249,6 +249,14 @@ def test_resize(self):
# PIL size is in (width, height) order
self.assertEqual(resized_image.size, (40, 30))
+ # Check an... | Inconsistent Normalization for ViTImageProcessor when `do_resize` is False
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31
- Python version: 3.10.9
- Huggingface_hub version: 0.13.2
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?):... | cc @amyeroberts
Hi @Interpause, thanks for raising this issue!
Indeed, this is a funny behaviour. This is happening because of the use of the PIL library to resize images and the rescaling behaviour that happens in `ToTensor`.
To explain in more detail, I'll refer to the input `im` and `im_pil` and `to_tens(im... | 2023-03-29 20:03:48+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/test_image_transforms.py:ImageTransformsTester:test_get_resize_output_image_size', 'tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_5_numpy_uint_channels_first', 'tests/test_image_transforms.py:ImageTransformsTester:test_id_to_rgb', 'tests/test_image_transforms.py:ImageTransformsTester:te... | ['tests/test_image_transforms.py:ImageTransformsTester:test_resize'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/test_image_transforms.py | Bug Fix | true | false | false | false | 0 | 0 | 0 | false | false | ["src/transformers/image_transforms.py->module->function_definition:to_pil_image", "src/transformers/image_transforms.py->module->function_definition:resize", "src/transformers/image_transforms.py->module->function_definition:_rescale_for_pil_conversion"] |
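The patch adds a helper, `_rescale_for_pil_conversion`, that decides whether an array must be multiplied by 255 before PIL conversion. A loose stand-in for that kind of heuristic, using nested lists instead of numpy arrays (assumed logic, not the actual implementation):

```python
# Float images scaled to [0, 1] need rescaling by 255 before PIL conversion;
# integer-valued images are assumed to already hold [0, 255] pixel values.
def needs_rescale_for_pil(image_rows):
    flat = [v for row in image_rows for v in row]
    if all(isinstance(v, int) for v in flat):
        return False  # integer pixel values, leave untouched
    if all(0.0 <= v <= 1.0 for v in flat):
        return True   # float image scaled down to [0, 1]
    return False      # float values already in pixel range

print(needs_rescale_for_pil([[0.0, 0.5], [1.0, 0.25]]))
print(needs_rescale_for_pil([[0, 128], [255, 7]]))
```

Detecting this explicitly is what keeps resized and non-resized inputs on the same normalization path.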
huggingface/transformers | 22,920 | huggingface__transformers-22920 | ['22904'] | 1e1cb6f8e5af1c592ed7d6ca035b0e07297e52b8 | diff --git a/src/transformers/models/sam/image_processing_sam.py b/src/transformers/models/sam/image_processing_sam.py
--- a/src/transformers/models/sam/image_processing_sam.py
+++ b/src/transformers/models/sam/image_processing_sam.py
@@ -378,12 +378,13 @@ def post_process_masks(
Remove padding and upscale mas... | diff --git a/tests/models/sam/test_processor_sam.py b/tests/models/sam/test_processor_sam.py
--- a/tests/models/sam/test_processor_sam.py
+++ b/tests/models/sam/test_processor_sam.py
@@ -17,8 +17,8 @@
import numpy as np
-from transformers.testing_utils import require_torchvision, require_vision
-from transformers.... | SAM: Notebook example not working
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: macOS-13.2-arm64-arm-64bit
- Python version: 3.10.6
- Huggingface_hub version: 0.13.4
- Safetensors version: 0.3.0
- PyTorch version (GPU?): 1.13.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax v... | I have a similar issue when I run
```
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]] # 2D location of a window in the image
inputs = processor(raw_image, input_p... | 2023-04-21 13:38:26+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/sam/test_processor_sam.py:SamProcessorTest:test_image_processor', 'tests/models/sam/test_processor_sam.py:SamProcessorTest:test_save_load_pretrained_additional_features'] | ['tests/models/sam/test_processor_sam.py:SamProcessorTest:test_post_process_masks'] | null | pytest -v --tb=short --show-capture=no --junitxml=test-results.xml /testbed/tests/models/sam/test_processor_sam.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/sam/image_processing_sam.py->module->class_definition:SamImageProcessor->function_definition:post_process_masks"] |
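One way to make `post_process_masks` tolerant of both framework tensors and plain Python lists for the size arguments is to normalize them up front. This is an illustrative guess at such a normalization, not the code from the patch:

```python
# Normalize a "sizes" argument that may be a tensor-like object or a list.
def to_size_list(sizes):
    if hasattr(sizes, "tolist"):  # torch.Tensor and np.ndarray expose .tolist()
        sizes = sizes.tolist()
    return [tuple(hw) for hw in sizes]

class FakeTensor:
    """Minimal stand-in for a tensor, providing only .tolist()."""
    def tolist(self):
        return [[1764, 2646]]

print(to_size_list([[1764, 2646]]))
print(to_size_list(FakeTensor()))
```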
huggingface/transformers | 23,126 | huggingface__transformers-23126 | ['20249'] | b61d5b47f640308068139561f673765b2af39874 | diff --git a/src/transformers/hf_argparser.py b/src/transformers/hf_argparser.py
--- a/src/transformers/hf_argparser.py
+++ b/src/transformers/hf_argparser.py
@@ -15,6 +15,7 @@
import dataclasses
import json
import sys
+import types
from argparse import ArgumentDefaultsHelpFormatter, ArgumentParser, ArgumentTypeErr... | diff --git a/tests/utils/test_hf_argparser.py b/tests/utils/test_hf_argparser.py
--- a/tests/utils/test_hf_argparser.py
+++ b/tests/utils/test_hf_argparser.py
@@ -15,6 +15,7 @@
import argparse
import json
import os
+import sys
import tempfile
import unittest
from argparse import Namespace
@@ -36,6 +37,10 @@
... | Support X | Y syntax on HfArgumentParser
### Feature request
[PEP-604](https://peps.python.org/pep-0604/) created the X | Y syntax on python 3.10, which is equivalent to Union[X, Y]. The use of this syntax is not supported by HfArgumentParser.
### Motivation
With this syntax I would like to use something lik... | Looks like adding support while not breaking previous Python version will be tricky, as `from types import UnionType` only work for Python 3.10 and above. We can look at a PR if you want to try a contribution, but I don't think we will add this ourselves until Python 3.10 is more widely supported (PyTorch and TensorFlo... | 2023-05-03 10:49:29+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_basic', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_with_string_literal_annotation', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_with_literal', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_parse_dict_extra_key', ... | ['tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_with_optional'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/utils/test_hf_argparser.py -rA --json-report --json-report-file=test_output.json | Feature | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/hf_argparser.py->module->class_definition:HfArgumentParser->function_definition:_parse_dataclass_field", "src/transformers/hf_argparser.py->module->class_definition:HfArgumentParser->function_definition:_add_dataclass_arguments"] |
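The diff adds `import types` to `hf_argparser.py`; the crux of the fix is recognizing PEP 604's `types.UnionType` (Python >= 3.10) wherever `typing.Union` is already handled. A minimal version-safe check along those lines (a sketch of the idea, not the parser's actual code path):

```python
import sys
import types
import typing

# Treat the PEP 604 form ``X | Y`` (types.UnionType on Python >= 3.10)
# the same way as typing.Union when inspecting a field type.
def is_union(field_type) -> bool:
    if typing.get_origin(field_type) is typing.Union:
        return True
    # guarded so the check also runs (and returns False) on Python < 3.10
    return sys.version_info >= (3, 10) and isinstance(field_type, types.UnionType)

print(is_union(typing.Optional[int]))
print(is_union(typing.Union[int, str]))
if sys.version_info >= (3, 10):
    print(is_union(int | None))
print(is_union(int))
```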
huggingface/transformers | 24,510 | huggingface__transformers-24510 | ['16136'] | b52a03cd3bec92d0ee84f0b1f7edee0d5117200a | diff --git a/src/transformers/modeling_utils.py b/src/transformers/modeling_utils.py
--- a/src/transformers/modeling_utils.py
+++ b/src/transformers/modeling_utils.py
@@ -3477,6 +3477,36 @@ def reverse_bettertransformer(self):
return BetterTransformer.reverse(self)
+ def warn_if_padding_and_no_attention... | diff --git a/tests/models/bert/test_modeling_bert.py b/tests/models/bert/test_modeling_bert.py
--- a/tests/models/bert/test_modeling_bert.py
+++ b/tests/models/bert/test_modeling_bert.py
@@ -18,7 +18,7 @@
from transformers import BertConfig, is_torch_available
from transformers.models.auto import get_values
-from t... | Add warning message if model uses `input_ids` that include padding tokens, but no `attention_mask` is provided.
## **First good issue**
A common error is that a user forwards a batched tensor of `input_ids` that includes a padding token, e.g. ```input_ids = torch.tensor([["hello", "this", "is", "a", "long", "string... | Models usually don't know the right pad token ID as pointed out in the issue (I'm also not sure that community-contributed models or models not as heavily used as BERT have the right pad token ID in their configs), so I'm not in favor of this. Plus, the check of the inputs at each forward pass would slow down performan... | 2023-06-27 01:44:15+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/bert/test_modeling_bert.py:BertModelTest:test_greedy_generate', 'tests/models/bert/test_modeling_bert.py:BertModelTest:test_model_common_attributes', 'tests/models/bert/test_modeling_bert.py:BertModelTest:test_beam_sample_generate_dict_output', 'tests/models/bert/test_modeling_bert.py:BertModelTest:test_... | ['tests/test_modeling_utils.py:ModelUtilsTest:test_warn_if_padding_and_no_attention_mask', 'tests/models/bert/test_modeling_bert.py:BertModelTest:test_for_warning_if_padding_and_no_attention_mask'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_results.json /testbed/tests/models/bert/test_modeling_bert.py /testbed/tests/test_modeling_utils.py | Feature | false | true | false | false | 10 | 0 | 10 | false | false | ["src/transformers/models/bridgetower/modeling_bridgetower.py->module->class_definition:BridgeTowerTextModel->function_definition:forward", "src/transformers/modeling_utils.py->module->class_definition:PreTrainedModel->function_definition:warn_if_padding_and_no_attention_mask", "src/transformers/models/bert/modeling_be... |
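The new `warn_if_padding_and_no_attention_mask` method can be approximated in plain Python: warn when `input_ids` contain the pad token but no `attention_mask` was passed. The sketch below uses lists instead of tensors and paraphrases the warning text:

```python
import warnings

# Plain-Python approximation of the added PreTrainedModel helper.
def warn_if_padding_and_no_attention_mask(input_ids, attention_mask, pad_token_id):
    if attention_mask is not None or pad_token_id is None:
        return  # mask given, or no pad token configured: nothing to check
    if any(pad_token_id in row for row in input_ids):
        warnings.warn(
            "We strongly recommend passing an attention_mask since your input_ids may be padded."
        )

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warn_if_padding_and_no_attention_mask([[101, 2054, 0, 0]], attention_mask=None, pad_token_id=0)
print(len(caught))
```

Emitting a warning (rather than erroring or guessing a mask) sidesteps the objection in the hints that models often cannot trust their configured pad token ID.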
huggingface/transformers | 25,358 | huggingface__transformers-25358 | ['25357'] | 080a97119c0dabfd0fb5c3e26a872ad2958e4f77 | diff --git a/src/transformers/utils/generic.py b/src/transformers/utils/generic.py
--- a/src/transformers/utils/generic.py
+++ b/src/transformers/utils/generic.py
@@ -248,6 +248,21 @@ class ModelOutput(OrderedDict):
</Tip>
"""
+ def __init_subclass__(cls) -> None:
+ """Register subclasses as pytre... | diff --git a/tests/utils/test_model_output.py b/tests/utils/test_model_output.py
--- a/tests/utils/test_model_output.py
+++ b/tests/utils/test_model_output.py
@@ -17,6 +17,7 @@
from dataclasses import dataclass
from typing import Optional
+from transformers.testing_utils import require_torch
from transformers.util... | DDP grads not synced when static_graph=True
### System Info
Related: https://github.com/pytorch/pytorch/issues/106690
This behavior seems to be a quirk of `DistributedDataParallel.forward` and how it chooses to handle serializing and deserializing model output types. Even though `ModelOutput` is a subclass of a sup... | null | 2023-08-07 20:09:18+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/utils/test_model_output.py:ModelOutputTester:test_dict_like_properties', 'tests/utils/test_model_output.py:ModelOutputTester:test_index_with_ints_and_slices', 'tests/utils/test_model_output.py:ModelOutputTester:test_set_keys', 'tests/utils/test_model_output.py:ModelOutputTester:test_set_attributes', 'tests/util... | ['tests/utils/test_model_output.py:ModelOutputTester:test_torch_pytree'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/utils/test_model_output.py -rA --junitxml=test-results.xml | Bug Fix | false | false | false | true | 1 | 1 | 2 | false | false | ["src/transformers/utils/generic.py->module->class_definition:ModelOutput", "src/transformers/utils/generic.py->module->class_definition:ModelOutput->function_definition:__init_subclass__"] |
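The fix hooks `__init_subclass__` so every `ModelOutput` subclass registers itself as a pytree node (in the real code via `torch.utils._pytree`), letting DDP with `static_graph=True` flatten and rebuild outputs. A torch-free stand-in showing the auto-registration pattern with a plain dict registry:

```python
# Stand-in registry playing the role of torch.utils._pytree's node table.
PYTREE_REGISTRY = {}

class ModelOutput(dict):
    def __init_subclass__(cls) -> None:
        super().__init_subclass__()
        # every subclass is registered automatically at class-creation time
        PYTREE_REGISTRY[cls] = (
            lambda obj: (list(obj.values()), list(obj.keys())),  # flatten
            lambda values, keys: cls(zip(keys, values)),         # unflatten
        )

class CausalLMOutput(ModelOutput):
    pass

flatten, unflatten = PYTREE_REGISTRY[CausalLMOutput]
out = CausalLMOutput({"loss": 0.5, "logits": [1, 2]})
values, keys = flatten(out)
restored = unflatten(values, keys)
print(type(restored).__name__, restored == out)
```

Because registration happens in `__init_subclass__`, no model file has to opt in explicitly: defining the output class is enough.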
huggingface/transformers | 25,636 | huggingface__transformers-25636 | ['25634'] | 021887682224daf29264f98c759a45e88c82e244 | diff --git a/src/transformers/models/gpt2/modeling_flax_gpt2.py b/src/transformers/models/gpt2/modeling_flax_gpt2.py
--- a/src/transformers/models/gpt2/modeling_flax_gpt2.py
+++ b/src/transformers/models/gpt2/modeling_flax_gpt2.py
@@ -753,7 +753,9 @@ def prepare_inputs_for_generation(self, input_ids, max_length, attent... | diff --git a/tests/models/gpt2/test_modeling_flax_gpt2.py b/tests/models/gpt2/test_modeling_flax_gpt2.py
--- a/tests/models/gpt2/test_modeling_flax_gpt2.py
+++ b/tests/models/gpt2/test_modeling_flax_gpt2.py
@@ -187,6 +187,26 @@ def check_use_cache_forward_with_attn_mask(self, model_class_name, config, input
di... | Problem caused by boolean attention mask in `pretrained_model.generate` of Flax GPT2
Hi!
I notice that the usage of a boolean attention mask in `pretrained_model.generate` of Flax GPT2 can cause an error. Here is a short, self-contained code block to showcase the problem; I also prepared a [colab notebook here](htt... | cc @sanchit-gandhi
Hey @liutianlin0121! Thanks for the comprehensive issue description! That's a good spot - we actually convert the `attention_mask` to `"i4"` dtype under-the-hood when we call the Flax module:
https://github.com/huggingface/transformers/blob/450a181d8b963b4e896be4aac701815aa554a6bb/src/transformers/m... | 2023-08-21 17:41:40+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/gpt2/test_modeling_flax_gpt2.py:FlaxGPT2ModelTest:test_model_outputs_equivalence', 'tests/models/gpt2/test_modeling_flax_gpt2.py:FlaxGPT2ModelTest:test_beam_search_generate_num_return_sequences', 'tests/models/gpt2/test_modeling_flax_gpt2.py:FlaxGPT2ModelTest:test_no_automatic_init', 'tests/models/gpt2/t... | ['tests/models/gpt2/test_modeling_flax_gpt2.py:FlaxGPT2ModelTest:test_bool_attention_mask_in_generation'] | null | pytest -v --tb=short /testbed/tests/models/gpt2/test_modeling_flax_gpt2.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/gpt2/modeling_flax_gpt2.py->module->class_definition:FlaxGPT2LMHeadModel->function_definition:prepare_inputs_for_generation"] |
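The fix in `prepare_inputs_for_generation` appears to boil down to a dtype cast: a boolean `attention_mask` is converted to integer (`"i4"`) before being written into the pre-allocated mask buffer, so bool and int inputs behave identically. A JAX-free sketch of the idea (plain lists stand in for `jnp` arrays and `lax.dynamic_update_slice`):

```python
# Cast a (possibly boolean) attention mask to ints while writing it into the
# zero-initialized extended mask buffer used during generation.
def extend_attention_mask(attention_mask, max_length):
    extended = [0] * max_length  # stands in for jnp.zeros(max_length, dtype="i4")
    for i, value in enumerate(attention_mask):
        extended[i] = int(value)  # the cast is the essence of the fix
    return extended

print(extend_attention_mask([True, True, False], max_length=5))
print(extend_attention_mask([1, 1, 0], max_length=5))
```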
huggingface/transformers | 25,765 | huggingface__transformers-25765 | ['23331'] | d0354e5e86842b757cec1ecb7de314a1f2421c1e | diff --git a/src/transformers/models/mega/modeling_mega.py b/src/transformers/models/mega/modeling_mega.py
--- a/src/transformers/models/mega/modeling_mega.py
+++ b/src/transformers/models/mega/modeling_mega.py
@@ -1542,6 +1542,9 @@ def forward(
else:
raise ValueError("You have to specify either i... | diff --git a/tests/models/mega/test_modeling_mega.py b/tests/models/mega/test_modeling_mega.py
--- a/tests/models/mega/test_modeling_mega.py
+++ b/tests/models/mega/test_modeling_mega.py
@@ -313,6 +313,34 @@ def create_and_check_decoder_model_past_large_inputs(
# test that outputs are equal for slice
... | RuntimeError: The size of tensor a (16) must match the size of tensor b (16000) at non-singleton dimension 2
### System Info
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- Py... | Hi @Tylersuard, thanks for reporting this issue.
So that we can best try and help you, could you update the notebook so that it contains the minimal logic to replicate the error and can be run out-of-the-box? As it stands, there's many blocks with comments; references to loading / processing data we don't have acce... | 2023-08-25 17:48:04+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/mega/test_modeling_mega.py:MegaModelTest:test_for_token_classification', 'tests/models/mega/test_modeling_mega.py:MegaModelTest:test_head_pruning_save_load_from_pretrained', 'tests/models/mega/test_modeling_mega.py:MegaModelTest:test_model_as_decoder', 'tests/models/mega/test_modeling_mega.py:MegaModelTe... | ['tests/models/mega/test_modeling_mega.py:MegaModelTest:test_decoder_model_with_chunking'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/mega/test_modeling_mega.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/mega/modeling_mega.py->module->class_definition:MegaModel->function_definition:forward"] |
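The Mega size-mismatch error comes from chunked attention: with `use_chunking` enabled, the (padded) sequence length has to line up with the configured `chunk_size` before it can be reshaped into chunks, and the fix adjusts `MegaModel.forward` accordingly. As a conceptual illustration only (not Mega's code), padding a token sequence to a chunk multiple looks like:

```python
# Pad a sequence so its length is a multiple of chunk_size, then split it.
def pad_to_chunk_multiple(tokens, chunk_size, pad_id=0):
    remainder = len(tokens) % chunk_size
    if remainder:
        tokens = tokens + [pad_id] * (chunk_size - remainder)
    return [tokens[i : i + chunk_size] for i in range(0, len(tokens), chunk_size)]

chunks = pad_to_chunk_multiple([5, 6, 7, 8, 9], chunk_size=4)
print(chunks)
```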
huggingface/transformers | 25,884 | huggingface__transformers-25884 | ['25804'] | 716bb2e3910fd4872064c55b0d8bc3dad754d129 | diff --git a/src/transformers/pipelines/base.py b/src/transformers/pipelines/base.py
--- a/src/transformers/pipelines/base.py
+++ b/src/transformers/pipelines/base.py
@@ -872,6 +872,9 @@ def save_pretrained(self, save_directory: str, safe_serialization: bool = False)
if self.feature_extractor is not None:
... | diff --git a/tests/pipelines/test_pipelines_image_segmentation.py b/tests/pipelines/test_pipelines_image_segmentation.py
--- a/tests/pipelines/test_pipelines_image_segmentation.py
+++ b/tests/pipelines/test_pipelines_image_segmentation.py
@@ -13,6 +13,7 @@
# limitations under the License.
import hashlib
+import tem... | OSError: /home/datascience/huggingface does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co//home/datascience/huggingface/None' for available files.
### System Info
import transformers
transformers.__version__
'4.31.0'
### Who can help?
_No response_
### Inform... | Hey! Thanks for reporting! Yep, I think we should make sure the `image_processor` is also saved! Would you like to open a PR? 🤗 | 2023-08-31 07:29:21+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/pipelines/test_pipelines_image_segmentation.py:ImageSegmentationPipelineTests:test_small_model_pt_no_panoptic', 'tests/pipelines/test_pipelines_image_segmentation.py:ImageSegmentationPipelineTests:test_small_model_pt', 'tests/pipelines/test_pipelines_image_segmentation.py:ImageSegmentationPipelineTests:test_sma... | ['tests/pipelines/test_pipelines_image_segmentation.py:ImageSegmentationPipelineTests:test_save_load'] | null | pytest -v --tb=short /testbed/tests/pipelines/test_pipelines_image_segmentation.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/pipelines/base.py->module->class_definition:Pipeline->function_definition:save_pretrained"] |
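The row above tracks a bug where `Pipeline.save_pretrained` saved the model, tokenizer, and feature extractor but skipped the image processor, so reloading failed with the missing `preprocessor_config.json` error. A toy sketch of the fixed pattern; the class and file names here are illustrative, not the actual transformers code:

```python
import os
import tempfile

class ToyComponent:
    """Stand-in for a model/tokenizer/processor that knows how to save itself."""
    def __init__(self, name):
        self.name = name

    def save_pretrained(self, directory):
        with open(os.path.join(directory, self.name + ".json"), "w") as f:
            f.write("{}")

class ToyPipeline:
    def __init__(self, model, tokenizer=None, feature_extractor=None, image_processor=None):
        self.model = model
        self.tokenizer = tokenizer
        self.feature_extractor = feature_extractor
        self.image_processor = image_processor

    def save_pretrained(self, directory):
        # Save every component that is present, so none is silently dropped.
        for component in (self.model, self.tokenizer,
                          self.feature_extractor, self.image_processor):
            if component is not None:
                component.save_pretrained(directory)

pipe = ToyPipeline(ToyComponent("model"), image_processor=ToyComponent("preprocessor_config"))
out_dir = tempfile.mkdtemp()
pipe.save_pretrained(out_dir)
```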
huggingface/transformers | 26,164 | huggingface__transformers-26164 | ['25422'] | 7c63e6fc8c34dcf8b0121eaee776f41ccf3b1137 | diff --git a/src/transformers/models/whisper/modeling_whisper.py b/src/transformers/models/whisper/modeling_whisper.py
--- a/src/transformers/models/whisper/modeling_whisper.py
+++ b/src/transformers/models/whisper/modeling_whisper.py
@@ -1719,13 +1719,22 @@ def generate(
decoder_start_token_id, *text_prom... | diff --git a/tests/models/whisper/test_modeling_whisper.py b/tests/models/whisper/test_modeling_whisper.py
--- a/tests/models/whisper/test_modeling_whisper.py
+++ b/tests/models/whisper/test_modeling_whisper.py
@@ -1075,6 +1075,29 @@ def test_generate_with_prompt_ids_and_forced_decoder_ids(self):
for row in ou... | Whisper Prompting max_new_tokens
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU... | Hi @Helene-Maxcici! Thanks for writing this issue, there’s definitely an out of bounds issue here.
Appreciate you catching the precedence issue that the slicing doesn’t quite match OpenAI’s; we should change that in the fix PR so it’s slicing one less than half the max_length instead of one more than half. Ultimate... | 2023-09-14 14:02:14+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_model_is_small', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_contrastive_generate_low_memory', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_group_beam_search_generate', 'tests/models/whisper/test_model... | ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_generate_with_prompt_ids_max_length'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/whisper/test_modeling_whisper.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/whisper/modeling_whisper.py->module->class_definition:WhisperForConditionalGeneration->function_definition:generate"] |
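The Whisper prompting row above is about `generate` accepting `prompt_ids` plus `max_new_tokens` whose sum exceeds the decoder's `max_target_positions`, which later crashes with an out-of-bounds positional-embedding lookup. The fix is an early, explicit length check; a sketch of that idea (function name and message are illustrative, not the transformers code):

```python
def check_prompt_and_new_tokens(num_prompt_tokens, max_new_tokens, max_target_positions):
    """Fail early instead of letting generation index past the decoder's learned
    positional embeddings. Sketch of the idea only, not the transformers check."""
    total = num_prompt_tokens + max_new_tokens
    if total > max_target_positions:
        raise ValueError(
            f"prompt tokens ({num_prompt_tokens}) + max_new_tokens ({max_new_tokens}) "
            f"= {total}, which exceeds max_target_positions ({max_target_positions})"
        )
    return total
```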
huggingface/transformers | 26,386 | huggingface__transformers-26386 | ['24602'] | 546e7679e7f692ebeefcfc5063cec271a55bae20 | diff --git a/src/transformers/models/esm/modeling_esm.py b/src/transformers/models/esm/modeling_esm.py
--- a/src/transformers/models/esm/modeling_esm.py
+++ b/src/transformers/models/esm/modeling_esm.py
@@ -690,6 +690,7 @@ class EsmPreTrainedModel(PreTrainedModel):
config_class = EsmConfig
base_model_prefix... | diff --git a/tests/models/esm/test_modeling_esm.py b/tests/models/esm/test_modeling_esm.py
--- a/tests/models/esm/test_modeling_esm.py
+++ b/tests/models/esm/test_modeling_esm.py
@@ -151,6 +151,24 @@ def create_and_check_for_token_classification(
result = model(input_ids, attention_mask=input_mask, labels=toke... | Support gradient checkpointing for ESM models
Would you please add a `gradient_checkpointing_enable()` feature for ESM models?
These models currently are the best available pre-trained protein language models for researchers.
Many thanks.
| cc @Rocketknight1
Any updates?
It's on the to-do list, but I'm afraid there are competing priorities at the moment!
Let's open it up for anyone in the community who might want to tackle it :)
Hi @amyeroberts @Rocketknight1 I would like to work on this
@sanjeevk-os Great! Once you have the code ready, open a PR and p... | 2023-09-25 14:22:07+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y build... | ['tests/models/esm/test_modeling_esm.py:EsmModelTest:test_head_pruning_save_load_from_pretrained', 'tests/models/esm/test_modeling_esm.py:EsmModelTest:test_tied_weights_keys', 'tests/models/esm/test_modeling_esm.py:EsmModelTest:test_gradient_checkpointing_backward_compatibility', 'tests/models/esm/test_modeling_esm.py:... | ['tests/models/esm/test_modeling_esm.py:EsmModelTest:test_esm_gradient_checkpointing'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/esm/test_modeling_esm.py -rA --junitxml=test-results.xml | Feature | false | false | false | true | 2 | 2 | 4 | false | false | ["src/transformers/models/esm/modeling_esm.py->module->class_definition:EsmPreTrainedModel", "src/transformers/models/esm/modeling_esm.py->module->class_definition:EsmModel->function_definition:_set_gradient_checkpointing", "src/transformers/models/esm/modeling_esm.py->module->class_definition:EsmModel", "src/transform... |
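The ESM row above adds gradient checkpointing support. In transformers this is largely an opt-in flag on the pre-trained base class plus plumbing of `torch.utils.checkpoint` through the encoder layers; the opt-in part can be sketched without torch (simplified, with illustrative names):

```python
class PreTrainedSketch:
    supports_gradient_checkpointing = False

    def __init__(self):
        self.gradient_checkpointing = False

    def gradient_checkpointing_enable(self):
        if not self.supports_gradient_checkpointing:
            raise ValueError(f"{type(self).__name__} does not support gradient checkpointing.")
        # In transformers this would also thread the setting into every encoder
        # layer so forward passes go through torch.utils.checkpoint.
        self.gradient_checkpointing = True

class EsmSketch(PreTrainedSketch):
    supports_gradient_checkpointing = True  # the essential one-line opt-in
```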
huggingface/transformers | 26,568 | huggingface__transformers-26568 | ['26566', '26566'] | bd6205919aad4d3a2300a39a98a642f1cc3a5348 | diff --git a/src/transformers/models/swin2sr/configuration_swin2sr.py b/src/transformers/models/swin2sr/configuration_swin2sr.py
--- a/src/transformers/models/swin2sr/configuration_swin2sr.py
+++ b/src/transformers/models/swin2sr/configuration_swin2sr.py
@@ -44,6 +44,8 @@ class Swin2SRConfig(PretrainedConfig):
... | diff --git a/tests/models/swin2sr/test_modeling_swin2sr.py b/tests/models/swin2sr/test_modeling_swin2sr.py
--- a/tests/models/swin2sr/test_modeling_swin2sr.py
+++ b/tests/models/swin2sr/test_modeling_swin2sr.py
@@ -46,6 +46,7 @@ def __init__(
image_size=32,
patch_size=1,
num_channels=3,
+ ... | SWIN2SR: Allow to choose number of in_channels and out_channels
### Feature request
I'd like to be able to specify a different number of output and input channels for the Swin2sr superresolution model. The current [SWIN2SR](https://github.com/huggingface/transformers/blob/v4.33.3/src/transformers/models/swin2sr/mode... | 2023-10-03 16:27:03+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/swin2sr/test_modeling_swin2sr.py:Swin2SRModelTest:test_head_pruning_save_load_from_pretrained', 'tests/models/swin2sr/test_modeling_swin2sr.py:Swin2SRModelTest:test_headmasking', 'tests/models/swin2sr/test_modeling_swin2sr.py:Swin2SRModelTest:test_can_use_safetensors', 'tests/models/swin2sr/test_modeling... | ['tests/models/swin2sr/test_modeling_swin2sr.py:Swin2SRModelTest:test_model_for_image_super_resolution'] | null | pytest -v --tb=short /testbed/tests/models/swin2sr/test_modeling_swin2sr.py -rA --junitxml=test-results.xml | Feature | false | false | true | false | 0 | 8 | 8 | false | false | ["src/transformers/models/swin2sr/modeling_swin2sr.py->module->class_definition:UpsampleOneStep", "src/transformers/models/swin2sr/modeling_swin2sr.py->module->class_definition:Swin2SRModel->function_definition:__init__", "src/transformers/models/swin2sr/modeling_swin2sr.py->module->class_definition:Swin2SRForImageSupe... | |
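The Swin2SR row above lets input and output channel counts differ. The natural config shape is a separate output count that defaults to the input count when unset; a sketch (treat the `num_channels_out` name as an assumption about the merged API, the defaulting behaviour is the point):

```python
class Swin2SRConfigSketch:
    """Separate output-channel count defaulting to the input count (sketch)."""
    def __init__(self, num_channels=3, num_channels_out=None):
        self.num_channels = num_channels
        self.num_channels_out = num_channels if num_channels_out is None else num_channels_out
```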
huggingface/transformers | 27,463 | huggingface__transformers-27463 | ['27361'] | 3cefac1d974db5e2825a0cb2b842883a628be7a0 | diff --git a/docs/source/en/model_doc/sam.md b/docs/source/en/model_doc/sam.md
--- a/docs/source/en/model_doc/sam.md
+++ b/docs/source/en/model_doc/sam.md
@@ -66,6 +66,34 @@ masks = processor.image_processor.post_process_masks(
scores = outputs.iou_scores
```
+You can also process your own masks alongside the input... | diff --git a/tests/models/sam/test_processor_sam.py b/tests/models/sam/test_processor_sam.py
--- a/tests/models/sam/test_processor_sam.py
+++ b/tests/models/sam/test_processor_sam.py
@@ -58,13 +58,18 @@ def prepare_image_inputs(self):
"""This function prepares a list of PIL images, or a list of numpy arrays if... | Add how to preprocess mask for finetuning with SAM
### Feature request
The [SAM image processor](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam/image_processing_sam.py) takes images as input and resizes them so that the longest edge is 1024 (using default values). This is the size ex... | Hi @rwood-97, thanks for raising this issue!
Agreed - being able to pass in the masks to the image processor would be ideal! Feel free to ping me on a PR for review if you'd like to open one :) | 2023-11-13 11:52:42+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/sam/test_processor_sam.py:TFSamProcessorTest:test_post_process_masks', 'tests/models/sam/test_processor_sam.py:SamProcessorEquivalenceTest:test_post_process_masks_equivalence', 'tests/models/sam/test_processor_sam.py:TFSamProcessorTest:test_save_load_pretrained_additional_features', 'tests/models/sam/tes... | ['tests/models/sam/test_processor_sam.py:SamProcessorTest:test_image_processor_with_masks'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/sam/test_processor_sam.py -rA --junitxml=test-results.xml | Feature | false | false | false | true | 5 | 2 | 7 | false | false | ["src/transformers/models/sam/image_processing_sam.py->module->class_definition:SamImageProcessor", "src/transformers/models/sam/image_processing_sam.py->module->class_definition:SamImageProcessor->function_definition:_preprocess_mask", "src/transformers/models/sam/image_processing_sam.py->module->class_definition:SamI... |
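The SAM row above is about preprocessing user masks the same way images are preprocessed: rescale so the longest edge is 1024, then pad. The size computation can be sketched as follows; masks would additionally need nearest-neighbour resampling so labels stay intact (a sketch, not the SAM processor code):

```python
def longest_edge_size(height, width, longest_edge=1024):
    """Target (h, w) after resizing so the longest side equals `longest_edge`,
    mirroring how the SAM image processor scales inputs."""
    scale = longest_edge / max(height, width)
    return int(round(height * scale)), int(round(width * scale))
```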
huggingface/transformers | 27,561 | huggingface__transformers-27561 | ['27537'] | 5330b83bc5637b8e7eafe095c22ef19e21baff2d | diff --git a/docs/source/en/model_doc/dinov2.md b/docs/source/en/model_doc/dinov2.md
--- a/docs/source/en/model_doc/dinov2.md
+++ b/docs/source/en/model_doc/dinov2.md
@@ -25,6 +25,37 @@ The abstract from the paper is the following:
This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original co... | diff --git a/tests/models/dinov2/test_modeling_dinov2.py b/tests/models/dinov2/test_modeling_dinov2.py
--- a/tests/models/dinov2/test_modeling_dinov2.py
+++ b/tests/models/dinov2/test_modeling_dinov2.py
@@ -221,7 +221,7 @@ class Dinov2ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
if is_t... | Allow script tracing DINOv2
I found a PR to dinov2: "Pass scale factor as a tuple of floats to F.interpolate() to allow tracing."
https://github.com/facebookresearch/dinov2/pull/247
https://github.com/huggingface/transformers/blob/85fde09c97213bf7e8625f83096bb2a9e183f987/src/transformers/models/dinov2/modeling_dinov2.... | I have an exception now:
<img width="1153" alt="image" src="https://github.com/huggingface/transformers/assets/11178882/ce61c11a-9247-4045-8da4-5fdd9d3bb899">
Hi @Danil328, thanks for raising this issue!
Could you make sure to follow the [issue template](https://github.com/huggingface/transformers/blob/main/.github... | 2023-11-17 13:44:45+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/dinov2/test_modeling_dinov2.py:Dinov2ModelTest:test_equivalence_flax_to_pt', 'tests/models/dinov2/test_modeling_dinov2.py:Dinov2ModelTest:test_keep_in_fp32_modules', 'tests/models/dinov2/test_modeling_dinov2.py:Dinov2ModelTest:test_model', 'tests/models/dinov2/test_modeling_dinov2.py:Dinov2BackboneTest:t... | ['tests/models/dinov2/test_modeling_dinov2.py:Dinov2ModelTest:test_torch_fx', 'tests/models/dinov2/test_modeling_dinov2.py:Dinov2ModelTest:test_torch_fx_output_loss'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/dinov2/test_modeling_dinov2.py -rA --junitxml=test-results.xml | Feature | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/dinov2/modeling_dinov2.py->module->class_definition:Dinov2Embeddings->function_definition:interpolate_pos_encoding"] |
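The DINOv2 row above mirrors the upstream fix of passing `scale_factor` to `F.interpolate` as a tuple of plain Python floats, because tracers choke on a 0-dim tensor produced by tensor division. Computing the factors in Python space can be sketched as:

```python
def interpolation_scale_factors(target_h, target_w, grid_h, grid_w):
    """Plain Python floats for F.interpolate's `scale_factor` argument; tracers
    fail when the factor is a 0-dim tensor produced by tensor division."""
    return (float(target_h) / grid_h, float(target_w) / grid_w)
```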
huggingface/transformers | 27,717 | huggingface__transformers-27717 | ['26497'] | ef5ab72f4b538d6f9ea032ac307b75b40ceef42e | diff --git a/src/transformers/convert_slow_tokenizer.py b/src/transformers/convert_slow_tokenizer.py
--- a/src/transformers/convert_slow_tokenizer.py
+++ b/src/transformers/convert_slow_tokenizer.py
@@ -800,8 +800,6 @@ def vocab(self, proto):
("<unk>", 0.0),
]
vocab += [(piece.piece, piec... | diff --git a/tests/models/nllb/test_tokenization_nllb.py b/tests/models/nllb/test_tokenization_nllb.py
--- a/tests/models/nllb/test_tokenization_nllb.py
+++ b/tests/models/nllb/test_tokenization_nllb.py
@@ -24,6 +24,7 @@
NllbTokenizerFast,
is_torch_available,
)
+from transformers.models.nllb.tokenization_nll... | NllbTokenizer: optionally list language codes in the config, to enable updating it more smoothly
### Feature request
Currently, `NllbTokenizer` during initialization takes the list of language codes from a hardcoded constant FAIRSEQ_LANGUAGE_CODES.
I propose making it possible to override this list with a field in the tokeniz... | WDYT @ArthurZucker?
Mmm I guess for now this can make sense, but I think when refactoring NLLB, the FAIRSEQ_LANGUAGE_CODES will be the default of `additional_special_tokens` in the correct order, removing the need to change this. You can also already add language codes using `additional_special_tokens`.
Thanks @ArthurZuck... | 2023-11-27 07:16:03+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/nllb/test_tokenization_nllb.py:NllbTokenizationTest:test_embeded_special_tokens', 'tests/models/nllb/test_tokenization_nllb.py:NllbTokenizationTest:test_num_special_tokens_to_add_equal', 'tests/models/nllb/test_tokenization_nllb.py:NllbTokenizationTest:test_tokenizers_special_tokens_properties_unset_1', ... | ['tests/models/nllb/test_tokenization_nllb.py:NllbTokenizationTest:test_new_language_codes'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/nllb/test_tokenization_nllb.py -rA --junitxml=test-results.xml | Feature | false | false | false | true | 11 | 4 | 15 | false | false | ["src/transformers/convert_slow_tokenizer.py->module->class_definition:NllbConverter->function_definition:vocab", "src/transformers/models/nllb/tokenization_nllb_fast.py->module->class_definition:NllbTokenizerFast->function_definition:lang_code_to_id", "src/transformers/models/nllb/tokenization_nllb.py->module->class_d... |
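The NLLB row above asks for the hardcoded FAIRSEQ_LANGUAGE_CODES list to be overridable from the tokenizer config. The shape of that behaviour (hardcoded default, caller-supplied override) can be sketched like this; the list here is an abridged stand-in, not the real constant:

```python
FAIRSEQ_LANGUAGE_CODES = ["ace_Arab", "ace_Latn", "eng_Latn", "fra_Latn"]  # abridged

class NllbTokenizerSketch:
    def __init__(self, additional_special_tokens=None):
        # Hardcoded default, overridable by the caller or a saved tokenizer config.
        if additional_special_tokens is not None:
            self.lang_codes = list(additional_special_tokens)
        else:
            self.lang_codes = list(FAIRSEQ_LANGUAGE_CODES)
```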
huggingface/transformers | 28,010 | huggingface__transformers-28010 | ['28622'] | f7ef7cec6c6c162087421f36a17eabdbb223579d | diff --git a/src/transformers/convert_slow_tokenizer.py b/src/transformers/convert_slow_tokenizer.py
--- a/src/transformers/convert_slow_tokenizer.py
+++ b/src/transformers/convert_slow_tokenizer.py
@@ -585,6 +585,9 @@ def converted(self) -> Tokenizer:
replacement = "▁"
add_prefix_space = True
+ ... | diff --git a/tests/models/llama/test_tokenization_llama.py b/tests/models/llama/test_tokenization_llama.py
--- a/tests/models/llama/test_tokenization_llama.py
+++ b/tests/models/llama/test_tokenization_llama.py
@@ -306,6 +306,34 @@ def test_pickle_subword_regularization_tokenizer(self):
def test_subword_regulariza... | Can `LlamaTokenizerFast` support the argument `add_prefix_space = False`
### System Info
With `transformers==4.36.2`
It seems the argument `add_prefix_space` has no effect here.
### Who can help?
@ArthurZucker
### Reproduction
```
>>> from transformers import LlamaTokenizerFast
>>> tokenizer = LlamaT... | null | 2023-12-13 16:59:44+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/seamless_m4t/test_tokenization_seamless_m4t.py:SeamlessM4TTokenizationTest:test_truncation_side_in_kwargs', 'tests/models/t5/test_tokenization_t5.py:T5TokenizationTest:test_special_tokens_initialization_with_non_empty_additional_special_tokens', 'tests/models/llama/test_tokenization_llama.py:LlamaTokeniz... | ['tests/models/t5/test_tokenization_t5.py:T5TokenizationTest:test_add_prefix_space', 'tests/models/llama/test_tokenization_llama.py:LlamaTokenizationTest:test_add_prefix_space'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/llama/test_tokenization_llama.py /testbed/tests/models/seamless_m4t/test_tokenization_seamless_m4t.py /testbed/tests/models/t5/test_tokenization_t5.py | Bug Fix | false | false | false | true | 10 | 11 | 21 | false | false | ["src/transformers/models/seamless_m4t/tokenization_seamless_m4t.py->module->class_definition:SeamlessM4TTokenizer", "src/transformers/convert_slow_tokenizer.py->module->class_definition:LlamaConverter->function_definition:normalizer", "src/transformers/models/t5/tokenization_t5.py->module->class_definition:T5Tokenizer... |
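The Llama row above concerns `add_prefix_space`, which controls whether a word-boundary marker (`▁`) is prepended before sentencepiece-style tokenization; when the flag is ignored, `False` silently behaves like `True`. A toy illustration of what the flag is supposed to control (not the tokenizers-library implementation):

```python
WORD_BOUNDARY = "\u2581"  # the '▁' marker used by sentencepiece

def tokenize_sketch(text, add_prefix_space=True):
    """Toy model of what `add_prefix_space` controls: whether a word-boundary
    marker is prepended to the text before tokenization."""
    prepared = (WORD_BOUNDARY + text) if add_prefix_space else text
    return prepared.replace(" ", WORD_BOUNDARY)
```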
huggingface/transformers | 28,398 | huggingface__transformers-28398 | ['23116'] | fff8ca8e597532f141bc3f522f47573320a06730 | diff --git a/src/transformers/models/oneformer/image_processing_oneformer.py b/src/transformers/models/oneformer/image_processing_oneformer.py
--- a/src/transformers/models/oneformer/image_processing_oneformer.py
+++ b/src/transformers/models/oneformer/image_processing_oneformer.py
@@ -15,11 +15,13 @@
"""Image process... | diff --git a/tests/models/oneformer/test_image_processing_oneformer.py b/tests/models/oneformer/test_image_processing_oneformer.py
--- a/tests/models/oneformer/test_image_processing_oneformer.py
+++ b/tests/models/oneformer/test_image_processing_oneformer.py
@@ -15,10 +15,11 @@
import json
+import os
+import tempf... | OneFormerImageProcessor does not support passing local config file, always tries to download from repo
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch ... | @rbavery Thanks for raising this issue.
I'm able to load a processor locally on the development branch without issue:
```python
from transformers import OneFormerProcessor
processor = OneFormerProcessor.from_pretrained('shi-labs/oneformer_ade20k_swin_tiny')
processor.save_pretrained('foo')
new_processor =... | 2024-01-08 16:33:29+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/oneformer/test_image_processing_oneformer.py:OneFormerImageProcessingTest:test_init_without_params', 'tests/models/oneformer/test_image_processing_oneformer.py:OneFormerImageProcessingTest:test_image_processor_to_json_file', 'tests/models/oneformer/test_image_processing_oneformer.py:OneFormerImageProcess... | ['tests/models/oneformer/test_image_processing_oneformer.py:OneFormerImageProcessingTest:test_can_load_with_local_metadata'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/oneformer/test_image_processing_oneformer.py | Bug Fix | false | false | false | true | 2 | 2 | 4 | false | false | ["src/transformers/models/oneformer/image_processing_oneformer.py->module->class_definition:OneFormerImageProcessor->function_definition:__init__", "src/transformers/models/oneformer/image_processing_oneformer.py->module->class_definition:OneFormerImageProcessor", "src/transformers/models/oneformer/image_processing_one... |
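The OneFormer row above is about the image processor always downloading its class-metadata JSON from the Hub even when the user passes a local file. The fixed lookup order (local path first, download only as a fallback) can be sketched as follows; `download` here is a stand-in for `hf_hub_download`, not the real signature:

```python
import json
import os
import tempfile

def load_class_metadata(repo_or_path, filename="class_info.json", download=None):
    """Prefer a local file; only fall back to a hub download when nothing exists
    locally. `download` stands in for a hub download function."""
    local = os.path.join(repo_or_path, filename) if os.path.isdir(repo_or_path) else repo_or_path
    if os.path.isfile(local):
        with open(local) as f:
            return json.load(f)
    if download is None:
        raise FileNotFoundError(local)
    return download(repo_or_path, filename)

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "class_info.json"), "w") as f:
    json.dump({"0": "wall"}, f)
meta = load_class_metadata(tmp)
```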
huggingface/transformers | 28,517 | huggingface__transformers-28517 | ['28505'] | edb170238febf7fc3e3278ed5b9ca0b2c40c70e3 | diff --git a/src/transformers/models/mixtral/modeling_mixtral.py b/src/transformers/models/mixtral/modeling_mixtral.py
--- a/src/transformers/models/mixtral/modeling_mixtral.py
+++ b/src/transformers/models/mixtral/modeling_mixtral.py
@@ -74,7 +74,9 @@
_CONFIG_FOR_DOC = "MixtralConfig"
-def load_balancing_loss_fun... | diff --git a/tests/models/mixtral/test_modeling_mixtral.py b/tests/models/mixtral/test_modeling_mixtral.py
--- a/tests/models/mixtral/test_modeling_mixtral.py
+++ b/tests/models/mixtral/test_modeling_mixtral.py
@@ -462,7 +462,6 @@ def test_load_balancing_loss(self):
r"""
Let's make sure we can actuall... | Exclude the load balancing loss of padding tokens in Mixtral-8x7B
### Feature request
The auxiliary loss in Mixtral-MoE shouldn't **include the loss from padding tokens**.
### Motivation
I think it is better to change the function
[load_balancing_loss_func](https://github.com/huggingface/transformers/blob/main/sr... | cc @ArthurZucker
Feel free to open a PR for this! Otherwise I will mark it as a good second issue 🤗
I would like to work on this issue. I will go through the linked file today and ask any questions I have.
I was looking at the code.
Below is what the model outputs
`return MoeModelOutputWithPast(
last_hi... | 2024-01-16 02:39:12+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_beam_search_generate_dict_output', 'tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_generate_with_head_masking', 'tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_greedy_generate_dict_outputs_use_cache', 'tests/... | ['tests/models/mixtral/test_modeling_mixtral.py:MixtralModelTest:test_load_balancing_loss'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/mixtral/test_modeling_mixtral.py | Feature | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/models/mixtral/modeling_mixtral.py->module->function_definition:load_balancing_loss_func", "src/transformers/models/mixtral/modeling_mixtral.py->module->class_definition:MixtralForCausalLM->function_definition:forward"] |
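The Mixtral row above asks the router's load-balancing auxiliary loss to ignore padding. The loss is roughly `num_experts * sum_e(f_e * P_e)`, where `f_e` is the fraction of tokens dispatched to expert `e` and `P_e` is the mean router probability; the fix computes both only over positions with `attention_mask == 1`. A pure-Python sketch of that idea (the real implementation is tensorised):

```python
def load_balancing_loss(expert_assignments, router_probs, attention_mask, num_experts):
    """Aux loss ~ num_experts * sum_e(f_e * P_e), with padded positions
    (attention_mask == 0) excluded from both statistics."""
    kept = [i for i, m in enumerate(attention_mask) if m == 1]
    n = len(kept)
    loss = 0.0
    for e in range(num_experts):
        f_e = sum(1 for i in kept if expert_assignments[i] == e) / n
        p_e = sum(router_probs[i][e] for i in kept) / n
        loss += f_e * p_e
    return num_experts * loss

# The padded third token (routed to expert 0) must not skew the statistics.
loss = load_balancing_loss(
    expert_assignments=[0, 1, 0],
    router_probs=[[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]],
    attention_mask=[1, 1, 0],
    num_experts=2,
)
```

With the two real tokens split evenly across the two experts, the loss comes out at its balanced value of 1.0 regardless of where the padding token was routed.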
huggingface/transformers | 28,522 | huggingface__transformers-28522 | ['26547'] | 0cdcd7a2b319689d75ae4807cfb7b228aa322f83 | diff --git a/src/transformers/models/barthez/tokenization_barthez.py b/src/transformers/models/barthez/tokenization_barthez.py
--- a/src/transformers/models/barthez/tokenization_barthez.py
+++ b/src/transformers/models/barthez/tokenization_barthez.py
@@ -251,6 +251,7 @@ def _convert_id_to_token(self, index):
"... | diff --git a/tests/models/speecht5/test_tokenization_speecht5.py b/tests/models/speecht5/test_tokenization_speecht5.py
--- a/tests/models/speecht5/test_tokenization_speecht5.py
+++ b/tests/models/speecht5/test_tokenization_speecht5.py
@@ -202,3 +202,17 @@ def test_tokenizer_integration(self):
revision="c5e... | [SpeechT5] Decode function strips space after special token
### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.8.1
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorc... | Thanks for reporting! This is happening because:
```python
def convert_tokens_to_string(self, tokens):
"""Converts a sequence of tokens (string) in a single string."""
current_sub_tokens = []
out_string = ""
for token in tokens:
# make sure that special tokens are... | 2024-01-16 09:16:28+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/speecht5/test_tokenization_speecht5.py:SpeechT5TokenizerTest:test_batch_encode_plus_padding', 'tests/models/speecht5/test_tokenization_speecht5.py:SpeechT5TokenizerTest:test_full_tokenizer', 'tests/models/speecht5/test_tokenization_speecht5.py:SpeechT5TokenizerTest:test_call', 'tests/models/speecht5/test... | ['tests/models/speecht5/test_tokenization_speecht5.py:SpeechT5TokenizerTest:test_encode_decode'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/speecht5/test_tokenization_speecht5.py | Bug Fix | false | false | false | true | 1 | 5 | 6 | false | false | ["src/transformers/models/barthez/tokenization_barthez.py->module->class_definition:BarthezTokenizer", "src/transformers/models/speecht5/tokenization_speecht5.py->module->class_definition:SpeechT5Tokenizer->function_definition:convert_tokens_to_string", "src/transformers/models/speecht5/tokenization_speecht5.py->module... |
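The SpeechT5 row above reports that decoding drops the space that follows a special token, because buffered sub-tokens were stripped when flushed. A sketch of a `convert_tokens_to_string` that preserves the boundary space (`▁` is the sentencepiece word-boundary marker; this is illustrative, not the merged code):

```python
WORD_BOUNDARY = "\u2581"  # sentencepiece word-boundary marker, rendered '▁'

def convert_tokens_to_string(tokens, special_tokens):
    """Join sub-tokens without stripping the space that follows a special token."""
    out, current = "", []
    for token in tokens:
        if token in special_tokens:
            out += "".join(current).replace(WORD_BOUNDARY, " ") + token
            current = []
        else:
            current.append(token)
    out += "".join(current).replace(WORD_BOUNDARY, " ")
    return out.lstrip()
```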
huggingface/transformers | 28,535 | huggingface__transformers-28535 | ['28387'] | 07ae53e6e77ec6ff4fb25fbacfec4b11cfc82749 | diff --git a/src/transformers/models/esm/tokenization_esm.py b/src/transformers/models/esm/tokenization_esm.py
--- a/src/transformers/models/esm/tokenization_esm.py
+++ b/src/transformers/models/esm/tokenization_esm.py
@@ -14,10 +14,9 @@
# limitations under the License.
"""Tokenization classes for ESM."""
import os
... | diff --git a/tests/models/esm/test_tokenization_esm.py b/tests/models/esm/test_tokenization_esm.py
--- a/tests/models/esm/test_tokenization_esm.py
+++ b/tests/models/esm/test_tokenization_esm.py
@@ -87,3 +87,25 @@ def test_tokenize_special_tokens(self):
self.assertEqual(len(token_2), 1)
... | Issue with Adding New Tokens to ESM2 Model Tokenizer
Hello
I am encountering an issue while working with the ESM2 models (`facebook/esm2_t6_8M_UR50D`). Specifically, when I try to add new tokens to the tokenizer, they are automatically classified as special tokens, even though I am specifying `special_tokens=False`.... | Seems like a bug with ESMTokenizer, (which doesn't use this library).
@ArthurZucker for insights, or the more relevant people?
Hey, I cannot reproduce this:
```python
In [23]: model_checkpoint = "facebook/esm2_t6_8M_UR50D"
...: tokenizer_2 = AutoTokenizer.from_pretrained(model_checkpoint)
huggingface/token... | 2024-01-16 15:06:24+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/esm/test_tokenization_esm.py:ESMTokenizationTest:test_tokenize_special_tokens', 'tests/models/esm/test_tokenization_esm.py:ESMTokenizationTest:test_tokenizer_call_pad', 'tests/models/esm/test_tokenization_esm.py:ESMTokenizationTest:test_tokenizer_call_no_pad', 'tests/models/esm/test_tokenization_esm.py:E... | ['tests/models/esm/test_tokenization_esm.py:ESMTokenizationTest:test_add_tokens'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/esm/test_tokenization_esm.py | Bug Fix | false | false | false | true | 4 | 1 | 5 | false | false | ["src/transformers/models/esm/tokenization_esm.py->module->class_definition:EsmTokenizer", "src/transformers/models/esm/tokenization_esm.py->module->class_definition:EsmTokenizer->function_definition:get_vocab", "src/transformers/models/esm/tokenization_esm.py->module->class_definition:EsmTokenizer->function_definition... |
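The ESM tokenizer row above reports `add_tokens(..., special_tokens=False)` still marking new tokens as special. The intended contract is easy to state as a toy model (illustrative, not the transformers tokenizer):

```python
class ToyTokenizer:
    def __init__(self, vocab):
        self.vocab = dict(vocab)
        self.special = set()

    def add_tokens(self, tokens, special_tokens=False):
        # Grow the vocab; mark tokens special only when explicitly asked to.
        for token in tokens:
            if token not in self.vocab:
                self.vocab[token] = len(self.vocab)
            if special_tokens:
                self.special.add(token)

tok = ToyTokenizer({"<cls>": 0, "A": 1})
tok.add_tokens(["J", "Z"], special_tokens=False)
```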
huggingface/transformers | 28,563 | huggingface__transformers-28563 | ['28002'] | 2c1eebc1216549d8195d7d1c6adb8b99afee3ec5 | diff --git a/src/transformers/models/whisper/modeling_whisper.py b/src/transformers/models/whisper/modeling_whisper.py
--- a/src/transformers/models/whisper/modeling_whisper.py
+++ b/src/transformers/models/whisper/modeling_whisper.py
@@ -57,6 +57,8 @@
logger = logging.get_logger(__name__)
+_HIDDEN_STATES_START_PO... | diff --git a/tests/models/whisper/test_modeling_whisper.py b/tests/models/whisper/test_modeling_whisper.py
--- a/tests/models/whisper/test_modeling_whisper.py
+++ b/tests/models/whisper/test_modeling_whisper.py
@@ -2292,16 +2292,15 @@ def get_subsampled_output_lengths(self, input_lengths):
def encoder_seq_length(s... | Not handled case when use_weighted_layer_sum and return-dict=True in WhisperForAudioClassification
@sanchit-gandhi
I use the WhisperForAudioClassification task and want to use `use_weighted_layer_sum=True`, but there is a problem when calling forward:
the encoder part can return a tuple or a dict if `return_dict=True`, but the... | Hi @ElsebaiyMohamed, thanks for raising this issue and providing details on the error + a snippet. Could you also provide information about the running environment: run `transformers-cli env` in the terminal and copy-paste the output?
Hi @amyeroberts,
Apologies for the delayed response! 🙏 Life threw a curveball, bu... | 2024-01-17 17:22:35+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_model_is_small', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_contrastive_generate_low_memory', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_group_beam_search_generate', 'tests/models/whisper/test_model... | ['tests/models/whisper/test_modeling_whisper.py:WhisperEncoderModelTest:test_forward_pass_weighted_layer_sum'] | null | pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/whisper/test_modeling_whisper.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/whisper/modeling_whisper.py->module->class_definition:WhisperForAudioClassification->function_definition:forward"] |
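The Whisper audio-classification row above is about `use_weighted_layer_sum=True` assuming tuple-style encoder outputs while the encoder may return a ModelOutput when `return_dict=True`. Handling both shapes can be sketched as follows; the index constant is illustrative, chosen to match this sketch's fake tuple layout rather than Whisper's actual one:

```python
_HIDDEN_STATES_START_POSITION = 2  # illustrative index for this sketch's tuple layout

class FakeEncoderOutput:
    def __init__(self, hidden_states):
        self.hidden_states = hidden_states

def get_encoder_hidden_states(encoder_outputs, return_dict):
    """The encoder returns either a ModelOutput-like object or a plain tuple;
    pick the per-layer hidden states consistently for the weighted-layer-sum path."""
    if return_dict:
        return encoder_outputs.hidden_states
    return encoder_outputs[_HIDDEN_STATES_START_POSITION]
```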
repo: huggingface/transformers | pull_number: 28,940 | instance_id: huggingface__transformers-28940 | issue_numbers: ['28817'] | base_commit: dd1c9052159ae824c8acef7c2552f9fad5ca020a
patch:
diff --git a/src/transformers/pipelines/base.py b/src/transformers/pipelines/base.py
--- a/src/transformers/pipelines/base.py
+++ b/src/transformers/pipelines/base.py
@@ -861,7 +861,7 @@ def __init__(
raise ValueError(f"{device} unrecognized or not available.")
else:
self.device =...
test_patch:
diff --git a/tests/pipelines/test_pipelines_common.py b/tests/pipelines/test_pipelines_common.py
--- a/tests/pipelines/test_pipelines_common.py
+++ b/tests/pipelines/test_pipelines_common.py
@@ -199,6 +199,29 @@ def test_unbatch_attentions_hidden_states(self):
outputs = text_classifier(["This is great !"] * 20...
problem_statement:
Populate torch_dtype from a model to a pipeline
### Feature request
When constructing a pipeline object from a model and a tokenizer, the pipeline doesn't inherit the `torch_dtype` field from the underlying model.
```
model = AutoModelForCausalLM.from_pretrained("t5-small", torch_dtype = torch.bfloat16)
pipeline ...
hints_text:
cc @Rocketknight1 WDYT? Sounds good to me
This sounds like a safe assumption to me too, though obviously I'd like to confirm that with some tests! I'm in favour of the PR if you're happy to open it @B-Step62
@ArthurZucker @Rocketknight1 Great! I will open a PR soon, in the meantime could you assign the issue to me?
... | 2024-02-09 12:05:13+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/pipelines/test_pipelines_common.py:CommonPipelineTest:test_unbatch_attentions_hidden_states', 'tests/pipelines/test_pipelines_common.py:CommonPipelineTest:test_check_task', 'tests/pipelines/test_pipelines_common.py:PipelinePadTest:test_pipeline_padding', 'tests/pipelines/test_pipelines_common.py:CommonPipelineT...
F2P: ['tests/pipelines/test_pipelines_common.py:CommonPipelineTest:test_torch_dtype_property']
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report /testbed/tests/pipelines/test_pipelines_common.py
task_category: Feature
is_no_nodes: false | is_func_only: false | is_class_only: false | is_mixed: true | num_func_changes: 1 | num_class_changes: 2 | num_nodes: 3 | is_single_func: false | is_single_class: false
modified_nodes: ["src/transformers/pipelines/base.py->module->class_definition:Pipeline->function_definition:__init__", "src/transformers/pipelines/base.py->module->class_definition:Pipeline", "src/transformers/pipelines/base.py->module->class_definition:Pipeline->function_definition:torch_dtype"]
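This record's modified nodes show the fix added a `torch_dtype` property to `Pipeline`, i.e. the pipeline reads the dtype off its wrapped model instead of requiring the caller to repeat it. A framework-free sketch of that shape, with `ToyModel`/`ToyPipeline` as stand-ins for the real classes:

```python
# Hedged sketch of inheriting dtype from the wrapped model via a read-only
# property; ToyModel/ToyPipeline are illustrative stand-ins, not the
# transformers classes.
class ToyModel:
    def __init__(self, dtype):
        self.dtype = dtype


class ToyPipeline:
    def __init__(self, model):
        self.model = model

    @property
    def torch_dtype(self):
        # Read the dtype off the underlying model instead of storing a copy,
        # so the pipeline can never drift out of sync with the model.
        return getattr(self.model, "dtype", None)


pipe = ToyPipeline(ToyModel(dtype="bfloat16"))
assert pipe.torch_dtype == "bfloat16"
```

A property (rather than a field set in `__init__`) also stays correct if the model's dtype is changed after the pipeline is built.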
repo: huggingface/transformers | pull_number: 29,175 | instance_id: huggingface__transformers-29175 | issue_numbers: ['28919'] | base_commit: ae49b218c3d718df90d8e4a109016450fb8f0632
patch:
diff --git a/src/transformers/dynamic_module_utils.py b/src/transformers/dynamic_module_utils.py
--- a/src/transformers/dynamic_module_utils.py
+++ b/src/transformers/dynamic_module_utils.py
@@ -185,19 +185,35 @@ def check_imports(filename: Union[str, os.PathLike]) -> List[str]:
return get_relative_imports(filenam...
test_patch:
diff --git a/tests/models/auto/test_modeling_auto.py b/tests/models/auto/test_modeling_auto.py
--- a/tests/models/auto/test_modeling_auto.py
+++ b/tests/models/auto/test_modeling_auto.py
@@ -376,6 +376,27 @@ def test_from_pretrained_dynamic_model_distant_with_ref(self):
for p1, p2 in zip(model.parameters(), re...
problem_statement:
dependency issue when working with a custom architecture in a repo that has a dot in its name
### System Info
- `transformers` version: 4.35.2
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: not installe...
hints_text:
cc @Rocketknight1 I can do it if you are low on bandwidth! Think it makes sense as a lot of models have `2.5B` or such names!
I can take this one, I think!
to anyone reading this in the future:
I found a work around this, **if you cannot rename your repo and remove the dot from its name**, you can follow these steps...
created_at: 2024-02-21 14:48:16+00:00 | language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/models/auto/test_modeling_auto.py:AutoModelTest:test_model_from_tf_suggestion', 'tests/models/auto/test_modeling_auto.py:AutoModelTest:test_attr_not_existing', 'tests/models/auto/test_modeling_auto.py:AutoModelTest:test_from_pretrained_with_tuple_values', 'tests/models/auto/test_modeling_auto.py:AutoModelTest:t...
F2P: ['tests/models/auto/test_modeling_auto.py:AutoModelTest:test_from_pretrained_dynamic_model_with_period']
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/auto/test_modeling_auto.py
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 2 | num_class_changes: 0 | num_nodes: 2 | is_single_func: false | is_single_class: false
modified_nodes: ["src/transformers/dynamic_module_utils.py->module->function_definition:get_class_from_dynamic_module", "src/transformers/dynamic_module_utils.py->module->function_definition:get_class_in_module"]
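The root cause described in this record — a dot in a repo name such as `2.5B` being misread as a package separator when a dotted module name is built — can be demonstrated in plain Python. The helper names and the `.`-to-`_` replacement below are an illustrative sketch of the failure and one possible mitigation, not the code actually merged in `dynamic_module_utils.py`.

```python
import os

# Hedged sketch: a repo name containing a dot (e.g. "t5-2.5B-custom") breaks
# naive dotted-module-name construction, because Python's import machinery
# treats every "." as a package separator. Names are illustrative.
def naive_module_name(repo_name, file_name="modeling.py"):
    return repo_name + "." + os.path.splitext(file_name)[0]


def safe_module_name(repo_name, file_name="modeling.py"):
    # Replace dots in the repo component so only one separator remains.
    return repo_name.replace(".", "_") + "." + os.path.splitext(file_name)[0]


bad = naive_module_name("t5-2.5B-custom")
good = safe_module_name("t5-2.5B-custom")
assert bad.count(".") == 2   # importlib would look for a package "t5-2"
assert good.count(".") == 1  # a single separator: package + module
```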
repo: huggingface/transformers | pull_number: 29,300 | instance_id: huggingface__transformers-29300 | issue_numbers: ['29239'] | base_commit: 8f2f0f0f85f9e517c495b2083c218215819bae34
patch:
diff --git a/src/transformers/models/conditional_detr/image_processing_conditional_detr.py b/src/transformers/models/conditional_detr/image_processing_conditional_detr.py
--- a/src/transformers/models/conditional_detr/image_processing_conditional_detr.py
+++ b/src/transformers/models/conditional_detr/image_processing_c...
test_patch:
diff --git a/tests/models/conditional_detr/test_image_processing_conditional_detr.py b/tests/models/conditional_detr/test_image_processing_conditional_detr.py
--- a/tests/models/conditional_detr/test_image_processing_conditional_detr.py
+++ b/tests/models/conditional_detr/test_image_processing_conditional_detr.py
@@ -3...
problem_statement:
`YolosImageProcessor.preprocess` drops annotations when padding
### System Info
- `transformers` version: 4.38.1
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.10.9
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not found
- Py...
hints_text: null
created_at: 2024-02-26 16:11:46+00:00 | language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/models/deformable_detr/test_image_processing_deformable_detr.py:DeformableDetrImageProcessingTest:test_cast_dtype_device', 'tests/models/yolos/test_image_processing_yolos.py:YolosImageProcessingTest:test_image_processor_from_and_save_pretrained', 'tests/models/yolos/test_image_processing_yolos.py:YolosImageProc...
F2P: ['tests/models/yolos/test_image_processing_yolos.py:YolosImageProcessingTest:test_batched_coco_panoptic_annotations']
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/conditional_detr/test_image_processing_conditional_detr.py /testbed/tests/models/deformable_detr/test_image_processing_deformable_detr.py /testbed/tests/models/deta/test_image_processing_deta.py /testbed/tests/models/detr/test_image_processing_d...
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 5 | num_class_changes: 0 | num_nodes: 5 | is_single_func: false | is_single_class: false
modified_nodes: ["src/transformers/models/deformable_detr/image_processing_deformable_detr.py->module->class_definition:DeformableDetrImageProcessor->function_definition:preprocess", "src/transformers/models/detr/image_processing_detr.py->module->class_definition:DetrImageProcessor->function_definition:preprocess", "src/transformers/m...
repo: huggingface/transformers | pull_number: 29,311 | instance_id: huggingface__transformers-29311 | issue_numbers: ['29243'] | base_commit: b27aa206ddf3fe66b36db587603141b3d0379a82
patch:
diff --git a/src/transformers/models/wav2vec2/tokenization_wav2vec2.py b/src/transformers/models/wav2vec2/tokenization_wav2vec2.py
--- a/src/transformers/models/wav2vec2/tokenization_wav2vec2.py
+++ b/src/transformers/models/wav2vec2/tokenization_wav2vec2.py
@@ -125,7 +125,6 @@ class Wav2Vec2CTCTokenizerOutput(ModelOut...
test_patch:
diff --git a/tests/models/wav2vec2/test_tokenization_wav2vec2.py b/tests/models/wav2vec2/test_tokenization_wav2vec2.py
--- a/tests/models/wav2vec2/test_tokenization_wav2vec2.py
+++ b/tests/models/wav2vec2/test_tokenization_wav2vec2.py
@@ -13,6 +13,7 @@
# See the License for the specific language governing permissions ...
problem_statement:
`skip_special_tokens` for `Wav2Vec2CTCTokenizer` does not work as expected.
### System Info
- `transformers` version: 4.37.2
- Platform: Linux-5.15.0-1042-nvidia-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.20.1
- Safetensors version: 0.4.2
- Accelerate version: 0.26.1
- Accelera...
hints_text:
it could / should but should also be left to the super class IMO!
Would you like to open a PR for a fix? I don't think that this is intended behaviour
created_at: 2024-02-27 06:22:32+00:00 | language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_maximum_encoding_length_pair_input', 'tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_right_and_left_truncation', 'tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_neste...
F2P: ['tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_tokenizer_decode_added_tokens', 'tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2TokenizerTest:test_tokenizer_decode_added_tokens']
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/wav2vec2/test_tokenization_wav2vec2.py
task_category: Bug Fix
is_no_nodes: false | is_func_only: false | is_class_only: false | is_mixed: true | num_func_changes: 2 | num_class_changes: 1 | num_nodes: 3 | is_single_func: false | is_single_class: false
modified_nodes: ["src/transformers/models/wav2vec2/tokenization_wav2vec2.py->module->class_definition:Wav2Vec2CTCTokenizer->function_definition:_decode", "src/transformers/models/wav2vec2/tokenization_wav2vec2.py->module->class_definition:Wav2Vec2CTCTokenizer", "src/transformers/models/wav2vec2/tokenization_wav2vec2.py->module->class_...
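The behavior this record fixes lives in the tokenizer's `_decode`: when `skip_special_tokens=True`, ids belonging to special tokens should be filtered out before text is assembled. A minimal framework-free sketch (the vocabulary, ids, and function shape below are made up for illustration, not the Wav2Vec2 implementation):

```python
# Hedged sketch: drop special-token ids during decoding when requested.
def decode(token_ids, id_to_token, special_ids, skip_special_tokens=False):
    if skip_special_tokens:
        token_ids = [i for i in token_ids if i not in special_ids]
    return " ".join(id_to_token[i] for i in token_ids)


vocab = {0: "<pad>", 1: "<s>", 2: "HELLO", 3: "WORLD"}
special = {0, 1}
ids = [1, 2, 3, 0]
assert decode(ids, vocab, special, skip_special_tokens=True) == "HELLO WORLD"
assert decode(ids, vocab, special) == "<s> HELLO WORLD <pad>"
```

The bug report boils down to this filter not being applied (or not covering added special tokens) in the CTC tokenizer's own `_decode` override.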
repo: huggingface/transformers | pull_number: 29,519 | instance_id: huggingface__transformers-29519 | issue_numbers: ['29176'] | base_commit: b338a6c3b8eda29610d4d472cad8cd87cbfdaaed
patch:
diff --git a/src/transformers/modeling_attn_mask_utils.py b/src/transformers/modeling_attn_mask_utils.py
--- a/src/transformers/modeling_attn_mask_utils.py
+++ b/src/transformers/modeling_attn_mask_utils.py
@@ -164,10 +164,10 @@ def _make_causal_mask(
# add lower triangular sliding window mask if necessary
...
test_patch:
diff --git a/tests/test_modeling_utils.py b/tests/test_modeling_utils.py
--- a/tests/test_modeling_utils.py
+++ b/tests/test_modeling_utils.py
@@ -1673,7 +1673,7 @@ def check_to_causal(self, mask_converter, q_len, kv_len, bsz=3):
def compute_num_context_mask(self, kv_len, context, q_len):
# This function ...
problem_statement:
Sliding window inconsistency between PyTorch and Flax
### System Info
transformers main (ae49b218c), Python 3.10.8
### Who can help?
@ArthurZucker, @sanchit-gandhi
### Reproduction
The attention `sliding_window` has a different interpretation in PyTorch and Flax. Here are matching examples:
**PyTorch...
hints_text:
Hey! Pretty sure `MistralSdpaAttention` does not support sliding window yet! Are you using `attn_implementation="flash_attention_2"`?
@ArthurZucker I'm using the default implementation on the CPU, I've just checked to make sure and it's "eager". Initially I thought the issues may be in flash_attn, but you made me re...
created_at: 2024-03-07 15:56:14+00:00 | language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/test_modeling_utils.py:ModelUtilsTest:test_shard_checkpoint', 'tests/test_modeling_utils.py:ModelUtilsTest:test_unexpected_keys_warnings', 'tests/test_modeling_utils.py:ModelUtilsTest:test_no_super_init_config_and_model', 'tests/test_modeling_utils.py:AttentionMaskTester:test_2d_to_4d', 'tests/test_modeling_uti...
F2P: ['tests/test_modeling_utils.py:AttentionMaskTester:test_causal_mask_sliding', 'tests/test_modeling_utils.py:AttentionMaskTester:test_2d_to_4d_causal_sliding']
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report /testbed/tests/test_modeling_utils.py
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 1 | num_class_changes: 0 | num_nodes: 1 | is_single_func: true | is_single_class: false
modified_nodes: ["src/transformers/modeling_attn_mask_utils.py->module->class_definition:AttentionMaskConverter->function_definition:_make_causal_mask"]
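The kind of mask this record adjusts can be written down in plain Python. The convention below — a query attends to at most `window` tokens, counting itself — is one common choice; implementations disagree by exactly this sort of off-by-one, which is the PyTorch/Flax inconsistency reported above. This is an illustrative sketch, not `_make_causal_mask` itself.

```python
# Hedged sketch of a causal mask with a sliding window: query i may attend to
# key j iff j <= i (causality) and i - j < window (the window includes the
# current token). True means "visible".
def causal_sliding_mask(seq_len, window):
    return [
        [j <= i and i - j < window for j in range(seq_len)]
        for i in range(seq_len)
    ]


mask = causal_sliding_mask(5, window=3)
assert mask[4] == [False, False, True, True, True]   # last query sees 3 tokens
assert mask[0] == [True, False, False, False, False]
assert all(sum(row) <= 3 for row in mask)
```

Shifting the strict inequality to `<=` (or excluding the current token from the count) yields the neighboring conventions that produce the mismatches between frameworks.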
repo: huggingface/transformers | pull_number: 29,589 | instance_id: huggingface__transformers-29589 | issue_numbers: ['29425'] | base_commit: fadb053379b3ef24c4ec8e6d7d58555af21f58db
patch:
diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -4247,8 +4247,23 @@ def _add_sm_patterns_to_gitignore(self) -> None:
self.repo.git_push()
def create_accelerator_and_postprocess(self):
- grad_acc_kwar...
test_patch:
diff --git a/src/transformers/testing_utils.py b/src/transformers/testing_utils.py
--- a/src/transformers/testing_utils.py
+++ b/src/transformers/testing_utils.py
@@ -52,6 +52,7 @@
)
from .integrations.deepspeed import is_deepspeed_available
from .utils import (
+ ACCELERATE_MIN_VERSION,
is_accelerate_availa...
problem_statement:
Allow Trainer to Sync Gradients Each Batch When Performing Gradient Accumulation
### Feature request
We propose a feature to allow:
- `_do_sync` to take a `force` boolean flag, where `_do_sync(force=True)` forces a gradient sync.
- `Trainer` / `Accelerate` to appropriately pass the `force` flag if the user requests ...
hints_text:
Hi! This solution does indeed make sense to me, let's start with a PR to accelerate and then the upstream to transformers? :)
Note: for the `TrainingArguments`, we need to add this to the Accelerator config class instead and handle the logic that way as we are no longer adding more args to the `TrainingArguments` w...
created_at: 2024-03-11 14:19:04+00:00 | language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/trainer/test_trainer.py:TrainerIntegrationTest:test_galore_matched_modules', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_predict_with_jit', 'tests/trainer/test_trainer.py:TrainerOptimizerChoiceTest:test_bnb_paged_adam8bit', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_logging_inf_na...
F2P: ['tests/trainer/test_trainer.py:TrainerIntegrationTest:test_accelerator_config_from_yaml', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_accelerate_config_from_dataclass_grad_accum', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_accelerator_config_from_dict', 'tests/trainer/test_trainer.py:Tra...
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/src/transformers/testing_utils.py /testbed/tests/trainer/test_trainer.py
task_category: Feature
is_no_nodes: false | is_func_only: false | is_class_only: false | is_mixed: true | num_func_changes: 2 | num_class_changes: 1 | num_nodes: 3 | is_single_func: false | is_single_class: false
modified_nodes: ["src/transformers/utils/import_utils.py->module->function_definition:is_accelerate_available", "src/transformers/trainer_pt_utils.py->module->class_definition:AcceleratorConfig", "src/transformers/trainer.py->module->class_definition:Trainer->function_definition:create_accelerator_and_postprocess"]
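The decision the proposed `force` flag controls — when to synchronize gradients under accumulation — is a small predicate, sketched below in plain Python. The function name and signature are illustrative stand-ins for the `_do_sync(force=True)` / `sync_each_batch` behavior discussed in the feature request, not Accelerate's actual code.

```python
# Hedged sketch of when to synchronize gradients under accumulation: sync at
# accumulation boundaries, at the last batch of an epoch, or whenever forced
# (the proposed flag); sync_each_batch forces it on every step.
def do_sync(step, accum_steps, is_last_batch=False, sync_each_batch=False):
    force = sync_each_batch or is_last_batch
    return force or ((step + 1) % accum_steps == 0)


# With accum_steps=4, only every 4th step syncs...
assert [do_sync(s, 4) for s in range(8)] == [False, False, False, True] * 2
# ...unless forced by the last batch or by sync_each_batch.
assert do_sync(1, 4, is_last_batch=True)
assert all(do_sync(s, 4, sync_each_batch=True) for s in range(8))
```

Trading memory for bandwidth this way (syncing every batch) is exactly what the issue asks to make configurable.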
repo: huggingface/transformers | pull_number: 29,680 | instance_id: huggingface__transformers-29680 | issue_numbers: ['29551'] | base_commit: 87e2ea33aab6454be3afbd4f0342b518f15bccef
patch:
diff --git a/src/transformers/generation/logits_process.py b/src/transformers/generation/logits_process.py
--- a/src/transformers/generation/logits_process.py
+++ b/src/transformers/generation/logits_process.py
@@ -151,11 +151,13 @@ def __init__(self, min_length: int, eos_token_id: Union[int, List[int]]):
@add_s...
test_patch:
diff --git a/tests/generation/test_logits_process.py b/tests/generation/test_logits_process.py
--- a/tests/generation/test_logits_process.py
+++ b/tests/generation/test_logits_process.py
@@ -157,8 +157,9 @@ def test_temperature_dist_warper(self):
temp_dist_warper_sharper = TemperatureLogitsWarper(temperature=0...
problem_statement:
Contrastive decoding "raw" logits and scores are identical
### System Info
- `transformers` version: 4.38.2
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- Safetensors version: 0.4.2
- Accelerate version: not installed
- Accelerate config: not found...
hints_text:
Hi @dmarx 👋
In theory I agree with the issue -- `scores` should indeed contain the degeneration penalty. However, our API dictates that we return the scores for ALL tokens (and not just the selected tokens at each iteration), and the `contrastive_score` is only computed for the `top_k` tokens. As such, in practice...
created_at: 2024-03-15 15:55:40+00:00 | language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/generation/test_logits_process.py:LogitsProcessorTest:test_early_stop_processor', 'tests/generation/test_logits_process.py:LogitsProcessorTest:test_eta_dist_warper', 'tests/generation/test_logits_process.py:LogitsProcessorTest:test_new_min_length_dist_processor_0', 'tests/generation/test_logits_process.py:Logit...
F2P: ['tests/generation/test_logits_process.py:LogitsProcessorTest:test_remove_nan_inf_logits_processor', 'tests/generation/test_logits_process.py:LogitsProcessorTest:test_no_repeat_ngram_dist_processor', 'tests/generation/test_logits_process.py:LogitsProcessorTest:test_hamming_diversity', 'tests/generation/test_logits_proc...
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report /testbed/tests/generation/test_logits_process.py /testbed/tests/generation/test_utils.py
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 28 | num_class_changes: 0 | num_nodes: 28 | is_single_func: false | is_single_class: false
modified_nodes: ["src/transformers/generation/logits_process.py->module->class_definition:WhisperTimeStampLogitsProcessor->function_definition:__call__", "src/transformers/generation/logits_process.py->module->class_definition:EpsilonLogitsWarper->function_definition:__call__", "src/transformers/generation/logits_process.py->module->c...
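The "degeneration penalty" mentioned in the maintainer's reply is the second term of the contrastive-search selection score from the published method: `(1 - alpha) * model_confidence - alpha * max_similarity`, where the similarity is taken against previously generated hidden states. The arithmetic below is a hedged illustration of that formula with made-up numbers, not transformers' internal code.

```python
# Hedged sketch of the contrastive-search selection score discussed above.
def contrastive_score(prob, max_similarity, alpha):
    # (1 - alpha) weighs model confidence; alpha weighs the degeneration
    # penalty (max cosine similarity to earlier context representations).
    return (1 - alpha) * prob - alpha * max_similarity


# A repetitive candidate (high similarity) loses to a diverse one even though
# its raw probability is higher -- which is why `scores` returned without the
# penalty can disagree with the tokens actually chosen.
repetitive = contrastive_score(0.60, 0.95, alpha=0.6)
diverse = contrastive_score(0.40, 0.10, alpha=0.6)
assert diverse > repetitive
```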
repo: huggingface/transformers | pull_number: 29,688 | instance_id: huggingface__transformers-29688 | issue_numbers: ['29685'] | base_commit: f4dc26d46687f5f4baf3fe64a1d87cafefbeec53
patch:
diff --git a/src/transformers/models/whisper/generation_whisper.py b/src/transformers/models/whisper/generation_whisper.py
--- a/src/transformers/models/whisper/generation_whisper.py
+++ b/src/transformers/models/whisper/generation_whisper.py
@@ -262,7 +262,7 @@ def generate(
synced_gpus: bool = False,
...
test_patch:
diff --git a/tests/models/whisper/test_modeling_whisper.py b/tests/models/whisper/test_modeling_whisper.py
--- a/tests/models/whisper/test_modeling_whisper.py
+++ b/tests/models/whisper/test_modeling_whisper.py
@@ -545,10 +545,19 @@ def test_generate_language(self):
# test language code
model.genera...
problem_statement:
Support mixed-language batches in `WhisperGenerationMixin`
### Feature request
It is currently not possible to mix multiple languages in a single batch when running [Whisper](https://huggingface.co/docs/transformers/en/model_doc/whisper). The `language` argument only accepts a single string (as opposed to a separate...
hints_text: null
created_at: 2024-03-16 10:17:27+00:00 | language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_model_is_small', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_contrastive_generate_low_memory', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_group_beam_search_generate', 'tests/models/whisper/test_model...
F2P: ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_generate_language']
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report /testbed/tests/models/whisper/test_modeling_whisper.py
task_category: Feature
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 5 | num_class_changes: 0 | num_nodes: 5 | is_single_func: false | is_single_class: false
modified_nodes: ["src/transformers/models/whisper/generation_whisper.py->module->class_definition:WhisperGenerationMixin->function_definition:_prepare_decoder_input_ids", "src/transformers/models/whisper/generation_whisper.py->module->class_definition:WhisperGenerationMixin->function_definition:_retrieve_init_tokens->function_definiti...
repo: huggingface/transformers | pull_number: 29,838 | instance_id: huggingface__transformers-29838 | issue_numbers: ['29016'] | base_commit: 76a33a10923ccc1074917f6b6a1e719e626b7dc9
patch:
diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -1048,6 +1048,36 @@ def create_optimizer(self):
return self.optimizer
+ def get_num_trainable_parameters(self):
+ """
+ Get the number of trainable ...
test_patch:
diff --git a/tests/trainer/test_trainer.py b/tests/trainer/test_trainer.py
--- a/tests/trainer/test_trainer.py
+++ b/tests/trainer/test_trainer.py
@@ -3769,3 +3769,41 @@ def test_hyperparameter_search_backends(self):
list(ALL_HYPERPARAMETER_SEARCH_BACKENDS.keys()),
list(HPSearchBackend),
...
problem_statement:
Trainer: Functions to inspect model and optimizer status
### Feature request
In the Hugging Face Trainer, are there any functions to inspect model and optimizer status? Such as: how many parameters require grad, the learning rate of each parameter, which optimizer group each parameter belongs to...
I didn't find any related f...
hints_text:
cc @muellerzr @pacman100
Hi, can I take over the issue?
@CKeibel Sure! No need to claim on an issue, we prioritise based on PRs open, as we find this helps prevent issues from going stale without being addressed. Once you have something opened, feel free to ping me and @muellerzr for review 🤗
Hey, thanks for the rep...
created_at: 2024-03-24 10:58:01+00:00 | language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/trainer/test_trainer.py:TrainerIntegrationTest:test_accelerator_config_from_yaml', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_galore_matched_modules', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_predict_with_jit', 'tests/trainer/test_trainer.py:TrainerOptimizerChoiceTest:test_bnb_...
F2P: ['tests/trainer/test_trainer.py:OptimizerAndModelInspectionTest:test_get_num_trainable_parameters', 'tests/trainer/test_trainer.py:OptimizerAndModelInspectionTest:test_get_learning_rates', 'tests/trainer/test_trainer.py:OptimizerAndModelInspectionTest:test_get_optimizer_group']
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report /testbed/tests/trainer/test_trainer.py
task_category: Feature
is_no_nodes: false | is_func_only: false | is_class_only: false | is_mixed: true | num_func_changes: 3 | num_class_changes: 1 | num_nodes: 4 | is_single_func: false | is_single_class: false
modified_nodes: ["src/transformers/trainer.py->module->class_definition:Trainer->function_definition:get_optimizer_group", "src/transformers/trainer.py->module->class_definition:Trainer", "src/transformers/trainer.py->module->class_definition:Trainer->function_definition:get_learning_rates", "src/transformers/trainer.py->module->class...
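The `get_num_trainable_parameters` helper added by this record's patch is, in PyTorch, the standard one-liner `sum(p.numel() for p in model.parameters() if p.requires_grad)`. To keep the sketch runnable without `torch`, `Param` below is a stand-in exposing the two attributes that idiom relies on:

```python
# Hedged sketch of counting trainable parameters; Param stands in for a torch
# parameter exposing numel() and requires_grad.
class Param:
    def __init__(self, numel, requires_grad=True):
        self._numel = numel
        self.requires_grad = requires_grad

    def numel(self):
        return self._numel


def num_trainable_parameters(params):
    # Only parameters that participate in backprop are counted.
    return sum(p.numel() for p in params if p.requires_grad)


params = [Param(10), Param(5, requires_grad=False), Param(7)]
assert num_trainable_parameters(params) == 17
```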
repo: huggingface/transformers | pull_number: 30,556 | instance_id: huggingface__transformers-30556 | issue_numbers: ['30521'] | base_commit: a3aabc702e1c49243e7b48f22d88362d50e786c5
patch:
diff --git a/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py b/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py
--- a/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py
+++ b/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py
@@ -122,7 +12...
test_patch:
diff --git a/tests/trainer/test_data_collator.py b/tests/trainer/test_data_collator.py
--- a/tests/trainer/test_data_collator.py
+++ b/tests/trainer/test_data_collator.py
@@ -23,6 +23,7 @@
BertTokenizer,
DataCollatorForLanguageModeling,
DataCollatorForPermutationLanguageModeling,
+ DataCollatorForSeq2...
problem_statement:
[BUG] DataCollatorForSeq2Seq with PaddingStrategy.MAX_LENGTH may not pad labels
It seems that when the `MAX_LENGTH` padding strategy is set, the same padding is not applied to the labels.
test case below:
```python
from transformers import DataCollatorForSeq2Seq
from transformers.utils import PaddingStrategy
...
hints_text:
Thanks for raising this issue! Yea, that seems like a valid bug imo. The padding strategy isn't respected with `max_length`.
I'd change these lines:
https://github.com/huggingface/transformers/blob/73014b561d5f88d728e46a57d346f516fefe3f2d/src/transformers/data/data_collator.py#L591-L592
to something like:
```pyth...
created_at: 2024-04-29 21:36:29+00:00 | language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/trainer/test_data_collator.py:NumpyDataCollatorIntegrationTest:test_data_collator_for_language_modeling', 'tests/trainer/test_data_collator.py:DataCollatorIntegrationTest:test_default_with_no_labels', 'tests/trainer/test_data_collator.py:NumpyDataCollatorIntegrationTest:test_default_with_no_labels', 'tests/trai...
F2P: ['tests/trainer/test_data_collator.py:DataCollatorIntegrationTest:test_data_collator_for_seq2seq_with_pt', 'tests/trainer/test_data_collator.py:NumpyDataCollatorIntegrationTest:test_data_collator_for_seq2seq', 'tests/trainer/test_data_collator.py:DataCollatorIntegrationTest:test_data_collator_for_seq2seq_with_lists']
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report /testbed/tests/trainer/test_data_collator.py
task_category: Bug Fix
is_no_nodes: false | is_func_only: false | is_class_only: false | is_mixed: true | num_func_changes: 1 | num_class_changes: 1 | num_nodes: 2 | is_single_func: false | is_single_class: false
modified_nodes: ["examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py->module->class_definition:ModelArguments", "src/transformers/data/data_collator.py->module->class_definition:DataCollatorForSeq2Seq->function_definition:__call__"]
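The fix hinted at in this record's discussion — choose the label target length from `max_length` when the padding strategy is `"max_length"`, otherwise from the longest label in the batch — fits in a few lines of plain Python. The function below is an illustrative sketch of that logic, not the collator's actual code; `-100` is the conventional ignore index for seq2seq labels.

```python
# Hedged sketch of label padding that respects the "max_length" strategy.
def pad_labels(labels, padding, max_length=None, pad_id=-100):
    if padding == "max_length" and max_length is not None:
        target = max_length
    else:
        target = max(len(l) for l in labels)
    return [l + [pad_id] * (target - len(l)) for l in labels]


batch = [[1, 2], [3]]
padded = pad_labels(batch, padding="max_length", max_length=4)
assert padded == [[1, 2, -100, -100], [3, -100, -100, -100]]
# Without the strategy, labels are only padded to the longest in the batch.
assert pad_labels(batch, padding="longest") == [[1, 2], [3, -100]]
```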
repo: huggingface/transformers | pull_number: 30,602 | instance_id: huggingface__transformers-30602 | issue_numbers: ['30601'] | base_commit: c681b58b06f6fb8b5c331f380548af3b4b33f881
patch:
diff --git a/src/transformers/modeling_utils.py b/src/transformers/modeling_utils.py
--- a/src/transformers/modeling_utils.py
+++ b/src/transformers/modeling_utils.py
@@ -3263,8 +3263,8 @@ def from_pretrained(
)
else:
raise EnvironmentError(
- ...
test_patch:
diff --git a/tests/test_modeling_utils.py b/tests/test_modeling_utils.py
--- a/tests/test_modeling_utils.py
+++ b/tests/test_modeling_utils.py
@@ -1001,6 +1001,26 @@ def test_use_safetensors(self):
self.assertTrue(any(f.endswith("safetensors") for f in all_downloaded_files))
self.assertFalse(a...
problem_statement:
`model.safetensors` missing in model file not found error in default case
### System Info
System info isn't super relevant here since the confusion is really just an error message string. I just reproduced in a CPU instance, but this is applicable whenever model loading is needed.
- `transformers` version: 4.4...
hints_text: null
created_at: 2024-05-01 19:16:26+00:00 | language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/test_modeling_utils.py:ModelUtilsTest:test_safetensors_torch_from_torch_sharded', 'tests/test_modeling_utils.py:ModelUtilsTest:test_unexpected_keys_warnings', 'tests/test_modeling_utils.py:AttentionMaskTester:test_torch_compile_fullgraph', 'tests/test_modeling_utils.py:ModelUtilsTest:test_tied_weights_reload', ...
F2P: ['tests/test_modeling_utils.py:ModelUtilsTest:test_use_safetensors']
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report /testbed/tests/test_modeling_utils.py
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 1 | num_class_changes: 0 | num_nodes: 1 | is_single_func: true | is_single_class: false
modified_nodes: ["src/transformers/modeling_utils.py->module->class_definition:PreTrainedModel->function_definition:from_pretrained"]
repo: huggingface/transformers | pull_number: 30,772 | instance_id: huggingface__transformers-30772 | issue_numbers: ['30685'] | base_commit: 04c7c176d7f70ec4b43c8c2a0327ff8d193f5c1d
patch:
diff --git a/src/transformers/models/gptsan_japanese/tokenization_gptsan_japanese.py b/src/transformers/models/gptsan_japanese/tokenization_gptsan_japanese.py
--- a/src/transformers/models/gptsan_japanese/tokenization_gptsan_japanese.py
+++ b/src/transformers/models/gptsan_japanese/tokenization_gptsan_japanese.py
@@ -3...
test_patch:
diff --git a/tests/models/layoutxlm/test_tokenization_layoutxlm.py b/tests/models/layoutxlm/test_tokenization_layoutxlm.py
--- a/tests/models/layoutxlm/test_tokenization_layoutxlm.py
+++ b/tests/models/layoutxlm/test_tokenization_layoutxlm.py
@@ -150,17 +150,40 @@ def test_save_sentencepiece_tokenizer(self) -> None:
...
problem_statement:
`PreTrainedTokenizerFast._batch_encode_plus()` got an unexpected keyword argument `'split_special_tokens'`
### System Info
Transformers version: 4.38.1
Platform: Ubuntu
Python version: 3.10.13
### Who can help?
@ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My ow...
hints_text: null
created_at: 2024-05-13 09:58:38+00:00 | language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/models/udop/test_tokenization_udop.py:UdopTokenizationTest:test_tokenizer_mismatch_warning', 'tests/models/layoutxlm/test_tokenization_layoutxlm.py:LayoutXLMTokenizationTest:test_chat_template_dict_saving', 'tests/models/layoutxlm/test_tokenization_layoutxlm.py:LayoutXLMTokenizationTest:test_tokenizer_mismatch_...
F2P: ['tests/models/layoutxlm/test_tokenization_layoutxlm.py:LayoutXLMTokenizationTest:test_split_special_tokens', 'tests/models/udop/test_tokenization_udop.py:UdopTokenizationTest:test_split_special_tokens']
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/models/layoutxlm/test_tokenization_layoutxlm.py /testbed/tests/models/udop/test_tokenization_udop.py /testbed/tests/test_tokenization_common.py
task_category: Bug Fix
is_no_nodes: false | is_func_only: false | is_class_only: false | is_mixed: true | num_func_changes: 13 | num_class_changes: 1 | num_nodes: 14 | is_single_func: false | is_single_class: false
modified_nodes: ["src/transformers/tokenization_utils_base.py->module->class_definition:PreTrainedTokenizerBase->function_definition:batch_encode_plus", "src/transformers/tokenization_utils_base.py->module->class_definition:PreTrainedTokenizerBase->function_definition:_batch_encode_plus", "src/transformers/tokenization_utils.py->modul...
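The `TypeError` in this record's title is a general failure mode: a base class grows a new keyword and forwards it to an internal hook, but an override of that hook doesn't accept the parameter. The toy classes below are an illustrative reduction of that pattern, not the tokenizer hierarchy itself.

```python
# Hedged sketch of the failure mode: a base method forwards a new keyword to a
# hook that an older override doesn't accept, raising TypeError at call time.
class Base:
    def encode(self, text, split_special_tokens=False):
        return self._encode(text, split_special_tokens=split_special_tokens)

    def _encode(self, text, split_special_tokens=False):
        return (text, split_special_tokens)


class OldSubclass(Base):
    def _encode(self, text):  # missing the new parameter
        return (text, None)


assert Base().encode("hi") == ("hi", False)
try:
    OldSubclass().encode("hi")
    raised = False
except TypeError:
    raised = True
assert raised
```

Accepting `**kwargs` in overridable hooks (or updating every override when the base signature grows, as this record's 14 modified nodes suggest was done) avoids the break.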
repo: huggingface/transformers | pull_number: 30,934 | instance_id: huggingface__transformers-30934 | issue_numbers: ['30922'] | base_commit: a755745546779ae5c42510bc02a859bdac82b3b7
patch:
diff --git a/src/transformers/image_transforms.py b/src/transformers/image_transforms.py
--- a/src/transformers/image_transforms.py
+++ b/src/transformers/image_transforms.py
@@ -14,6 +14,7 @@
# limitations under the License.
import warnings
+from math import ceil
from typing import Iterable, List, Optional, Tuple... | diff --git a/tests/test_image_transforms.py b/tests/test_image_transforms.py
--- a/tests/test_image_transforms.py
+++ b/tests/test_image_transforms.py
@@ -369,6 +369,10 @@ def test_center_crop(self):
self.assertEqual(cropped_image.shape, (300, 260, 3))
self.assertTrue(np.allclose(cropped_image, expect... | `center_crop` outputs wrong sized array if provided with odd-numbered dimensions smaller than requested crop size
### System Info
transformers 4.40.1, python 3.12
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially support...
hints_text: I believe the issue is more accurately caused by odd-numbered difference between original size and new size. Rounding up rather than down when calculating the padding fixes the above test cases.
created_at: 2024-05-21 10:22:57+00:00 | language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/test_image_transforms.py:ImageTransformsTester:test_flip_channel_order', 'tests/test_image_transforms.py:ImageTransformsTester:test_get_resize_output_image_size', 'tests/test_image_transforms.py:ImageTransformsTester:test_resize', 'tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_5_numpy_u...
F2P: ['tests/test_image_transforms.py:ImageTransformsTester:test_center_crop']
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/test_image_transforms.py
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 1 | num_class_changes: 0 | num_nodes: 1 | is_single_func: true | is_single_class: false
modified_nodes: ["src/transformers/image_transforms.py->module->function_definition:center_crop"]
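The hint recorded for this instance — round the padding up, not down — can be sketched for a single axis. The helper below is a hypothetical simplification, not the actual `center_crop` implementation:

```python
from math import ceil

def center_pad_amounts(orig: int, new: int):
    # When the requested size exceeds the original, the image is padded on
    # both sides; flooring the leading pad breaks odd-sized gaps, so the
    # fix rounds it up with ceil.
    before = ceil((new - orig) / 2)
    after = new - orig - before
    return before, after

# odd gap: 300 - 9 = 291, so the pads must split as 146 + 145
before, after = center_pad_amounts(9, 300)
assert before + 9 + after == 300
```

With a floor instead of ceil, the same odd gap yields 145 + 145 and the output ends up one pixel short, which is the shape mismatch reported above.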
repo: huggingface/transformers | pull_number: 30,964 | instance_id: huggingface__transformers-30964 | issue_numbers: ['29625'] | base_commit: 6739e1d261f80caec34b8c8ac7a030907a4f75a2
patch:
diff --git a/src/transformers/models/llama/tokenization_llama_fast.py b/src/transformers/models/llama/tokenization_llama_fast.py
--- a/src/transformers/models/llama/tokenization_llama_fast.py
+++ b/src/transformers/models/llama/tokenization_llama_fast.py
@@ -163,6 +163,7 @@ def __init__(
add_bos_token=add_... | diff --git a/tests/models/llama/test_tokenization_llama.py b/tests/models/llama/test_tokenization_llama.py
--- a/tests/models/llama/test_tokenization_llama.py
+++ b/tests/models/llama/test_tokenization_llama.py
@@ -602,6 +602,10 @@ def test_special_token_special_word(self):
self.assertEqual(decoded_tokens, "he... | `add_prefix_space` won't be respected by Llama tokenizer
### System Info
- `transformers` version: 4.38.2
- Platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.21.3
- Safetensors version: 0.4.2
- Accelerate version: 0.27.2
- Accelerate config: not fo...
hints_text: Hey, I took a peek under the hood and looks like setting `add_prefix_true` is only changing `kwargs[slow]=True` (in [tokenization_llama_fast.py](https://github.com/huggingface/transformers/blob/5011908e10d9592eeb634f4940e0bc130d3edc69/src/transformers/models/llama/tokenization_llama_fast.py#L127C9-L132C1). The `super()...
created_at: 2024-05-22 13:01:20+00:00 | language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/models/llama/test_tokenization_llama.py:LlamaTokenizationTest:test_offsets_mapping', 'tests/models/llama/test_tokenization_llama.py:LlamaTokenizationTest:test_number_of_added_tokens', 'tests/models/llama/test_tokenization_llama.py:LlamaTokenizationTest:test_mask_output', 'tests/models/llama/test_tokenization_ll...
F2P: ['tests/models/llama/test_tokenization_llama.py:LlamaIntegrationTest:test_no_prefix_space']
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/models/llama/test_tokenization_llama.py
task_category: Bug Fix
is_no_nodes: false | is_func_only: false | is_class_only: true | is_mixed: false | num_func_changes: 0 | num_class_changes: 1 | num_nodes: 1 | is_single_func: false | is_single_class: true
modified_nodes: ["src/transformers/models/llama/tokenization_llama_fast.py->module->class_definition:LlamaTokenizerFast->function_definition:__init__"]
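As intuition for what `add_prefix_space` should change, here is a self-contained sketch of SentencePiece-style metaspace pretokenization. This mimics, but is not, the tokenizers library; the function is hypothetical:

```python
def pretokenize(word: str, add_prefix_space: bool) -> str:
    # SentencePiece marks word-leading spaces with the "metaspace"
    # character U+2581; with the prefix space disabled, the first word
    # is encoded without that marker and maps to different token ids.
    text = (" " + word) if add_prefix_space else word
    return text.replace(" ", "\u2581")

assert pretokenize("Hey", True) == "\u2581Hey"
assert pretokenize("Hey", False) == "Hey"
```

The bug above is that the fast tokenizer ignored the flag entirely, so both calls behaved like the first one.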
repo: huggingface/transformers | pull_number: 31,095 | instance_id: huggingface__transformers-31095 | issue_numbers: ['31033'] | base_commit: a564d10afe1a78c31934f0492422700f61a0ffc0
patch:
diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -2306,6 +2306,8 @@ def _inner_training_loop(
self.optimizer.step()
+ self.control = self.callback_handler.on_optimizer_step(args, self... | diff --git a/tests/trainer/test_trainer_callback.py b/tests/trainer/test_trainer_callback.py
--- a/tests/trainer/test_trainer_callback.py
+++ b/tests/trainer/test_trainer_callback.py
@@ -78,6 +78,9 @@ def on_epoch_end(self, args, state, control, **kwargs):
def on_step_begin(self, args, state, control, **kwargs):
... | Add per-parameter gradient logging (and before optimizer step callback)
@RylanSchaeffer
### Feature request
I wish to log (in wandb) the norm of the gradient of each parameter in my transformer. Currently, supplying a max grad norm value will automatically log the gradient norm for the whole model, but there is no ...
hints_text: cc @muellerzr @younesbelkada
Great feature @dhruvbpai - feel free to open a PoC PR and we'll take it from there!
created_at: 2024-05-28 21:30:20+00:00 | language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/trainer/test_trainer_callback.py:TrainerCallbackTest:test_stateful_mixed_callbacks', 'tests/trainer/test_trainer_callback.py:TrainerCallbackTest:test_stateful_duplicate_callbacks', 'tests/trainer/test_trainer_callback.py:TrainerCallbackTest:test_missing_stateful_callback', 'tests/trainer/test_trainer_callback.p...
F2P: ['tests/trainer/test_trainer_callback.py:TrainerCallbackTest:test_event_flow']
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/trainer/test_trainer_callback.py
task_category: Feature
is_no_nodes: false | is_func_only: false | is_class_only: false | is_mixed: true | num_func_changes: 3 | num_class_changes: 2 | num_nodes: 5 | is_single_func: false | is_single_class: false
modified_nodes: ["src/transformers/trainer_callback.py->module->class_definition:TrainerCallback", "src/transformers/trainer_callback.py->module->class_definition:TrainerCallback->function_definition:on_optimizer_step", "src/transformers/trainer_callback.py->module->class_definition:CallbackHandler", "src/transformers/trainer.py->modu...
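The shape of the new hook can be sketched in plain Python. The event name mirrors the patch, but both classes below are hypothetical stand-ins, not the real `Trainer` machinery:

```python
class GradNormLogger:
    """Hypothetical callback: fires after optimizer.step(), before zero_grad()."""
    def __init__(self):
        self.logged = []

    def on_optimizer_step(self, args, state, control, **kwargs):
        # Gradients are still populated at this point, so per-parameter
        # norms could be read here (e.g. from model.named_parameters()).
        self.logged.append(("on_optimizer_step", state["global_step"]))

class CallbackHandler:
    """Minimal dispatcher: forwards the event to every registered callback."""
    def __init__(self, callbacks):
        self.callbacks = callbacks

    def on_optimizer_step(self, args, state, control):
        for cb in self.callbacks:
            cb.on_optimizer_step(args, state, control)
        return control

logger = GradNormLogger()
handler = CallbackHandler([logger])
handler.on_optimizer_step(None, {"global_step": 1}, None)
assert logger.logged == [("on_optimizer_step", 1)]
```

The key design point is the hook's position: after `optimizer.step()` but before gradients are zeroed, which is exactly where per-parameter gradient statistics are still available.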
repo: huggingface/transformers | pull_number: 31,128 | instance_id: huggingface__transformers-31128 | issue_numbers: ['31085'] | base_commit: 2b9e252b16396c926dad0e3c31802b4af8004e93
patch:
diff --git a/src/transformers/optimization.py b/src/transformers/optimization.py
--- a/src/transformers/optimization.py
+++ b/src/transformers/optimization.py
@@ -540,6 +540,9 @@ def scheduler_hook(param):
if name == SchedulerType.INVERSE_SQRT:
return schedule_func(optimizer, num_warmup_steps=num_warmup_s... | diff --git a/tests/optimization/test_optimization.py b/tests/optimization/test_optimization.py
--- a/tests/optimization/test_optimization.py
+++ b/tests/optimization/test_optimization.py
@@ -36,6 +36,7 @@
get_inverse_sqrt_schedule,
get_linear_schedule_with_warmup,
get_polynomial_decay_schedul... | get_wsd_schedule gets passed num_training_steps because not handled
getting:
```
TypeError: get_wsd_schedule() got an unexpected keyword argument 'num_training_steps'
```
because there's not a handling of ```WARMUP_STABLE_DECAY```, get_wsd_schedule gets passed default params.
https://github.com/huggingface/trans...
hints_text: cc @muellerzr
created_at: 2024-05-30 03:10:04+00:00 | language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/optimization/test_optimization.py:ScheduleInitTest:test_schedulers', 'tests/optimization/test_optimization.py:OptimizationTest:test_adam_w', 'tests/optimization/test_optimization.py:OptimizationTest:test_adafactor']
F2P: ['tests/optimization/test_optimization.py:ScheduleInitTest:test_get_scheduler']
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/optimization/test_optimization.py
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 1 | num_class_changes: 0 | num_nodes: 1 | is_single_func: true | is_single_class: false
modified_nodes: ["src/transformers/optimization.py->module->function_definition:get_scheduler"]
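A minimal sketch of the dispatch fix: special-case the warmup-stable-decay schedule before the generic call so `num_training_steps` is never forwarded to a function that does not accept it. All bodies here are illustrative stubs, not the transformers implementations:

```python
def get_wsd_schedule(optimizer, num_warmup_steps, num_stable_steps, num_decay_steps):
    # stub: the real function builds a LambdaLR over three phases
    return ("wsd", num_warmup_steps, num_stable_steps, num_decay_steps)

def get_linear_schedule(optimizer, num_warmup_steps, num_training_steps):
    return ("linear", num_warmup_steps, num_training_steps)

def get_scheduler(name, optimizer, num_warmup_steps, num_training_steps, **specific):
    if name == "warmup_stable_decay":
        # early return: this schedule takes no num_training_steps, so it
        # must not fall through to the default call below
        return get_wsd_schedule(optimizer, num_warmup_steps, **specific)
    return get_linear_schedule(optimizer, num_warmup_steps, num_training_steps)

sched = get_scheduler("warmup_stable_decay", None, 10, 1000,
                      num_stable_steps=500, num_decay_steps=100)
assert sched == ("wsd", 10, 500, 100)
```

Without the early-return branch, the dispatcher falls through and passes `num_training_steps` as a keyword, reproducing the TypeError quoted in the issue.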
repo: huggingface/transformers | pull_number: 31,247 | instance_id: huggingface__transformers-31247 | issue_numbers: ['31246'] | base_commit: 6f40a213eb10e38a5f242d0645519d413d32d798
patch:
diff --git a/src/transformers/cache_utils.py b/src/transformers/cache_utils.py
--- a/src/transformers/cache_utils.py
+++ b/src/transformers/cache_utils.py
@@ -1249,3 +1249,77 @@ def reset(self):
# In-place ops prevent breaking the static address
self.key_cache[layer_idx].zero_()
s... | diff --git a/tests/models/mamba/test_modeling_mamba.py b/tests/models/mamba/test_modeling_mamba.py
--- a/tests/models/mamba/test_modeling_mamba.py
+++ b/tests/models/mamba/test_modeling_mamba.py
@@ -187,11 +187,20 @@ def create_and_check_state_equivalency(self, config, input_ids, *args):
outputs = model(input_... | We Need Compile Support For Mamba!
### Feature request
This feature adds `torch.compile` support for the Mamba architecture
### Motivation
The motivation is that by supporting compile on mamba, we can get faster inference speed and better throughput even if we don't have high performance specified mamba kernels installed...
hints_text: null
created_at: 2024-06-04 22:36:14+00:00 | language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/models/mamba/test_modeling_mamba.py:MambaModelTest:test_training', 'tests/models/mamba/test_modeling_mamba.py:MambaIntegrationTests:test_simple_generate_0_cpu', 'tests/models/mamba/test_modeling_mamba.py:MambaModelTest:test_beam_sample_generate_dict_output', 'tests/models/mamba/test_modeling_mamba.py:MambaModel...
F2P: ['tests/models/mamba/test_modeling_mamba.py:MambaModelTest:test_state_equivalency', 'tests/models/mamba/test_modeling_mamba.py:MambaModelTest:test_mamba_cached_slow_forward_and_backwards']
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/models/mamba/test_modeling_mamba.py
task_category: Feature
is_no_nodes: false | is_func_only: false | is_class_only: false | is_mixed: true | num_func_changes: 13 | num_class_changes: 4 | num_nodes: 17 | is_single_func: false | is_single_class: false
modified_nodes: ["src/transformers/models/mamba/modeling_mamba.py->module->class_definition:MambaForCausalLM->function_definition:prepare_inputs_for_generation", "src/transformers/models/mamba/modeling_mamba.py->module->class_definition:MambaForCausalLM->function_definition:_update_model_kwargs_for_generation", "src/transformers/model...
repo: huggingface/transformers | pull_number: 31,448 | instance_id: huggingface__transformers-31448 | issue_numbers: ['31435'] | base_commit: cd71f9381b86b0dc1fd60e8b87fb5bade35aa0cd
patch:
diff --git a/src/transformers/generation/stopping_criteria.py b/src/transformers/generation/stopping_criteria.py
--- a/src/transformers/generation/stopping_criteria.py
+++ b/src/transformers/generation/stopping_criteria.py
@@ -372,10 +372,11 @@ def _stop_string_create_embedding_vec(token_list, token_indices, stop_strin... | diff --git a/tests/generation/test_stopping_criteria.py b/tests/generation/test_stopping_criteria.py
--- a/tests/generation/test_stopping_criteria.py
+++ b/tests/generation/test_stopping_criteria.py
@@ -208,6 +208,24 @@ def test_stop_string_embedding_vecs(self):
token_lengths = embedding_vec[:, 2].tolist()
... | `stop_strings` Argument in `model.generate()` Results in Exception if Generation Completes Without `stop_string` Being Generated
### System Info
`transformers==4.41.2`
### Who can help?
@gante any thoughts here?
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Task...
hints_text: Might be a duplicate of https://github.com/huggingface/transformers/issues/31435
It looks like this line sets the `tokenizer` to `None` automatically, creates a related but not identical issue.
https://github.com/huggingface/transformers/blob/eed9ed67987/src/transformers/generation/utils.py#L1643
@ahmed-moubtahij...
created_at: 2024-06-17 13:14:50+00:00 | language: Python
Dockerfile:
# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ...
P2P: ['tests/generation/test_stopping_criteria.py:StoppingCriteriaTestCase:test_max_time_criteria', 'tests/generation/test_stopping_criteria.py:StoppingCriteriaTestCase:test_criterias_per_row', 'tests/generation/test_stopping_criteria.py:StoppingCriteriaTestCase:test_stop_string_criteria', 'tests/generation/test_stopping_cr...
F2P: ['tests/generation/test_stopping_criteria.py:StoppingCriteriaTestCase:test_single_letter_stop_string']
F2F: null
test_command: pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/generation/test_stopping_criteria.py
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 1 | num_class_changes: 0 | num_nodes: 1 | is_single_func: true | is_single_class: false
modified_nodes: ["src/transformers/generation/stopping_criteria.py->module->class_definition:StopStringCriteria->function_definition:_stop_string_create_embedding_vec"]
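The intended semantics of `stop_strings` can be stated naively on decoded text. The library instead matches on token ids with precomputed embedding vectors (which is the code path that crashed here), but the meaning is the same; this function is a hypothetical simplification:

```python
def hit_stop_string(decoded: str, stop_strings) -> bool:
    # Generation should halt as soon as the decoded text ends with any
    # of the configured stop strings; if none ever appears, generation
    # simply runs to its normal end - it must not raise.
    return any(decoded.endswith(s) for s in stop_strings)

assert hit_stop_string("The answer is 42.", ["42."])
assert not hit_stop_string("Still generating", ["42."])
```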
repo: langchain-ai/langchain | pull_number: 4,009 | instance_id: langchain-ai__langchain-4009 | issue_numbers: ['3988'] | base_commit: aa383559999b3d6a781c62ed7f8589fef8892879
patch:
diff --git a/langchain/callbacks/openai_info.py b/langchain/callbacks/openai_info.py
--- a/langchain/callbacks/openai_info.py
+++ b/langchain/callbacks/openai_info.py
@@ -4,44 +4,40 @@
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import AgentAction, AgentFinish, LLMResult
-
-def ge... | diff --git a/tests/unit_tests/callbacks/test_openai_info.py b/tests/unit_tests/callbacks/test_openai_info.py
new file mode 100644
--- /dev/null
+++ b/tests/unit_tests/callbacks/test_openai_info.py
@@ -0,0 +1,46 @@
+import pytest
+
+from langchain.callbacks import OpenAICallbackHandler
+from langchain.llms.openai import... | LangChain openAI callback doesn't allow finetuned models
Hi all!
I have an [application](https://github.com/ur-whitelab/BO-LIFT) based on langchain.
A few months ago, I used it with fine-tuned (FT) models.
We added a token usage counter later, and I haven't tried fine-tuned models again since then.
Recently we ...
hints_text: null
created_at: 2023-05-02 22:52:00+00:00 | language: Python
Dockerfile:
FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies and C++ build tools
RUN apt-get update && apt-get install -y \
git \
build-essential \
g++ \
cmake \
    && rm -rf /var/lib/apt...
P2P: ['tests/unit_tests/callbacks/test_openai_info.py:None:test_on_llm_end']
F2P: ['tests/unit_tests/callbacks/test_openai_info.py:None:test_on_llm_end_custom_model']
F2F: null
test_command: pytest /testbed/tests/unit_tests/callbacks/test_openai_info.py -v --json-report
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 3 | num_class_changes: 0 | num_nodes: 3 | is_single_func: false | is_single_class: false
modified_nodes: ["langchain/callbacks/openai_info.py->module->function_definition:get_openai_model_cost_per_1k_tokens", "langchain/callbacks/openai_info.py->module->function_definition:get_openai_token_cost_for_model", "langchain/callbacks/openai_info.py->module->class_definition:OpenAICallbackHandler->function_definition:on_llm_end"]
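A hedged sketch of the cost-lookup fix: map fine-tuned model names onto a base-model rate instead of raising on an unknown name. The table and rates below are illustrative only, not OpenAI's actual prices:

```python
COST_PER_1K = {"davinci": 0.02, "davinci-finetuned": 0.12}  # illustrative rates

def token_cost(model_name: str, tokens: int) -> float:
    # Fine-tuned models report names like "davinci:ft-your-org-2023-...";
    # look up the base model's fine-tuned rate instead of failing.
    base = model_name.split(":")[0]
    key = base + "-finetuned" if ":ft-" in model_name else base
    return COST_PER_1K.get(key, 0.0) * tokens / 1000

assert abs(token_cost("davinci:ft-my-org-2023-05-01", 1000) - 0.12) < 1e-9
assert token_cost("totally-unknown-model", 500) == 0.0  # unknown: no crash
```

The two design choices this encodes are the ones the instance tests exercise: fine-tuned names resolve to a rate, and an unrecognised name degrades to zero cost rather than an exception.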
repo: langchain-ai/langchain | pull_number: 4,103 | instance_id: langchain-ai__langchain-4103 | issue_numbers: ['4087'] | base_commit: 624554a43a1ab0113f3d79ebcbc9e726faecb339
patch:
diff --git a/langchain/document_loaders/csv_loader.py b/langchain/document_loaders/csv_loader.py
--- a/langchain/document_loaders/csv_loader.py
+++ b/langchain/document_loaders/csv_loader.py
@@ -36,13 +36,7 @@ def __init__(
self.file_path = file_path
self.source_column = source_column
self.en... | diff --git a/tests/unit_tests/document_loader/test_csv_loader.py b/tests/unit_tests/document_loader/test_csv_loader.py
--- a/tests/unit_tests/document_loader/test_csv_loader.py
+++ b/tests/unit_tests/document_loader/test_csv_loader.py
@@ -1,4 +1,4 @@
-from pytest_mock import MockerFixture
+from pathlib import Path
f... | CSVLoader TypeError: "delimiter" must be string, not NoneType
it seems that the source code for initializing a CSVLoader doesn't put an appropriate if condition here:
```
def __init__(
self,
file_path: str,
source_column: Optional[str] = None,
        csv_args: Optional[Dict] = None,...
hints_text:
Is there a work around for this?
I'm using it in a directory loader like this:
csv_directory_loader = DirectoryLoader(csv_folder_path, glob="**/*.csv", loader_cls=CSVLoader, show_progress=True)
and it gives me the same error.
> Is there a work around for this?
>
> I'm using it in a directory loader like th...
created_at: 2023-05-04 11:28:14+00:00 | language: Python
Dockerfile:
FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies and C++ build tools
RUN apt-get update && apt-get install -y \
git \
build-essential \
g++ \
cmake \
    && rm -rf /var/lib/apt...
P2P: []
F2P: ['tests/unit_tests/document_loader/test_csv_loader.py:TestCSVLoader:test_csv_loader_load_valid_data', 'tests/unit_tests/document_loader/test_csv_loader.py:TestCSVLoader:test_csv_loader_load_single_row_file', 'tests/unit_tests/document_loader/test_csv_loader.py:TestCSVLoader:test_csv_loader_load_single_column_file', 'te...
F2F: null
test_command: pytest /testbed/tests/unit_tests/document_loader/test_csv_loader.py -v --json-report
task_category: Bug Fix
is_no_nodes: false | is_func_only: false | is_class_only: true | is_mixed: false | num_func_changes: 0 | num_class_changes: 1 | num_nodes: 1 | is_single_func: false | is_single_class: true
modified_nodes: ["langchain/document_loaders/csv_loader.py->module->class_definition:CSVLoader->function_definition:__init__"]
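The guard the issue asks for can be sketched by dropping `None`-valued options before they reach `csv.DictReader`, so the reader's own defaults apply. This is a simplified stand-in, not the `CSVLoader` code:

```python
import csv
import io

def read_rows(text: str, csv_args=None):
    # Passing delimiter=None straight through raises
    # TypeError: "delimiter" must be string, not NoneType,
    # so keep only the options that were explicitly set.
    args = {k: v for k, v in (csv_args or {}).items() if v is not None}
    return list(csv.DictReader(io.StringIO(text), **args))

rows = read_rows("a,b\n1,2\n", csv_args={"delimiter": None})
assert rows == [{"a": "1", "b": "2"}]
```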
repo: langchain-ai/langchain | pull_number: 4,420 | instance_id: langchain-ai__langchain-4420 | issue_numbers: ['4153'] | base_commit: f2150285a495fc530a7707218ea4980c17a170e5
patch:
diff --git a/langchain/document_loaders/whatsapp_chat.py b/langchain/document_loaders/whatsapp_chat.py
--- a/langchain/document_loaders/whatsapp_chat.py
+++ b/langchain/document_loaders/whatsapp_chat.py
@@ -44,7 +44,7 @@ def load(self) -> List[Document]:
)
\]?
[\s-]*
- ... | diff --git a/tests/integration_tests/document_loaders/test_whatsapp_chat.py b/tests/integration_tests/document_loaders/test_whatsapp_chat.py
--- a/tests/integration_tests/document_loaders/test_whatsapp_chat.py
+++ b/tests/integration_tests/document_loaders/test_whatsapp_chat.py
@@ -16,4 +16,5 @@ def test_whatsapp_chat_... | WhatsAppChatLoader doesn't work on chats exported from WhatsApp
### System Info
langchain 0.0.158
Mac OS M1
Python 3.11
### Who can help?
@ey
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Mo...
hints_text:
it also doesn't work on Ukrainian date format, e.g.
```
[05.05.23, 15:45:46] User: text
```
---
I used the following input formats:
```
[05.05.23, 15:48:11] James: Hi here
[11/8/21, 9:41:32 AM] User name: Message 123
1/23/23, 3:19 AM - User 2: Bye!
1/23/23, 3:22_AM - User 1: And let me know if anything ...
created_at: 2023-05-09 21:23:12+00:00 | language: Python
Dockerfile:
FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies and C++ build tools
RUN apt-get update && apt-get install -y \
git \
build-essential \
g++ \
cmake \
    && rm -rf /var/lib/apt...
P2P: []
F2P: ['tests/integration_tests/document_loaders/test_whatsapp_chat.py:None:test_whatsapp_chat_loader']
F2F: null
test_command: pytest /testbed/tests/integration_tests/document_loaders/test_whatsapp_chat.py -v --json-report
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 1 | num_class_changes: 0 | num_nodes: 1 | is_single_func: true | is_single_class: false
modified_nodes: ["langchain/document_loaders/whatsapp_chat.py->module->class_definition:WhatsAppChatLoader->function_definition:load"]
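A hedged sketch of a message pattern permissive enough for the export formats quoted in the hints (bracketed and unbracketed dates, dotted and slashed separators, optional seconds and AM/PM). This is an illustration, not the exact regex shipped in the fix:

```python
import re

LINE = re.compile(
    r"\[?"                                   # optional opening bracket
    r"(\d{1,2}[/.]\d{1,2}[/.]\d{2,4}),?\s"   # date: 1/23/23 or 05.05.23
    r"(\d{1,2}:\d{1,2}(?::\d{1,2})?)"        # time, seconds optional
    r"(?:\s?[AaPp][Mm])?"                    # optional AM/PM
    r"\]?"                                   # optional closing bracket
    r"[\s-]*"                                # " - " or just a space
    r"([^:]+):\s(.+)"                        # sender: message
)

m = LINE.match("[05.05.23, 15:48:11] James: Hi here")
assert m and m.group(3) == "James" and m.group(4) == "Hi here"
assert LINE.match("1/23/23, 3:19 AM - User 2: Bye!")
```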
repo: langchain-ai/langchain | pull_number: 4,579 | instance_id: langchain-ai__langchain-4579 | issue_numbers: ['4167'] | base_commit: 372a5113ff1cce613f78d58c9e79e7c49aa60fac
patch:
diff --git a/langchain/document_loaders/web_base.py b/langchain/document_loaders/web_base.py
--- a/langchain/document_loaders/web_base.py
+++ b/langchain/document_loaders/web_base.py
@@ -68,17 +68,19 @@ def __init__(
"bs4 package not found, please install it with " "`pip install bs4`"
)
... | diff --git a/tests/unit_tests/document_loader/test_web_base.py b/tests/unit_tests/document_loader/test_web_base.py
new file mode 100644
--- /dev/null
+++ b/tests/unit_tests/document_loader/test_web_base.py
@@ -0,0 +1,10 @@
+from langchain.document_loaders.web_base import WebBaseLoader
+
+
+class TestWebBaseLoader:
+ ... | User Agent on WebBaseLoader does not set header_template when passing `header_template`
### System Info
Hi Team,
When using WebBaseLoader and setting header_template the user agent does not get set and sticks with the default python user agent.
```
loader = WebBaseLoader(url, header_template={
    'User-...
hints_text:
possible fix after setting session
```
self.session = requests.Session()
"""Default headers are set by session and spread them with custom headers when needed"""
if header_template is not None:
self.session.headers = {** self.session.headers, ** header_template}
```
created_at: 2023-05-12 13:07:01+00:00 | language: Python
Dockerfile:
FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies and C++ build tools
RUN apt-get update && apt-get install -y \
git \
build-essential \
g++ \
cmake \
    && rm -rf /var/lib/apt...
P2P: []
F2P: ['tests/unit_tests/document_loader/test_web_base.py:TestWebBaseLoader:test_respect_user_specified_user_agent']
F2F: null
test_command: pytest /testbed/tests/unit_tests/document_loader/test_web_base.py -v --json-report
task_category: Bug Fix
is_no_nodes: false | is_func_only: false | is_class_only: true | is_mixed: false | num_func_changes: 0 | num_class_changes: 1 | num_nodes: 1 | is_single_func: false | is_single_class: true
modified_nodes: ["langchain/document_loaders/web_base.py->module->class_definition:WebBaseLoader->function_definition:__init__"]
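The fix quoted in the hints boils down to a dict spread in which user-supplied headers win over session defaults. Restated with plain dicts (the values below are illustrative) so it runs anywhere:

```python
# Stand-ins for requests.Session().headers and the user's template:
session_headers = {"User-Agent": "python-requests/2.x", "Accept": "*/*"}
header_template = {"User-Agent": "Mozilla/5.0 (custom)"}

# Later keys win in a dict spread, so the template overrides defaults
# while unrelated defaults like Accept are preserved.
merged = {**session_headers, **(header_template or {})}
assert merged["User-Agent"] == "Mozilla/5.0 (custom)"
assert merged["Accept"] == "*/*"
```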
repo: langchain-ai/langchain | pull_number: 5,432 | instance_id: langchain-ai__langchain-5432 | issue_numbers: ['5423'] | base_commit: ee57054d0596bf3176c73db64ad38f82e8e6f9a6
patch:
diff --git a/langchain/agents/mrkl/output_parser.py b/langchain/agents/mrkl/output_parser.py
--- a/langchain/agents/mrkl/output_parser.py
+++ b/langchain/agents/mrkl/output_parser.py
@@ -44,7 +44,13 @@ def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
raise OutputParserException(f"Could no... | diff --git a/tests/unit_tests/agents/test_mrkl.py b/tests/unit_tests/agents/test_mrkl.py
--- a/tests/unit_tests/agents/test_mrkl.py
+++ b/tests/unit_tests/agents/test_mrkl.py
@@ -71,6 +71,23 @@ def test_get_action_and_input_newline_after_keyword() -> None:
assert action_input == "ls -l ~/.bashrc.d/\n"
+def tes... | SQLDatabaseToolkit doesn't work well with Postgresql, it will truncate the last double quotation marks in the SQL
### System Info
Langchain: 0.0.184
Python: 3.10.9
Platform: Windows 10 with Jupyter lab
### Who can help?
@vowelparrot
### Information
- [ ] The official example notebooks/scripts
- [X] My own modifi...
hints_text: Could you include the full prefix and query you're using to generate this error please, I'm having a hard time recreating the issue locally? 🙇
created_at: 2023-05-30 10:43:04+00:00 | language: Python
Dockerfile:
FROM python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
curl
# Install Poetry and add to PATH
ENV POETRY_HOME="/opt/poetry" \
POETRY_VERSION=1.4.2
RUN curl -sSL ht...
P2P: ['tests/unit_tests/agents/test_mrkl.py:None:test_get_final_answer_multiline', 'tests/unit_tests/agents/test_mrkl.py:None:test_bad_action_input_line', 'tests/unit_tests/agents/test_mrkl.py:None:test_get_action_and_input', 'tests/unit_tests/agents/test_mrkl.py:None:test_get_action_and_input_newline', 'tests/unit_tests/ag...
F2P: ['tests/unit_tests/agents/test_mrkl.py:None:test_get_action_and_input_sql_query']
F2F: null
test_command: poetry run pytest /testbed/tests/unit_tests/agents/test_mrkl.py -v --json-report-file=test_results.json
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 1 | num_class_changes: 0 | num_nodes: 1 | is_single_func: true | is_single_class: false
modified_nodes: ["langchain/agents/mrkl/output_parser.py->module->class_definition:MRKLOutputParser->function_definition:parse"]
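One way to state the parsing fix: only strip wrapping double quotes from the action input when they form a balanced pair, so a trailing quoted Postgres identifier at the end of a SELECT survives. A simplified sketch, not the shipped `MRKLOutputParser` logic:

```python
def clean_tool_input(tool_input: str) -> str:
    s = tool_input.strip()
    # Strip quotes only when they are clearly just wrappers around the
    # whole input; a quote that is part of the SQL must stay.
    if s.startswith('"') and s.endswith('"') and s.count('"') == 2:
        s = s.strip('"')
    return s

assert clean_tool_input('"search terms"') == "search terms"
assert clean_tool_input('SELECT x FROM "my_table"') == 'SELECT x FROM "my_table"'
```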
repo: langchain-ai/langchain | pull_number: 5,450 | instance_id: langchain-ai__langchain-5450 | issue_numbers: ['3605'] | base_commit: 64b4165c8d9b8374295d4629ef57d4d58e9af7c8
patch:
diff --git a/langchain/embeddings/huggingface.py b/langchain/embeddings/huggingface.py
--- a/langchain/embeddings/huggingface.py
+++ b/langchain/embeddings/huggingface.py
@@ -25,7 +25,12 @@ class HuggingFaceEmbeddings(BaseModel, Embeddings):
model_name = "sentence-transformers/all-mpnet-base-v2"
... | diff --git a/tests/integration_tests/embeddings/test_huggingface.py b/tests/integration_tests/embeddings/test_huggingface.py
--- a/tests/integration_tests/embeddings/test_huggingface.py
+++ b/tests/integration_tests/embeddings/test_huggingface.py
@@ -26,7 +26,8 @@ def test_huggingface_embedding_query() -> None:
def te... | Embeddings normalization and similarity metric
I am new to using Langchain and attempting to make it work with a locally running LLM (Alpaca) and Embeddings model (Sentence Transformer). When configuring the sentence transformer model with `HuggingFaceEmbeddings` no arguments can be passed to the encode method of the m...
hints_text: null
created_at: 2023-05-30 16:11:31+00:00 | language: Python
Dockerfile:
FROM python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
curl
# Install Poetry and add to PATH
ENV POETRY_HOME="/opt/poetry" \
POETRY_VERSION=1.4.2
RUN curl -sSL ht...
P2P: ['tests/integration_tests/embeddings/test_huggingface.py:None:test_huggingface_instructor_embedding_documents', 'tests/integration_tests/embeddings/test_huggingface.py:None:test_huggingface_embedding_documents', 'tests/integration_tests/embeddings/test_huggingface.py:None:test_huggingface_embedding_query', 'tests/integ...
F2P: ['tests/integration_tests/embeddings/test_huggingface.py:None:test_huggingface_instructor_embedding_normalize']
F2F: null
test_command: poetry run pytest /testbed/tests/integration_tests/embeddings/test_huggingface.py -v --json-report-file=test_results.json
task_category: Feature
is_no_nodes: false | is_func_only: false | is_class_only: false | is_mixed: true | num_func_changes: 2 | num_class_changes: 2 | num_nodes: 4 | is_single_func: false | is_single_class: false
modified_nodes: ["langchain/embeddings/huggingface.py->module->class_definition:HuggingFaceInstructEmbeddings->function_definition:embed_documents", "langchain/embeddings/huggingface.py->module->class_definition:HuggingFaceEmbeddings", "langchain/embeddings/huggingface.py->module->class_definition:HuggingFaceInstructEmbeddings", "lang...
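The `normalize_embeddings` encode option this instance exposes amounts to L2 normalization, after which a plain dot product equals cosine similarity. A self-contained sketch of that transform:

```python
from math import sqrt

def normalize(vec):
    # divide by the L2 norm; leave the zero vector untouched
    n = sqrt(sum(x * x for x in vec))
    return [x / n for x in vec] if n else vec

v = normalize([3.0, 4.0])
assert abs(sum(x * x for x in v) - 1.0) < 1e-9  # unit length
```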
repo: langchain-ai/langchain | pull_number: 5,584 | instance_id: langchain-ai__langchain-5584 | issue_numbers: ['5582'] | base_commit: 4c572ffe959957b515528a9036b374f56cef027f
patch:
diff --git a/langchain/vectorstores/chroma.py b/langchain/vectorstores/chroma.py
--- a/langchain/vectorstores/chroma.py
+++ b/langchain/vectorstores/chroma.py
@@ -356,11 +356,11 @@ def update_document(self, document_id: str, document: Document) -> None:
raise ValueError(
"For update, you m... | diff --git a/tests/integration_tests/vectorstores/test_chroma.py b/tests/integration_tests/vectorstores/test_chroma.py
--- a/tests/integration_tests/vectorstores/test_chroma.py
+++ b/tests/integration_tests/vectorstores/test_chroma.py
@@ -3,7 +3,10 @@
from langchain.docstore.document import Document
from langchain.... | Chroma.update_document bug
### System Info
update_document only embeds a single document, but the single page_content string is cast to a list before embedding, resulting in a per-character embedding not a per-document embedding.
https://github.com/hwchase17/langchain/blob/4c572ffe959957b515528a9036b374f56cef027f/l...
hints_text: null
created_at: 2023-06-01 23:21:18+00:00 | language: Python
Dockerfile:
FROM python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
curl
# Install Poetry and add to PATH
ENV POETRY_HOME="/opt/poetry" \
POETRY_VERSION=1.4.2
RUN curl -sSL ht...
P2P: ['tests/integration_tests/vectorstores/test_chroma.py:None:test_chroma_with_persistence', 'tests/integration_tests/vectorstores/test_chroma.py:None:test_chroma_with_include_parameter', 'tests/integration_tests/vectorstores/test_chroma.py:None:test_chroma_async', 'tests/integration_tests/vectorstores/test_chroma.py:None...
F2P: ['tests/integration_tests/vectorstores/test_chroma.py:None:test_chroma_update_document', 'tests/integration_tests/vectorstores/test_chroma.py:None:test_chroma']
F2F: null
test_command: poetry run pytest /testbed/tests/integration_tests/vectorstores/test_chroma.py -v --json-report-file=test_results.json
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 1 | num_class_changes: 0 | num_nodes: 1 | is_single_func: true | is_single_class: false
modified_nodes: ["langchain/vectorstores/chroma.py->module->class_definition:Chroma->function_definition:update_document"]
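The bug described above is the difference between `list(text)` and `[text]`: casting a string to a list splits it into characters. A tiny stand-in embedder makes the consequence visible:

```python
def fake_embed(texts):
    # stand-in embedder: returns one vector per input string
    return [[float(len(t))] for t in texts]

page_content = "new text"
per_char = fake_embed(list(page_content))  # list("new text") -> 8 one-char strings
per_doc = fake_embed([page_content])       # the intended call shape

assert len(per_char) == len(page_content)  # 8 per-character "embeddings" - the bug
assert len(per_doc) == 1                   # 1 per-document embedding - the fix
```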
repo: langchain-ai/langchain | pull_number: 5,625 | instance_id: langchain-ai__langchain-5625 | issue_numbers: ['5614'] | base_commit: d0d89d39efb5f292f72e70973f3b70c4ca095047
patch:
diff --git a/langchain/text_splitter.py b/langchain/text_splitter.py
--- a/langchain/text_splitter.py
+++ b/langchain/text_splitter.py
@@ -30,7 +30,9 @@
TS = TypeVar("TS", bound="TextSplitter")
-def _split_text(text: str, separator: str, keep_separator: bool) -> List[str]:
+def _split_text_with_regex(
+ text: s... | diff --git a/tests/unit_tests/test_text_splitter.py b/tests/unit_tests/test_text_splitter.py
--- a/tests/unit_tests/test_text_splitter.py
+++ b/tests/unit_tests/test_text_splitter.py
@@ -275,6 +275,12 @@ def test_rst_code_splitter() -> None:
- Item 1
- Item 2
- Item 3
+
+Comment
+*******
+Not a comment
+
+.. This is... | MarkdownTextSplitter: multiple repeat at position 4 (line 3, column 2)
### System Info
langchain 0.0.188
python 3.8.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] ...
hints_text: null
created_at: 2023-06-02 18:06:25+00:00 | language: Python
Dockerfile:
FROM python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
curl
# Install Poetry and add to PATH
ENV POETRY_HOME="/opt/poetry" \
POETRY_VERSION=1.4.2
RUN curl -sSL ht...
P2P: ['tests/unit_tests/test_text_splitter.py:None:test_merge_splits', 'tests/unit_tests/test_text_splitter.py:None:test_swift_code_splitter', 'tests/unit_tests/test_text_splitter.py:None:test_iterative_text_splitter', 'tests/unit_tests/test_text_splitter.py:None:test_character_text_splitter_short_words_first', 'tests/unit_...
F2P: ['tests/unit_tests/test_text_splitter.py:None:test_rst_code_splitter']
F2F: null
test_command: poetry run pytest /testbed/tests/unit_tests/test_text_splitter.py -v --json-report-file=test_results.json
task_category: Bug Fix
is_no_nodes: false | is_func_only: true | is_class_only: false | is_mixed: false | num_func_changes: 5 | num_class_changes: 0 | num_nodes: 5 | is_single_func: false | is_single_class: false
modified_nodes: ["langchain/text_splitter.py->module->function_definition:_split_text", "langchain/text_splitter.py->module->class_definition:RecursiveCharacterTextSplitter->function_definition:_split_text", "langchain/text_splitter.py->module->function_definition:_split_text_with_regex", "langchain/text_splitter.py->module->class_def...
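The usual cure for this class of error is `re.escape` on literal separators, so that markdown/RST strings like `***` do not act as regex quantifiers (the "multiple repeat" error in the issue title). A simplified split-and-keep-separator sketch:

```python
import re

def split_keep_separator(text: str, separator: str):
    # Without re.escape, re.split("(***)", ...) raises
    # re.error: multiple repeat
    pattern = f"({re.escape(separator)})"
    return [s for s in re.split(pattern, text) if s]

assert split_keep_separator("alpha***beta", "***") == ["alpha", "***", "beta"]
```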
repo: langchain-ai/langchain | pull_number: 6,456 | instance_id: langchain-ai__langchain-6456 | issue_numbers: ['6431'] | base_commit: 1300a4bc8cf5ebd30c77668473e178bfb24b6679
patch:
diff --git a/langchain/prompts/chat.py b/langchain/prompts/chat.py
--- a/langchain/prompts/chat.py
+++ b/langchain/prompts/chat.py
@@ -168,6 +168,8 @@ def validate_input_variables(cls, values: dict) -> dict:
for message in messages:
if isinstance(message, BaseMessagePromptTemplate):
... | diff --git a/tests/unit_tests/prompts/test_chat.py b/tests/unit_tests/prompts/test_chat.py
--- a/tests/unit_tests/prompts/test_chat.py
+++ b/tests/unit_tests/prompts/test_chat.py
@@ -162,3 +162,31 @@ def test_infer_variables() -> None:
messages = [HumanMessagePromptTemplate.from_template("{foo}")]
prompt = Ch... | ChatPromptTemplate with partial variables is giving validation error
### System Info
langchain-0.0.205, python3.10
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models... | null | 2023-06-20 01:13:27+00:00 | Python | FROM python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
gcc \
python3-dev \
curl \
&& rm -rf /var/lib/apt/lists/*
# Install poetry and add to PATH
RUN curl -sS... | ['tests/unit_tests/prompts/test_chat.py:None:test_create_chat_prompt_template_from_template', 'tests/unit_tests/prompts/test_chat.py:None:test_chat_invalid_input_variables_extra', 'tests/unit_tests/prompts/test_chat.py:None:test_infer_variables', 'tests/unit_tests/prompts/test_chat.py:None:test_chat_prompt_template', '... | ['tests/unit_tests/prompts/test_chat.py:None:test_chat_valid_with_partial_variables', 'tests/unit_tests/prompts/test_chat.py:None:test_chat_valid_infer_variables'] | null | poetry run pytest /testbed/tests/unit_tests/prompts/test_chat.py -v --json-report | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["langchain/prompts/chat.py->module->class_definition:ChatPromptTemplate->function_definition:validate_input_variables"] |
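The validation bug in this record is that variables already bound as partials were still demanded as inputs. A minimal sketch of the corrected check (an illustrative helper, not LangChain's actual validator):

```python
def required_input_variables(inferred, declared, partials):
    # Variables with a partial value bound are no longer required inputs.
    required = set(inferred) - set(partials)
    missing = required - set(declared)
    if missing:
        raise ValueError(f"Missing input variables: {sorted(missing)}")
    return sorted(required)

# "name" is supplied as a partial, so only "question" must be declared.
print(required_input_variables(
    inferred=["name", "question"],
    declared=["question"],
    partials={"name": "Ada"},
))  # ['question']
```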
langchain-ai/langchain | 6,483 | langchain-ai__langchain-6483 | ['5456'] | 10adec5f1bc1babbd7f5cbea8290d8b1e62554ba | diff --git a/langchain/tools/base.py b/langchain/tools/base.py
--- a/langchain/tools/base.py
+++ b/langchain/tools/base.py
@@ -82,7 +82,7 @@ def _get_filtered_args(
"""Get the arguments from a function's signature."""
schema = inferred_model.schema()["properties"]
valid_keys = signature(func).parameters
... | diff --git a/tests/unit_tests/tools/test_base.py b/tests/unit_tests/tools/test_base.py
--- a/tests/unit_tests/tools/test_base.py
+++ b/tests/unit_tests/tools/test_base.py
@@ -19,6 +19,7 @@
StructuredTool,
ToolException,
)
+from tests.unit_tests.callbacks.fake_callback_handler import FakeCallbackHandler
... | Tools: Inconsistent callbacks/run_manager parameter
### System Info
MacOS Ventura 13.3.1 (a)
python = "^3.9"
langchain = "0.0.185"
### Who can help?
@agola11 @vowelparrot
### Related Components
- Agents / Agent Executors
- Tools / Toolkits
- Callbacks/Tracing
### Reproduction
I want to use the... | I will gladly help fix this issue :)

Thanks for raising! I can see how it is confusing that subclasses of the `BaseTool` expect a `run_manager` argument whereas instantiations of the `Tool` or `StructuredTool` using the `{Tool|StructuredTool}.from_function()` expect a `callback` argument.
We won't break backward... | 2023-06-20 15:53:03+00:00 | Python | FROM python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
gcc \
python3-dev \
curl \
&& rm -rf /var/lib/apt/lists/*
# Install poetry and add to PATH
RUN curl -sS... | ['tests/unit_tests/tools/test_base.py:None:test_tool_partial_function_args_schema', 'tests/unit_tests/tools/test_base.py:None:test_async_exception_handling_non_tool_exception', 'tests/unit_tests/tools/test_base.py:None:test_structured_tool_from_function', 'tests/unit_tests/tools/test_base.py:None:test_exception_handlin... | ['tests/unit_tests/tools/test_base.py:None:test_structured_tool_from_function_with_run_manager'] | null | pytest /testbed/tests/unit_tests/tools/test_base.py -v --json-report --json-report-file=report.json --override-ini=addopts= | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["langchain/tools/base.py->module->function_definition:_get_filtered_args", "langchain/tools/base.py->module->function_definition:create_schema_from_function"] |
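The inconsistency in this record is that framework-injected parameters (`run_manager`, `callbacks`) leaked into the tool's inferred argument schema; the record's `modified_nodes` point at `_get_filtered_args`. A dependency-free sketch of the filtering idea (names are illustrative):

```python
import inspect

FILTERED_ARGS = ("run_manager", "callbacks")

def tool_args(func):
    # Infer a tool's user-facing arguments from its signature, hiding
    # the framework-injected parameters named in the issue.
    params = inspect.signature(func).parameters
    return [name for name in params if name not in FILTERED_ARGS]

def search(query: str, max_results: int = 5, callbacks=None) -> str:
    return f"searching {query!r}"

print(tool_args(search))  # ['query', 'max_results']
```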
langchain-ai/langchain | 6,765 | langchain-ai__langchain-6765 | ['6756'] | ba622764cb7ccf4667878289f959857348ef8c19 | diff --git a/langchain/agents/initialize.py b/langchain/agents/initialize.py
--- a/langchain/agents/initialize.py
+++ b/langchain/agents/initialize.py
@@ -51,7 +51,7 @@ def initialize_agent(
f"Got unknown agent type: {agent}. "
f"Valid types are: {AGENT_TO_CLASS.keys()}."
... | diff --git a/tests/unit_tests/agents/test_initialize.py b/tests/unit_tests/agents/test_initialize.py
new file mode 100644
--- /dev/null
+++ b/tests/unit_tests/agents/test_initialize.py
@@ -0,0 +1,23 @@
+"""Test the initialize module."""
+
+from langchain.agents.agent_types import AgentType
+from langchain.agents.initia... | Recent tags change causes AttributeError: 'str' object has no attribute 'value' on initialize_agent call
### System Info
- Langchain: 0.0.215
- Platform: ubuntu
- Python 3.10.12
### Who can help?
@vowelparrot
https://github.com/hwchase17/langchain/blob/d84a3bcf7ab3edf8fe1d49083e066d51c9b5f621/langchain/agents/... | Yes, I got this error too. Apparently we have to use AgentType.ZERO_SHOT_REACT_DESCRIPTION; the old way of using plain strings has changed. At the very least they could have shown a clear exception message instead of this jargon.
Agreed! Same for me!
Will land a fix. Thanks for raising this! | 2023-06-26 15:12:34+00:00 | Python | FROM python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
gcc \
python3-dev \
curl \
&& rm -rf /var/lib/apt/lists/*
# Install poetry and add to PATH
RUN curl -sS... | [] | ['tests/unit_tests/agents/test_initialize.py:None:test_initialize_agent_with_str_agent_type'] | null | pytest /testbed/tests/unit_tests/agents/test_initialize.py -v --json-report --json-report-file=report.json --override-ini=addopts= | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["langchain/agents/initialize.py->module->function_definition:initialize_agent"] |
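The traceback in this record (`AttributeError: 'str' object has no attribute 'value'`) comes from assuming the agent type is always an enum member. A small sketch of normalizing an enum-or-string value (the trimmed enum here stands in for LangChain's `AgentType`):

```python
from enum import Enum

class AgentType(str, Enum):  # trimmed stand-in for the real enum
    ZERO_SHOT_REACT_DESCRIPTION = "zero-shot-react-description"

def agent_type_key(agent):
    # Accept either an AgentType member or its plain-string value,
    # avoiding AttributeError: 'str' object has no attribute 'value'.
    return agent.value if isinstance(agent, AgentType) else str(agent)

print(agent_type_key(AgentType.ZERO_SHOT_REACT_DESCRIPTION))  # zero-shot-react-description
print(agent_type_key("zero-shot-react-description"))          # zero-shot-react-description
```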
langchain-ai/langchain | 7,653 | langchain-ai__langchain-7653 | ['7652'] | a673a51efa3e03aaa7c8c7e0004dc5ff9c536f2e | diff --git a/langchain/cache.py b/langchain/cache.py
--- a/langchain/cache.py
+++ b/langchain/cache.py
@@ -180,6 +180,7 @@ def clear(self, **kwargs: Any) -> None:
"""Clear cache."""
with Session(self.engine) as session:
session.query(self.cache_schema).delete()
+ session.commit... | diff --git a/tests/unit_tests/test_cache.py b/tests/unit_tests/test_cache.py
--- a/tests/unit_tests/test_cache.py
+++ b/tests/unit_tests/test_cache.py
@@ -139,6 +139,26 @@ def test_chat_model_caching_params() -> None:
)
+def test_llm_cache_clear() -> None:
+ prompt = "How are you?"
+ response = "Test... | SQLite LLM cache clear does not take effect
### System Info
Langchain version: 0.0.231
Python version: 3.10.11
Bug:
There is an issue when clearing the LLM cache for SQLAlchemy-based caches.
langchain.llm_cache.clear() does not clear the cache for SQLite LLM cache.
Reason: it doesn't commit the deletion datab... | null | 2023-07-13 12:40:16+00:00 | Python | FROM python:3.8.1-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies including PDF-related packages
RUN apt-get update && apt-get install -y \
git \
curl \
build-essential \
poppler-utils \
python3-pip \
libgl1-mesa-... | ['tests/unit_tests/test_cache.py:None:test_chat_model_caching_params[InMemoryCache]', 'tests/unit_tests/test_cache.py:None:test_old_sqlite_llm_caching[InMemoryCache]', 'tests/unit_tests/test_cache.py:None:test_chat_model_caching[get_sqlite_cache]', 'tests/unit_tests/test_cache.py:None:test_chat_model_caching_params[get... | ['tests/unit_tests/test_cache.py:None:test_llm_cache_clear[get_sqlite_cache]'] | null | pytest /testbed/tests/unit_tests/test_cache.py -v --capture=no --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["langchain/cache.py->module->class_definition:SQLAlchemyCache->function_definition:clear"] |
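The patch above is a one-line fix: commit the delete before the session closes. The same pattern with the standard-library sqlite3 module (raw SQL here stands in for the SQLAlchemy session in the patch):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE llm_cache (prompt TEXT PRIMARY KEY, response TEXT)")
conn.execute("INSERT INTO llm_cache VALUES ('How are you?', 'Test response')")
conn.commit()

def clear(connection):
    # Mirrors SQLAlchemyCache.clear after the fix: the delete only
    # persists once the transaction is committed; the missing commit()
    # was the bug reported in the issue.
    connection.execute("DELETE FROM llm_cache")
    connection.commit()

clear(conn)
print(conn.execute("SELECT COUNT(*) FROM llm_cache").fetchone()[0])  # 0
```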
langchain-ai/langchain | 14,350 | langchain-ai__langchain-14350 | ['14342'] | 7bdfc43766e72e4b67512bd85119b1c797035b86 | diff --git a/docs/docs/modules/data_connection/retrievers/multi_vector.ipynb b/docs/docs/modules/data_connection/retrievers/multi_vector.ipynb
--- a/docs/docs/modules/data_connection/retrievers/multi_vector.ipynb
+++ b/docs/docs/modules/data_connection/retrievers/multi_vector.ipynb
@@ -143,7 +143,7 @@
{
"dat... | diff --git a/libs/langchain/tests/unit_tests/indexes/test_indexing.py b/libs/langchain/tests/unit_tests/indexes/test_indexing.py
--- a/libs/langchain/tests/unit_tests/indexes/test_indexing.py
+++ b/libs/langchain/tests/unit_tests/indexes/test_indexing.py
@@ -80,7 +80,7 @@ def add_documents( # type: ignore
*,
... | Error:
### System Info
I tried this example code
```
from langchain.retrievers import ParentDocumentRetriever
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.storage import InMemoryStore
# This text splitter is used to create the parent documents
parent_splitter = Recursive... | _🤖_
Hello @dtthanh1971! I'm [Dosu](https://dosu.dev), a bot here to assist you. I can help with solving bugs, answering questions, and guiding you to become a contributor. While we wait for a human maintainer, feel free to ask me anything. I'm here to help you!
Based on the error mes... | 2023-12-06 16:47:14+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
g++ \
git \
postgresql-server-dev-all \
&& rm -rf /var/lib/apt... | ['tests/unit_tests/indexes/test_indexing.py:None:test_indexing_with_no_docs', 'tests/unit_tests/indexes/test_indexing.py:None:test_compatible_vectorstore_documentation', 'tests/unit_tests/indexes/test_indexing.py:None:test_index_simple_delete_full', 'tests/unit_tests/indexes/test_indexing.py:None:test_incremental_fails... | ['libs/langchain/tests/unit_tests/retrievers/test_multi_vector.py:None:test_multi_vector_retriever_initialization', 'libs/langchain/tests/unit_tests/retrievers/test_parent_document.py:None:test_parent_document_retriever_initialization'] | null | pytest /testbed/libs/langchain/tests/unit_tests/indexes/test_indexing.py /testbed/libs/langchain/tests/unit_tests/retrievers/test_multi_vector.py /testbed/libs/langchain/tests/unit_tests/retrievers/test_parent_document.py -v --junitxml=test-results.xml | Bug Fix | false | false | false | true | 1 | 2 | 3 | false | false | ["libs/langchain/langchain/retrievers/multi_vector.py->module->class_definition:MultiVectorRetriever", "libs/langchain/langchain/retrievers/multi_vector.py->module->class_definition:MultiVectorRetriever->function_definition:__init__", "libs/langchain/langchain/retrievers/multi_vector.py->module->class_definition:MultiV... |
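The failing example in this record is the parent/child indexing flow: parent documents go into a key-value docstore while split children are indexed carrying the parent's id. A dependency-free sketch of that flow (plain dicts and lists stand in for LangChain's stores; the `doc_id` key and helper name are illustrative):

```python
import uuid

def index_parent_documents(parents, split, docstore, child_index, id_key="doc_id"):
    # Each parent gets an id; its chunks carry that id so retrieval can
    # map a matching child chunk back to the full parent document.
    for parent in parents:
        doc_id = str(uuid.uuid4())
        docstore[doc_id] = parent
        for chunk in split(parent):
            child_index.append({"text": chunk, id_key: doc_id})

docstore, child_index = {}, []
index_parent_documents(
    ["alpha beta gamma delta"],
    split=lambda text: [text[i:i + 11] for i in range(0, len(text), 11)],
    docstore=docstore,
    child_index=child_index,
)
print(len(docstore), len(child_index))  # 1 2
```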
langchain-ai/langchain | 19,331 | langchain-ai__langchain-19331 | ['19276'] | 5fc7bb01e9d6398452d0a7b4a50ce234408ca99c | diff --git a/libs/core/langchain_core/language_models/llms.py b/libs/core/langchain_core/language_models/llms.py
--- a/libs/core/langchain_core/language_models/llms.py
+++ b/libs/core/langchain_core/language_models/llms.py
@@ -115,17 +115,41 @@ def _before_sleep(retry_state: RetryCallState) -> None:
)
+def _re... | diff --git a/libs/core/tests/unit_tests/language_models/llms/test_cache.py b/libs/core/tests/unit_tests/language_models/llms/test_cache.py
new file mode 100644
--- /dev/null
+++ b/libs/core/tests/unit_tests/language_models/llms/test_cache.py
@@ -0,0 +1,105 @@
+from typing import Any, Dict, Optional, Tuple
+
+from langc... | langchain-core: Allow passing local cache to language models
### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
# Goal
Allow instantiating language models with specific caches provided as an init parameter. This will b... | i want try.
Is this test case runnable? If it works fine, what exactly is this issue?
https://github.com/langchain-ai/langchain/blob/40f846e65da37a1c00d72da9ea64ebb0f295b016/libs/core/tests/unit_tests/language_models/chat_models/test_cache.py#L43 | 2024-03-20 11:56:35+00:00 | Python | FROM public.ecr.aws/ubuntu/ubuntu:22.04
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
build-essential \
python3 \
python3-dev \
python3-pip \
software-prope... | [] | ['libs/core/tests/unit_tests/language_models/llms/test_cache.py:None:test_local_cache_generate_async', 'libs/core/tests/unit_tests/language_models/llms/test_cache.py:None:test_local_cache_generate_sync', 'libs/core/tests/unit_tests/language_models/llms/test_cache.py:None:test_no_cache_generate_sync', 'libs/core/tests/u... | null | python3 -m pytest /testbed/libs/core/tests/unit_tests/language_models/llms/test_cache.py -v --override-ini=addopts= --junitxml=test-results.xml | Feature | false | true | false | false | 7 | 0 | 7 | false | false | ["libs/core/langchain_core/language_models/llms.py->module->function_definition:aget_prompts", "libs/core/langchain_core/language_models/llms.py->module->class_definition:BaseLLM->function_definition:agenerate", "libs/core/langchain_core/language_models/llms.py->module->function_definition:get_prompts", "libs/core/lang... |
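The goal stated in this record is letting a model own its cache instead of always using the global one. The resolution rules sketched below (explicit instance wins, None/True fall back to the global cache, False disables caching) are an assumption based on LangChain's documented cache semantics, not a copy of the patch:

```python
GLOBAL_CACHE = {}  # dict stands in for a real cache object

def resolve_cache(cache):
    # Identity checks ("is") matter here: an empty local cache is falsy,
    # so truthiness tests would wrongly fall back to the global cache.
    if cache is False:
        return None                # caching disabled for this model
    if cache is None or cache is True:
        return GLOBAL_CACHE        # fall back to the global cache
    return cache                   # explicit per-model cache wins

local_cache = {}
print(resolve_cache(local_cache) is local_cache)  # True
print(resolve_cache(None) is GLOBAL_CACHE)        # True
print(resolve_cache(False) is None)               # True
```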
langchain-ai/langchain | 19,717 | langchain-ai__langchain-19717 | ['19646'] | 239dd7c0c03d0430c55c2c41cf56cf0dd537199b | diff --git a/libs/core/langchain_core/output_parsers/json.py b/libs/core/langchain_core/output_parsers/json.py
--- a/libs/core/langchain_core/output_parsers/json.py
+++ b/libs/core/langchain_core/output_parsers/json.py
@@ -137,16 +137,24 @@ def parse_json_markdown(
Returns:
The parsed JSON object as a Pyt... | diff --git a/libs/core/tests/unit_tests/output_parsers/test_json.py b/libs/core/tests/unit_tests/output_parsers/test_json.py
--- a/libs/core/tests/unit_tests/output_parsers/test_json.py
+++ b/libs/core/tests/unit_tests/output_parsers/test_json.py
@@ -69,6 +69,10 @@
}
```"""
+JSON_WITH_PART_MARKDOWN_CODE_BLOCK = """... | JsonOutputParser fails if a json value contains ``` inside it.
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure th... | Let me see. | 2024-03-28 15:50:23+00:00 | Python | FROM public.ecr.aws/ubuntu/ubuntu:22.04
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
build-essential \
python3 \
python3-dev \
python3-pip \
software-prope... | ['tests/unit_tests/output_parsers/test_json.py:None:test_parse_partial_json[json_strings6]', 'tests/unit_tests/output_parsers/test_json.py:None:test_partial_text_json_output_parser', 'tests/unit_tests/output_parsers/test_json.py:None:test_parse_json_with_code_blocks_and_newlines', 'tests/unit_tests/output_parsers/test_... | ['libs/core/tests/unit_tests/output_parsers/test_json.py:None:test_parse_json_with_part_code_blocks'] | null | python3 -m pytest /testbed/libs/core/tests/unit_tests/output_parsers/test_json.py -v --override-ini=addopts= --junitxml=test-results.xml | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["libs/core/langchain_core/output_parsers/json.py->module->function_definition:_parse_json", "libs/core/langchain_core/output_parsers/json.py->module->function_definition:parse_json_markdown"] |
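The bug in this record is a JSON value containing ``` breaking the fence extraction. A sketch of the fixed order of operations (try the raw string first, then strip a fence greedily so backticks inside a string value no longer truncate the match; not LangChain's exact implementation):

```python
import json
import re

def parse_json_markdown(text):
    # Try the raw string first; only fall back to stripping a ```json
    # fence when that fails. The greedy (.*) runs to the LAST ``` so a
    # ``` inside a string value is kept.
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        match = re.search(r"```(?:json)?\s*(.*)```", text, re.DOTALL)
        if match is None:
            raise
        return json.loads(match.group(1).strip())

reply = '```json\n{"answer": "use ``` to open a code block"}\n```'
print(parse_json_markdown(reply)["answer"])  # use ``` to open a code block
```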
langchain-ai/langchain | 20,064 | langchain-ai__langchain-20064 | ['11408'] | ebd24bb5d64078d7567eca4da0297260eb33dc31 | diff --git a/libs/langchain/langchain/output_parsers/boolean.py b/libs/langchain/langchain/output_parsers/boolean.py
--- a/libs/langchain/langchain/output_parsers/boolean.py
+++ b/libs/langchain/langchain/output_parsers/boolean.py
@@ -1,3 +1,5 @@
+import re
+
from langchain_core.output_parsers import BaseOutputParser
... | diff --git a/libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py b/libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py
--- a/libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py
+++ b/libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py
@@ -1,3 +1,5 ... | BooleanOutputParser expected output value error
### System Info
Hi, I am using LLMChainFilter.from_llm(llm) but while running, I am getting this error:
ValueError: BooleanOutputParser expected output value to either be YES or NO. Received Yes, the context is relevant to the question as it provides information about... | _🤖_
<!-- Answer -->
Hello,
The error you're encountering is due to the output of your language model (LLM) not matching the expected values "YES" or "NO" that the `BooleanOutputParser` class is designed to handle.
The `BooleanOutputParser` class in the LangChain codebase is used to parse the output of a language mo... | 2024-04-05 12:56:34+00:00 | Python | FROM public.ecr.aws/ubuntu/ubuntu:22.04
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
build-essential \
python3 \
python3-dev \
python3-pip \
software-prope... | [] | ['libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py:None:test_boolean_output_parser_parse'] | null | python3 -m pytest /testbed/libs/langchain/tests/unit_tests/output_parsers/test_boolean_parser.py -v --override-ini=addopts= | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["libs/langchain/langchain/output_parsers/boolean.py->module->class_definition:BooleanOutputParser->function_definition:parse"] |
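The patch above adds `import re` to boolean.py, relaxing the exact-match requirement that rejected replies like "Yes, the context is relevant...". A sketch of tolerant token matching (illustrative, not the parser's verbatim code):

```python
import re

def parse_boolean(text, true_val="YES", false_val="NO"):
    # Find the first YES/NO token anywhere in the reply instead of
    # requiring the whole string to equal it.
    match = re.search(rf"\b({true_val}|{false_val})\b", text, re.IGNORECASE)
    if match is None:
        raise ValueError(f"Expected {true_val} or {false_val} in: {text!r}")
    return match.group(1).upper() == true_val

print(parse_boolean("Yes, the context is relevant to the question."))  # True
```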
langchain-ai/langchain | 21,201 | langchain-ai__langchain-21201 | ['21196', '21196'] | df49404794d8f78c50020942497220154ec205ce | diff --git a/libs/partners/mistralai/langchain_mistralai/chat_models.py b/libs/partners/mistralai/langchain_mistralai/chat_models.py
--- a/libs/partners/mistralai/langchain_mistralai/chat_models.py
+++ b/libs/partners/mistralai/langchain_mistralai/chat_models.py
@@ -259,6 +259,7 @@ def _convert_message_to_mistral_chat_... | diff --git a/libs/partners/mistralai/tests/unit_tests/test_chat_models.py b/libs/partners/mistralai/tests/unit_tests/test_chat_models.py
--- a/libs/partners/mistralai/tests/unit_tests/test_chat_models.py
+++ b/libs/partners/mistralai/tests/unit_tests/test_chat_models.py
@@ -55,7 +55,7 @@ def test_mistralai_initializati... | ChatMistralAI with chat history : Assistant message must have either content or tool_calls error
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn'... | 2024-05-02 15:28:34+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
COPY . .
RUN pip install --no-cache-dir -e /testbed/libs/core
RUN pip install --no-cache-dir -e /testbed/libs/partners/mistralai
RUN pip install pytest pytest-asyncio | ['libs/partners/mistralai/tests/unit_tests/test_chat_models.py:None:test_stream_with_callback', 'libs/partners/mistralai/tests/unit_tests/test_chat_models.py:None:test_mistralai_initialization', 'libs/partners/mistralai/tests/unit_tests/test_chat_models.py:None:test_convert_message_to_mistral_chat_message[message1-expe... | ['libs/partners/mistralai/tests/unit_tests/test_chat_models.py:None:test_convert_message_to_mistral_chat_message[message2-expected2]'] | null | pytest /testbed/libs/partners/mistralai/tests/unit_tests/test_chat_models.py -v | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["libs/partners/mistralai/langchain_mistralai/chat_models.py->module->function_definition:_convert_message_to_mistral_chat_message"] | |
yt-dlp/yt-dlp | 1,649 | yt-dlp__yt-dlp-1649 | ['3855'] | bfd973ece3369c593b5e82a88cc16de80088a73e | diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -546,14 +546,14 @@ You can also fork the project on github and run your fork's [build workflow](.gi
error (default is 3), or "infinite"
--fragment-retries RETRIES Number of retries for a fragment (defaul... | diff --git a/test/test_downloader_http.py b/test/test_downloader_http.py
--- a/test/test_downloader_http.py
+++ b/test/test_downloader_http.py
@@ -95,8 +95,8 @@ def download(self, params, ep):
try_rm(encodeFilename(filename))
self.assertTrue(downloader.real_download(filename, {
'url': 'ht... | Printing download HTTP errors to STDERR
### Checklist
- [X] I'm reporting a feature request
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I'm running yt-dlp version **2022.05.18** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (speci... | They are not error messages, but only notes about the retry - hence why they are written to stdout. Instead of changing it to stderr, I can instead add the reason for error to the last line (which is written to stderr) like:
ERROR: Giving up after 10 fragment retries - HTTP Error 429: Too Many Requests
Would ... | 2021-11-13 09:51:02+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository content into the container
COPY . .
# Install test dependencies and the package itself in editable mode
RUN pip install pytest
RUN pip instal... | [] | ['test/test_downloader_http.py:TestHttpFD:test_chunked'] | null | pytest /testbed/test/test_downloader_http.py -v --tb=short --junitxml=test-results.xml | Feature | false | false | false | true | 33 | 4 | 37 | false | false | ["yt_dlp/downloader/fragment.py->module->class_definition:FragmentFD->function_definition:report_retry_fragment", "yt_dlp/downloader/common.py->module->class_definition:FileDownloader->function_definition:wrap_file_access->function_definition:outer", "yt_dlp/downloader/fragment.py->module->class_definition:FragmentFD->... |
yt-dlp/yt-dlp | 3,435 | yt-dlp__yt-dlp-3435 | ['3333'] | afac4caa7db30804bebac33e53c3cb0237958224 | diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -840,6 +840,15 @@ You can also fork the project on github and run your fork's [build workflow](.gi
interactively
--ap-list-mso List all supported multiple-system
... | diff --git a/test/test_http.py b/test/test_http.py
--- a/test/test_http.py
+++ b/test/test_http.py
@@ -85,6 +85,50 @@ def test_nocheckcertificate(self):
self.assertEqual(r['entries'][0]['url'], 'https://127.0.0.1:%d/vid.mp4' % self.port)
+class TestClientCert(unittest.TestCase):
+ def setUp(self):
+ ... | add '--client-certificate some.pem' to authenticate a site user to the remote machine
### Checklist
- [X] I'm reporting a feature request
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I'm running yt-dlp version **2022.03.08.1** ([update instructions](https://g... | null | 2022-04-15 03:09:29+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository content into the container
COPY . .
# Install test dependencies and the package itself in editable mode
RUN pip install pytest
RUN pip instal... | ['test/test_http.py:TestProxy:test_proxy_with_idn', 'test/test_http.py:TestProxy:test_proxy', 'test/test_http.py:TestHTTPS:test_nocheckcertificate'] | ['test/test_http.py:TestClientCert:test_certificate_nocombined_nopass', 'test/test_http.py:TestClientCert:test_certificate_combined_pass', 'test/test_http.py:TestClientCert:test_certificate_nocombined_pass', 'test/test_http.py:TestClientCert:test_certificate_combined_nopass'] | null | pytest /testbed/test/test_http.py -v --tb=short --junitxml=test-results/test-results.xml | Feature | false | false | false | true | 3 | 1 | 4 | false | false | ["yt_dlp/__init__.py->module->function_definition:parse_options", "yt_dlp/YoutubeDL.py->module->class_definition:YoutubeDL", "yt_dlp/utils.py->module->function_definition:make_HTTPS_handler", "yt_dlp/options.py->module->function_definition:create_parser"] |
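The feature in this record boils down to the standard library's client-certificate support. A sketch of the underlying mechanism (file paths are placeholders; yt-dlp's actual option wiring in `make_HTTPS_handler` is more involved):

```python
import ssl

def make_client_cert_context(certfile, keyfile=None, password=None):
    # keyfile may be None when the PEM file combines certificate and
    # key; password unlocks an encrypted private key. These map onto
    # --client-certificate style command-line options.
    context = ssl.create_default_context()
    context.load_cert_chain(certfile, keyfile=keyfile, password=password)
    return context
```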
yt-dlp/yt-dlp | 4,524 | yt-dlp__yt-dlp-4524 | ['4206', '4206'] | 565a4c594499eb4f2c218e12f8ad1cea3362aedd | diff --git a/yt_dlp/extractor/_extractors.py b/yt_dlp/extractor/_extractors.py
--- a/yt_dlp/extractor/_extractors.py
+++ b/yt_dlp/extractor/_extractors.py
@@ -1395,6 +1395,7 @@
RaiPlaySoundLiveIE,
RaiPlaySoundPlaylistIE,
RaiNewsIE,
+ RaiSudtirolIE,
RaiIE,
)
from .raywenderlich import (
diff --g... | diff --git a/test/test_utils.py b/test/test_utils.py
--- a/test/test_utils.py
+++ b/test/test_utils.py
@@ -368,6 +368,7 @@ def test_unified_dates(self):
self.assertEqual(unified_strdate('2012/10/11 01:56:38 +0000'), '20121011')
self.assertEqual(unified_strdate('1968 12 10'), '19681210')
self.... | [rai+generic] [Errno 54] Connection reset by peer
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2022.06.22.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URL... | The `https` version don't seem to actually exist. Does it open in browser for you?
hm, it does not. In fact, it's still behaving strangely.. If you try -F parameter it shows an mp4.. now when I try downloading it - it fails. But used to work when I initially tested this. Looking at the log, for some reason, http downlo... | 2022-08-01 12:12:22+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.12-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install test dependencies and the package itself in editable mode
RUN pip install -e ".[test]"
RUN pip install pytest-json-report
... | ['test/test_utils.py:TestUtil:test_remove_start', 'test/test_utils.py:TestUtil:test_sanitize_url', 'test/test_utils.py:TestUtil:test_float_or_none', 'test/test_utils.py:TestUtil:test_sanitize_ids', 'test/test_utils.py:TestUtil:test_get_elements_by_class', 'test/test_utils.py:TestUtil:test_determine_file_encoding', 'tes... | ['test/test_utils.py:TestUtil:test_unified_dates'] | null | pytest /testbed/test/test_utils.py -v --json-report | Feature | false | false | false | true | 1 | 1 | 2 | false | false | ["yt_dlp/extractor/rai.py->module->class_definition:RaiSudtirolIE", "yt_dlp/extractor/rai.py->module->class_definition:RaiSudtirolIE->function_definition:_real_extract"] |
yt-dlp/yt-dlp | 4,841 | yt-dlp__yt-dlp-4841 | ['4187'] | 07a1250e0e90515ff8142161536f9dafa6eaba1b | diff --git a/yt_dlp/utils.py b/yt_dlp/utils.py
--- a/yt_dlp/utils.py
+++ b/yt_dlp/utils.py
@@ -2479,7 +2479,7 @@ def url_basename(url):
def base_url(url):
- return re.match(r'https?://[^?#&]+/', url).group()
+ return re.match(r'https?://[^?#]+/', url).group()
def urljoin(base, path):
| diff --git a/test/test_utils.py b/test/test_utils.py
--- a/test/test_utils.py
+++ b/test/test_utils.py
@@ -566,6 +566,7 @@ def test_base_url(self):
self.assertEqual(base_url('http://foo.de/bar/'), 'http://foo.de/bar/')
self.assertEqual(base_url('http://foo.de/bar/baz'), 'http://foo.de/bar/')
... | DiscoveryPlusItaly error 403: Forbidden
### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2022.06.22.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser... | I think this is related to #3757
Can you try passing the URL as the referer?
I have already tried putting the URL of the series' main page in the referer, but nothing changed.
```shell
[debug] Command-line config: ['-Uv', '--no-geo-bypass', '--referer', 'https://www.discoveryplus.com/it/show/killer-of-the-cosmos... | 2022-09-03 20:29:36+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.12-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install test dependencies and the package itself in editable mode
RUN pip install -e ".[test]"
RUN pip install pytest-json-report
... | ['test/test_utils.py:TestUtil:test_remove_start', 'test/test_utils.py:TestUtil:test_sanitize_url', 'test/test_utils.py:TestUtil:test_unified_dates', 'test/test_utils.py:TestUtil:test_float_or_none', 'test/test_utils.py:TestUtil:test_sanitize_ids', 'test/test_utils.py:TestUtil:test_get_elements_by_class', 'test/test_uti... | ['test/test_utils.py:TestUtil:test_base_url'] | null | pytest /testbed/test/test_utils.py -v --json-report | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["yt_dlp/utils.py->module->function_definition:base_url"] |
yt-dlp/yt-dlp | 5,933 | yt-dlp__yt-dlp-5933 | ['5953'] | f079514957401f49db30ec4cd25f8c8246b0c1de | diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -1119,9 +1119,10 @@ You can configure yt-dlp by placing any supported command line option to a confi
* `yt-dlp.conf` in the home path given by `-P`
* If `-P` is not given, the current directory is searched
1. **User Configuration**:
+ *... | diff --git a/test/test_config.py b/test/test_config.py
new file mode 100644
--- /dev/null
+++ b/test/test_config.py
@@ -0,0 +1,227 @@
+#!/usr/bin/env python3
+
+# Allow direct execution
+import os
+import sys
+import unittest
+import unittest.mock
+
+sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__... | [Version 2023.01.02] /etc/yt-dlp.conf is not loaded
### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2023.01.0... | null | 2023-01-03 00:41:48+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.12-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install test dependencies and the package itself in editable mode
RUN pip install -e ".[test]"
RUN pip install pytest-json-report
... | ['test/test_config.py:TestConfig:test_config__ENVIRON_DEFAULTS_sanity', 'test/test_config.py:TestConfig:test_config_override_commandline', 'test/test_config.py:TestConfig:test_config_early_exit_commandline', 'test/test_config.py:TestConfig:test_config_early_exit_files'] | ['test/test_config.py:TestConfig:test_config_all_environ_values', 'test/test_config.py:TestConfig:test_config_default_expected_locations', 'test/test_config.py:TestConfig:test_config_override_files', 'test/test_config.py:TestConfig:test_config_default_grouping'] | null | pytest /testbed/test/test_config.py -v --json-report | Bug Fix | false | true | false | false | 11 | 0 | 11 | false | false | ["yt_dlp/options.py->module->function_definition:parseOpts->function_definition:_load_from_config_dirs", "yt_dlp/plugins.py->module->class_definition:PluginFinder->function_definition:search_locations", "yt_dlp/plugins.py->module->class_definition:PluginFinder->function_definition:search_locations->function_definition:... |
yt-dlp/yt-dlp | 8,917 | yt-dlp__yt-dlp-8917 | ['3944'] | 95e82347b398d8bb160767cdd975edecd62cbabd | diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -1305,7 +1305,8 @@ The available fields are:
- `display_id` (string): An alternative identifier for the video
- `uploader` (string): Full name of the video uploader
- `license` (string): License name the video is licensed under
- - `creator` (s... | diff --git a/test/helper.py b/test/helper.py
--- a/test/helper.py
+++ b/test/helper.py
@@ -223,6 +223,10 @@ def sanitize(key, value):
if test_info_dict.get('display_id') == test_info_dict.get('id'):
test_info_dict.pop('display_id')
+ # Remove deprecated fields
+ for old in YoutubeDL._deprecated_mu... | Use ; as separator for metadata instead of , for vorbis comments and / for ID3
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2022.05.18** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I'v... | > Not Valid
You talking about the issue? There is a reason the field is mandatory!
@Rexadev that's a regular log. Please run the command with `--verbose` and send a log of it
Also, please explain exactly what tags you are talking about. yt-dlp doesn't add any kind of separator anywhere. So I have no clue exactly wha... | 2024-01-03 02:11:22+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.12-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install test dependencies and the package itself in editable mode
RUN pip install -e ".[test]"
RUN pip install pytest-json-report
... | ['test/test_YoutubeDL.py:TestYoutubeDL:test_subtitles', 'test/test_YoutubeDL.py:TestYoutubeDL:test_ignoreerrors_for_playlist_with_url_transparent_iterable_entries', 'test/test_YoutubeDL.py:TestYoutubeDL:test_header_cookies', 'test/test_YoutubeDL.py:TestFormatSelection:test_audio_only_extractor_format_selection', 'test/... | ['test/test_YoutubeDL.py:TestYoutubeDL:test_infojson_cookies'] | null | pytest /testbed/test/helper.py /testbed/test/test_YoutubeDL.py -v --json-report | Feature | false | false | false | true | 4 | 3 | 7 | false | false | ["yt_dlp/YoutubeDL.py->module->class_definition:YoutubeDL->function_definition:_fill_common_fields", "yt_dlp/postprocessor/ffmpeg.py->module->class_definition:FFmpegMetadataPP->function_definition:_get_metadata_opts->function_definition:add", "yt_dlp/extractor/common.py->module->class_definition:InfoExtractor", "yt_dlp... |
tensorflow/models | 2,727 | tensorflow__models-2727 | ['2674'] | 176cf09c2d95f6cd2201e8a7fd215617d6be9453 | diff --git a/research/object_detection/README.md b/research/object_detection/README.md
--- a/research/object_detection/README.md
+++ b/research/object_detection/README.md
@@ -1,3 +1,4 @@
+
# Tensorflow Object Detection API
Creating accurate machine learning models capable of localizing and identifying
multiple objec... | diff --git a/research/object_detection/anchor_generators/multiple_grid_anchor_generator_test.py b/research/object_detection/anchor_generators/multiple_grid_anchor_generator_test.py
--- a/research/object_detection/anchor_generators/multiple_grid_anchor_generator_test.py
+++ b/research/object_detection/anchor_generators/... | Got error when restoring the frozen NAS-Net model for object detection.
Python version: 2.7
CUDA: 8.0
cuDNN: 6.0
OS: Ubuntu16.04
TF version: 1.3.0 & 1.4.0rc1
When I test the new "faster-rcnn & nasnet" model using code pieces from the Jupyter-notebook tutorial like this:
```python
detection_graph = tf.Graph()
w... | null | 2017-11-07 19:31:26+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.7-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y git python3-pip protobuf-compiler && rm -rf /var/lib/apt/lists/*
# Copy the research directory
COPY .... | [':test_build_grid_anchor_generator_with_defaults', ':test_construct_multiple_grids_with_clipping', ':test_invalid_box_specs', ':test_construct_anchor_grid_non_square', ':test_build_grid_anchor_generator_with_non_default_parameters', ':test_build_ssd_anchor_generator_with_defaults', ':test_raise_value_error_on_empty_an... | [':test_construct_anchor_grid_normalized:', ':test_build_ssd_anchor_generator_with_custom_interpolated_scale:', ':test_build_ssd_anchor_generator_with_custom_scales:'] | null | python -m unittest /testbed/research/object_detection/anchor_generators/multiple_grid_anchor_generator_test.py /testbed/research/object_detection/builders/anchor_generator_builder_test.py -v | Bug Fix | false | false | false | true | 3 | 2 | 5 | false | false | ["research/object_detection/anchor_generators/multiple_grid_anchor_generator.py->module->class_definition:MultipleGridAnchorGenerator", "research/object_detection/anchor_generators/multiple_grid_anchor_generator.py->module->class_definition:MultipleGridAnchorGenerator->function_definition:__init__", "research/object_de... |
tensorflow/models | 4,628 | tensorflow__models-4628 | ['3564'] | 7c5c01482f48f9f2532586e679686d821d516ae6 | diff --git a/research/astronet/astronet/data/generate_download_script.py b/research/astronet/astronet/data/generate_download_script.py
--- a/research/astronet/astronet/data/generate_download_script.py
+++ b/research/astronet/astronet/data/generate_download_script.py
@@ -33,6 +33,7 @@
import argparse
import csv
impor... | diff --git a/research/astronet/light_curve_util/periodic_event_test.py b/research/astronet/light_curve_util/periodic_event_test.py
--- a/research/astronet/light_curve_util/periodic_event_test.py
+++ b/research/astronet/light_curve_util/periodic_event_test.py
@@ -25,6 +25,13 @@
class EventTest(absltest.TestCase):
+... | SyntaxError: invalid token
The line throws a SyntaxError: invalid token:
https://github.com/tensorflow/models/blob/3f78f4cfd21c786c62bf321c07830071027ebb5e/research/astronet/astronet/data/generate_download_script.py#L93
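For context: `SyntaxError: invalid token` is what Python 3 raises at parse time for Python 2-isms such as leading-zero integer literals (e.g. the old octal form `0777`). Whether that is the exact cause at line 93 is an assumption on my part — the issue does not quote the offending line — but the error class can be reproduced like this:

```python
# Compile a Python 2-style octal literal; Python 3 rejects it while
# tokenizing. (Newer interpreters word the message differently, but it
# is still a SyntaxError.)
try:
    compile("mode = 0777", "<snippet>", "exec")
except SyntaxError as err:
    print("SyntaxError:", err.msg)
```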
| Thank you for your post. We noticed you have not filled out the following field in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks.
What is the top-level directory of the model you are using
Have I written custom code
OS Platform and Distribution
TensorFlow inst... | 2018-06-25 23:01:51+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.7-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
python3-pip \
protobuf-compiler \
&& rm -rf /var/lib/apt/lists/*
# Copy the r... | [':testEquals'] | [':testRepr:', ':testStr:'] | null | python -m unittest /testbed/research/astronet/light_curve_util/periodic_event_test.py -v | Bug Fix | false | false | false | true | 4 | 1 | 5 | false | false | ["research/astronet/astronet/ops/dataset_ops.py->module->function_definition:build_dataset->function_definition:_example_parser", "research/astronet/light_curve_util/periodic_event.py->module->class_definition:Event->function_definition:__str__", "research/astronet/light_curve_util/periodic_event.py->module->class_defi... |
keras-team/keras | 1,767 | keras-team__keras-1767 | ['1730'] | b8a9f84fad1be2f27365a25b4e7f188d382d70d0 | diff --git a/keras/layers/containers.py b/keras/layers/containers.py
--- a/keras/layers/containers.py
+++ b/keras/layers/containers.py
@@ -156,9 +156,9 @@ def get_weights(self):
return weights
def set_weights(self, weights):
- for i in range(len(self.layers)):
- nb_param = len(self.lay... | diff --git a/tests/keras/test_models.py b/tests/keras/test_models.py
--- a/tests/keras/test_models.py
+++ b/tests/keras/test_models.py
@@ -125,6 +125,70 @@ def test_sequential():
model = model_from_yaml(yaml_data)
+def test_nested_sequential():
+ (X_train, y_train), (X_test, y_test) = _get_test_data()
+
+ ... | unable to load weights in models with siamese branches
The problem is that the `set_weights()` function in `Sequential` tries to concatenate `trainable_weights` and `non_trainable_weights` together.
However, if one of your layers is another `Sequential` container, that container does not have a `non_trainable_weights` parameter.
This needs to be imple... | +1
I think the actual fix is to change `Sequential.set_weights` to something very similar to `Graph.set_weights`. I'll submit a PR when I get time.
It turns out that this has nothing to do with Siamese models. It happens when you have triple-nested Sequential layers.
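The fix sketched in the comments above — distributing a flat weight list layer by layer, asking each layer (or nested container) how many arrays it owns — can be illustrated with a dependency-free sketch. The `Layer`/`Container` classes here are stand-ins, not the real Keras API:

```python
# Minimal sketch of per-layer weight distribution for nested containers.
# Illustrative stand-ins only; not the actual Keras classes.

class Layer:
    def __init__(self, weights):
        self._weights = list(weights)

    def get_weights(self):
        return list(self._weights)

    def set_weights(self, weights):
        self._weights = list(weights)

class Container(Layer):
    """A layer that holds sub-layers, like Sequential."""
    def __init__(self, layers):
        self.layers = layers

    def get_weights(self):
        return [w for layer in self.layers for w in layer.get_weights()]

    def set_weights(self, weights):
        # Ask each sub-layer how many arrays it owns instead of relying on
        # a flat `params` attribute; this recurses through nested containers.
        for layer in self.layers:
            nb_param = len(layer.get_weights())
            layer.set_weights(weights[:nb_param])
            weights = weights[nb_param:]

inner = Container([Layer([1, 2]), Layer([3])])
outer = Container([inner, Layer([4, 5])])
outer.set_weights([10, 20, 30, 40, 50])
print(outer.get_weights())  # [10, 20, 30, 40, 50]
```

Because each container defers to its children's `get_weights()` length, triple-nested containers are handled by the same loop.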
| 2016-02-19 20:27:35+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.7
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
COPY . .
RUN pip install -e .
RUN pip install pytest pytest-json-report pytest-cov numpy==1.16.6 scipy==1.2.3 theano==0.8.2 pyyaml==5.4.1 six h5py==2.10.0 | ['tests/keras/test_models.py:None:test_lambda', 'tests/keras/test_models.py:None:test_siamese_1', 'tests/keras/test_models.py:None:test_sequential', 'tests/keras/test_models.py:None:test_merge_overlap', 'tests/keras/test_models.py:None:test_merge_concat', 'tests/keras/test_models.py:None:test_merge_recursivity', 'tests... | ['tests/keras/test_models.py:None:test_nested_sequential'] | null | python -m pytest /testbed/tests/keras/test_models.py --override-ini addopts= -v --json-report --json-report-file=test_results.json | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/layers/containers.py->module->class_definition:Sequential->function_definition:set_weights"] |
keras-team/keras | 3,907 | keras-team__keras-3907 | ['3905'] | 7df184d3aa8a9790d181c837ab22a31b5aebb5ae | diff --git a/docs/templates/getting-started/sequential-model-guide.md b/docs/templates/getting-started/sequential-model-guide.md
--- a/docs/templates/getting-started/sequential-model-guide.md
+++ b/docs/templates/getting-started/sequential-model-guide.md
@@ -121,7 +121,7 @@ Before training a model, you need to configur... | diff --git a/tests/keras/engine/test_training.py b/tests/keras/engine/test_training.py
--- a/tests/keras/engine/test_training.py
+++ b/tests/keras/engine/test_training.py
@@ -148,15 +148,24 @@ def test_model_methods():
# test with a custom metric function
mse = lambda y_true, y_pred: K.mean(K.pow(y_true - y... | New Feature: Add ability to return more than one metric from metric function
Following discussion in gitter:
Add the ability to return a dict from a metric function. This would be useful for, e.g., a confusion matrix. Proposed behavior:
`r = f(y_true,y_pred)`
1. If `r` is a dict - report every `(key, value)` pair as metric with name `... | null | 2016-09-29 09:31:05+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.7
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
COPY . .
RUN pip install -e .
RUN pip install pytest pytest-json-report pytest-cov numpy scipy theano pyyaml six h5py protobuf==3.20.0 tensorflow==1.15.0 | ['tests/keras/engine/test_training.py:None:test_trainable_argument'] | ['tests/keras/engine/test_training.py:None:test_model_methods'] | null | python -m pytest /testbed/tests/keras/engine/test_training.py --override-ini addopts= -v --json-report --json-report-file=test_results.json | Feature | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/engine/training.py->module->class_definition:Model->function_definition:compile->function_definition:append_metric", "keras/engine/training.py->module->class_definition:Model->function_definition:compile"] |
keras-team/keras | 3,983 | keras-team__keras-3983 | ['3942'] | 4de7eaa6a80fd4257b866a6b695450c40b72dd28 | diff --git a/keras/layers/pooling.py b/keras/layers/pooling.py
--- a/keras/layers/pooling.py
+++ b/keras/layers/pooling.py
@@ -519,3 +519,83 @@ def call(self, x, mask=None):
return K.max(x, axis=[1, 2])
else:
return K.max(x, axis=[2, 3])
+
+
+class _GlobalPooling3D(Layer):
+
+ def ... | diff --git a/tests/keras/layers/test_convolutional.py b/tests/keras/layers/test_convolutional.py
--- a/tests/keras/layers/test_convolutional.py
+++ b/tests/keras/layers/test_convolutional.py
@@ -269,6 +269,22 @@ def test_globalpooling_2d():
input_shape=(3, 5, 6, 4))
+@keras_test
+def test_globalpool... | GlobalPooling for 3D inputs
Hello,
I was wondering why there is [GlobalMaxPooling2D](https://keras.io/layers/pooling/#globalmaxpooling2d) and [GlobalAveragePooling2D](https://keras.io/layers/pooling/#globalaveragepooling2d), but no 3D versions of both.
Looking at the code, one could easily extend both to work with 3... | Feel free to make a PR.
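For reference, what such 3D global pooling layers would compute can be sketched in NumPy — an illustration assuming channels-last input of shape `(batch, dim1, dim2, dim3, channels)`, not the Keras implementation:

```python
import numpy as np

# Global pooling reduces all three spatial axes, leaving (batch, channels).

def global_max_pooling_3d(x):
    return x.max(axis=(1, 2, 3))

def global_average_pooling_3d(x):
    return x.mean(axis=(1, 2, 3))

x = np.arange(2 * 3 * 4 * 5 * 6, dtype=float).reshape(2, 3, 4, 5, 6)
print(global_max_pooling_3d(x).shape)      # (2, 6)
print(global_average_pooling_3d(x).shape)  # (2, 6)
```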
| 2016-10-06 12:10:06+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.7
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
COPY . .
RUN pip install -e .
RUN pip install pytest pytest-json-report pytest-cov numpy scipy theano pyyaml six h5py protobuf==3.20.0 tensorflow==1.15.0 | ['tests/keras/layers/test_convolutional.py:None:test_convolution_3d', 'tests/keras/layers/test_convolutional.py:None:test_maxpooling_2d', 'tests/keras/layers/test_convolutional.py:None:test_globalpooling_1d', 'tests/keras/layers/test_convolutional.py:None:test_averagepooling_3d', 'tests/keras/layers/test_convolutional.... | ['tests/keras/layers/test_convolutional.py:None:test_globalpooling_3d'] | null | python -m pytest /testbed/tests/keras/layers/test_convolutional.py --override-ini addopts= -v --json-report --json-report-file=test_results.json | Feature | false | false | false | true | 5 | 4 | 9 | false | false | ["keras/layers/pooling.py->module->class_definition:_GlobalPooling3D", "keras/layers/pooling.py->module->class_definition:GlobalMaxPooling3D", "keras/layers/pooling.py->module->class_definition:_GlobalPooling3D->function_definition:__init__", "keras/layers/pooling.py->module->class_definition:GlobalMaxPooling3D->functi... |
keras-team/keras | 4,856 | keras-team__keras-4856 | ['4846'] | 50f7f03f6bc373b81ae9407f7857112e062c526f | diff --git a/keras/engine/topology.py b/keras/engine/topology.py
--- a/keras/engine/topology.py
+++ b/keras/engine/topology.py
@@ -927,7 +927,10 @@ def add_update(self, updates, inputs=None):
def get_updates_for(self, inputs):
if not hasattr(self, '_per_input_updates'):
return []
- inp... | diff --git a/tests/keras/engine/test_topology.py b/tests/keras/engine/test_topology.py
--- a/tests/keras/engine/test_topology.py
+++ b/tests/keras/engine/test_topology.py
@@ -9,6 +9,27 @@
from keras.models import model_from_json, model_from_yaml
from keras.utils.test_utils import keras_test
+@keras_test
+def test_g... | Layer regularizers are not shared across models in 1.2.0
If I share a layer that has regularizers with another model, the regularizers are not copied correctly. Reusing the Keras test for regularizers:
```{python}
from keras.models import *
model = Sequential()
model.add(wrappers.TimeDistributed(core.Dense(2, W_regulariz... | null | 2016-12-27 19:00:13+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.7
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
COPY . .
RUN pip install -e .
RUN pip install pytest pytest-json-report pytest-cov numpy scipy theano pyyaml six h5py protobuf==3.20.0 tensorflow==1.15.0 | ['tests/keras/engine/test_topology.py:None:test_node_construction', 'tests/keras/engine/test_topology.py:None:test_trainable_weights'] | ['tests/keras/engine/test_topology.py:None:test_get_updates_for', 'tests/keras/engine/test_topology.py:None:test_get_losses_for'] | null | python -m pytest /testbed/tests/keras/engine/test_topology.py --override-ini addopts= -v --json-report --json-report-file=test_results.json | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/engine/topology.py->module->class_definition:Layer->function_definition:get_losses_for", "keras/engine/topology.py->module->class_definition:Layer->function_definition:get_updates_for"] |
keras-team/keras | 18,553 | keras-team__keras-18553 | ['18535'] | c8a5a8969a8712a9a1939937ce34158e04cfc09d | diff --git a/keras/ops/nn.py b/keras/ops/nn.py
--- a/keras/ops/nn.py
+++ b/keras/ops/nn.py
@@ -592,7 +592,7 @@ def __init__(
super().__init__()
self.pool_size = pool_size
self.strides = strides
- self.padding = padding
+ self.padding = padding.lower()
self.data_format =... | diff --git a/keras/ops/nn_test.py b/keras/ops/nn_test.py
--- a/keras/ops/nn_test.py
+++ b/keras/ops/nn_test.py
@@ -121,12 +121,16 @@ def test_conv(self):
# Test 1D conv.
inputs_1d = KerasTensor([None, 20, 3])
kernel = KerasTensor([4, 3, 2])
- self.assertEqual(
- knn.conv(inp... | depthwise_conv ops padding same is not working on torch backend
```python
import numpy as np
import os
os.environ["KERAS_BACKEND"] = "jax" # 'tensorflow', 'torch', 'jax'
import keras_core as keras
from keras_core import ops
input = np.ones((1, 613, 696, 3))
kernel = np.ones((1, 5, 3, 1))
```
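The one-line fix in the patch above lowercases the padding string at construction time. The intent can be sketched as follows — a hypothetical helper for illustration, not the actual Keras code (`normalize_padding` is an invented name):

```python
def normalize_padding(padding):
    # Accept "SAME"/"same"/"VALID"/"valid" by normalizing case once,
    # instead of comparing the raw string case-sensitively later.
    padding = padding.lower()
    if padding not in ("same", "valid"):
        raise ValueError(f"Unknown padding: {padding!r}")
    return padding

print(normalize_padding("SAME"))   # same
print(normalize_padding("valid"))  # valid
```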
```pyt... | null | 2023-10-05 20:35:56+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the repository contents
COPY . .
# Install JAX and other required dependencies
RUN pip install --upgrade ... | ['keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_relu', 'keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_silu', 'keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_leaky_relu', 'keras/ops/nn_test.py:NNOpsCorrectnessTest:test_max_pool', 'keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_one_hot_dtype1', 'keras/ops/nn_test.p... | ['keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_depthwise_conv', 'keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_conv'] | null | pytest /testbed/keras/ops/nn_test.py -v --junitxml=test-results.xml | Bug Fix | false | false | false | true | 6 | 6 | 12 | false | false | ["keras/ops/nn.py->module->function_definition:conv_transpose", "keras/ops/nn.py->module->function_definition:separable_conv", "keras/ops/nn.py->module->class_definition:MaxPool->function_definition:__init__", "keras/ops/nn.py->module->function_definition:conv", "keras/ops/nn.py->module->function_definition:max_pool", ... |
keras-team/keras | 18,649 | keras-team__keras-18649 | ['18409'] | b00065c7878ade450286ad2c298148f50e098f0c | diff --git a/keras/backend/jax/numpy.py b/keras/backend/jax/numpy.py
--- a/keras/backend/jax/numpy.py
+++ b/keras/backend/jax/numpy.py
@@ -440,6 +440,22 @@ def maximum(x1, x2):
return jnp.maximum(x1, x2)
+def median(x, axis=None, keepdims=False):
+ # axis of jnp.median must be hashable
+ if isinstance(ax... | diff --git a/keras/ops/numpy_test.py b/keras/ops/numpy_test.py
--- a/keras/ops/numpy_test.py
+++ b/keras/ops/numpy_test.py
@@ -193,6 +193,22 @@ def test_outer(self):
y = KerasTensor((2, None))
self.assertEqual(knp.outer(x, y).shape, (None, None))
+ def test_quantile(self):
+ x = KerasTenso... | Add Median to `keras_core.ops`
Feature Request for a Median function to keras_core.ops.
It is an important function which is present within [`torch`](https://pytorch.org/docs/stable/generated/torch.median.html) and [`jax.numpy`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.median.html) as well.
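For reference, the behaviour being requested mirrors NumPy's `median` — shown here as an illustration of the expected semantics, not the eventual `keras.ops` API:

```python
import numpy as np

# Median over all values, or along an axis, as torch/jax.numpy also offer.
x = np.array([[1.0, 7.0, 3.0],
              [4.0, 2.0, 6.0]])

print(np.median(x))          # 3.5  (median of all six values)
print(np.median(x, axis=0))  # [2.5 4.5 4.5]
print(np.median(x, axis=1))  # [3. 4.]
```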
| @suvadityamuk Thanks for filing the issue! would you be interested in filing a PR?
Sure, can do! Any chance you can reference a similar example here so I can follow its rubrics?
may be this one - https://github.com/keras-team/keras-core/pull/907 | 2023-10-19 08:50:28+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the repository contents
COPY . .
# Install JAX and other required dependencies
RUN pip install --upgrade ... | ['keras/ops/numpy_test.py:NumpyTwoInputOpsCorretnessTest:test_take_sparse_axis_0_float64', 'keras/ops/numpy_test.py:NumpyOneInputOpsDynamicShapeTest:test_transpose', 'keras/ops/numpy_test.py:NumpyTwoInputOpsStaticShapeTest:test_less_equal', 'keras/ops/numpy_test.py:NumpyOneInputOpsCorrectnessTest:test_squeeze_sparse', ... | ['keras/ops/numpy_test.py:NumpyDtypeTest:test_median_int64', 'keras/ops/numpy_test.py:NumpyDtypeTest:test_quantile_int8', 'keras/ops/numpy_test.py:NumpyDtypeTest:test_quantile_float64', 'keras/ops/numpy_test.py:NumpyTwoInputOpsDynamicShapeTest:test_quantile', 'keras/ops/numpy_test.py:NumpyDtypeTest:test_median_uint32',... | null | pytest /testbed/keras/ops/numpy_test.py -v --junitxml=test-results.xml | Feature | false | false | false | true | 17 | 4 | 21 | false | false | ["keras/backend/jax/numpy.py->module->function_definition:quantile", "keras/backend/torch/numpy.py->module->function_definition:median", "keras/backend/tensorflow/numpy.py->module->function_definition:_quantile->function_definition:_get_indices", "keras/ops/numpy.py->module->class_definition:Quantile", "keras/ops/numpy... |
keras-team/keras | 18,766 | keras-team__keras-18766 | ['18754'] | 4803b5497ad060cce345a323be2546152315ec3d | diff --git a/keras/layers/attention/attention.py b/keras/layers/attention/attention.py
--- a/keras/layers/attention/attention.py
+++ b/keras/layers/attention/attention.py
@@ -27,6 +27,7 @@ class Attention(Layer):
attention scores.
dropout: Float between 0 and 1. Fraction of the units to drop for t... | diff --git a/keras/layers/attention/additive_attention_test.py b/keras/layers/attention/additive_attention_test.py
--- a/keras/layers/attention/additive_attention_test.py
+++ b/keras/layers/attention/additive_attention_test.py
@@ -17,12 +17,12 @@ def test_attention_basics(self):
expected_output_shape=(2, 3... | `noise_shape` Attribute Not Found in Attention Layer
The issue surfaces at training time in the Attention layer, where `self.noise_shape` is referenced but never assigned:
https://github.com/keras-team/keras/blob/d4feb16c82b8e3d47721520e9b45ef4bebc1ead0/keras/layers/attention/attention.py#L17... | @nkovela1 ,
IMO we can set `noise_shape` to `None` here, since this is being called inside `backend.random.dropout()`, which has a `noise_shape` argument. I think that if the default value for this arg is `None`, its value will be inferred from the inputs.
I have referenced the legacy dropout API below.
https://github.com... | 2023-11-12 07:42:14+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the repository contents
COPY . .
# Install JAX and other required dependencies
RUN pip install --upgrade ... | ['keras/layers/attention/additive_attention_test.py:AdditiveAttentionTest:test_attention_with_mask', 'keras/layers/attention/additive_attention_test.py:AdditiveAttentionTest:test_attention_correctness', 'keras/layers/attention/attention_test.py:AttentionTest:test_attention_errors', 'keras/layers/attention/attention_tes... | ['keras/layers/attention/attention_test.py:AttentionTest:test_attention_basics', 'keras/layers/attention/additive_attention_test.py:AdditiveAttentionTest:test_attention_basics', 'keras/layers/attention/attention_test.py:AttentionTest:test_attention_with_dropout'] | null | pytest /testbed/keras/layers/attention/additive_attention_test.py /testbed/keras/layers/attention/attention_test.py -v --junitxml=test-results.xml | Bug Fix | false | false | false | true | 1 | 2 | 3 | false | false | ["keras/layers/attention/attention.py->module->class_definition:Attention->function_definition:_apply_scores", "keras/layers/attention/attention.py->module->class_definition:Attention", "keras/layers/attention/attention.py->module->class_definition:Attention->function_definition:__init__"] |
keras-team/keras | 18,852 | keras-team__keras-18852 | ['18842'] | 9c62839cbb0e54b7bac09ce20471a0dfaa65ff55 | diff --git a/.github/workflows/actions.yml b/.github/workflows/actions.yml
--- a/.github/workflows/actions.yml
+++ b/.github/workflows/actions.yml
@@ -53,7 +53,7 @@ jobs:
- name: Test applications with pytest
if: ${{ steps.filter.outputs.applications == 'true' }}
run: |
- pytest keras/... | diff --git a/keras/activations/activations_test.py b/keras/activations/activations_test.py
--- a/keras/activations/activations_test.py
+++ b/keras/activations/activations_test.py
@@ -40,6 +40,10 @@ def _ref_hard_sigmoid(x):
return z
+def _ref_hard_swish(x):
+ return x * np.minimum(np.maximum(0.0, x + 3.0), ... | Add HardSwish activation
HardSwish has been supported by TFLite for quite some time, but it is still missing in Keras.
I believe adding this activation would be beneficial for those working on INT8 quantized models.
I already have a working implementation and can submit the PR if it sounds good.
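The reference implementation in the test patch above uses the standard definition `hard_swish(x) = x * relu6(x + 3) / 6` (the piecewise-linear swish approximation from MobileNetV3, which TFLite supports). A small NumPy sketch of that formula:

```python
import numpy as np

def hard_swish(x):
    # hard_swish(x) = x * relu6(x + 3) / 6: zero for x <= -3,
    # identity for x >= 3, smooth-ish ramp in between.
    return x * np.minimum(np.maximum(x + 3.0, 0.0), 6.0) / 6.0

x = np.array([-4.0, 0.0, 1.0, 3.0, 5.0])
print(hard_swish(x))  # zero for x <= -3, identity for x >= 3
```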
References that ... | null | 2023-11-30 01:14:54+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the repository contents
COPY . .
# Install JAX and other required dependencies
RUN pip install --upgrade ... | ['keras/activations/activations_test.py:ActivationsTest:test_tanh', 'keras/applications/applications_test.py:ApplicationsTest:test_application_pooling_MobileNet_channels_first', 'keras/applications/applications_test.py:ApplicationsTest:test_application_pooling_EfficientNetB1_channels_first', 'keras/applications/applica... | ['keras/activations/activations_test.py:ActivationsTest:test_hard_swish'] | null | pytest /testbed/keras/activations/activations_test.py /testbed/keras/applications/applications_test.py /testbed/keras/applications/imagenet_utils_test.py -v --junitxml=test-results.xml | Feature | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/applications/mobilenet_v3.py->module->function_definition:hard_swish", "keras/activations/activations.py->module->function_definition:hard_swish"] |
keras-team/keras | 18,871 | keras-team__keras-18871 | ['18864'] | 10252a9e7d68c6818423deee1c4c8549038e4171 | diff --git a/keras/models/model.py b/keras/models/model.py
--- a/keras/models/model.py
+++ b/keras/models/model.py
@@ -7,7 +7,6 @@
from keras import utils
from keras.api_export import keras_export
from keras.layers.layer import Layer
-from keras.legacy.saving import legacy_h5_format
from keras.models.variable_mappi... | diff --git a/keras/saving/saving_api_test.py b/keras/saving/saving_api_test.py
--- a/keras/saving/saving_api_test.py
+++ b/keras/saving/saving_api_test.py
@@ -171,8 +171,10 @@ def test_h5_deprecation_warning(self):
with mock.patch.object(logging, "warning") as mock_warn:
saving_api.save_model(mode... | Feature duplication on model.save() and keras.saving.save_model()
When I was reading the model-saving code, I got a strange feeling.
https://github.com/keras-team/keras/blob/724321c7b39a90f6125b79931284aa9932c673a0/keras/models/model.py#L294-L297
It says `model.save()` is an alias for `keras.saving.save_model()`. ... | Yes, feel free to open a PR to reduce code redundancy. Thanks! | 2023-12-02 09:56:38+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the repository contents
COPY . .
# Install JAX and other required dependencies
RUN pip install --upgrade ... | ['keras/saving/saving_api_test.py:LoadWeightsTests:test_load_keras_weights', 'keras/saving/saving_api_test.py:LoadModelTests:test_load_model_with_custom_objects', 'keras/saving/saving_api_test.py:LoadWeightsTests:test_load_h5_weights_by_name', 'keras/saving/saving_api_test.py:LoadModelTests:test_basic_load', 'keras/sav... | ['keras/saving/saving_api_test.py:SaveModelTestsWarning:test_h5_deprecation_warning'] | null | pytest /testbed/keras/saving/saving_api_test.py -v --junitxml=test-results.xml | Refactoring | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/saving/saving_api.py->module->function_definition:save_model", "keras/models/model.py->module->class_definition:Model->function_definition:save"] |
keras-team/keras | 18,977 | keras-team__keras-18977 | ['18976'] | fe2f54aa5bc42fb23a96449cf90434ab9bb6a2cd | diff --git a/keras/utils/tracking.py b/keras/utils/tracking.py
--- a/keras/utils/tracking.py
+++ b/keras/utils/tracking.py
@@ -107,7 +107,6 @@ def add_to_store(self, store_name, value):
class TrackedList(list):
- # TODO: override item removal methods?
def __init__(self, values=None, tracker=None):
... | diff --git a/keras/utils/tracking_test.py b/keras/utils/tracking_test.py
--- a/keras/utils/tracking_test.py
+++ b/keras/utils/tracking_test.py
@@ -33,3 +33,24 @@ def test_untracking_in_tracked_list(self):
lst.remove(v2)
self.assertLen(lst, 2)
self.assertLen(tracked_variables, 0)
+
+ ls... | chore: override item removal methods in tracking
Based on the TODO comments in keras/keras/utils/tracking.py
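The idea behind the change — a collection that notifies its tracker on removal as well as on insertion — can be shown with a dependency-free sketch. The names below are illustrative, not the real Keras `TrackedList` API:

```python
# A list subclass that keeps an external tracker in sync on both adds
# and removals (remove/pop/clear). Illustrative only.

class TrackedList(list):
    def __init__(self, values=None, tracker=None):
        self.tracker = tracker
        super().__init__(values or [])

    def append(self, value):
        if self.tracker:
            self.tracker.add(value)
        super().append(value)

    def remove(self, value):
        if self.tracker:
            self.tracker.discard(value)
        super().remove(value)

    def pop(self, index=-1):
        value = super().pop(index)
        if self.tracker:
            self.tracker.discard(value)
        return value

    def clear(self):
        if self.tracker:
            for value in self:
                self.tracker.discard(value)
        super().clear()

tracked = set()

class SetTracker:
    def add(self, v):
        tracked.add(v)
    def discard(self, v):
        tracked.discard(v)

lst = TrackedList(tracker=SetTracker())
lst.append("a")
lst.append("b")
lst.remove("a")
lst.pop()
print(tracked)  # set()
```

Without the removal overrides, `tracked` would still contain stale entries after `remove()`/`pop()` — which is exactly the gap the TODO pointed at.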
| null | 2023-12-21 07:57:15+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the repository contents
COPY . .
# Install JAX and other required dependencies
RUN pip install --upgrade ... | [] | ['keras/utils/tracking_test.py:TrackingTest:test_untracking_in_tracked_list'] | null | pytest /testbed/keras/utils/tracking_test.py -v --junitxml=test-results.xml | Refactoring | false | false | false | true | 8 | 3 | 11 | false | false | ["keras/utils/tracking.py->module->class_definition:TrackedSet->function_definition:pop", "keras/utils/tracking.py->module->class_definition:TrackedList->function_definition:pop", "keras/utils/tracking.py->module->class_definition:TrackedDict->function_definition:popitem", "keras/utils/tracking.py->module->class_defini... |
keras-team/keras | 19,201 | keras-team__keras-19201 | ['19199'] | ec67b760ba25e1ccc392d288f7d8c6e9e153eea2 | diff --git a/keras/backend/jax/distribution_lib.py b/keras/backend/jax/distribution_lib.py
--- a/keras/backend/jax/distribution_lib.py
+++ b/keras/backend/jax/distribution_lib.py
@@ -200,12 +200,12 @@ def initialize(job_addresses, num_processes, process_id):
f"{len(job_addresses)} jobs, but num_process... | diff --git a/keras/backend/jax/distribution_lib_test.py b/keras/backend/jax/distribution_lib_test.py
--- a/keras/backend/jax/distribution_lib_test.py
+++ b/keras/backend/jax/distribution_lib_test.py
@@ -50,7 +50,7 @@ def test_device_conversion(self):
def test_initialize_with_all_job_addresses(self, mock_jax_initia... | Typo in keras.distribution.initialize()
Hi,
There is a typo in the jax backend that breaks calls to `keras.distribution.initialize`: the function passes a `corrdinator_address` argument (instead of `coordinator_address`) to `jax.distributed.initialize`.
```log
---> 13 keras.distribution.initialize()
File /... | null | 2024-02-19 18:18:24+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
python3-dev \
&& rm -rf /var/lib/apt/lists/*
# Copy the project files... | ['keras/backend/jax/distribution_lib_test.py:JaxDistributionLibTest:test_processes', 'keras/backend/jax/distribution_lib_test.py:JaxDistributionLibTest:test_distribute_tensor', 'keras/backend/jax/distribution_lib_test.py:JaxDistributionLibTest:test_distribute_variable', 'keras/backend/jax/distribution_lib_test.py:JaxDi... | ['keras/backend/jax/distribution_lib_test.py:JaxDistributionLibTest:test_initialize_with_all_job_addresses', 'keras/backend/jax/distribution_lib_test.py:JaxDistributionLibTest:test_initialize_with_coordinater_address'] | null | python -m pytest /testbed/keras/backend/jax/distribution_lib_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/backend/jax/distribution_lib.py->module->function_definition:initialize"] |
keras-team/keras | 19,284 | keras-team__keras-19284 | ['19257'] | 4c356306273153d5dc26fc5772b106b4f750095f | diff --git a/keras/dtype_policies/dtype_policy.py b/keras/dtype_policies/dtype_policy.py
--- a/keras/dtype_policies/dtype_policy.py
+++ b/keras/dtype_policies/dtype_policy.py
@@ -173,9 +173,6 @@ def _parse_name(self, name):
return "float16", "float32"
elif name == "mixed_bfloat16":
re... | diff --git a/keras/layers/attention/attention_test.py b/keras/layers/attention/attention_test.py
--- a/keras/layers/attention/attention_test.py
+++ b/keras/layers/attention/attention_test.py
@@ -342,3 +342,19 @@ def test_attention_compute_mask_with_different_input_shapes(self):
computed_mask = layer.comput... | Keras 3 Attention layer value tensor dimension
hi,
I found that the code below does not return an output of the proper size in Keras 3 (but it works fine in Keras 2)
Please help to fix it,
Thanks.
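For reference, the output shape contract of dot-product attention (the output keeps the query's time dimension and the value's feature dimension) can be sketched in plain Python; this is illustrative only, not the Keras implementation:

```python
def attention_output_shape(query_shape, value_shape):
    # scores: (batch, Tq, Tv); output = scores @ value -> (batch, Tq, dim_v)
    batch, t_q, _ = query_shape
    _, _, dim_v = value_shape
    return (batch, t_q, dim_v)

# e.g. query (None, 8, 5) and value (None, 8, 6) should yield (None, 8, 6).
assert attention_output_shape((None, 8, 5), (None, 8, 6)) == (None, 8, 6)
```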
```python
import keras
from keras import layers
i = layers.Input((8,4))
xq = layers.Conv1D(5,1)(i)
xk = layers.Conv1D(... | null | 2024-03-11 17:59:37+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy the project files
COPY . .
# Inst... | ['keras/layers/attention/attention_test.py:AttentionTest:test_attention_compute_mask_with_tolerance_1e_3', 'keras/layers/attention/attention_test.py:AttentionTest:test_attention_compute_mask_returns_correct_tensor_with_all_true_mask', 'keras/layers/attention/attention_test.py:AttentionTest:test_attention_compute_mask_w... | ['keras/layers/attention/attention_test.py:AttentionTest:test_attention_compute_output_shape'] | null | python -m pytest /testbed/keras/layers/attention/attention_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/dtype_policies/dtype_policy.py->module->class_definition:FloatDTypePolicy->function_definition:_parse_name", "keras/layers/attention/attention.py->module->class_definition:Attention->function_definition:compute_output_shape"] |
keras-team/keras | 19,300 | keras-team__keras-19300 | ['19299'] | df705d4fc719ab617705197248804d689ad74767 | diff --git a/keras/ops/nn.py b/keras/ops/nn.py
--- a/keras/ops/nn.py
+++ b/keras/ops/nn.py
@@ -538,10 +538,13 @@ def softmax(x, axis=-1):
array([0.09003057, 0.24472847, 0.66524096], shape=(3,), dtype=float64)
"""
- if isinstance(axis, int) and backend.shape(x)[axis] == 1:
+ # Don't use `backend.shape`... | diff --git a/keras/ops/nn_test.py b/keras/ops/nn_test.py
--- a/keras/ops/nn_test.py
+++ b/keras/ops/nn_test.py
@@ -2,10 +2,12 @@
import pytest
from absl.testing import parameterized
+import keras
from keras import backend
from keras import layers
from keras import losses
from keras import models
+from keras imp... | `keras.ops.softmax` errors out when used in a TensorFlow compiled function
## MRE
```python
import keras
from keras import ops
class SoftmaxLayer(keras.Layer):
def call(self, x):
return ops.softmax(x, axis=-1)
class Model(keras.Model):
def __init__(self):
x = keras.Input(shape=(None,))
y... | null | 2024-03-13 07:57:31+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy the project files
COPY . .
# Inst... | ['keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_relu', 'keras/ops/nn_test.py:NNOpsDtypeTest:test_hard_silu_float32', 'keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_silu', 'keras/ops/nn_test.py:NNOpsCorrectnessTest:test_ctc_loss', 'keras/ops/nn_test.py:NNOpsCorrectnessTest:test_separable_conv_2d1', 'keras/ops/nn_tes... | ['keras/ops/nn_test.py:NNOpsDynamicShapeTest:test_softmax_in_graph'] | null | python -m pytest /testbed/keras/ops/nn_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/ops/nn.py->module->function_definition:softmax"] |
keras-team/keras | 19,459 | keras-team__keras-19459 | ['19437'] | 68e0368c680decbc7c9e1da57b56b3a8212b3ec2 | diff --git a/keras/backend/numpy/random.py b/keras/backend/numpy/random.py
--- a/keras/backend/numpy/random.py
+++ b/keras/backend/numpy/random.py
@@ -67,6 +67,7 @@ def truncated_normal(shape, mean=0.0, stddev=1.0, dtype=None, seed=None):
def dropout(inputs, rate, noise_shape=None, seed=None):
+ dtype = inputs.... | diff --git a/keras/layers/regularization/alpha_dropout_test.py b/keras/layers/regularization/alpha_dropout_test.py
--- a/keras/layers/regularization/alpha_dropout_test.py
+++ b/keras/layers/regularization/alpha_dropout_test.py
@@ -15,6 +15,7 @@ def test_alpha_dropout_basics(self):
"rate": 0.2,
... | Keras with TF backend GaussianDropout gives error with mixed_bfloat16
When using Keras 3.1.1 with the TensorFlow 2.16.1 backend, using the GaussianDropout layer with mixed_bfloat16 results in the following error message:
```
TypeError: Exception encountered when calling GaussianDropout.call().
Input 'y' of 'Mul' Op h... | BTW, I can see that Keras 2.15 uses dtype=inputs.dtype when calling self._random_generator.random_normal function.
Another addition: the Keras 3 documentation suggests setting the mixed policy with the following line:
`tf.keras.config.set_dtype_policy('mixed_bfloat16')`
instead of the one I supplied above. Still same error. | 2024-04-08 07:27:18+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
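The mismatch follows a simple pattern: the sampled noise defaults to float32 while the inputs are bfloat16. A plain-Python stand-in (hypothetical names, not the Keras API) of the `dtype=inputs.dtype` approach mentioned in the hint:

```python
def random_normal(shape, dtype="float32"):
    # Stand-in sampler: returns only metadata, enough to show the dtype flow.
    return {"shape": shape, "dtype": dtype}

inputs = {"shape": (2, 3), "dtype": "bfloat16"}
noise = random_normal(inputs["shape"])                         # float32: mismatches
fixed = random_normal(inputs["shape"], dtype=inputs["dtype"])  # matches the inputs
assert noise["dtype"] != inputs["dtype"]
assert fixed["dtype"] == inputs["dtype"]
```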
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/random/random_test.py:RandomDTypeTest:test_normal_float64', 'keras/random/random_test.py:RandomDTypeTest:test_categorical_int8', 'keras/random/random_test.py:RandomDTypeTest:test_randint_uint8', 'keras/random/random_test.py:RandomTest:test_truncated_normal1', 'keras/random/random_test.py:RandomTest:test_shuffle... | ['keras/random/random_test.py:RandomDTypeTest:test_binomial_bfloat16', 'keras/layers/regularization/gaussian_dropout_test.py:GaussianDropoutTest:test_gaussian_dropout_basics', 'keras/random/random_test.py:RandomDTypeTest:test_gamma_bfloat16', 'keras/random/random_test.py:RandomDTypeTest:test_beta_bfloat16', 'keras/laye... | null | python -m pytest /testbed/keras/layers/regularization/alpha_dropout_test.py /testbed/keras/layers/regularization/dropout_test.py /testbed/keras/layers/regularization/gaussian_dropout_test.py /testbed/keras/layers/regularization/gaussian_noise_test.py /testbed/keras/random/random_test.py -v --json-report | Bug Fix | false | true | false | false | 7 | 0 | 7 | false | false | ["keras/layers/regularization/gaussian_noise.py->module->class_definition:GaussianNoise->function_definition:call", "keras/backend/tensorflow/random.py->module->function_definition:gamma", "keras/backend/tensorflow/random.py->module->function_definition:binomial", "keras/backend/numpy/random.py->module->function_defini... |
keras-team/keras | 19,466 | keras-team__keras-19466 | ['19407'] | 504716cb71973d4d4e485eb1724a3c4d3b621a69 | diff --git a/keras/ops/numpy.py b/keras/ops/numpy.py
--- a/keras/ops/numpy.py
+++ b/keras/ops/numpy.py
@@ -3992,6 +3992,9 @@ class Nonzero(Operation):
def call(self, x):
return backend.numpy.nonzero(x)
+ def compute_output_spec(self, x):
+ return KerasTensor([None] * len(x.shape))
+
@keras_... | diff --git a/keras/ops/numpy_test.py b/keras/ops/numpy_test.py
--- a/keras/ops/numpy_test.py
+++ b/keras/ops/numpy_test.py
@@ -1311,6 +1311,10 @@ def test_ndim(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.ndim(x).shape, (2,))
+ def test_nonzero(self):
+ x = KerasTensor((None, 5, ... | Numpy Ops function nonzero(x) appears to be missing a check for symbolic tensors
In updating code from Keras 2 to 3, we noticed that the nonzero function continues to throw errors when used with KerasTensor in TF functions, even when run through tf.keras.ops
Digging into the source, it appears that this function does not recei... | null | 2024-04-09 17:23:58+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/ops/numpy_test.py:NumpyTwoInputOpsCorretnessTest:test_take_sparse_axis_0_float64', 'keras/ops/numpy_test.py:NumpyOneInputOpsDynamicShapeTest:test_transpose', 'keras/ops/numpy_test.py:NumpyTwoInputOpsStaticShapeTest:test_less_equal', 'keras/ops/numpy_test.py:NumpyDtypeTest:test_prod_none', 'keras/ops/numpy_test.... | ['keras/ops/numpy_test.py:NumpyOneInputOpsDynamicShapeTest:test_nonzero'] | null | python -m pytest /testbed/keras/ops/numpy_test.py -v --json-report | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/ops/numpy.py->module->function_definition:nonzero", "keras/ops/numpy.py->module->class_definition:Nonzero->function_definition:compute_output_spec"] |
keras-team/keras | 19,484 | keras-team__keras-19484 | ['19411'] | 6a9bc4c051f0e4ee5e4ff48f08fd14230036dc46 | diff --git a/keras/optimizers/base_optimizer.py b/keras/optimizers/base_optimizer.py
--- a/keras/optimizers/base_optimizer.py
+++ b/keras/optimizers/base_optimizer.py
@@ -567,7 +567,7 @@ def _get_current_learning_rate(self):
):
return self._learning_rate(self.iterations)
elif callable(sel... | diff --git a/keras/optimizers/optimizer_test.py b/keras/optimizers/optimizer_test.py
--- a/keras/optimizers/optimizer_test.py
+++ b/keras/optimizers/optimizer_test.py
@@ -243,3 +243,12 @@ def test_tf_checkpointing(self):
checkpoint.restore(save_path)
pred = model.predict(x)
self.assertAllClos... | keras adamw optimizer failed with callable parameters in TensorFlow2.16
When we were working on upgrading keras 2 to keras 3 in the TensorFlow plugin, one of our adamw-related unit tests failed; it is a sub-test that uses a callable lambda as the learning_rate argument. We also found this unit test failed in TensorFlow2.16 official... | https://github.com/keras-team/keras/blob/6c591d7d34c3ffaa50e805fd75c83d9c2a23414f/keras/optimizers/base_optimizer.py#L560
Here is the root cause. If learning_rate is a callable object, then it doesn't need any arguments.
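One plausible resolution strategy for that root cause, calling zero-argument callables without the step count, can be sketched as follows (hypothetical stand-in, not the exact Keras code):

```python
import inspect

def resolve_learning_rate(lr, iterations):
    if callable(lr):
        if len(inspect.signature(lr).parameters) == 0:
            return lr()           # zero-arg lambda: needs no arguments
        return lr(iterations)     # schedule-style callable: takes the step
    return lr                     # plain float

assert resolve_learning_rate(lambda: 1e-3, 7) == 1e-3
assert resolve_learning_rate(lambda step: step * 2, 5) == 10
assert resolve_learning_rate(0.5, 7) == 0.5
```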
I might give this one a stab if no one picks it up.
@kapoor1992 , You can create a PR
@sachinpras... | 2024-04-10 22:45:57+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/optimizers/optimizer_test.py:OptimizerTest:test_set_weights', 'keras/optimizers/optimizer_test.py:OptimizerTest:test_ema', 'keras/optimizers/optimizer_test.py:OptimizerTest:test_get_method', 'keras/optimizers/optimizer_test.py:OptimizerTest:test_clip_args', 'keras/optimizers/optimizer_test.py:OptimizerTest:test... | ['keras/optimizers/optimizer_test.py:OptimizerTest:test_callable_learning_rate'] | null | python -m pytest /testbed/keras/optimizers/optimizer_test.py -v --json-report | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/optimizers/base_optimizer.py->module->class_definition:BaseOptimizer->function_definition:_get_current_learning_rate"] |
keras-team/keras | 19,641 | keras-team__keras-19641 | ['19591'] | 9f4da5159a098256dfbccd2c926107953a6812e5 | diff --git a/keras/src/backend/tensorflow/nn.py b/keras/src/backend/tensorflow/nn.py
--- a/keras/src/backend/tensorflow/nn.py
+++ b/keras/src/backend/tensorflow/nn.py
@@ -252,6 +252,12 @@ def _conv_xla():
# If kernel's in_channel does not match input's channels, it indicates
# convolution is broken d... | diff --git a/keras/src/ops/nn_test.py b/keras/src/ops/nn_test.py
--- a/keras/src/ops/nn_test.py
+++ b/keras/src/ops/nn_test.py
@@ -1445,23 +1445,29 @@ def test_conv_2d_group_2(self, strides, dilation_rate):
)
self.assertAllClose(outputs, expected)
- @parameterized.product(strides=(1, (1, 1, 1), 2... | Conv3D crash when the data_format is 'channels_first' and using Tensorflow backend
According to the Conv3D [documentation](https://keras.io/api/layers/convolution_layers/convolution3d/) on the Keras website, Conv3D should accept inputs with data format 'channels_first' or 'channels_last'.
While in this [colab](https://colab... | According to the error message, the lack of support is only on CPU -- GPU should work fine. There's no CPU kernel for channels_first Conv3D. We can't fix that on the Keras side except by doing a transpose/counter-transpose in that case, which would be very inefficient.
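The transpose/counter-transpose workaround mentioned in the reply amounts to permuting NCDHW data to NDHWC and back; the axis bookkeeping can be sketched in plain Python (illustrative only):

```python
def permute(shape, perm):
    return tuple(shape[i] for i in perm)

nchw = (2, 3, 8, 8, 8)              # batch, channels, depth, height, width
to_channels_last = (0, 2, 3, 4, 1)
to_channels_first = (0, 4, 1, 2, 3)

ndhwc = permute(nchw, to_channels_last)
assert ndhwc == (2, 8, 8, 8, 3)                    # channels moved to the end
assert permute(ndhwc, to_channels_first) == nchw   # round trip restores layout
```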
Got it. I'll try it on GPU.
@fchollet
Sorry for ... | 2024-04-30 00:14:46+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_depthwise_conv_2d2', 'keras/src/ops/nn_test.py:NNOpsDynamicShapeTest:test_log_sigmoid', 'keras/src/ops/nn_test.py:NNOpsDtypeTest:test_sigmoid_bfloat16', 'keras/src/ops/nn_test.py:NNOpsDynamicShapeTest:test_average_pool', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest... | ['keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d2', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d4', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d8', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d10', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d6', 'ke... | null | python -m pytest /testbed/keras/src/ops/nn_test.py -v --json-report | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/src/backend/tensorflow/nn.py->module->function_definition:conv"] |
keras-team/keras | 19,775 | keras-team__keras-19775 | ['19772'] | a243d91e43b4c43fe8d184b541b608b6ddd80f71 | diff --git a/keras/src/backend/tensorflow/numpy.py b/keras/src/backend/tensorflow/numpy.py
--- a/keras/src/backend/tensorflow/numpy.py
+++ b/keras/src/backend/tensorflow/numpy.py
@@ -1310,6 +1310,10 @@ def less_equal(x1, x2):
def linspace(
start, stop, num=50, endpoint=True, retstep=False, dtype=None, axis=0
):
... | diff --git a/keras/src/ops/numpy_test.py b/keras/src/ops/numpy_test.py
--- a/keras/src/ops/numpy_test.py
+++ b/keras/src/ops/numpy_test.py
@@ -2488,17 +2488,13 @@ def test_linspace(self):
np.linspace(start, stop, 5, retstep=True)[0],
)
self.assertAllClose(
- backend.convert_to_... | ops.linspace broken in Tensorflow when num is a tf.Tensor
When using ops.linspace with the TensorFlow backend, if the `num` argument is a tf.Tensor, the code will break here:
https://github.com/keras-team/keras/blob/a243d91e43b4c43fe8d184b541b608b6ddd80f71/keras/src/backend/tensorflow/numpy.py#L1332
Because `start` and ... | Hi @gustavoeb ,
Thanks for the report. I have reproduced the issue and attached [gist](https://colab.sandbox.google.com/gist/SuryanarayanaY/4bab4d097a48b487f32c28a1e89a2d9f/19772.ipynb) here. The Op `linspace` is breaking when the value of `num` is `int` or `float` | 2024-05-29 09:55:28+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/src/ops/numpy_test.py:NumpyDtypeTest:test_expand_dims_float32', 'keras/src/ops/numpy_test.py:NumpyOneInputOpsCorrectnessTest:test_all', 'keras/src/ops/numpy_test.py:NumpyDtypeTest:test_expm1_float64', 'keras/src/ops/numpy_test.py:SparseTest:test_binary_correctness_sparse_tensor_multiply_sparse_dense_float32', '... | ['keras/src/ops/numpy_test.py:NumpyTwoInputOpsCorretnessTest:test_linspace'] | null | python -m pytest /testbed/keras/src/ops/numpy_test.py -v --json-report | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/src/backend/tensorflow/numpy.py->module->function_definition:linspace"] |
keras-team/keras | 19,799 | keras-team__keras-19799 | ['19792'] | c94663711d738b50af324214d89f895e046a2b66 | diff --git a/keras/src/models/functional.py b/keras/src/models/functional.py
--- a/keras/src/models/functional.py
+++ b/keras/src/models/functional.py
@@ -181,6 +181,10 @@ def compute_output_spec(self, inputs, training=None, mask=None):
# From Function
return super().compute_output_spec(inputs)
+ ... | diff --git a/keras/src/models/functional_test.py b/keras/src/models/functional_test.py
--- a/keras/src/models/functional_test.py
+++ b/keras/src/models/functional_test.py
@@ -118,6 +118,20 @@ def test_basic_flow_dict_io(self):
out_val = model(in_val)
self.assertEqual(out_val.shape, (2, 4))
+ def ... | TimeDistributed layer with nested model no longer working in TensorFlow 2.16.1
With TensorFlow `2.15.1`, the following code works fine:
```python3
import numpy as np
from tensorflow.keras.layers import Input, TimeDistributed, Flatten
from tensorflow.keras.models import Model, Sequential
inputs = [Input((17, 4)... | null | 2024-06-04 05:07:23+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/src/models/functional_test.py:FunctionalTest:test_rank_standardization', 'keras/src/models/sequential_test.py:SequentialTest:test_dict_inputs', 'keras/src/models/functional_test.py:FunctionalTest:test_basic_flow_multi_output', 'keras/src/models/sequential_test.py:SequentialTest:test_basic_flow_deferred', 'keras... | ['keras/src/models/functional_test.py:FunctionalTest:test_basic_flow_as_a_submodel', 'keras/src/models/sequential_test.py:SequentialTest:test_basic_flow_as_a_submodel', 'keras/src/models/sequential_test.py:SequentialTest:test_compute_output_shape', 'keras/src/ops/function_test.py:FunctionTest:test_dynamic_shape_inferen... | null | python -m pytest /testbed/keras/src/models/functional_test.py /testbed/keras/src/models/sequential_test.py /testbed/keras/src/ops/function_test.py -v --json-report | Bug Fix | false | false | false | true | 3 | 3 | 6 | false | false | ["keras/src/models/sequential.py->module->class_definition:Sequential->function_definition:compute_output_shape", "keras/src/models/sequential.py->module->class_definition:Sequential", "keras/src/models/functional.py->module->class_definition:Functional->function_definition:compute_output_shape", "keras/src/models/func... |
keras-team/keras | 19,826 | keras-team__keras-19826 | ['19821'] | 2305fada8889e86463493bb4893b13ee8a8f0573 | diff --git a/keras/src/ops/numpy.py b/keras/src/ops/numpy.py
--- a/keras/src/ops/numpy.py
+++ b/keras/src/ops/numpy.py
@@ -4345,26 +4345,44 @@ def call(self, x):
def compute_output_spec(self, x):
x_shape = list(x.shape)
+ repeats = self.repeats
+ if isinstance(repeats, int):
+ r... | diff --git a/keras/src/ops/numpy_test.py b/keras/src/ops/numpy_test.py
--- a/keras/src/ops/numpy_test.py
+++ b/keras/src/ops/numpy_test.py
@@ -1364,7 +1364,7 @@ def test_repeat(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.repeat(x, 2).shape, (None,))
self.assertEqual(knp.repeat(x, 3... | `keras.ops.repeat` cannot return the expected shape when `x` is a `KerasTensor` and the `axis` is `None`
Hello. Thank you for your contributions and for maintaining Keras.
I'm following the instructions of [Conditional GAN (code samples, uses Keras 3)](https://keras.io/examples/generative/conditional_gan/)... | I can look into this and report my findings in a few hours
This is due to an oversight caused by the different ways Keras and other backends handle the `repeats` parameter.
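A shape-inference sketch of the `repeat` semantics with `axis=None` (plain Python, hypothetical helper mirroring the behavior discussed here): the result is flattened, and its length is unknown whenever any input dimension is dynamic.

```python
from math import prod

def repeat_output_shape(x_shape, repeats, axis=None):
    if axis is None:
        # Flattened result; unknown length if any dim is dynamic (None).
        if any(d is None for d in x_shape):
            return (None,)
        return (prod(x_shape) * repeats,)
    out = list(x_shape)
    if out[axis] is not None:
        out[axis] *= repeats
    return tuple(out)

assert repeat_output_shape((2, 3), 2) == (12,)
assert repeat_output_shape((None, 3), 2) == (None,)
assert repeat_output_shape((2, 3), 2, axis=1) == (2, 6)
```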
You can submit a PR after you solve it.
Edited: [Was confused about the expected dimensions of the output but I found the mistake in my logic] | 2024-06-10 15:05:53+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Copy the entire repository
COPY . .
# Install dependencies and the package itself
RUN pip install -e . && \
pip install pytest pytest-json-report && \
pip instal... | ['keras/src/ops/numpy_test.py:NumpyDtypeTest:test_expand_dims_float32', 'keras/src/ops/numpy_test.py:NumpyOneInputOpsCorrectnessTest:test_all', 'keras/src/ops/numpy_test.py:NumpyDtypeTest:test_expm1_float64', 'keras/src/ops/numpy_test.py:SparseTest:test_binary_correctness_sparse_tensor_multiply_sparse_dense_float32', '... | ['keras/src/ops/numpy_test.py:NumpyOneInputOpsStaticShapeTest:test_repeat', 'keras/src/ops/numpy_test.py:NumpyOneInputOpsDynamicShapeTest:test_repeat'] | null | python -m pytest /testbed/keras/src/ops/numpy_test.py -v | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/src/ops/numpy.py->module->class_definition:Repeat->function_definition:compute_output_spec"] |
keras-team/keras | 19,838 | keras-team__keras-19838 | ['19825'] | 26abe697a8802de40cb2761fc98b843fe1b2d5f6 | diff --git a/keras/src/losses/losses.py b/keras/src/losses/losses.py
--- a/keras/src/losses/losses.py
+++ b/keras/src/losses/losses.py
@@ -1711,6 +1711,9 @@ def sparse_categorical_crossentropy(
array([0.0513, 2.303], dtype=float32)
"""
+ if len(y_true.shape) == len(y_pred.shape) and y_true.shape[-1] == 1... | diff --git a/keras/src/losses/losses_test.py b/keras/src/losses/losses_test.py
--- a/keras/src/losses/losses_test.py
+++ b/keras/src/losses/losses_test.py
@@ -1055,7 +1055,7 @@ def test_no_reduction(self):
from_logits=True, reduction=None
)
loss = cce_obj(y_true, logits)
- self.ass... | sparse_categorical_crossentropy with ignore_class fails for 4D inputs
Using `ignore_class` with `keras.losses.sparse_categorical_crossentropy` and 4D inputs (Batch x Height x Width x Class) fails with a ValueError indicating wrong shapes.
Minimal example to reproduce:
```
import numpy as np
import tensorflow as t... | > y_true = np.zeros((1, 224, 224, 1))
=> `y_true = np.zeros((1, 224, 224))`
Shouldn't `y_true` have one dimension less than `y_pred`?
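For reference, sparse labels are expected to have one axis fewer than predictions; a plain-Python sketch of that rank contract (hypothetical helper, including the trailing size-1 squeeze):

```python
def squeeze_sparse_labels(y_true_shape, y_pred_shape):
    # A trailing size-1 label axis is squeezed so that labels end up with
    # exactly one axis fewer than the predictions.
    if len(y_true_shape) == len(y_pred_shape) and y_true_shape[-1] == 1:
        y_true_shape = y_true_shape[:-1]
    assert len(y_true_shape) == len(y_pred_shape) - 1, "rank mismatch"
    return y_true_shape

assert squeeze_sparse_labels((1, 224, 224, 1), (1, 224, 224, 10)) == (1, 224, 224)
assert squeeze_sparse_labels((1, 224, 224), (1, 224, 224, 10)) == (1, 224, 224)
```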
Oh, you are right, with `y_true = np.zeros((1, 224, 224))` it seems to work...
However, when omitting `ignore_class` from `sparse_categorical_crossentropy`, `y_true = np.zeros((1... | 2024-06-11 16:45:49+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy repository contents
COPY . .
# In... | ['keras/src/losses/losses_test.py:CategoricalFocalCrossentropyTest:test_label_smoothing', 'keras/src/losses/losses_test.py:SparseCategoricalCrossentropyTest:test_unweighted', 'keras/src/losses/losses_test.py:MeanAbsoluteErrorTest:test_zero_weighted', 'keras/src/losses/losses_test.py:CategoricalCrossentropyTest:test_con... | ['keras/src/losses/losses_test.py:SparseCategoricalCrossentropyTest:test_ignore_class'] | null | pytest /testbed/keras/src/losses/losses_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/src/losses/losses.py->module->function_definition:sparse_categorical_crossentropy"] |
keras-team/keras | 19,844 | keras-team__keras-19844 | ['19828'] | 1c60668f6bdd05dab619806e7b2dc25d3ed4ccbf | diff --git a/keras/src/initializers/__init__.py b/keras/src/initializers/__init__.py
--- a/keras/src/initializers/__init__.py
+++ b/keras/src/initializers/__init__.py
@@ -49,6 +49,7 @@
"uniform": RandomUniform,
"normal": RandomNormal,
"orthogonal": OrthogonalInitializer,
+ "Orthogonal"... | diff --git a/keras/src/initializers/random_initializers_test.py b/keras/src/initializers/random_initializers_test.py
--- a/keras/src/initializers/random_initializers_test.py
+++ b/keras/src/initializers/random_initializers_test.py
@@ -147,6 +147,10 @@ def test_orthogonal_initializer(self):
self.run_class_ser... | Keras 3.0 load h5 model with Orthogonal initializer fails
Hi guys,
I'm trying to load an h5 model that was working in earlier versions.
* This is a small part of the h5 file, where you can see (last part of the snippet) a recurrent initializer with a classname of **Orthogonal**.
```
{"name": "decoder_gru0", ... | Hi @mahnehsilla -
Thanks for raising the issue. Can you share the code snippet and h5 model where you are getting this error, so I can reproduce it and try to help you?
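The failure is a registry lookup that only knows the lowercase alias; a plain-Python sketch (hypothetical names, mirroring the kind of one-line alias a fix would add):

```python
ALL_OBJECTS_DICT = {"orthogonal": "OrthogonalInitializer"}

assert "Orthogonal" not in ALL_OBJECTS_DICT          # legacy class name fails to resolve
ALL_OBJECTS_DICT["Orthogonal"] = ALL_OBJECTS_DICT["orthogonal"]  # add the alias
assert ALL_OBJECTS_DICT["Orthogonal"] == "OrthogonalInitializer"
```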
| 2024-06-12 08:33:53+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential
# Copy the entire repository
COPY . .
# Install tensorflow and other backend dependencies first
RUN pip install tensorflow n... | ['keras/src/layers/rnn/gru_test.py:GRUTest:test_pass_initial_state', 'keras/src/initializers/random_initializers_test.py:InitializersTest:test_variance_scaling', 'keras/src/layers/rnn/gru_test.py:GRUTest:test_statefulness', 'keras/src/initializers/random_initializers_test.py:InitializersTest:test_variance_scaling_inval... | ['keras/src/layers/rnn/gru_test.py:GRUTest:test_legacy_implementation_argument', 'keras/src/initializers/random_initializers_test.py:InitializersTest:test_orthogonal_initializer'] | null | pytest /testbed/keras/src/initializers/random_initializers_test.py /testbed/keras/src/layers/rnn/gru_test.py -v --junitxml=test-results.xml | Bug Fix | false | false | true | false | 0 | 1 | 1 | false | true | ["keras/src/layers/rnn/gru.py->module->class_definition:GRU->function_definition:__init__"] |
keras-team/keras | 19,863 | keras-team__keras-19863 | ['19535'] | f6cf6a0e77dd504cfc35dd499dd8694b0b80b4ae | diff --git a/keras/src/utils/summary_utils.py b/keras/src/utils/summary_utils.py
--- a/keras/src/utils/summary_utils.py
+++ b/keras/src/utils/summary_utils.py
@@ -76,17 +76,31 @@ def bold_text(x, color=None):
def format_layer_shape(layer):
- if not layer._inbound_nodes:
+ if not layer._inbound_nodes and not ... | diff --git a/keras/src/utils/summary_utils_test.py b/keras/src/utils/summary_utils_test.py
--- a/keras/src/utils/summary_utils_test.py
+++ b/keras/src/utils/summary_utils_test.py
@@ -40,3 +40,37 @@ def print_to_variable(text, line_break=False):
self.assertNotIn("Optimizer params", summary_content)
... | model.summary() broken for custom models subclassed from keras.Model
### Current behavior?
**Custom model classes built from keras.Model are not recognized as properly built, and model.summary() is missing information.** However, the model will run just fine. In keras version 2.15.0, we see it working properly, ... | > the layer does not have a `build()` method implemented and it looks like
it has unbuilt state. This will cause the layer to be marked as built, despite not being actually built, which
may cause failures down the line. Make sure to implement a proper `build()` method.
As indicated by this message, you need to i... | 2024-06-17 09:58:10+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
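A toy illustration of the `build()` contract that warning refers to (stand-in class, not the Keras base `Layer`): creating state inside `build()` is what lets shape information reach the summary.

```python
class MiniLayer:
    def __init__(self):
        self.built = False
        self.output_shape = None

    def build(self, input_shape):
        # A real layer would create its weights here; recording the shape is
        # enough to show why summaries depend on build() having run.
        self.output_shape = input_shape
        self.built = True

layer = MiniLayer()
layer.build((None, 32))
assert layer.built and layer.output_shape == (None, 32)
```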
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential
# Copy the entire repository
COPY . .
# Install tensorflow and other backend dependencies first
RUN pip install tensorflow n... | ['keras/src/utils/summary_utils_test.py:SummaryUtilsTest:test_print_model_summary1', 'keras/src/utils/summary_utils_test.py:SummaryUtilsTest:test_print_model_summary0'] | ['keras/src/utils/summary_utils_test.py:SummaryUtilsTest:test_print_model_summary_custom_build'] | null | pytest /testbed/keras/src/utils/summary_utils_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/src/utils/summary_utils.py->module->function_definition:format_layer_shape"] |
keras-team/keras | 19,903 | keras-team__keras-19903 | ['19708'] | 596d5ba420dd2865d576db2c5f860d9d77db8054 | diff --git a/keras/api/_tf_keras/keras/ops/__init__.py b/keras/api/_tf_keras/keras/ops/__init__.py
--- a/keras/api/_tf_keras/keras/ops/__init__.py
+++ b/keras/api/_tf_keras/keras/ops/__init__.py
@@ -16,6 +16,7 @@
from keras.src.ops.core import dtype
from keras.src.ops.core import fori_loop
from keras.src.ops.core im... | diff --git a/keras/src/ops/core_test.py b/keras/src/ops/core_test.py
--- a/keras/src/ops/core_test.py
+++ b/keras/src/ops/core_test.py
@@ -19,6 +19,23 @@
class CoreOpsStaticShapeTest(testing.TestCase):
+ def test_map(self):
+ def f(x):
+ return x**2
+
+ xs = KerasTensor((6,))
+ y... | Request for a map function like map_fn in TF and vmap in Jax
Currently, it seems there is no function to map a function over a tensor in keras 3.0. Such a function should do what map_fn in TF and vmap in Jax do. Otherwise, it is not easy to switch between the backends.
Perhaps I missed something here coul... | Do you mean `keras.ops.vectorized_map`?
Hi François,
Thank you for your quick response.
Sorry I am not so familiar with Jax. Now I found that vmap is similar to vectorized_map in TF and keras.
I am particularly interested in map_fn because my operation cannot be vectorized due to the large intermediate va... | 2024-06-22 15:48:34+00:00 | Python
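For context, the requested `map_fn`-style semantics (apply a function to each slice along the leading axis, without requiring the function to be vectorizable) can be sketched in plain Python; this is illustrative only, not the Keras API:

```python
def simple_map(fn, xs):
    # Sequential map over the leading axis; each call may allocate large
    # intermediates, which is exactly the case vectorization cannot handle.
    return [fn(x) for x in xs]

assert simple_map(lambda x: x * x, [1, 2, 3]) == [1, 4, 9]
```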
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy the entire repository
COPY . .
# ... | ['keras/src/ops/core_test.py:CoreOpsDtypeTest:test_convert_to_tensor20', 'keras/src/ops/core_test.py:CoreOpsCorrectnessTest:test_slice_update', 'keras/src/ops/core_test.py:CoreOpsDtypeTest:test_convert_to_tensor18', 'keras/src/ops/core_test.py:CoreOpsCallsTests:test_slice_basic_call', 'keras/src/ops/core_test.py:CoreOp... | ['keras/src/ops/core_test.py:CoreOpsStaticShapeTest:test_map', 'keras/src/ops/core_test.py:CoreOpsCorrectnessTest:test_map', 'keras/src/ops/core_test.py:CoreOpsCallsTests:test_map_basic_call'] | null | python -m pytest /testbed/keras/src/ops/core_test.py -v --junitxml=test-results.xml | Feature | false | false | false | true | 13 | 2 | 15 | false | false | ["keras/src/ops/core.py->module->function_definition:map", "keras/src/backend/numpy/core.py->module->function_definition:map", "keras/src/backend/numpy/core.py->module->function_definition:map->function_definition:g", "keras/src/backend/jax/core.py->module->function_definition:map", "keras/src/ops/core.py->module->clas... |
keras-team/keras | 19,915 | keras-team__keras-19915 | ['19913'] | f0bae912201bbd265a3485ccf4f490be2fc675c7 | diff --git a/keras/src/export/export_lib.py b/keras/src/export/export_lib.py
--- a/keras/src/export/export_lib.py
+++ b/keras/src/export/export_lib.py
@@ -654,13 +654,18 @@ def make_tensor_spec(structure):
# into plain Python structures because they don't work with jax2tf/JAX.
if isinstance(structure,... | diff --git a/keras/src/export/export_lib_test.py b/keras/src/export/export_lib_test.py
--- a/keras/src/export/export_lib_test.py
+++ b/keras/src/export/export_lib_test.py
@@ -196,6 +196,22 @@ def call(self, inputs):
)
revived_model.serve(bigger_input)
+ # Test with keras.saving_lib
+ t... | Unable to export reloaded model
Saving and reloading model makes it impossible to export it as a SavedModel artifact.
Reloaded model has shapes defined as lists while export function expect tuples.
Casting the shape to tuple in this particular place resolves the issue, but there may be other errors related to this ... | null | 2024-06-25 14:03:04+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the entire repository
COPY . .
# Install JAX with CPU support first (it has specific requirements)
RUN pi... | ['keras/src/export/export_lib_test.py:ExportArchiveTest:test_model_export_method_sequential', 'keras/src/export/export_lib_test.py:ExportArchiveTest:test_model_with_multiple_inputs', 'keras/src/export/export_lib_test.py:ExportArchiveTest:test_multi_input_output_functional_model', 'keras/src/export/export_lib_test.py:Ex... | ['keras/src/export/export_lib_test.py:ExportArchiveTest:test_model_with_input_structure_tuple', 'keras/src/export/export_lib_test.py:ExportArchiveTest:test_model_with_input_structure_array', 'keras/src/export/export_lib_test.py:ExportArchiveTest:test_model_with_input_structure_dict'] | null | pytest /testbed/keras/src/export/export_lib_test.py -v --junitxml=test-results.xml | Bug Fix | true | false | false | false | 0 | 0 | 0 | false | false | ["keras/src/export/export_lib.py->module->function_definition:_get_input_signature->function_definition:make_tensor_spec"] |
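The patch in this record (`make_tensor_spec` in `export_lib.py`) works around shapes that come back as lists after a save/reload round-trip, while export code expects tuples. A minimal, hypothetical helper showing the normalization idea; the function name is mine, not Keras's:

```python
def normalize_shape(shape):
    """Return a shape as a tuple (or None).

    JSON round-trips -- e.g. reloading a saved model config -- turn tuples
    into lists; code that later expects hashable/tuple shapes then breaks.
    """
    if shape is None:
        return None
    return tuple(shape)


reloaded = [None, 10, 10]  # shape as deserialized from a saved config
print(normalize_shape(reloaded))  # (None, 10, 10)
```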
keras-team/keras | 19,924 | keras-team__keras-19924 | ['19921'] | a2e9a5252d2eab389bd19d359e6e7325a8232c79 | diff --git a/keras/src/saving/saving_lib.py b/keras/src/saving/saving_lib.py
--- a/keras/src/saving/saving_lib.py
+++ b/keras/src/saving/saving_lib.py
@@ -160,6 +160,9 @@ def _save_model_to_fileobj(model, fileobj, weights_format):
f.write(config_json.encode())
weights_file_path = None
+ w... | diff --git a/keras/src/saving/saving_lib_test.py b/keras/src/saving/saving_lib_test.py
--- a/keras/src/saving/saving_lib_test.py
+++ b/keras/src/saving/saving_lib_test.py
@@ -634,6 +634,7 @@ def save_own_variables(self, store):
with zipfile.ZipFile(filepath) as zf:
all_filenames = zf.namelist()
... | Bug in Keras 3.4.0: Loading model error 'No such file or directory: 'model.weights.h5'
### Environment:
Ubuntu 22.04
Tensorflow 2.16.1
Keras 3.4.0
### Reproducing steps
(1) Create the following python script `tf-save.py` to generate model file:
```
import os.path
import pandas as pd
import numpy as n... | We have confirmed this issue is not Tensorflow issue but bug introduced in Keras 3.4.0
https://github.com/tensorflow/tensorflow/issues/70273#issuecomment-2191371907
Our MLflow CI starting to fail since yesterday due to the same reason (becaus yesterday Keras 3.4.0 was released)
https://github.com/mlflow-automation/ml... | 2024-06-26 14:50:58+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the entire repository
COPY . .
# Install JAX with CPU support first (it has specific requirements)
RUN pi... | ['keras/src/saving/saving_lib_test.py:SavingBattleTest:test_bidirectional_lstm_saving', 'keras/src/saving/saving_lib_test.py:SavingTest:test_saved_module_paths_and_class_names', 'keras/src/saving/saving_lib_test.py:SavingBattleTest:test_nested_functional_model_saving', 'keras/src/saving/saving_lib_test.py:SavingTest:te... | ['keras/src/saving/saving_lib_test.py:SavingTest:test_save_model_exception_raised'] | null | pytest /testbed/keras/src/saving/saving_lib_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/src/saving/saving_lib.py->module->function_definition:_load_model_from_fileobj", "keras/src/saving/saving_lib.py->module->function_definition:_save_model_to_fileobj"] |
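The bug in this record surfaced as a missing `model.weights.h5` during load; the patch makes `_save_model_to_fileobj` manage the temporary weights file more carefully. A generic, hypothetical sketch of the underlying pattern (not the actual saving_lib code): write sidecar data to a temp file, zip it into the archive, and clean the temp file up whether or not an exception is raised.

```python
import os
import tempfile
import zipfile


def save_archive(path, write_weights):
    """Write weights to a temp file, zip it, and always clean up.

    If ``write_weights`` (or zipping) raises, the temporary file is still
    removed, so no stray weights file is left behind on disk.
    """
    tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".weights.h5")
    tmp.close()
    try:
        write_weights(tmp.name)  # may raise
        with zipfile.ZipFile(path, "w") as zf:
            zf.write(tmp.name, arcname="model.weights.h5")
    finally:
        os.remove(tmp.name)  # runs on success or error
```

The `finally` clause is the load-bearing part: without it, an exception mid-save leaves partial state behind that a later load can trip over.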
keras-team/keras | 19,931 | keras-team__keras-19931 | ['19919'] | bba7b1a0d6cbee94b04f70514228fca9c1d165ae | diff --git a/keras/src/backend/tensorflow/numpy.py b/keras/src/backend/tensorflow/numpy.py
--- a/keras/src/backend/tensorflow/numpy.py
+++ b/keras/src/backend/tensorflow/numpy.py
@@ -2126,33 +2126,31 @@ def tri(N, M=None, k=0, dtype=None):
def tril(x, k=0):
x = convert_to_tensor(x)
- if k >= 0:
- retu... | diff --git a/keras/src/ops/numpy_test.py b/keras/src/ops/numpy_test.py
--- a/keras/src/ops/numpy_test.py
+++ b/keras/src/ops/numpy_test.py
@@ -4168,13 +4168,15 @@ def test_tril_in_layer(self):
y1 = keras.layers.Lambda(
lambda x: keras.ops.tril(
keras.ops.ones((keras.ops.shape(x)[1... | Ops inconsistency with tensorflow for tril and triu
`ops.tril` and `ops.triu` allow specifying a negative diagonal. For compiled runs, we use a python conditional instead of cond to check the sign of the diagonal which breaks. This is tensorflow specific, other backends allow negative diagonals.
```
../miniconda3/e... | null | 2024-06-28 01:31:19+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the entire repository
COPY . .
# Install JAX with CPU support first (it has specific requirements)
RUN pi... | ['keras/src/ops/numpy_test.py:NumpyDtypeTest:test_expand_dims_float32', 'keras/src/ops/numpy_test.py:NumpyOneInputOpsCorrectnessTest:test_all', 'keras/src/ops/numpy_test.py:NumpyDtypeTest:test_expm1_float64', 'keras/src/ops/numpy_test.py:SparseTest:test_binary_correctness_sparse_tensor_multiply_sparse_dense_float32', '... | ['keras/src/ops/numpy_test.py:NumpyOneInputOpsCorrectnessTest:test_triu_with_jit_in_tf', 'keras/src/ops/numpy_test.py:NumpyOneInputOpsCorrectnessTest:test_tril_with_jit_in_tf'] | null | pytest /testbed/keras/src/ops/numpy_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 4 | 0 | 4 | false | false | ["keras/src/backend/tensorflow/numpy.py->module->function_definition:triu", "keras/src/backend/tensorflow/numpy.py->module->function_definition:tril->function_definition:_negative_k_branch", "keras/src/backend/tensorflow/numpy.py->module->function_definition:tril", "keras/src/backend/tensorflow/numpy.py->module->functi... |
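This record's fix replaces a Python-level `if k >= 0:` in the TensorFlow backend's `tril`/`triu` with logic that also works for negative diagonals under tracing. The op's semantics reduce to a single per-element index comparison, sketched here in pure Python on nested lists (an illustration of the math, not the backend code):

```python
def tril(matrix, k=0):
    """Lower triangle of a matrix: keep entry (i, j) when j - i <= k.

    One index comparison covers positive, zero, and negative ``k`` alike,
    which is why no sign-of-k Python branch is needed.
    """
    return [
        [v if j - i <= k else 0 for j, v in enumerate(row)]
        for i, row in enumerate(matrix)
    ]


m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(tril(m, k=-1))  # [[0, 0, 0], [4, 0, 0], [7, 8, 0]]
```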
keras-team/keras | 19,937 | keras-team__keras-19937 | ['19932'] | 309f2c9c8959222e59d537b447c087a65c8b8998 | diff --git a/keras/src/losses/loss.py b/keras/src/losses/loss.py
--- a/keras/src/losses/loss.py
+++ b/keras/src/losses/loss.py
@@ -1,4 +1,5 @@
from keras.src import backend
+from keras.src import dtype_policies
from keras.src import ops
from keras.src import tree
from keras.src.api_export import keras_export
@@ -10... | diff --git a/keras/src/losses/loss_test.py b/keras/src/losses/loss_test.py
--- a/keras/src/losses/loss_test.py
+++ b/keras/src/losses/loss_test.py
@@ -4,6 +4,7 @@
import pytest
from keras.src import backend
+from keras.src import dtype_policies
from keras.src import losses as losses_module
from keras.src import o... | `unhashable type: 'DTypePolicy'` may leads problems in keras 3.4.1
Hello. Thank you for your contributions and maintenance for the best Keras.
I'm working on a customized loss and using `keras.DTypePolicy` to config the dtype in it, as the following:
```python
class MyCustomizedLoss(keras.losses.Loss):
def __... | Hi @Zhaopudark -
Thanks for reporting the issue. I have tested the code snippet and reproduces the reported behaviour. Attached [gist](https://colab.sandbox.google.com/gist/mehtamansi29/62c99255871ca72042fb42c3f3391c5a/19932-unhashable-type-dtypepolicy-may-leads-problems-in-keras-3-4-1.ipynb) file for reference.
We... | 2024-06-29 15:23:58+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the entire repository
COPY . .
# Install JAX with CPU support first (it has specific requirements)
RUN pi... | ['keras/src/losses/loss_test.py:LossTest:test_pickle', 'keras/src/losses/loss_test.py:LossTest:test_mask', 'keras/src/metrics/metric_test.py:MetricTest:test_serialization', 'keras/src/losses/loss_test.py:LossTest:test_get_method', 'keras/src/metrics/metric_test.py:MetricTest:test_pickle', 'keras/src/losses/loss_test.py... | ['keras/src/metrics/metric_test.py:MetricTest:test_dtype_arg', 'keras/src/losses/loss_test.py:LossTest:test_dtype_arg'] | null | pytest /testbed/keras/src/losses/loss_test.py /testbed/keras/src/metrics/metric_test.py -v --junitxml=test-results.xml | Bug Fix | false | false | false | true | 1 | 24 | 25 | false | false | ["keras/src/metrics/metric.py->module->class_definition:Metric->function_definition:__init__", "keras/src/losses/losses.py->module->class_definition:MeanAbsoluteError", "keras/src/losses/losses.py->module->class_definition:Huber", "keras/src/losses/losses.py->module->class_definition:Tversky", "keras/src/losses/losses.... |
keras-team/keras | 19,955 | keras-team__keras-19955 | ['19952'] | ca9519bf182650cd464d6825de451471b3243627 | diff --git a/keras/src/backend/common/keras_tensor.py b/keras/src/backend/common/keras_tensor.py
--- a/keras/src/backend/common/keras_tensor.py
+++ b/keras/src/backend/common/keras_tensor.py
@@ -90,6 +90,20 @@ def squeeze(self, axis=None):
return ops.Squeeze(axis)(self)
+ def __int__(self):
+ rai... | diff --git a/keras/src/backend/common/variables_test.py b/keras/src/backend/common/variables_test.py
--- a/keras/src/backend/common/variables_test.py
+++ b/keras/src/backend/common/variables_test.py
@@ -1,3 +1,5 @@
+import itertools
+
import numpy as np
import pytest
from absl.testing import parameterized
@@ -11,6 +... | Can't get learning rate as a value instead of a keras.Variable (and docs are incorrect)
I need to capture the learning rate from a compiled keras model (tensorflow backend but it shouldn't matter).
Previously I did this with tf.keras.backend.get_value(...), but keras 3.0 doesn't seem to have an equivalent.
I want to ... | null | 2024-07-04 12:57:24+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
WORKDIR /testbed
# Install git and build essentials for potential dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev
# Copy the entire repository
COPY . .
# Install JAX with CPU support first (it has specific requirements)
RUN pi... | ['keras/src/backend/common/variables_test.py:VariableDtypeShapeNdimRepr:test_variable_dtype', 'keras/src/backend/common/variables_test.py:VariablePropertiesTest:test_deferred_assignment', 'keras/src/backend/common/variables_test.py:VariablePropertiesTest:test_standardize_dtype_with_torch_dtype', 'keras/src/backend/comm... | ['keras/src/backend/common/variables_test.py:VariableOpsCorrentnessTest:test_round', 'keras/src/backend/common/variables_test.py:VariableOpsCorrentnessTest:test_float', 'keras/src/backend/common/variables_test.py:VariableOpsCorrentnessTest:test_int', 'keras/src/backend/common/variables_test.py:VariableOpsBehaviorTest:t... | null | pytest /testbed/keras/src/backend/common/variables_test.py -v --junitxml=test-results.xml | Bug Fix | false | false | false | true | 36 | 3 | 39 | false | false | ["keras/src/backend/common/variables.py->module->class_definition:KerasVariable->function_definition:__eq__", "keras/src/backend/common/variables.py->module->class_definition:KerasVariable->function_definition:__and__", "keras/src/backend/common/variables.py->module->class_definition:KerasVariable->function_definition:... |
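Part of this record's change lets concrete variables be converted to plain Python numbers (the `test_float`/`test_int` cases), which is what the reporter needed to read a learning rate as a value; symbolic `KerasTensor`s, by contrast, gain `__int__` that raises. A simplified stand-in class illustrating the conversion half (hypothetical, not the Keras `Variable`):

```python
class Variable:
    """Tiny variable wrapper: delegate numeric conversion to its value.

    With ``__float__``/``__int__`` defined, ``float(v)`` yields a plain
    Python number instead of the wrapper object.
    """

    def __init__(self, value):
        self._value = value

    def __float__(self):
        return float(self._value)

    def __int__(self):
        return int(self._value)


lr = Variable(0.001)
print(float(lr))  # 0.001
```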
keras-team/keras | 19,973 | keras-team__keras-19973 | ['19769'] | 10a008fac10e2eb7dd343c128cbf2e0f971fa993 | diff --git a/keras/src/layers/attention/multi_head_attention.py b/keras/src/layers/attention/multi_head_attention.py
--- a/keras/src/layers/attention/multi_head_attention.py
+++ b/keras/src/layers/attention/multi_head_attention.py
@@ -210,6 +210,21 @@ def build(
key: Optional shape of the `key` tensor.
... | diff --git a/keras/src/layers/attention/multi_head_attention_test.py b/keras/src/layers/attention/multi_head_attention_test.py
--- a/keras/src/layers/attention/multi_head_attention_test.py
+++ b/keras/src/layers/attention/multi_head_attention_test.py
@@ -148,6 +148,10 @@ def test_shape_mismatch_error(self, query_shape,... | Inconsistent assertion in keras.layers.MultiHeadAttention
I've noticed that depending on what is fed as the key, query and value to the keras.layers.MultiHeadAttention the assertion query_shape==value_shape is only _sometimes_ activated.
Minimal working example (no assertion error):
```
`import os`
`os.environ["K... | null | 2024-07-11 01:00:28+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy the entire repository
COPY . .
# ... | ['keras/src/layers/attention/multi_head_attention_test.py:MultiHeadAttentionTest:test_compute_output_shape_without_key_same_proj', 'keras/src/layers/attention/multi_head_attention_test.py:MultiHeadAttentionTest:test_basics', 'keras/src/layers/attention/multi_head_attention_test.py:MultiHeadAttentionTest:test_high_dim_a... | ['keras/src/layers/attention/multi_head_attention_test.py:MultiHeadAttentionTest:test_shape_mismatch_error_key_value_dim_mismatch', 'keras/src/layers/attention/multi_head_attention_test.py:MultiHeadAttentionTest:test_shape_mismatch_error_query_value_dim_mismatch', 'keras/src/layers/attention/multi_head_attention_test.p... | null | python -m pytest /testbed/keras/src/layers/attention/multi_head_attention_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/src/layers/attention/multi_head_attention.py->module->class_definition:MultiHeadAttention->function_definition:build"] |
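The fix in this record moves the shape-consistency check into `MultiHeadAttention.build` so it fires for every query/key/value combination, not only some. A hypothetical validator sketching the two constraints the failing tests name (query/value last-dim mismatch, key/value dim mismatch); the exact error wording is mine:

```python
def check_attention_shapes(query_shape, value_shape, key_shape=None):
    """Raise early when attention input shapes are inconsistent."""
    if key_shape is None:
        key_shape = value_shape
    if query_shape[-1] != value_shape[-1]:
        raise ValueError(
            f"query feature dim {query_shape[-1]} != "
            f"value feature dim {value_shape[-1]}"
        )
    # Key and value must agree on every non-batch, non-feature axis.
    if value_shape[1:-1] != key_shape[1:-1]:
        raise ValueError("key/value must agree on all but the last axis")


check_attention_shapes((2, 4, 8), (2, 4, 8))  # consistent: no error
```

Validating in `build` means the error surfaces once, at graph-construction time, for all call paths.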
keras-team/keras | 20,002 | keras-team__keras-20002 | ['19982'] | 576daec845cbc83cebb040e018ba9fdae1902738 | diff --git a/keras/src/models/sequential.py b/keras/src/models/sequential.py
--- a/keras/src/models/sequential.py
+++ b/keras/src/models/sequential.py
@@ -137,6 +137,12 @@ def _maybe_rebuild(self):
if isinstance(self._layers[0], InputLayer) and len(self._layers) > 1:
input_shape = self._layers[0].... | diff --git a/keras/src/models/sequential_test.py b/keras/src/models/sequential_test.py
--- a/keras/src/models/sequential_test.py
+++ b/keras/src/models/sequential_test.py
@@ -150,6 +150,58 @@ def test_basic_flow_as_a_submodel(self):
y = model(x)
self.assertEqual(y.shape, (2, 3, 4))
+ def test_bas... | "ValueError: Undefined shapes are not supported." when calling model.call()
hello everybody.
I'm having trouble creating a Siamese network class, which extends keras.Model , from a function that returns the same model. My knowledge about [keras.Model](https://keras.io/api/models/model/) isn't good, so I don't know i... | Got the same Error,


Hey @jpeg-souza
you can try the following:
```python
import keras
from keras import ops
def euclidean_di... | 2024-07-17 03:10:57+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy the entire repository
COPY . .
# ... | ['keras/src/models/sequential_test.py:SequentialTest:test_compute_output_shape', 'keras/src/models/sequential_test.py:SequentialTest:test_functional_properties', 'keras/src/models/sequential_test.py:SequentialTest:test_legacy_flow_with_input_shape', 'keras/src/models/sequential_test.py:SequentialTest:test_list_inputs',... | ['keras/src/models/sequential_test.py:SequentialTest:test_basic_flow_with_functional_model_as_first_layer', 'keras/src/models/sequential_test.py:SequentialTest:test_basic_flow_with_sequential_model_as_first_layer'] | null | python -m pytest /testbed/keras/src/models/sequential_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["keras/src/utils/summary_utils.py->module->function_definition:format_layer_shape", "keras/src/models/sequential.py->module->class_definition:Sequential->function_definition:_maybe_rebuild"] |
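The `_maybe_rebuild` fix in this record lets a `Sequential` stack pick up its input shape when the first layer is itself a built model, not just an `InputLayer`, avoiding "Undefined shapes are not supported". A simplified sketch of that lookup with stand-in classes (all names here are illustrative):

```python
class InputLayer:
    def __init__(self, batch_shape):
        self.batch_shape = batch_shape


class Model:
    def __init__(self, input_shape=None):
        self._input_shape = input_shape  # defined once the model is built


def first_input_shape(layers):
    """Find a usable input shape for rebuilding a sequential stack.

    Besides an explicit InputLayer, a nested model whose inputs are already
    defined can also provide the shape.
    """
    first = layers[0]
    if isinstance(first, InputLayer):
        return first.batch_shape
    if isinstance(first, Model) and first._input_shape is not None:
        return first._input_shape
    return None  # nothing known yet; defer building


print(first_input_shape([Model(input_shape=(None, 3))]))  # (None, 3)
```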
keras-team/keras | 20,008 | keras-team__keras-20008 | ['19991', '19991'] | 0ed820f5649bcb27531d73cfc023763712fc8bf9 | diff --git a/keras/src/backend/tensorflow/nn.py b/keras/src/backend/tensorflow/nn.py
--- a/keras/src/backend/tensorflow/nn.py
+++ b/keras/src/backend/tensorflow/nn.py
@@ -237,28 +237,25 @@ def _conv():
dilations=dilation_rate,
)
- # Reason for making this function is in Tensorflow, `groups > ... | diff --git a/keras/src/ops/nn_test.py b/keras/src/ops/nn_test.py
--- a/keras/src/ops/nn_test.py
+++ b/keras/src/ops/nn_test.py
@@ -1479,6 +1479,19 @@ def test_conv_3d(self, strides, padding, data_format):
)
self.assertAllClose(outputs, expected, rtol=1e-5, atol=1e-5)
+ # Test for tracing erro... | Regression bug when using 3D convolution with channels_first on GPU
The following code stopped working after release 3.3.3 when running on GPU and using `run_eagerly=False`
```python
import keras
import numpy as np
# 3D input with channels_first
model_input = keras.Input(shape=(1, 10, 10, 10))
# (None, 1, 10,... | I'm running on Nvidia driver 550.54.15, CUDA version 12.4 and am using a H100XM-80C GPU
I was able to replicate the issue using Keras 3.4.1 on GPU, attaching the Gist for reference
[](https://colab.sandbox.google.com/gist/sachinprasadhs/5cea3254fc749928420f... | 2024-07-18 05:28:29+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.9-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Copy the entire repository
COPY . .
# ... | ['keras/src/ops/nn_test.py:NNOpsDynamicShapeTest:test_multi_hot_dtype_float32_true', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_depthwise_conv_2d2', 'keras/src/ops/nn_test.py:NNOpsDynamicShapeTest:test_log_sigmoid', 'keras/src/ops/nn_test.py:NNOpsDtypeTest:test_sigmoid_bfloat16', 'keras/src/ops/nn_test.py:NNOp... | ['keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d2', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d4', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d8', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d10', 'keras/src/ops/nn_test.py:NNOpsCorrectnessTest:test_conv_3d6', 'ke... | null | python -m pytest /testbed/keras/src/ops/nn_test.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["keras/src/backend/tensorflow/nn.py->module->function_definition:conv"] |
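The regression in this record involves 3D convolution with `channels_first` data; one standard way backends handle ops that only support `channels_last` is to transpose `N, C, *spatial` to `N, *spatial, C`, run the op, then transpose back. A small helper computing that permutation (an illustration of the transpose trick in general, not the exact patch, which reworks the grouped-conv path):

```python
def channels_first_to_last_perm(spatial_rank):
    """Permutation taking axes (N, C, *spatial) to (N, *spatial, C).

    For 3D convolution spatial_rank is 3, i.e. NCDHW -> NDHWC.
    """
    return (0,) + tuple(range(2, 2 + spatial_rank)) + (1,)


print(channels_first_to_last_perm(3))  # (0, 2, 3, 4, 1)
```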