Dataset columns (type and observed value range):
commitId: stringlengths, 40 to 40
datetime: stringlengths, 30 to 31
subject: stringlengths, 37 to 266
comment: stringlengths, 109 to 15.2k
diff: stringlengths, 238 to 914k
gitVersion: stringclasses, 9 values
16f53275378de95723b41dc23c0ec52ef54ae29
Thu, 11 Apr 2024 06:39:54 +0000
[PATCH 0001/1000] [AOTI] Serialize large weights (#123002)
By appending them to the end of the shared library and mmapping them afterwards. Disabled by default, but overridable by `config.aot_inductor.force_mmap_weights`. Implemented by adding a `USE_MMAP_SELF` define to `inductor/aoti_runtime/model.h`, which is defined when weights are appended to the binary. In that case, shared libr...
diff --git a/test/inductor/test_aot_inductor.py b/test/inductor/test_aot_inductor.py index ea21e5f140..5de6d91a0b 100644 --- a/test/inductor/test_aot_inductor.py +++ b/test/inductor/test_aot_inductor.py @@ -269,6 +269,22 @@ class AOTInductorTestsTemplate: ) self.check_model(Model(), example_inputs) ...
2.41.0
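The weight-appending scheme in the record above can be sketched in a few lines: serialize the weights, append them to the compiled binary while recording their offset, and mmap the file at load time to read them back. The sketch below is an illustration of the general technique only, not the actual AOTInductor implementation; the trailer format and all names here are made up.

```python
import mmap
import os
import struct

MAGIC = b"WGT0"  # hypothetical marker, not the real AOTInductor format

def append_weights(lib_path: str, weights: bytes) -> None:
    """Append a weight blob to an existing binary, recording its offset
    in a small trailer so a loader can find it later."""
    offset = os.path.getsize(lib_path)
    with open(lib_path, "ab") as f:
        f.write(weights)
        # Trailer: magic + (offset, length) of the blob.
        f.write(MAGIC + struct.pack("<QQ", offset, len(weights)))

def mmap_weights(lib_path: str) -> bytes:
    """Locate the appended blob via the trailer and read it through mmap,
    so the weights are paged in on demand rather than copied up front."""
    trailer_size = len(MAGIC) + 16
    with open(lib_path, "rb") as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        trailer = mm[-trailer_size:]
        assert trailer[:4] == MAGIC
        offset, length = struct.unpack("<QQ", trailer[4:])
        data = bytes(mm[offset:offset + length])
        mm.close()
    return data
```

The real runtime additionally has to keep the appended bytes from confusing the dynamic loader, which is why the offset bookkeeping lives in the runtime header rather than a Python helper.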
aad72b0d3f2b03ae6d268b0c78a3cf349c0ae9f
Wed, 10 Apr 2024 18:05:40 -0700
[PATCH 0002/1000] Support all unsigned int sizes on unique (#123643)
Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/123643 Approved by: https://github.com/albanD, https://github.com/kit1980
diff --git a/aten/src/ATen/cuda/cub-RadixSortKeys.cu b/aten/src/ATen/cuda/cub-RadixSortKeys.cu index cf88c8aa0c..74e82ae55c 100644 --- a/aten/src/ATen/cuda/cub-RadixSortKeys.cu +++ b/aten/src/ATen/cuda/cub-RadixSortKeys.cu @@ -51,5 +51,8 @@ void radix_sort_keys( int64_t end_bit); AT_FORALL_SCALAR_TYPES_AND2(B...
2.41.0
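The unsigned-`unique` record above extends CUB radix-sort instantiations to the new unsigned integer dtypes; the underlying algorithm is a sort-then-dedupe pass. A minimal pure-Python sketch of that pass, with `sorted` standing in for the device radix sort:

```python
def unique_sorted(values):
    """Return sorted unique values via sort-then-dedupe.

    Mirrors the sort-based unique used on GPU: radix-sort the keys,
    then keep each element that differs from its predecessor.
    Illustrative only; the real kernel runs CUB radix sort on device.
    """
    ordered = sorted(values)  # stand-in for a radix sort over unsigned keys
    out = []
    for v in ordered:
        if not out or out[-1] != v:
            out.append(v)
    return out
```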
2f687f32c3abddc0999733e26761a1f608029f3
Thu, 11 Apr 2024 06:53:10 +0000
[PATCH 0003/1000] Option to include stride and device annotation in gm.print_readable() (#123690)
Summary: Sample output for gm.print_readable(include_stride=True, include_device=True) ``` getitem_21: "i32[1200][1]cuda:0" = auto_functionalized_4[1] copy_2: "f32[2, 60][60, 1]cuda:1" = .... ``` Test Plan: CI Differential Revision: D55949129 Pull Request resolved: https://github.com/pytorch/pytorch/pull/123690 Ap...
diff --git a/test/expect/TestFXAPIBackwardCompatibility.test_function_back_compat-fx_backcompat_function_signatures.expect b/test/expect/TestFXAPIBackwardCompatibility.test_function_back_compat-fx_backcompat_function_signatures.expect index d6630cff36..2996edd485 100644 --- a/test/expect/TestFXAPIBackwardCompatibility....
2.41.0
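The annotation format shown in the record above (`"f32[2, 60][60, 1]cuda:1"`) packs dtype, shape, stride, and device into one compact string. A hypothetical reimplementation of that formatting, for illustration only (the real logic lives in `gm.print_readable`):

```python
def annotate(dtype: str, shape, stride, device: str) -> str:
    """Build a compact tensor annotation: dtype, then the shape in
    brackets, then the strides in brackets, then the device."""
    dims = ", ".join(str(d) for d in shape)
    strides = ", ".join(str(s) for s in stride)
    return f"{dtype}[{dims}][{strides}]{device}"
```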
8d2504eece2ba5e464a42b253ea07f70e9ba5b6
Tue, 9 Apr 2024 12:11:09 -0700
[PATCH 0004/1000] [aot] always pass inputs to runtime_wrapper as list and add type annotations (#123630)
`runtime_wrapper` unpacking the arguments as a Tuple[arg] will prevent them from being freed within its scope. This is problematic if inductor wants to free those inputs, which could be activations in the compiled backwards case. This PR only changes the signature to pass as a list, but does not clear it, keeping same r...
diff --git a/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py b/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py index dda3144b24..5c9c3424d3 100644 --- a/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py +++ b/torch/_functorch/_aot_autograd/jit_compile_runtime_wrappers.py @@ -1...
2.41.0
510afb8857e6565612862496a6478733fe7b8db
Wed, 10 Apr 2024 17:53:07 -0700
[PATCH 0005/1000] [aot] refactor runtime_wrapper's epilogue args access (#123674)
I want runtime_wrapper args to be stealable by call_func_at_runtime_with_args, since the args may contain activations which we don't want to hold alive in this scope. The args to runtime_wrapper **should always be** from a list created within aot_autograd, so it **should always be** safe to steal them: https://github....
diff --git a/torch/_functorch/_aot_autograd/runtime_wrappers.py b/torch/_functorch/_aot_autograd/runtime_wrappers.py index 1ef2df56a2..3d11c01fe9 100644 --- a/torch/_functorch/_aot_autograd/runtime_wrappers.py +++ b/torch/_functorch/_aot_autograd/runtime_wrappers.py @@ -72,8 +72,29 @@ def create_runtime_wrapper( i...
2.41.0
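The two records above revolve around the same idea: pass inputs to the runtime wrapper as a mutable list and let the callee "steal" them, clearing the list so no extra reference in the caller's frame pins activations alive. A minimal sketch of the pattern, with made-up names (this is not the aot_autograd code itself):

```python
import weakref

class Activation:
    """Stand-in for a tensor we want freed promptly after use."""
    def __init__(self, name):
        self.name = name

def call_with_stolen_args(fn, args: list):
    """Steal the caller's argument list: move the items into a local
    tuple and clear the list, so the caller's scope no longer keeps
    the arguments alive while fn runs."""
    local = tuple(args)
    args.clear()  # caller's list no longer references the inputs
    return fn(*local)

def consume(*xs):
    return len(xs)

args = [Activation("a0"), Activation("a1")]
ref = weakref.ref(args[0])
result = call_with_stolen_args(consume, args)
del result  # after this, nothing but the weakref points at a0
```

On CPython, clearing the only strong references lets the activations be collected immediately, which is exactly the memory behavior the patches are after.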
00282fecfcb53790aebfb24cc48a8703577778e
Wed, 10 Apr 2024 18:33:29 -0700
[PATCH 0006/1000] [c10d] make monitorThread sleep when we try to dump (#123788)
Summary: We separated the FR dump logic from the desync debug logic, so we no longer set collectiveDebugInfoMode_ to true when we just need an FR dump. That's why the monitor thread did not sleep and tried to kill the process without waiting for the dump. The fix is simple: we should sleep whenever shouldDump_ is true. Test Pla...
diff --git a/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp b/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp index d9f9e6e574..def79cde2b 100644 --- a/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp +++ b/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp @@ -1268,6 +1268,7 @@ void ProcessGroupNCCL::heartbeatMonitor...
2.41.0
ac99d539be35e806d8d719fa69ceddaf63c6373
Thu, 11 Apr 2024 08:56:02 +0000
[PATCH 0007/1000] Only initialize state if needed in SGD (#123757)
Fixes [T184381726](https://www.internalfb.com/intern/tasks/?t=184381726) Pull Request resolved: https://github.com/pytorch/pytorch/pull/123757 Approved by: https://github.com/janeyx99
diff --git a/test/inductor/test_compiled_optimizers.py b/test/inductor/test_compiled_optimizers.py index 7c93f326f4..d076f27b17 100644 --- a/test/inductor/test_compiled_optimizers.py +++ b/test/inductor/test_compiled_optimizers.py @@ -310,7 +310,8 @@ def make_recompile_test(optim_cls, closure=None, kernel_count=2, **kw...
2.41.0
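The SGD record above makes per-parameter state allocation conditional: the optimizer only materializes state when momentum actually needs a buffer. A toy optimizer illustrating that lazy-initialization pattern (not the real `torch.optim.SGD`):

```python
class TinySGD:
    """Toy scalar-parameter SGD. Per-parameter state is created only
    when momentum needs it, not unconditionally on every step."""
    def __init__(self, params, lr=0.1, momentum=0.0):
        self.params = list(params)
        self.lr = lr
        self.momentum = momentum
        self.state = {}  # param index -> {"momentum_buffer": float}

    def step(self, grads):
        for i, (p, g) in enumerate(zip(self.params, grads)):
            if self.momentum != 0.0:
                # Lazy init: state entry appears only on this path.
                buf = self.state.setdefault(i, {"momentum_buffer": 0.0})
                buf["momentum_buffer"] = self.momentum * buf["momentum_buffer"] + g
                g = buf["momentum_buffer"]
            self.params[i] = p - self.lr * g
```

With momentum off, `state` stays empty, which is the behavior the fix restores.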
b7741546b1ee53e5aa3768616c50eab72372a3a
Thu, 11 Apr 2024 09:02:31 +0000
[PATCH 0008/1000] Fixed arange decomp for float dtype (#121013)
## Description: - [x] Fixed arange decomp for float dtype - [x] Added a test ## Current state Arange graph and C++ generated code are not optimal when arange is created directly using float32 dtype: ```python import torch def func(x): s = x.shape[-1] a = torch.arange(s, dtype=torch.float32) return s + a c_func = t...
diff --git a/test/test_decomp.py b/test/test_decomp.py index 4e482a92d5..39d0c2eef2 100644 --- a/test/test_decomp.py +++ b/test/test_decomp.py @@ -38,6 +38,7 @@ from torch._ops import DispatchKey import itertools import functools from functools import partial +import re import unittest aten = torch.ops.aten @@ -...
2.41.0
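The arange record above is about keeping the index arithmetic integral and deriving each float value per element, rather than doing the whole computation in float32. A sketch of that decomposition idea (illustrative only, not the actual torch decomposition):

```python
import math

def arange_float_decomp(start: float, end: float, step: float):
    """Compute the element count and per-element index in integer
    arithmetic, deriving each value as start + i * step. This avoids
    accumulating floating-point error from repeated addition and keeps
    the index math integral."""
    length = max(0, math.ceil((end - start) / step))
    return [start + i * step for i in range(length)]
```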
798f5bf0d58fb9655c4da9c0a8bc1ec8af31aea
Wed, 10 Apr 2024 23:23:28 -0700
[PATCH 0009/1000] Add Quantization recipe filter per operator type for x86_inductor_quantizer (#122775)
**Summary** Default recipes are enabled in `X86InductorQuantizer`, and requests have come in to customize recipes based on these defaults. - Avoid annotation propagation and restrict annotation only to annotate `conv`/`linear`. - Add `matmul` in the quantization recipes, noting that it's not a general recipe but tailored to m...
diff --git a/test/quantization/pt2e/test_x86inductor_quantizer.py b/test/quantization/pt2e/test_x86inductor_quantizer.py index 06e2e6c9f9..c9df319bfd 100644 --- a/test/quantization/pt2e/test_x86inductor_quantizer.py +++ b/test/quantization/pt2e/test_x86inductor_quantizer.py @@ -1346,3 +1346,105 @@ class TestQuantizePT2...
2.41.0
8e9261b906f69b397e4027362be801f98a68d62
Wed, 10 Apr 2024 23:23:28 -0700
[PATCH 0010/1000] Add Matmul recipe into x86_inductor_quantizer (#122776)
**Summary** Add `matmul` in the quantization recipes, noting that it's not a general recipe but tailored to meet accuracy criteria for specific models. `matmul` recipe is disabled by default. **Test Plan** ``` python -m pytest quantization/pt2e/test_x86inductor_quantizer.py -k test_attention_block ``` Pull Request re...
diff --git a/test/quantization/pt2e/test_x86inductor_quantizer.py b/test/quantization/pt2e/test_x86inductor_quantizer.py index c9df319bfd..4af5a30ddf 100644 --- a/test/quantization/pt2e/test_x86inductor_quantizer.py +++ b/test/quantization/pt2e/test_x86inductor_quantizer.py @@ -289,21 +289,42 @@ class TestHelperModules...
2.41.0
4580f76d9e4a81b70a94062b762e3af919d95d0
Wed, 10 Apr 2024 21:38:33 -0700
[PATCH 0011/1000] fix flop counter issue with out parameters (#123768)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123768 Approved by: https://github.com/zou3519
diff --git a/test/test_flop_counter.py b/test/test_flop_counter.py index 74bc666db6..1a9a757f9f 100644 --- a/test/test_flop_counter.py +++ b/test/test_flop_counter.py @@ -248,8 +248,8 @@ class TestFlopCounter(TestCase): self.assertExpectedInline(get_total_flops(mode), """5""") - def count(*args, out...
2.41.0
a5e7a01b5368b8ba11edcb62942630a1474e6e3
Wed, 10 Apr 2024 11:02:32 -0700
[PATCH 0015/1000] [custom_op] Schema inference now includes default values (#123453)
If the function has default values, we should be able to do schema inference and put the default values into the schema. Test Plan: - new tests Pull Request resolved: https://github.com/pytorch/pytorch/pull/123453 Approved by: https://github.com/albanD
diff --git a/test/test_custom_ops.py b/test/test_custom_ops.py index 7479225785..10cb60e8ae 100644 --- a/test/test_custom_ops.py +++ b/test/test_custom_ops.py @@ -688,20 +688,6 @@ class TestCustomOp(CustomOpTestCaseBase): infer_schema(foo) - with self.assertRaisesRegex(ValueError, "default value...
2.41.0
b4419dc4d9a4e5555de2a4def0eb77f10c8832a
Wed, 10 Apr 2024 11:02:32 -0700
[PATCH 0016/1000] Refresh OpOverloadPacket if a new OpOverload gets added (#123578)
If a user accesses an OpOverloadPacket, then creates a new OpOverload, then uses the OpOverloadPacket, the new OpOverload never gets hit. This is because OpOverloadPacket caches OpOverloads when it is constructed. This PR fixes the problem by "refreshing" the OpOverloadPacket if a new OpOverload gets constructed and t...
diff --git a/test/test_custom_ops.py b/test/test_custom_ops.py index 10cb60e8ae..86c21f228d 100644 --- a/test/test_custom_ops.py +++ b/test/test_custom_ops.py @@ -2393,6 +2393,30 @@ Please use `add.register_fake` to add an fake impl.""", y = f(x) self.assertEqual(y, x.sin()) + @skipIfTorchDynamo(...
2.41.0
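The caching bug described in the record above is easy to reproduce in miniature: a packet snapshots its overloads at construction, so anything registered afterwards is invisible until the snapshot is refreshed. A hypothetical sketch of the problem and the fix (names are invented, not the real `OpOverloadPacket` API):

```python
class Packet:
    """Caches the overloads for one op name at construction; a later
    registration is only visible after refresh() re-snapshots."""
    def __init__(self, registry: dict, name: str):
        self.registry = registry
        self.name = name
        self._cache = dict(registry.get(name, {}))  # snapshot at construction

    def resolve(self, overload: str):
        return self._cache[overload]

    def refresh(self):
        """Called whenever a new overload is registered for self.name."""
        self._cache = dict(self.registry.get(self.name, {}))

registry = {"add": {"default": lambda a, b: a + b}}
packet = Packet(registry, "add")
# Register a new overload after the packet was constructed...
registry["add"]["int"] = lambda a, b: int(a) + int(b)
packet.refresh()  # ...and refresh so the packet sees it
```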
38729c0cdf3ce4274f4d68f8e46e5a1cd36cbe8
Wed, 10 Apr 2024 11:02:33 -0700
[PATCH 0017/1000] Switch quantized_decomposed over to new custom ops API (#123454)
We are taking API feedback. Changes: - I removed some of the default values (they weren't being used). - I was unable to convert the last op (which is essentially an autograd.Function registered as CompositeImplicitAutograd). That one is "incorrectly registered"; I punt fixing it to the future. Test Plan: - existing t...
diff --git a/torch/_custom_op/impl.py b/torch/_custom_op/impl.py index fefd7cedf9..6f25e2b9af 100644 --- a/torch/_custom_op/impl.py +++ b/torch/_custom_op/impl.py @@ -882,6 +882,11 @@ SUPPORTED_RETURN_TYPES = { def parse_return(annotation, error_fn): + if annotation == inspect.Signature.empty: + error_fn...
2.41.0
34e56fa3352aefa208b33b0a86aaabed8033f7a
Wed, 10 Apr 2024 15:10:59 -0700
[PATCH 0018/1000] inductor: log unique id to match output_code to aot graphs (#118647)
I found it helpful to be able to see, given some inductor output code, which AOT graph it came from. When you have large models with multiple graphs floating around this can be difficult, so I added the aot_config.aot_id to the printed inductor output. Pull Request resolved: https://github.com/pytorch/pytorch/pull/118...
diff --git a/torch/_functorch/_aot_autograd/logging_utils.py b/torch/_functorch/_aot_autograd/logging_utils.py index 28f82555ac..414166cbdd 100644 --- a/torch/_functorch/_aot_autograd/logging_utils.py +++ b/torch/_functorch/_aot_autograd/logging_utils.py @@ -46,12 +46,22 @@ def track_graph_compiling(aot_config, graph_n...
2.41.0
83900887f2fb5c7a04e7fd78ad8de7a20f356d4
Wed, 10 Apr 2024 14:19:07 -0700
[PATCH 0019/1000] [quant] Enable backward for choose_qparams_per_token_asymmetric (#123452)
Summary: When running the backward for this op, we get the error: ``` RuntimeError: derivative for aten::aminmax is not implemented ``` This commit replaces this call with separate amin and amax calls instead, which do have implemented derivatives. Test Plan: python test/test_quantization.py -k test_decomposed_choose_...
diff --git a/test/quantization/core/test_quantized_tensor.py b/test/quantization/core/test_quantized_tensor.py index b2bd97bdc3..228f1f8ee7 100644 --- a/test/quantization/core/test_quantized_tensor.py +++ b/test/quantization/core/test_quantized_tensor.py @@ -1602,6 +1602,14 @@ class TestQuantizedTensor(TestCase): ...
2.41.0
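The quantization record above swaps a fused `aminmax` (which has no registered derivative) for separate `amin`/`amax` reductions; the results are numerically identical, only the autograd support differs. A simplified sketch of the asymmetric qparams math using separate min/max reductions (illustrative, not the real decomposed op):

```python
def choose_qparams_minmax(values, qmin=-128, qmax=127):
    """Compute (scale, zero_point) from separate min and max reductions,
    each clamped so the quantization range always includes zero."""
    lo = min(min(values), 0.0)  # separate 'amin' reduction
    hi = max(max(values), 0.0)  # separate 'amax' reduction
    scale = (hi - lo) / (qmax - qmin)
    zero_point = qmin - round(lo / scale) if scale else 0
    return scale, zero_point
```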
fa36ef09210b67022439b49eee01d7b63bd6d96
Wed, 10 Apr 2024 19:31:01 -0400
[PATCH 0020/1000] Natively support int truncation, don't guard on positive/negative (#122827)
This doesn't entirely fix the original problem that prompted this, but it seems to just be getting stuck in export constraint formatting now which seems like progress to me. Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/122827 Approved by: https://githu...
diff --git a/test/export/test_export.py b/test/export/test_export.py index 80a6f0b993..d04ab18384 100644 --- a/test/export/test_export.py +++ b/test/export/test_export.py @@ -1086,6 +1086,36 @@ class TestExport(TestCase): inps = (torch.ones(6, 4), torch.tensor(5), torch.tensor(4)) self._test_export_sa...
2.41.0
02374cc091e549c586b72c9b252d33256ec921e
Thu, 11 Apr 2024 17:34:47 +0000
[PATCH 0023/1000] [CI] show doc coverage repro instructions (#123688)
remind devs they can reproduce the doc coverage error locally with the following msg: ```You can reproduce locally by running 'cd pytorch/docs && make coverage && cat build/coverage/python.txt'``` I spent 20 min figuring out how to test locally, so I want to enrich the error msg <img width="542" alt="Screenshot 2024-04-09 at ...
diff --git a/.ci/pytorch/python_doc_push_script.sh b/.ci/pytorch/python_doc_push_script.sh index ce14ac1d02..d4076d3469 100755 --- a/.ci/pytorch/python_doc_push_script.sh +++ b/.ci/pytorch/python_doc_push_script.sh @@ -105,6 +105,7 @@ if [ "$is_main_doc" = true ]; then echo undocumented objects found: cat bui...
2.41.0
9c565b24e6c305c09c8c908e27f4023f41dd567
Wed, 10 Apr 2024 18:54:51 -0700
[PATCH 0024/1000] [inductor] Write generated files from parent process (#123409)
Before this PR we would pass generated source code over a pipe to the compile worker then the compile worker would write out the file. Doing it this way is faster and results in smaller messages to the workers (and lets us skip creating the workers in the warm start case). Pull Request resolved: https://github.com/py...
diff --git a/torch/_inductor/codecache.py b/torch/_inductor/codecache.py index 98cf75fc23..4e84838504 100644 --- a/torch/_inductor/codecache.py +++ b/torch/_inductor/codecache.py @@ -59,12 +59,7 @@ from torch._dynamo.device_interface import ( from torch._dynamo.utils import counters, dynamo_timed from torch._inductor...
2.41.0
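The inductor record above moves file writing into the parent process: instead of piping the whole generated source to a compile worker, the parent writes the file and sends the worker only a path. A sketch of that pattern with invented names (not the actual `codecache` code):

```python
import os
import tempfile

def compile_in_parent(source: str, workdir: str) -> str:
    """Parent-side: write the generated source once, return its path.
    The short path string is all that crosses the worker boundary."""
    path = os.path.join(workdir, "generated_kernel.py")
    with open(path, "w") as f:
        f.write(source)
    return path

def worker_compile(path: str) -> int:
    """Worker-side: read the pre-written file instead of receiving the
    source over a pipe. Here we just return its length as a stand-in
    for real compilation."""
    with open(path) as f:
        return len(f.read())
```

Smaller messages to the worker pool also mean the warm-start path can skip spawning workers entirely, as the record notes.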
c451798cc5a7882e95b01600aa643b042b11b1e
Wed, 10 Apr 2024 12:50:21 -0700
[PATCH 0025/1000] [inductor] Disable channels_last heuristic when channels==1 (#123758)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123758 Approved by: https://github.com/shunting314
diff --git a/test/inductor/test_cpu_repro.py b/test/inductor/test_cpu_repro.py index 9cc0e9b93a..80a0fed789 100644 --- a/test/inductor/test_cpu_repro.py +++ b/test/inductor/test_cpu_repro.py @@ -1630,6 +1630,19 @@ class CPUReproTests(TestCase): self.common(fn, (value, mask)) ...
2.41.0
9249a218b941f7f9cb5391692bacec013039cfc
Thu, 11 Apr 2024 01:43:47 +0000
[PATCH 0028/1000] Avoid COW materialization in backward ops (3) (#123797)
Affected ops: * conv ops * glu * prelu * scaled_dot_product_attention * threshold * logsigmoid * binary_cross_entropy * gelu * unfold * smooth_l1_loss * embedding Part of #97856 Pull Request resolved: https://github.com/pytorch/pytorch/pull/123797 Approved by: https://github.com/ezyang
diff --git a/aten/src/ATen/native/Activation.cpp b/aten/src/ATen/native/Activation.cpp index 34617ef47f..75f373811c 100644 --- a/aten/src/ATen/native/Activation.cpp +++ b/aten/src/ATen/native/Activation.cpp @@ -411,8 +411,8 @@ TORCH_IMPL_FUNC(gelu_backward_out_cpu) ( auto approximate_type = get_gelutype_enum(approxima...
2.41.0
e869c9bb78143c10a765eb2af64d57925633065
Thu, 11 Apr 2024 01:43:53 +0000
[PATCH 0029/1000] Avoid COW materialization in backward ops (4) (#123798)
Affected ops: * embedding_bag * mse_loss * huber_loss * grid_sample * ctc_loss * nll_loss * pdist * _segment_reduce Part of #97856 Pull Request resolved: https://github.com/pytorch/pytorch/pull/123798 Approved by: https://github.com/ezyang ghstack dependencies: #123797
diff --git a/aten/src/ATen/native/EmbeddingBag.cpp b/aten/src/ATen/native/EmbeddingBag.cpp index 64e5d2775a..8b6c90dae2 100644 --- a/aten/src/ATen/native/EmbeddingBag.cpp +++ b/aten/src/ATen/native/EmbeddingBag.cpp @@ -1478,7 +1478,7 @@ static Tensor _embedding_bag_dense_backward_cpu_max( template<typename index_t> s...
2.41.0
16ca546aa833b2a337e556dfb0b759f2b759a25
Thu, 11 Apr 2024 19:10:56 +0000
[PATCH 0030/1000] Adding health check server hook in torch elastic (#122750) (#123504)
Summary: Building hook for external mechanism to monitor the health of torch elastic launcher. Health check server takes dependency on FileTimerServer to check if launcher is healthy or not. It will be always healthy if FileTimerServer is disabled. Implementation of start_healthcheck_server is unsupported, however tc...
diff --git a/docs/source/elastic/agent.rst b/docs/source/elastic/agent.rst index db1465d71e..ac42403761 100644 --- a/docs/source/elastic/agent.rst +++ b/docs/source/elastic/agent.rst @@ -74,3 +74,21 @@ will internally create a unique file name and set it to the environment variable ```TORCHELASTIC_TIMER_FILE```, and t...
2.41.0
a7fd20aa11c02a10b1223caeefb645de482bcbc
Wed, 10 Apr 2024 21:58:49 -0700
[PATCH 0031/1000] [dynamo] Support autograd.FunctionCtx.needs_input_grad (#123700)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123700 Approved by: https://github.com/anijain2305
diff --git a/test/dynamo/test_autograd_function.py b/test/dynamo/test_autograd_function.py index 8b77af1a35..beca818570 100644 --- a/test/dynamo/test_autograd_function.py +++ b/test/dynamo/test_autograd_function.py @@ -818,6 +818,30 @@ class AutogradFunctionTests(torch._dynamo.test_case.TestCase): foo(torch.ra...
2.41.0
82d20c207a5fdbc48263734ccc82db6b4ad24c0
Thu, 11 Apr 2024 19:37:11 +0000
[PATCH 0032/1000] [NEON] Remove implicit type promotion in `Vectorized<c10::Half>::operator!=` (#123864)
To make the code compilable with `gcc`, which, unlike `clang`, does not allow transparent type promotion between vectorized NEON types of the same size; see https://godbolt.org/z/xoasoGM81 for an example. Pull Request resolved: https://github.com/pytorch/pytorch/pull/123864 Approved by: https://github.com/malfet
diff --git a/aten/src/ATen/cpu/vec/vec256/vec256_half_neon.h b/aten/src/ATen/cpu/vec/vec256/vec256_half_neon.h index bb8716ecb7..aaf1d5995f 100644 --- a/aten/src/ATen/cpu/vec/vec256/vec256_half_neon.h +++ b/aten/src/ATen/cpu/vec/vec256/vec256_half_neon.h @@ -565,9 +565,9 @@ class Vectorized<c10::Half> { } Vecto...
2.41.0
a013f69bb9647a8defba8d0733475a6fdcd6a73
Wed, 10 Apr 2024 15:10:59 -0700
[PATCH 0034/1000] dynamo assertion that graph has no fake-tensor constants should check for subclasses (#118644)
This would have caught some of the nasty errors in https://github.com/pytorch/pytorch/pull/118191 Pull Request resolved: https://github.com/pytorch/pytorch/pull/118644 Approved by: https://github.com/tugsbayasgalan, https://github.com/zou3519 ghstack dependencies: #118647
diff --git a/torch/_dynamo/utils.py b/torch/_dynamo/utils.py index 809adc6b04..ab43664f5c 100644 --- a/torch/_dynamo/utils.py +++ b/torch/_dynamo/utils.py @@ -1889,7 +1889,7 @@ def get_real_value(node, tracer): def assert_no_fake_params_or_buffers(gm): - from torch._subclasses.fake_tensor import FakeTensorConfi...
2.41.0
cd06f56b1d12fbb052eb5cd66756f8a5b90096a
Thu, 11 Apr 2024 20:24:47 +0000
[PATCH 0035/1000] [ez] test_profiler in serial (#123665)
Add test_profiler to the serial list since we keep needing to reopen disable issues, and I think it's due to being incompatible with parallelism Pull Request resolved: https://github.com/pytorch/pytorch/pull/123665 Approved by: https://github.com/ZainRizvi, https://github.com/huydhn
diff --git a/test/run_test.py b/test/run_test.py index cd77683c7c..4d0b8b58e9 100755 --- a/test/run_test.py +++ b/test/run_test.py @@ -248,6 +248,7 @@ CI_SERIAL_LIST = [ "inductor/test_torchinductor", # OOM on test_large_block_sizes "inductor/test_torchinductor_dynamic_shapes", # OOM on test_large_block_siz...
2.41.0
70bf23b7b74bd7a94f5095fb54857873aa1957b
Wed, 10 Apr 2024 16:51:37 -0700
[PATCH 0036/1000] [dynamo] apply certain bytecode cleaning transformations unconditionally (#123785)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123785 Approved by: https://github.com/jansel
diff --git a/torch/_dynamo/bytecode_transformation.py b/torch/_dynamo/bytecode_transformation.py index 32c77298d3..30b7866ade 100644 --- a/torch/_dynamo/bytecode_transformation.py +++ b/torch/_dynamo/bytecode_transformation.py @@ -1172,14 +1172,14 @@ def cleaned_instructions(code, safe=False) -> List[Instruction]: ...
2.41.0
7fac76fc259394136bc77b3e39d5705919e5c4c
Thu, 11 Apr 2024 20:52:02 +0000
[PATCH 0037/1000] [DCP] Fixes for _load_state_dict_keys and supports nested keys (#123679)
Fixes some issues with `_load_state_dict_keys`, including: * updates a broken test, which was failing due to incorrect parameters * adds support for specifying nested keys, e.g. load-state-dict keys can now specify something like `"optimizer.state"`, which loads all keys under `optimizer.state`. * updates call site to us...
diff --git a/test/distributed/checkpoint/e2e/test_e2e_save_and_load.py b/test/distributed/checkpoint/e2e/test_e2e_save_and_load.py index 03553adaf1..8d4733b827 100644 --- a/test/distributed/checkpoint/e2e/test_e2e_save_and_load.py +++ b/test/distributed/checkpoint/e2e/test_e2e_save_and_load.py @@ -351,8 +351,20 @@ clas...
2.41.0
3070e2753cfab9b09449f50aad9154bc92eaef5
Tue, 9 Apr 2024 11:23:37 -0700
[PATCH 0038/1000] [DCP] Adds better handling in logging of specific kwargs (#123658)
Adds additional signpost integrations to DCP Logger, to add support for MLU and metric collection. Differential Revision: [D55803461](https://our.internmc.facebook.com/intern/diff/D55803461/) Pull Request resolved: https://github.com/pytorch/pytorch/pull/123658 Approved by: https://github.com/fegin
diff --git a/torch/distributed/c10d_logger.py b/torch/distributed/c10d_logger.py index 3d59bdcd76..5d2aa9b629 100644 --- a/torch/distributed/c10d_logger.py +++ b/torch/distributed/c10d_logger.py @@ -47,15 +47,16 @@ _c10d_logger = _get_or_create_logger() def _get_msg_dict(func_name, *args, **kwargs) -> Dict[str, Any]...
2.41.0
75a32b9f91d8d3abe215c69dbf62c2f995d183f
Thu, 11 Apr 2024 10:52:47 -0700
[PATCH 0039/1000] [FSDP2] Fixed `is_last_backward` for 1f1b (#123857)
`FSDPState` only uses `TrainingState.PRE_BACKWARD` as a backward training state, not `TrainingState.POST_BACKWARD`, because the FSDP state itself does not run post-backward (only its `FSDPParamGroup`, which may not exist if the state does not manage any parameters). This meant that when `is_last_backward=False`, the `...
diff --git a/test/distributed/_composable/fsdp/test_fully_shard_training.py b/test/distributed/_composable/fsdp/test_fully_shard_training.py index 3ab1b0a546..3b9406b3ef 100644 --- a/test/distributed/_composable/fsdp/test_fully_shard_training.py +++ b/test/distributed/_composable/fsdp/test_fully_shard_training.py @@ -7...
2.41.0
d225189f15c8d33ae47a8f3c3879a408b792aad
Mon, 8 Apr 2024 22:44:37 +0100
[PATCH 0041/1000] [inductor] Change OverridesData to take callables instead of strings (#123397)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123397 Approved by: https://github.com/lezcano
diff --git a/torch/_inductor/codegen/common.py b/torch/_inductor/codegen/common.py index 4caa4dc367..b061e3ad1f 100644 --- a/torch/_inductor/codegen/common.py +++ b/torch/_inductor/codegen/common.py @@ -582,42 +582,21 @@ class OpOverrides: def _initialize_pointwise_overrides(cls, target): assert target in...
2.41.0
0b7aa201caae3b99ce03f65ac4e0893f412c4e5
Wed, 10 Apr 2024 21:32:40 -0700
[PATCH 0042/1000] [dynamo][cpp-guards] Introduce DictSubclassGuardManager (#123773)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123773 Approved by: https://github.com/jansel
diff --git a/test/dynamo/test_misc.py b/test/dynamo/test_misc.py index ce49e49631..79835450be 100644 --- a/test/dynamo/test_misc.py +++ b/test/dynamo/test_misc.py @@ -10167,6 +10167,23 @@ fn res = opt_m(x) self.assertEqual(ref, res) + def test_ordered_dict_move_to_end(self): + d = { + ...
2.41.0
0ccf599ccafb379346101591bf76c6653a03e7f
Thu, 11 Apr 2024 22:40:46 +0000
[PATCH 0045/1000] [export] Restore original placeholder names (part 2: higher-order-op subgraph naming) (#123587)
Summary: note: breaking the original diff [D55225818](https://www.internalfb.com/diff/D55225818) into 3 parts (top-level renaming, higher-order-op subgraphs, constant input de/serialization) because of its size. Stacked PR to restore original names to placeholder nodes, replacing the default names arg0_1, arg1_1, ... ...
diff --git a/test/export/test_export.py b/test/export/test_export.py index 5ec2deb050..6aa4f04a42 100644 --- a/test/export/test_export.py +++ b/test/export/test_export.py @@ -3910,19 +3910,19 @@ def forward(self, b_pred, b_t, x, y): self.assertExpectedInline( str(exported_program.graph_module.true...
2.41.0
04c9c5601cbb9ae65ca10484d256c3f779d2574
Thu, 11 Apr 2024 23:45:05 +0000
[PATCH 0046/1000] Enable UFMT on all of `test/jit` (#123623)
Partially addresses #123062 Ran lintrunner on: - `test/jit` with command: ```bash lintrunner -a --take UFMT --all-files ``` Pull Request resolved: https://github.com/pytorch/pytorch/pull/123623 Approved by: https://github.com/ezyang
diff --git a/.lintrunner.toml b/.lintrunner.toml index 7112557122..d492ed12e7 100644 --- a/.lintrunner.toml +++ b/.lintrunner.toml @@ -1162,94 +1162,6 @@ exclude_patterns = [ 'test/functorch/test_vmap.py', 'test/functorch/test_vmap_registrations.py', 'test/functorch/xfail_suggester.py', - 'test/jit/__...
2.41.0
d9dc976aec8766d86c51548de2d4e5416fa9a39
Thu, 11 Apr 2024 12:07:03 -0700
[PATCH 0047/1000] Handle unqualified imports in custom Triton kernels (#123703)
Summary: If in a custom (user-written) Triton kernel an externally imported symbol is used directly, we need to codegen the corresponding import outside the kernel body in the Python wrapper. E.g., if the user code has this: ``` from triton.language.extra.cuda.libdevice import fast_dividef @triton.jit def my_kernel(....
diff --git a/test/inductor/test_triton_kernels.py b/test/inductor/test_triton_kernels.py index 2b45920c42..162728b3a3 100644 --- a/test/inductor/test_triton_kernels.py +++ b/test/inductor/test_triton_kernels.py @@ -16,7 +16,7 @@ from torch._higher_order_ops.triton_kernel_wrap import ( from torch._inductor import metri...
2.41.0
f6884f6209c44335b807079754fcea186b1b590
Thu, 11 Apr 2024 12:09:50 -0700
[PATCH 0048/1000] Added some extra repr to triton template buffers and added autotuned block configs to templated attention (#123813)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123813 Approved by: https://github.com/drisspg, https://github.com/shunting314 ghstack dependencies: #123768
diff --git a/test/inductor/test_templated_attention.py b/test/inductor/test_templated_attention.py index 64e8a97260..b374d6d77a 100644 --- a/test/inductor/test_templated_attention.py +++ b/test/inductor/test_templated_attention.py @@ -5,6 +5,7 @@ from collections import namedtuple from typing import Callable from u...
2.41.0
b8c81eb824c58820ad2b71b0c807280c6dd6c61
Fri, 12 Apr 2024 00:05:45 +0000
[PATCH 0049/1000] [PT] [FSDP] fix HSDP sharding placement (#123778)
Summary: https://github.com/pytorch/pytorch/pull/123230 formalized the contract for `ShardedTensor` sub group rank placement validation by making sure the placement rank is global rank, to align with general `torch.distributed` convention. The current HSDP allows for both `ShardedTensor` and `DTensor`. While `DTensor`...
diff --git a/test/distributed/_shard/sharding_plan/test_sharding_plan.py b/test/distributed/_shard/sharding_plan/test_sharding_plan.py index b5ea29b020..0536163a18 100644 --- a/test/distributed/_shard/sharding_plan/test_sharding_plan.py +++ b/test/distributed/_shard/sharding_plan/test_sharding_plan.py @@ -141,15 +141,1...
2.41.0
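The HSDP record above fixes sharding placements to use global ranks, per the `torch.distributed` convention, rather than ranks local to a shard sub-group. Under the simplifying assumption that the world is laid out as contiguous shard groups (one per replica), the mapping is plain arithmetic; the real code derives it from the device mesh, and the names below are invented:

```python
def global_rank(replica_idx: int, shard_rank: int, shards_per_replica: int) -> int:
    """Map an intra-shard-group rank to a global rank, assuming the
    world is partitioned into contiguous shard groups of equal size.
    E.g. with 2 replicas of 4 shards, replica 1 holds global ranks 4-7."""
    return replica_idx * shards_per_replica + shard_rank
```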
8824fd212d236e3a467d81245f53b5ff532c934
Fri, 12 Apr 2024 00:07:42 +0000
[PATCH 0050/1000] [inductor] Fix recompiles bug for torch.full (#123811)
Fixes #123810 Pull Request resolved: https://github.com/pytorch/pytorch/pull/123811 Approved by: https://github.com/peterbell10
diff --git a/test/inductor/test_torchinductor_dynamic_shapes.py b/test/inductor/test_torchinductor_dynamic_shapes.py index c3b0147fab..b7f80471cb 100644 --- a/test/inductor/test_torchinductor_dynamic_shapes.py +++ b/test/inductor/test_torchinductor_dynamic_shapes.py @@ -578,7 +578,7 @@ class TestInductorDynamic(TestCas...
2.41.0
2ba180e552aa05b21d07ccceece0db41846267b
Thu, 11 Apr 2024 14:06:21 -0700
[PATCH 0051/1000] [c10d] add more fields for periodic logging (#123860)
Summary: Added the names of the last enqueued, started, and completed collectives, in addition to their seq IDs. Test Plan: CI Pull Request resolved: https://github.com/pytorch/pytorch/pull/123860 Approved by: https://github.com/XilunWu
diff --git a/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp b/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp index def79cde2b..8f2d63656e 100644 --- a/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp +++ b/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp @@ -1551,11 +1551,17 @@ void ProcessGroupNCCL::watchdogHandle...
2.41.0
fe672b146e776d18fcb794ca3cbb4e52d32ba8e
Thu, 11 Apr 2024 08:19:27 -0700
[PATCH 0052/1000] compile: ban mutations on non-compositional uses of as_strided (#122502)
Fixes https://github.com/pytorch/pytorch/issues/104505 I was originally going to ban all usages of as_strided + mutation in functionalization. But I'm pretty sure that as_strided + mutation is fine when we are calling as_strided on a base tensor. So in this PR I added a slightly more conservative check: if we see an ...
diff --git a/aten/src/ATen/FunctionalStorageImpl.cpp b/aten/src/ATen/FunctionalStorageImpl.cpp index 22bba985a4..78a5b6a9cf 100644 --- a/aten/src/ATen/FunctionalStorageImpl.cpp +++ b/aten/src/ATen/FunctionalStorageImpl.cpp @@ -10,7 +10,7 @@ namespace at::functionalization { ViewMeta ViewMeta::to_out_idx(int64_t out_...
2.41.0
efaf54dc46034189cb36b345764a5a9a5b693d4
Thu, 11 Apr 2024 08:19:28 -0700
[PATCH 0054/1000] Fakeifying views shouldnt create symbols when dynamic=False (#123348)
Fixes https://github.com/pytorch/pytorch/issues/123298 I was also seeing some crashes in torchtrain due to dynamic shapes, even when I set `compile(dynamic=False)` (cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @wanc...
diff --git a/test/dynamo/test_subclasses.py b/test/dynamo/test_subclasses.py index 2bc7101c55..387b6bf59b 100644 --- a/test/dynamo/test_subclasses.py +++ b/test/dynamo/test_subclasses.py @@ -1456,6 +1456,24 @@ class TestNestedTensor(torch._dynamo.test_case.TestCase): for nt_view in self._get_views(): ...
2.41.0
b648afba4901521ccff124ebad931f8dd7aa13a
Fri, 12 Apr 2024 01:21:50 +0000
[PATCH 0055/1000] Enable UFMT on test/test_multiprocessing (#123840)
part of https://github.com/pytorch/pytorch/issues/123062 Pull Request resolved: https://github.com/pytorch/pytorch/pull/123840 Approved by: https://github.com/ezyang
diff --git a/.lintrunner.toml b/.lintrunner.toml index d492ed12e7..6a4472073f 100644 --- a/.lintrunner.toml +++ b/.lintrunner.toml @@ -1375,7 +1375,6 @@ exclude_patterns = [ 'test/test_modules.py', 'test/test_monitor.py', 'test/test_mps.py', - 'test/test_multiprocessing.py', 'test/test_multiproce...
2.41.0
68e5ced5df34f1aef3703654f76e03f5126b534
Tue, 9 Apr 2024 15:40:30 -0700
[PATCH 0057/1000] [Dispatcher] Collect autograd sequence numbers on PythonTLSSnapshot dispatch keys (#123304)
Fixes #121758 **TL;DR**: When profiling is turned on, the dispatcher will sometimes attach the autograd sequence number to the recorded profiler event. This PR expands the set of profiler events onto which we attach sequence numbers. Before, we'd only attach a sequence number if the current dispatch key was an Autogra...
diff --git a/aten/src/ATen/core/dispatch/Dispatcher.cpp b/aten/src/ATen/core/dispatch/Dispatcher.cpp index cd96c5825c..a355bbe92f 100644 --- a/aten/src/ATen/core/dispatch/Dispatcher.cpp +++ b/aten/src/ATen/core/dispatch/Dispatcher.cpp @@ -498,24 +498,38 @@ std::vector<OperatorName> Dispatcher::getRegistrationsForDispat...
2.41.0
aec97a40364bb6ccfd968f28d309cfff8748d20
Thu, 11 Apr 2024 13:10:09 -0700
[PATCH 0058/1000] [sparse] Add fast semi-structured sparsification kernels (#122350)
This PR adds fast semi-structured sparsification kernels to PyTorch. These kernels allow for accelerated semi-structured sparsification in PyTorch. The kernels have been added as aten native functions. In particular, three new functions have been added: * `torch._sparse_semi_structured_tile` This functio...
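The 2:4 semi-structured pattern these kernels accelerate can be sketched in slow pure-PyTorch terms: within every contiguous group of four values, keep the two largest-magnitude entries and zero the rest. This is a sketch of the output pattern only, not the algorithm of the fused CUDA kernels.

```python
import torch

def semi_structured_prune(x):
    # 2:4 semi-structured sparsity: in every group of four values, keep the
    # two with the largest magnitude and zero the other two
    groups = x.reshape(-1, 4)
    keep = groups.abs().topk(2, dim=-1).indices
    mask = torch.zeros_like(groups, dtype=torch.bool).scatter_(1, keep, True)
    return (groups * mask).reshape(x.shape), mask.reshape(x.shape)

dense = torch.tensor([1.0, -4.0, 2.0, 3.0, 0.5, 0.1, -0.2, 0.9])
sparse, mask = semi_structured_prune(dense)
# exactly two nonzeros survive in each group of four
```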
diff --git a/aten/src/ATen/native/native_functions.yaml b/aten/src/ATen/native/native_functions.yaml index 1953abbd31..04cd21b27d 100644 --- a/aten/src/ATen/native/native_functions.yaml +++ b/aten/src/ATen/native/native_functions.yaml @@ -3342,6 +3342,18 @@ dispatch: CUDA: _cslt_sparse_mm_search +- func: _spa...
2.41.0
de9e8237ac620f3ea8925cbd7f17aefc011f06d
Thu, 11 Apr 2024 10:39:48 -0700
[PATCH 0059/1000] [dynamo] Bug fix for GET_YIELD_FROM_ITER (#122943)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122943 Approved by: https://github.com/jansel
diff --git a/test/dynamo/test_misc.py b/test/dynamo/test_misc.py index 79835450be..b3d876290a 100644 --- a/test/dynamo/test_misc.py +++ b/test/dynamo/test_misc.py @@ -8736,6 +8736,24 @@ def ___make_guard_fn(): self.assertEqual(eager, compiled) self.assertEqual(counter.frame_count, 1) + def test_y...
2.41.0
06f7d1f2248a36d6ca7c325868eee1b1115668b
Fri, 12 Apr 2024 03:39:34 +0000
[PATCH 0061/1000] Enable UFMT on `test/jit_hooks`, `test/lazy` and some files (#123807)
Part of: #123062 Ran lintrunner on: - `test/jit_hooks` - `test/lazy` - `test/linear.py` - `test/load_torchscript_model.py` - `test/mkl_verbose.py` - `test/mkldnn_verbose.py` with command: ```bash lintrunner -a --take UFMT --all-files ``` Co-authored-by: Edward Z. Yang <ezyang@fb.com> Pull Request resolved: https:/...
diff --git a/.lintrunner.toml b/.lintrunner.toml index 6a4472073f..7c4e94ed24 100644 --- a/.lintrunner.toml +++ b/.lintrunner.toml @@ -1162,18 +1162,6 @@ exclude_patterns = [ 'test/functorch/test_vmap.py', 'test/functorch/test_vmap_registrations.py', 'test/functorch/xfail_suggester.py', - 'test/lazy/_...
2.41.0
b895ace1d36726e64781774f53b3d3098206116
Thu, 11 Apr 2024 23:44:51 +0000
[PATCH 0063/1000] Only run backward part of COW test if results are strided (#123870)
Fixes #123792 Pull Request resolved: https://github.com/pytorch/pytorch/pull/123870 Approved by: https://github.com/ezyang
diff --git a/test/test_ops.py b/test/test_ops.py index fc2e92e3c1..34462961df 100644 --- a/test/test_ops.py +++ b/test/test_ops.py @@ -1668,42 +1668,46 @@ class TestCompositeCompliance(TestCase): leaf_results = pytree.tree_leaves(results_raw) results = [r for r in leaf_results if isins...
2.41.0
669334175bb2155316e7a74685b6278e127ecb4
Fri, 12 Apr 2024 06:29:27 +0000
[PATCH 0064/1000] Revert "Add Matmul recipe into x86_inductor_quantizer (#122776)"
This reverts commit e8e9261b906f69b397e4027362be801f98a68d62. Reverted https://github.com/pytorch/pytorch/pull/122776 on behalf of https://github.com/DanilBaibak due to Break internal build ([comment](https://github.com/pytorch/pytorch/pull/122776#issuecomment-2051073373))
diff --git a/test/quantization/pt2e/test_x86inductor_quantizer.py b/test/quantization/pt2e/test_x86inductor_quantizer.py index 4af5a30ddf..c9df319bfd 100644 --- a/test/quantization/pt2e/test_x86inductor_quantizer.py +++ b/test/quantization/pt2e/test_x86inductor_quantizer.py @@ -289,42 +289,21 @@ class TestHelperModules...
2.41.0
881d567f402f6bf16c68803a1bf4bf5c5e1673f
Fri, 12 Apr 2024 07:23:57 +0000
[PATCH 0065/1000] Revert "[inductor] Write generated files from parent process (#123409)"
This reverts commit 79c565b24e6c305c09c8c908e27f4023f41dd567. Reverted https://github.com/pytorch/pytorch/pull/123409 on behalf of https://github.com/DanilBaibak due to Needs to be reverted because it blocks reverting of the broken PR. ([comment](https://github.com/pytorch/pytorch/pull/123409#issuecomment-2051166617))...
diff --git a/torch/_inductor/codecache.py b/torch/_inductor/codecache.py index 4e84838504..98cf75fc23 100644 --- a/torch/_inductor/codecache.py +++ b/torch/_inductor/codecache.py @@ -59,7 +59,12 @@ from torch._dynamo.device_interface import ( from torch._dynamo.utils import counters, dynamo_timed from torch._inductor...
2.41.0
994d993c05fdd93510dbaba4dcbfad4e4f20a1b
Fri, 12 Apr 2024 07:26:50 +0000
[PATCH 0066/1000] Revert "[inductor] Fix fresh_inductor_cache() (#122661)"
This reverts commit cda383e7bcdac029a6d5508d63c0355a40bb0d32. Reverted https://github.com/pytorch/pytorch/pull/122661 on behalf of https://github.com/DanilBaibak due to Break internal build ([comment](https://github.com/pytorch/pytorch/pull/122661#issuecomment-2051171028))
diff --git a/test/inductor/test_codecache.py b/test/inductor/test_codecache.py index 04ab69debc..89380bee53 100644 --- a/test/inductor/test_codecache.py +++ b/test/inductor/test_codecache.py @@ -14,12 +14,10 @@ from torch._inductor.codecache import ( CUDACodeCache, FxGraphCachePickler, FxGraphHashDetails...
2.41.0
ff53e169f77518f5037f0d6020c610837f2fa94
Fri, 12 Apr 2024 07:50:28 +0000
[PATCH 0067/1000] add option to turn on return_tuple in _SplitterBase (#123868)
Summary: as title. split the oss change from D55871896 into this separate diff Test Plan: deploy Differential Revision: D56032268 Pull Request resolved: https://github.com/pytorch/pytorch/pull/123868 Approved by: https://github.com/ZhengkaiZ, https://github.com/DanilBaibak
diff --git a/torch/fx/passes/splitter_base.py b/torch/fx/passes/splitter_base.py index 3a493f4af3..d0c146eb86 100644 --- a/torch/fx/passes/splitter_base.py +++ b/torch/fx/passes/splitter_base.py @@ -306,6 +306,7 @@ class _SplitterBase: operator_support: OperatorSupportBase, settings: _SplitterSettingB...
2.41.0
dfeec9cdc246a6a003dff4b6ba0a5ceb60613f1
Fri, 12 Apr 2024 08:56:06 +0000
[PATCH 0070/1000] Add a mode to avoid clone() in DDPSink (#122927)
DDPSink clones the outputs of DDP to avoid in-place modification of loss (see https://github.com/pytorch/pytorch/issues/61982). However, when outputs are really large (2-3GB) this adds a lot of overhead for peak memory. As a result, adding a mode to avoid this clone in cases where users are not modifying loss in-place...
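The trade-off described here can be sketched with a toy autograd identity; `_SinkSketch` is a hypothetical stand-in for DDPSink, not the actual implementation.

```python
import torch

class _SinkSketch(torch.autograd.Function):
    """Hypothetical sketch of the DDPSink pattern: an autograd identity
    that clones its output to protect against in-place modification of the
    loss, with an opt-out to save peak memory for very large outputs."""

    @staticmethod
    def forward(ctx, inp, skip_clone):
        # cloning gives the caller separate storage that is safe to mutate,
        # at the cost of duplicating a possibly multi-GB output
        return inp if skip_clone else inp.clone()

    @staticmethod
    def backward(ctx, grad):
        # identity in both directions; no gradient for the bool flag
        return grad, None

x = torch.randn(4, requires_grad=True)
y = _SinkSketch.apply(x, False)  # clone: separate storage
y.sum().backward()
```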
diff --git a/torch/nn/parallel/distributed.py b/torch/nn/parallel/distributed.py index 56415aa84d..3db95fe14a 100644 --- a/torch/nn/parallel/distributed.py +++ b/torch/nn/parallel/distributed.py @@ -242,9 +242,11 @@ class _DDPSink(Function): # None and are not filled with zeros. ctx.set_materialize_gr...
2.41.0
274d57037837c29ecdd572dc7ea430267ecf9c2
Wed, 10 Apr 2024 21:20:22 -0700
[PATCH 0071/1000] [compiled autograd][dynamo] Make compiled graph take in boxed inputs (#122353)
### Context In today's Dynamo, we lift all tensors encountered during tracing to be individual graph inputs, even when they were in a container. And [Dynamo generates](https://github.com/pytorch/pytorch/blob/fdc281f2587f9a5a935de1f1368e7ad7ed0f9828/torch/_dynamo/codegen.py#L371) the runtime function's signature using ...
diff --git a/test/dynamo/test_backward_higher_order_ops.py b/test/dynamo/test_backward_higher_order_ops.py index 0b68f15328..db34287583 100644 --- a/test/dynamo/test_backward_higher_order_ops.py +++ b/test/dynamo/test_backward_higher_order_ops.py @@ -126,12 +126,14 @@ class _multiply_invoke(torch.nn.Module): ...
2.41.0
40b451e910c943e86badda7372eecfa628b8417
Thu, 11 Apr 2024 22:47:38 -0700
[PATCH 0072/1000] [compiled autograd][dynamo] Codegen aliases to keep grad mutated tensors alive (#123359)
The current codegen is problematic if __compiled_fn_0 clears the inputs list, since we need it for assignment afterwards ```python def forward(inputs): __compiled_fn_0 = ... # The actual function needs to be provided graph_out_0 = __compiled_fn_0(inputs) # clears inputs temp_list = [] temp_list.append(graph_out_0[0])...
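The hazard in the snippet above can be reproduced in plain Python with the compiled function stubbed out: if the call clears the input list, anything needed afterwards must be aliased first. The names below are illustrative, not the actual generated code.

```python
def compiled_fn(inputs):
    # stand-in for __compiled_fn_0: consumes the list and clears it so the
    # graph can free inputs as soon as they are used
    out = [x * 2 for x in inputs]
    inputs.clear()
    return out

def forward_fixed(inputs):
    # take an alias to any input that must be reached after the call,
    # BEFORE the compiled function clears the list
    inputs_ref_0 = inputs[0]
    graph_out = compiled_fn(inputs)
    # a grad-mutation epilogue can still reach the tensor via the alias
    return graph_out[0], inputs_ref_0

out, ref = forward_fixed([21])
```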
diff --git a/test/inductor/test_compiled_autograd.py b/test/inductor/test_compiled_autograd.py index 7acc225272..da4f7a972a 100644 --- a/test/inductor/test_compiled_autograd.py +++ b/test/inductor/test_compiled_autograd.py @@ -257,6 +257,57 @@ main() self.assertNotEqual(grads[1], None) self.assertNotE...
2.41.0
fc3aa5f8168414b9bfbcf59df0e33450d96c424
Thu, 11 Apr 2024 22:47:39 -0700
[PATCH 0073/1000] [compiled autograd][aot] Trim runtime refs for list inputs from dynamo (#122535)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122535 Approved by: https://github.com/bdhirsh ghstack dependencies: #123630, #123674, #122353, #123359
diff --git a/test/inductor/test_compiled_autograd.py b/test/inductor/test_compiled_autograd.py index da4f7a972a..7992ce8c40 100644 --- a/test/inductor/test_compiled_autograd.py +++ b/test/inductor/test_compiled_autograd.py @@ -1191,6 +1191,43 @@ TORCH_LIBRARY(test_autograd_cpp_node_data_dependent, m) { self....
2.41.0
0eb162730e76132a5e29adbc16f8721ef125d68
Fri, 12 Apr 2024 13:14:59 +0000
[PATCH 0074/1000] Revert "Switch quantized_decomposed over to new custom ops API (#123454)"
This reverts commit 638729c0cdf3ce4274f4d68f8e46e5a1cd36cbe8. Reverted https://github.com/pytorch/pytorch/pull/123454 on behalf of https://github.com/DanilBaibak due to Break internal build ([comment](https://github.com/pytorch/pytorch/pull/123454#issuecomment-2051738976))
diff --git a/torch/_custom_op/impl.py b/torch/_custom_op/impl.py index 6f25e2b9af..fefd7cedf9 100644 --- a/torch/_custom_op/impl.py +++ b/torch/_custom_op/impl.py @@ -882,11 +882,6 @@ SUPPORTED_RETURN_TYPES = { def parse_return(annotation, error_fn): - if annotation == inspect.Signature.empty: - error_fn...
2.41.0
9f7ef33c42ebc49bce0274da97726ff17c5ca15
Thu, 11 Apr 2024 12:39:27 -0700
[PATCH 0075/1000] AOTAutograd: add config to error when overlapping input checks would cause slow compile / runtimes (#123455)
We should eventually make the non-overlapping checks faster when dynamic shapes are enabled, but this is pretty difficult to do. So for now this PR adds a config that lets us fail fast when this situation happens, instead of causing compile times to secretly come to a crawl. Pull Request resolved: https://github.com/p...
diff --git a/test/dynamo/test_repros.py b/test/dynamo/test_repros.py index 2811ea81ff..f98c84cf54 100644 --- a/test/dynamo/test_repros.py +++ b/test/dynamo/test_repros.py @@ -4271,6 +4271,30 @@ class ReproTests(torch._dynamo.test_case.TestCase): opt_fn(lengths) self.assertEqual(cnt.frame_count, 1) +...
2.41.0
120dbbf81f394bf7ecd0ea19da8729b2fcece65
Fri, 12 Apr 2024 13:26:07 +0000
[PATCH 0076/1000] Revert "[sparse] Add fast semi-structured sparsification kernels (#122350)"
This reverts commit aaec97a40364bb6ccfd968f28d309cfff8748d20. Reverted https://github.com/pytorch/pytorch/pull/122350 on behalf of https://github.com/DanilBaibak due to Break internal build ([comment](https://github.com/pytorch/pytorch/pull/122350#issuecomment-2051757450))
diff --git a/aten/src/ATen/native/native_functions.yaml b/aten/src/ATen/native/native_functions.yaml index 04cd21b27d..1953abbd31 100644 --- a/aten/src/ATen/native/native_functions.yaml +++ b/aten/src/ATen/native/native_functions.yaml @@ -3342,18 +3342,6 @@ dispatch: CUDA: _cslt_sparse_mm_search
- dispatch:
cdde98df4dd85841b138e3a18a2bb9c065cf83b
Fri, 12 Apr 2024 06:52:50 +0000
[PATCH 0077/1000] Intel GPU oneDNN upstreaming for library compilation (#117098)
# Motivation As proposed in https://github.com/pytorch/pytorch/issues/114848 and https://github.com/pytorch/pytorch/issues/114723, the oneDNN library is an important component of the Intel GPU software ecosystem. This PR is intended to enable oneDNN compilation for Intel GPU. It is the first step toward enabling any opera...
diff --git a/cmake/Modules/FindMKLDNN.cmake b/cmake/Modules/FindMKLDNN.cmake index d9ca7dfd54..f6a19812c8 100644 --- a/cmake/Modules/FindMKLDNN.cmake +++ b/cmake/Modules/FindMKLDNN.cmake @@ -18,6 +18,49 @@ IF(NOT MKLDNN_FOUND) SET(IDEEP_ROOT "${PROJECT_SOURCE_DIR}/third_party/ideep") SET(MKLDNN_ROOT "${PROJECT_...
2.41.0
1613a0803f7cde7956f039bc80f94253b0843f9
Fri, 12 Apr 2024 14:28:19 +0000
[PATCH 0080/1000] [Profiler][PrivateUse1] Profiler support PrivateUse1 key (#120556)
Summary: 1. Package public headers of kineto if USE_KINETO so that they can be used by PrivateUse1 users. 2. Add the PrivateUse1 key to ActivityType. 3. Support the PrivateUse1 key in the functions deviceTypeFromActivity and _supported_activities. 4. Fix some bugs when processing profiler results. Co-authored-by: albanD <desmaison.alb...
diff --git a/setup.py b/setup.py index 06ab04b80f..ce8f16df77 100644 --- a/setup.py +++ b/setup.py @@ -1386,6 +1386,12 @@ def main(): "include/tensorpipe/transport/uv/*.h", ] ) + if get_cmake_cache_vars()["USE_KINETO"]: + torch_package_data.extend( + [ + ...
2.41.0
024c0c2ef8d50e319ba5a9542dd648a35f7ed41
Fri, 12 Apr 2024 15:36:47 +0000
[PATCH 0082/1000] Convert MKL symbols from global to local (#122284)
PyTorch is statically linked to MKL but MKL symbols are visible to global, which may cause symbol conflicts. Such error has been observed when a different version of MKL is dynamically linked to the other components: `libtorch_cpu.so` was invoked incorrectly when MKL descriptor was freed. Pull Request resolved: https:...
diff --git a/cmake/public/mkl.cmake b/cmake/public/mkl.cmake index 68bf1b9dc9..2f6d1fd905 100644 --- a/cmake/public/mkl.cmake +++ b/cmake/public/mkl.cmake @@ -21,3 +21,20 @@ endforeach() set_property( TARGET caffe2::mkl PROPERTY INTERFACE_LINK_DIRECTORIES ${MKL_ROOT}/lib ${MKL_ROOT}/lib/intel64 ${MKL_ROOT}/lib/i...
2.41.0
cb3301f807e2d58d719bbd976b68a13999f1c70
Fri, 12 Apr 2024 15:38:41 +0000
[PATCH 0083/1000] [ROCm] Add cast to kFloat in amax calculation (#123872)
necessary cast to kFloat missed in previous amax PR Pull Request resolved: https://github.com/pytorch/pytorch/pull/123872 Approved by: https://github.com/drisspg
diff --git a/aten/src/ATen/native/cuda/Blas.cpp b/aten/src/ATen/native/cuda/Blas.cpp index 15a05985fd..be8aa363a9 100644 --- a/aten/src/ATen/native/cuda/Blas.cpp +++ b/aten/src/ATen/native/cuda/Blas.cpp @@ -912,7 +912,7 @@ _scaled_mm_out_cuda(const Tensor& mat1, const Tensor& mat2, #if defined(USE_ROCM) && ROCM_VERS...
2.41.0
51582949bd45bbe892d936f58ff4cafa1fd6204
Fri, 12 Apr 2024 15:44:56 +0000
[PATCH 0084/1000] [export] Enforce final classes in serialization. (#123861)
Summary: as title, these are private API and not meant to be used across repos. Test Plan: CI Differential Revision: D56027954 Pull Request resolved: https://github.com/pytorch/pytorch/pull/123861 Approved by: https://github.com/tugsbayasgalan
diff --git a/torch/_export/serde/serialize.py b/torch/_export/serde/serialize.py index 9a24862271..18acbe1cae 100644 --- a/torch/_export/serde/serialize.py +++ b/torch/_export/serde/serialize.py @@ -1,5 +1,6 @@ import base64 import copy +import copyreg import dataclasses import heapq import inspect @@ -8,9 +9,8 @@...
2.41.0
35db051d08c8a358ce43cc05670ac552e3d40c3
Fri, 12 Apr 2024 06:38:31 -0700
[PATCH 0087/1000] get torch.distributed.breakpoint() to work under Python/Meta contexts (#118645)
I noticed that when I put a `torch.distributed.breakpoint()` in [here](https://github.com/pytorch/pytorch/blob/main/torch/_subclasses/meta_utils.py#L605), it would fail. This fixes it. In theory, it would probably be better to have a way to get the `barrier()` call to skip the dispatcher completely. I wasn't sure how ...
diff --git a/torch/distributed/__init__.py b/torch/distributed/__init__.py index 5fb05a3477..206a6e6f1f 100644 --- a/torch/distributed/__init__.py +++ b/torch/distributed/__init__.py @@ -86,7 +86,16 @@ if is_available(): f"Type 'up' to get to the frame that called dist.breakpoint(rank={rank})\n" ...
2.41.0
4c887fbf6fcc9b1864d10b3ab18294e5edebdfc
Thu, 11 Apr 2024 21:30:18 -0300
[PATCH 0088/1000] [AOTAutograd] Replay views on output using `FunctionalTensor` metas. (#121007)
Fix: #120336 This PR fixes an issue on AOTAutograd, specifically on backends that don't support views by themselves (e.g. XLA). Previously, AOTAutograd tried to reconstruct output views by calling `as_strided` on the concrete bases using sizes and strides of the outputs that aliased them. Since backends such as XLA do...
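The old reconstruction strategy is easy to demonstrate on a strided backend, where it does work; the point of the commit is that backends without native strides cannot honor it. A minimal sketch:

```python
import torch

base = torch.arange(6.0).reshape(2, 3)
view = base.t()  # a real view of `base`

# Replaying the view by calling as_strided with the output's size/stride
# reproduces it on strided backends; backends like XLA cannot honor
# arbitrary strides, hence the switch to replaying recorded view metadata
replayed = base.as_strided(view.size(), view.stride())
```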
diff --git a/aten/src/ATen/FunctionalTensorWrapper.cpp b/aten/src/ATen/FunctionalTensorWrapper.cpp index db36ca2bbd..a7ba697d13 100644 --- a/aten/src/ATen/FunctionalTensorWrapper.cpp +++ b/aten/src/ATen/FunctionalTensorWrapper.cpp @@ -337,16 +337,26 @@ void FunctionalTensorWrapper::sync_() { regenerate_from_base(); ...
2.41.0
c0a380bdf756414bb8dffd99256d3e7c360ad3e
Thu, 11 Apr 2024 14:47:40 -0700
[PATCH 0089/1000] [pt2e][qat] Support conv-transpose-bn[-relu] QAT fusion (#123652)
Summary: This commit adds support for QAT fusion for the [conv-transpose-bn] and [conv-transpose-bn-relu] patterns. Test Plan: python test/test_quantization.py TestQuantizePT2EQAT_ConvBn1d.test_qat_conv_transpose_bn python test/test_quantization.py TestQuantizePT2EQAT_ConvBn1d.test_qat_conv_transpose_bn_relu python te...
diff --git a/test/dynamo_expected_failures/TestQuantizePT2EQAT_ConvBn1d.test_qat_conv_transpose_bn b/test/dynamo_expected_failures/TestQuantizePT2EQAT_ConvBn1d.test_qat_conv_transpose_bn new file mode 100644 index 0000000000..e69de29bb2 diff --git a/test/dynamo_expected_failures/TestQuantizePT2EQAT_ConvBn1d.test_qat_co...
2.41.0
b647bd325ead8e256199f807ec8bb8765a0772a
Fri, 12 Apr 2024 17:17:34 +0000
[PATCH 0090/1000] Add missing interfaces of `torch.optim.swa_utils` (#117036)
Add type hints for the function/class interfaces that appear in torch/optim/swa_utils.py but are missing in torch/optim/swa_utils.pyi. - get_ema_multi_avg_fn - get_swa_multi_avg_fn - get_ema_avg_fn - get_swa_avg_fn - AveragedModel.__init__(multi_avg_fn) - SWALR.get_lr Co-authored-by: Jane (Yuan) Xu <31798555+janeyx99...
diff --git a/torch/distributed/pipeline/sync/batchnorm.py b/torch/distributed/pipeline/sync/batchnorm.py index ad375f8933..868ad50cf3 100644 --- a/torch/distributed/pipeline/sync/batchnorm.py +++ b/torch/distributed/pipeline/sync/batchnorm.py @@ -5,7 +5,7 @@ # This source code is licensed under the BSD license found i...
2.41.0
c09c6b91af6287be9ce6f167640575f0908f943
Fri, 12 Apr 2024 17:34:58 +0000
[PATCH 0091/1000] Fix memory planning compile error (#123867)
Summary: We should be using CppPrinter in the cpp wrapper codegen, not the ExprPrinter (which prints expressions for Python) Not really a memory-planning-specific bug, but exposed by mem planning because it tends to emit more complicated expressions Differential Revision: D56025683 Pull Request resolved: https://git...
diff --git a/test/inductor/test_memory_planning.py b/test/inductor/test_memory_planning.py index c1271e0648..1bd546e5b4 100644 --- a/test/inductor/test_memory_planning.py +++ b/test/inductor/test_memory_planning.py @@ -74,7 +74,7 @@ class TestMemoryPlanning(TestCase): ).check_next( "auto buf0 = al...
2.41.0
3dbe2b517b1a515016eaf6c681aaa02f62f3cdd
Thu, 11 Apr 2024 17:31:24 +0000
[PATCH 0092/1000] Add test for skipping hf logging during export (#123410)
https://github.com/pytorch/pytorch/pull/123402 already supports hf logging because HF logger is based on logging module This PR adds a test to guard this against regression, only Pull Request resolved: https://github.com/pytorch/pytorch/pull/123410 Approved by: https://github.com/BowenBao, https://github.com/malfet
diff --git a/test/export/test_export.py b/test/export/test_export.py index f05aa1d074..5b2f920739 100644 --- a/test/export/test_export.py +++ b/test/export/test_export.py @@ -45,6 +45,7 @@ from torch.testing._internal.common_utils import ( IS_SANDCASTLE, IS_WINDOWS, run_tests, + TEST_TRANSFORMERS, ...
2.41.0
66b24e24255044de81092547541da84d13799cb
Fri, 12 Apr 2024 18:21:27 +0000
[PATCH 0094/1000] [Inductor] Add a device agnostic DeviceGuard class to inductor (#123338)
Summary: Currently the `device` context manager from the device interface is used in only one place in inductor. This PR creates an inductor-specific `DeviceGuard` class for use in these cases, which keeps a reference to the `DeviceInterface` class that is defined and added out of tree. This then offloads t...
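The shape of such a guard can be sketched in plain Python; the interface object and its method names here are assumptions standing in for the out-of-tree `DeviceInterface`, not the PR's actual class.

```python
class DeviceGuard:
    """Minimal sketch: switch to a device index on entry and restore the
    previous index on exit, delegating every device operation to an
    interface object so the guard itself stays device-agnostic."""

    def __init__(self, device_interface, index):
        self.device_interface = device_interface
        self.index = index
        self.prev_index = -1

    def __enter__(self):
        self.prev_index = self.device_interface.current_device()
        self.device_interface.set_device(self.index)
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.device_interface.set_device(self.prev_index)
        return False

class _FakeInterface:
    """Toy stand-in for a registered device interface."""
    def __init__(self):
        self.device = 0
    def current_device(self):
        return self.device
    def set_device(self, index):
        self.device = index

iface = _FakeInterface()
with DeviceGuard(iface, 3):
    inside = iface.device  # switched to 3 inside the guard
# restored to 0 on exit
```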
diff --git a/test/dynamo/test_deviceguard.py b/test/dynamo/test_deviceguard.py new file mode 100644 index 0000000000..4ed54a4c19 --- /dev/null +++ b/test/dynamo/test_deviceguard.py @@ -0,0 +1,92 @@ +# Owner(s): ["module: dynamo"] +import unittest +from unittest.mock import Mock + +import torch + +import torch._dynamo.t...
2.41.0
b0ba6bbd320985fedaccbd4a874ef5ff3d61d74
Thu, 11 Apr 2024 21:54:39 -0700
[PATCH 0097/1000] [dynamo] Improve constant-prop for regex/torch.__version__ (#123705)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123705 Approved by: https://github.com/anijain2305 ghstack dependencies: #123700
diff --git a/test/dynamo/test_functions.py b/test/dynamo/test_functions.py index 7ce0edd9b5..db79e59d39 100644 --- a/test/dynamo/test_functions.py +++ b/test/dynamo/test_functions.py @@ -171,6 +171,58 @@ class FunctionTests(torch._dynamo.test_case.TestCase): v = v + x return v + @make_test + ...
2.41.0
022600cc64f1594d98656d66dc261822ea7d202
Thu, 11 Apr 2024 21:54:40 -0700
[PATCH 0098/1000] [inductor] Handle meta tensor ops in graph (#123786)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123786 Approved by: https://github.com/anijain2305 ghstack dependencies: #123700, #123705
diff --git a/test/inductor/test_cpu_repro.py b/test/inductor/test_cpu_repro.py index 80a0fed789..7acd238c96 100644 --- a/test/inductor/test_cpu_repro.py +++ b/test/inductor/test_cpu_repro.py @@ -1773,6 +1773,19 @@ class CPUReproTests(TestCase): res_grad = test_args_for_opt["input"].grad self.a...
2.41.0
1e6f84ad8faa1400ce5227f5e42f44325c585ed
Thu, 11 Apr 2024 21:54:40 -0700
[PATCH 0099/1000] [dynamo] Graph break on uninitialized nn.Module (#123790)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123790 Approved by: https://github.com/anijain2305 ghstack dependencies: #123700, #123705, #123786
diff --git a/torch/_dynamo/guards.py b/torch/_dynamo/guards.py index f54b3cfe57..33caa7dd42 100644 --- a/torch/_dynamo/guards.py +++ b/torch/_dynamo/guards.py @@ -1099,16 +1099,10 @@ class GuardBuilder(GuardBuilderBase): def NN_MODULE(self, guard: Guard): self.ID_MATCH(guard) - ref = self.arg_ref...
2.41.0
3935783f7f0ae642d409aaf7df00e240730311c
Thu, 11 Apr 2024 21:54:41 -0700
[PATCH 0101/1000] [dynamo] Fix @property on user-defined nn.Module (#123804)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123804 Approved by: https://github.com/anijain2305 ghstack dependencies: #123700, #123705, #123786, #123790, #123803
diff --git a/test/dynamo/test_repros.py b/test/dynamo/test_repros.py index f98c84cf54..2b03261129 100644 --- a/test/dynamo/test_repros.py +++ b/test/dynamo/test_repros.py @@ -4658,6 +4658,30 @@ def forward(self, s0 : torch.SymInt, s1 : torch.SymInt, L_x_ : torch.Tensor): compiled_str = str(e) self...
2.41.0
0694690810f3d0530f22b3558384df715bb6e3e
Thu, 11 Apr 2024 21:54:42 -0700
[PATCH 0103/1000] [dynamo] Support Tuple[int] args to autograd.Function (#123887)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123887 Approved by: https://github.com/anijain2305 ghstack dependencies: #123700, #123705, #123786, #123790, #123803, #123804, #123896
diff --git a/test/dynamo/test_autograd_function.py b/test/dynamo/test_autograd_function.py index beca818570..afde52b9ed 100644 --- a/test/dynamo/test_autograd_function.py +++ b/test/dynamo/test_autograd_function.py @@ -924,6 +924,33 @@ class AutogradFunctionTests(torch._dynamo.test_case.TestCase): foo(torch.ra...
2.41.0
f0fc04fa39017761a72f46ae50571e2fd35d9da
Fri, 12 Apr 2024 19:18:16 +0000
[PATCH 0104/1000] [CUDA][64-bit indexing] Bump large tensor threshold of `test_cross_entropy_large_tensor` to 70GiB (#123772)
`torch.cuda.max_memory_reserved()` here shows 68729962496 (about 65546 MiB). CC @malfet @crcrpar Pull Request resolved: https://github.com/pytorch/pytorch/pull/123772 Approved by: https://github.com/mikaylagawarecki
diff --git a/test/test_nn.py b/test/test_nn.py index f4f5b80a3c..ae58f688c5 100644 --- a/test/test_nn.py +++ b/test/test_nn.py @@ -11737,10 +11737,10 @@ if __name__ == '__main__': # i.e. we don't count the ignored_idx at all. check_equal(loss, (inp1, targ_positive_ignore_index), (inp2[...
2.41.0
346ec8263ee0695b7706cf38e7fd4063ad69896
Thu, 11 Apr 2024 09:31:29 -0700
[PATCH 0105/1000] [BE] Document what is tested in TestOptim (#123853)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123853 Approved by: https://github.com/soulitzer
diff --git a/test/test_optim.py b/test/test_optim.py index d11fe8d42f..7f4a352c48 100644 --- a/test/test_optim.py +++ b/test/test_optim.py @@ -42,6 +42,48 @@ def drosenbrock(tensor): @markDynamoStrictTest class TestOptimRenewed(TestCase): + """ + This test class validates the core optimizers and is structured...
2.41.0
62e19606e2555361d2cdeb64d914a4cab93d728
Fri, 12 Apr 2024 08:52:55 -0700
[PATCH 0106/1000] [quant] Enable backward for choose_qparams_per_token_asymmetric (#123452)
Summary: When running the backward for this op, we get the error: ``` RuntimeError: derivative for aten::aminmax is not implemented ``` This commit replaces this call with separate amin and amax calls instead, which do have implemented derivatives. Test Plan: python test/test_quantization.py -k test_decomposed_choose_...
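The workaround is straightforward to verify in eager mode: `amin` and `amax` each have derivative formulas, so gradients flow to the argmin and argmax positions.

```python
import torch

x = torch.tensor([1.0, 3.0, 2.0], requires_grad=True)
# aten::aminmax has no derivative formula, so the decomposed op computes
# the min and max with separate amin/amax calls, which are differentiable
min_val, max_val = x.amin(), x.amax()
(min_val + max_val).backward()
# gradient is 1 at the (unique) argmin and argmax, 0 elsewhere
```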
diff --git a/test/quantization/core/test_quantized_tensor.py b/test/quantization/core/test_quantized_tensor.py index b2bd97bdc3..228f1f8ee7 100644 --- a/test/quantization/core/test_quantized_tensor.py +++ b/test/quantization/core/test_quantized_tensor.py @@ -1602,6 +1602,14 @@ class TestQuantizedTensor(TestCase): ...
2.41.0
9deff689fabc87941bf7a8ea5db987a01e125f8
Fri, 12 Apr 2024 20:13:16 +0000
[PATCH 0107/1000] Update compile doc to suggest Module.compile (#123951)
For users for whom the FQN change is problematic. Pull Request resolved: https://github.com/pytorch/pytorch/pull/123951 Approved by: https://github.com/msaroufim
diff --git a/torch/__init__.py b/torch/__init__.py index 3a10130d5f..73e91dd316 100644 --- a/torch/__init__.py +++ b/torch/__init__.py @@ -1794,6 +1794,8 @@ def compile(model: Optional[Callable] = None, *, disable: builtins.bool = False) -> Callable: """ Optimizes given model/function using Torch...
2.41.0
65aa5af6ed429be295c9ac57895186fc33dd4f8
Fri, 12 Apr 2024 21:19:41 +0000
[PATCH 0109/1000] [Pytorch] doc sync-stream-and-free-HBM counter in memory_stats (#123799)
Differential Revision: D56000503 Pull Request resolved: https://github.com/pytorch/pytorch/pull/123799 Approved by: https://github.com/malfet
diff --git a/torch/cuda/memory.py b/torch/cuda/memory.py index 60440c58dc..857a13a06c 100644 --- a/torch/cuda/memory.py +++ b/torch/cuda/memory.py @@ -210,6 +210,11 @@ def memory_stats(device: Union[Device, int] = None) -> Dict[str, Any]: - ``"num_alloc_retries"``: number of failed ``cudaMalloc`` calls that ...
2.41.0
b11fb4695c61e04c7032e039f0a9f02636e0176
Fri, 12 Apr 2024 18:24:20 +0000
[PATCH 0111/1000] [Dynamo] fix opcode `YIELD_FROM` and `SEND` (#123912)
This PR is split from #120300. - #120300 Pull Request resolved: https://github.com/pytorch/pytorch/pull/123912 Approved by: https://github.com/anijain2305
diff --git a/test/dynamo/test_misc.py b/test/dynamo/test_misc.py index f64f33a5b9..e760cce7fd 100644 --- a/test/dynamo/test_misc.py +++ b/test/dynamo/test_misc.py @@ -8729,7 +8729,7 @@ def ___make_guard_fn(): return [t * k for t in yield_from_gen(t_list)] - t_list = [torch.randn([2, 3])] * 3 + ...
2.41.0
dde6a461f3fff53104344bf4f9c16dec0b3ff86
Fri, 12 Apr 2024 22:31:57 +0000
[PATCH 0113/1000] fix cpp path in torch/_C/_autograd.pyi (#123924)
The file `tools/autograd/init.cpp` does not exist, I think the right path is `torch/csrc/autograd/init.cpp`. Pull Request resolved: https://github.com/pytorch/pytorch/pull/123924 Approved by: https://github.com/Skylion007
diff --git a/torch/_C/_autograd.pyi b/torch/_C/_autograd.pyi index 7f15c1cd12..92b21f96df 100644 --- a/torch/_C/_autograd.pyi +++ b/torch/_C/_autograd.pyi @@ -10,7 +10,7 @@ from ._profiler import ( ProfilerConfig, ) -# Defined in tools/autograd/init.cpp +# Defined in torch/csrc/autograd/init.cpp class Device...
2.41.0
e98bdd66d2b051a918e58d5f7bb80b366677bf8
Fri, 12 Apr 2024 23:30:56 +0000
[PATCH 0115/1000] [dynamo] Turn on CPP guard manager (#123547)
As title Pull Request resolved: https://github.com/pytorch/pytorch/pull/123547 Approved by: https://github.com/jansel
diff --git a/torch/_dynamo/config.py b/torch/_dynamo/config.py index b94f523d14..e45aad8fa2 100644 --- a/torch/_dynamo/config.py +++ b/torch/_dynamo/config.py @@ -339,7 +339,7 @@ numpy_default_int = "int64" use_numpy_random_stream = False # Use C++ guard manager -enable_cpp_guard_manager = os.environ.get("TORCHDYNA...
2.41.0
85cd117e64f51b998049e66aeac1728aadcf8b6
Fri, 12 Apr 2024 23:33:11 +0000
[PATCH 0116/1000] [nccl-pg] print broadcast ncclunique id duration (#123963)
Summary: Print NCCL PG broadcast nccl unique id duration for measurement. Differential Revision: D56048059 Pull Request resolved: https://github.com/pytorch/pytorch/pull/123963 Approved by: https://github.com/wconstab
diff --git a/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp b/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp index bf34066101..66e5e00504 100644 --- a/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp +++ b/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp @@ -1945,7 +1945,16 @@ std::shared_ptr<NCCLComm> ProcessGroup...
2.41.0
61eb39348117221e284538f0ec0e95d2024b4e9
Fri, 12 Apr 2024 13:49:02 -0700
[PATCH 0117/1000] AOT logging: log fw_metadata with each graph (#118646)
Log fw_metadata for each AOT graph. This is helpful for seeing information about subclass graph inputs/outputs/tangents, and lots of other stuff Pull Request resolved: https://github.com/pytorch/pytorch/pull/118646 Approved by: https://github.com/tugsbayasgalan, https://github.com/ezyang ghstack dependencies: #118645
diff --git a/test/dynamo/test_logging.py b/test/dynamo/test_logging.py index 21caf7b975..f2f507825a 100644 --- a/test/dynamo/test_logging.py +++ b/test/dynamo/test_logging.py @@ -85,7 +85,7 @@ def single_record_test(**kwargs): class LoggingTests(LoggingTestCase): test_bytecode = multi_record_test(2, bytecode=True...
2.41.0
d6c5972c12e4742b2f2b2776f0b3421c0e1c678
Fri, 12 Apr 2024 23:54:11 +0000
[PATCH 0118/1000] [BE]: Optimize min/max/sum comprehensions C419 (#123960)
Automatic fixes that replace certain list comprehensions with generator expressions where appropriate so that they are immediately consumed. This is preview functionality in ruff for rule C419 and it was automatically applied. Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com> Pull Request resolved: htt...
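The C419 rewrite is mechanical: both forms compute the same value, but the list comprehension materializes every element before the reduction, while the generator expression is consumed element by element with no intermediate list.

```python
# Before (flagged by ruff C419): builds a full temporary list first
model_size_list = sum([p * 2 for p in range(1000)])

# After: generator expression, consumed lazily by sum()
model_size_gen = sum(p * 2 for p in range(1000))
```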
diff --git a/benchmarks/gpt_fast/benchmark.py b/benchmarks/gpt_fast/benchmark.py index 0bbbbcb8f7..14d7477449 100644 --- a/benchmarks/gpt_fast/benchmark.py +++ b/benchmarks/gpt_fast/benchmark.py @@ -194,10 +194,8 @@ def run_experiment( torch.manual_seed(1234) model_size = sum( - [ - p.nume...
2.41.0
79f5b9a3926e155d118932eb5b7678436139341
Sat, 13 Apr 2024 00:55:44 +0000
[PATCH 0120/1000] Pass triton kernel info to record function (#123871)
Summary: This diff passes Triton kernel information, such as the kernel's Python file, kernel type, grid, and stream, to record_function. With this information, Execution Trace can capture Triton kernels and replay them in PARAM. Test Plan: unit test buck2 test caffe2/test:profiler -- test_record_function_fast Differenti...
diff --git a/torch/_inductor/triton_heuristics.py b/torch/_inductor/triton_heuristics.py index 5ce3d82d71..e4b392df36 100644 --- a/torch/_inductor/triton_heuristics.py +++ b/torch/_inductor/triton_heuristics.py @@ -147,6 +147,7 @@ class CachingAutotuner(KernelInterface): size_hints=None, inductor_meta...
2.41.0
7a45883ce8496e0e67f2f64a3587029aa61795e
Sat, 13 Apr 2024 00:57:00 +0000
[PATCH 0121/1000] [Reland] [Distributed] [2/N] Fix clang-tidy warnings in torch/csrc/distributed/c10d (#123821)
Reland of #122892 with problematic changes reverted. Pull Request resolved: https://github.com/pytorch/pytorch/pull/123821 Approved by: https://github.com/Skylion007
diff --git a/torch/csrc/distributed/c10d/Functional.cpp b/torch/csrc/distributed/c10d/Functional.cpp index e2829937c5..d3c4a9fe1d 100644 --- a/torch/csrc/distributed/c10d/Functional.cpp +++ b/torch/csrc/distributed/c10d/Functional.cpp @@ -1,5 +1,3 @@ -#include <shared_mutex> - #include <ATen/ATen.h> #include <ATen/co...
2.41.0
1654fd4b091345a3ea763562b5cb6bf4b2c7081
Fri, 12 Apr 2024 15:41:33 -0700
[PATCH 0122/1000] [PT2D][FSDP] skip FSDP hooks based on dynamo config (#123021)
unit test: `pytest test/distributed/_composable/fsdp/test_fully_shard_compile.py` For FSDP, we turn compiling hooks on/off based on `torch._dynamo.config.skip_fsdp_hooks` Pull Request resolved: https://github.com/pytorch/pytorch/pull/123021 Approved by: https://github.com/yf225, https://github.com/anijain2305
diff --git a/test/distributed/_composable/fsdp/test_fully_shard_compile.py b/test/distributed/_composable/fsdp/test_fully_shard_compile.py new file mode 100644 index 0000000000..8a87dfdd1d --- /dev/null +++ b/test/distributed/_composable/fsdp/test_fully_shard_compile.py @@ -0,0 +1,64 @@ +# Owner(s): ["oncall: distribut...
2.41.0
efe6e2fea41b875ad19b27980b937f2b7aa6435
Thu, 11 Apr 2024 23:40:10 -0700
[PATCH 0123/1000] [dynamo][3.12] Stop backend detection on the first RETURN_VALUE (#123878)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123878 Approved by: https://github.com/williamwen42 ghstack dependencies: #122943, #123877
diff --git a/benchmarks/dynamo/ci_expected_accuracy/aot_eager_torchbench_inference.csv b/benchmarks/dynamo/ci_expected_accuracy/aot_eager_torchbench_inference.csv index 155f1e54ff..49ad52dabe 100644 --- a/benchmarks/dynamo/ci_expected_accuracy/aot_eager_torchbench_inference.csv +++ b/benchmarks/dynamo/ci_expected_accur...
2.41.0
8afcd7b619aa9f5732cecc2ba1d012c93f23deb
Thu, 11 Apr 2024 23:40:11 -0700
[PATCH 0124/1000] [dynamo][dict] Add UnspecializedNNModuleVariable to dict keys (#122812)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122812 Approved by: https://github.com/jansel ghstack dependencies: #122943, #123877, #123878
diff --git a/test/dynamo/test_repros.py b/test/dynamo/test_repros.py index 2b03261129..a51664a748 100644 --- a/test/dynamo/test_repros.py +++ b/test/dynamo/test_repros.py @@ -1301,9 +1301,9 @@ class ReproTests(torch._dynamo.test_case.TestCase): self.assertTrue(same(opt_model(a, b, c, d), correct)) ...
2.41.0
da3e113ca9420a07dd2786ad524de30283326e2
Fri, 12 Apr 2024 17:00:11 -0700
[PATCH 0125/1000] [functional_collective] remove the logic that forces torch-xla to use legacy funcol (#123776)
After https://github.com/pytorch/xla/pull/6887, torch-xla now also uses the all_reduce from native funcol. So we can remove this logic. Pull Request resolved: https://github.com/pytorch/pytorch/pull/123776 Approved by: https://github.com/wanchaol
diff --git a/torch/distributed/_functional_collectives_impl.py b/torch/distributed/_functional_collectives_impl.py index 69c4715bd4..d628dff7c2 100644 --- a/torch/distributed/_functional_collectives_impl.py +++ b/torch/distributed/_functional_collectives_impl.py @@ -42,17 +42,8 @@ else: def native_funcol_enabled()...
2.41.0
a2e1d8e4fdcf6d91a6874a1e1e323de93a1be4b
Fri, 12 Apr 2024 17:00:11 -0700
[PATCH 0126/1000] [functional collective] change the Python APIs to only use the native funcol ops (#123777)
## Summary After this PR, the functional collective Python APIs will stop honoring `TORCH_DISABLE_NATIVE_FUNCOL` and only use native funcol ops. Specifically, this PR: - Removed `use_native_funcol()`. - Removed the code path in the Python APIs when `use_native_funcol()` is `False`. - Changed the CI tests that run on ...
diff --git a/test/distributed/_composable/test_replicate_with_compiler.py b/test/distributed/_composable/test_replicate_with_compiler.py index 9a7da14884..381610ef57 100644 --- a/test/distributed/_composable/test_replicate_with_compiler.py +++ b/test/distributed/_composable/test_replicate_with_compiler.py @@ -27,7 +27,...
2.41.0
1b8363f409c5b2b7a16d0a58e04dc4a8e6c5e8d
Sat, 13 Apr 2024 03:19:10 +0000
[PATCH 0127/1000] [inductor] Remove unused local variable. (#120227)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/120227 Approved by: https://github.com/Skylion007
diff --git a/torch/_inductor/comm_analysis.py b/torch/_inductor/comm_analysis.py index 6ff48e5dc6..de3f631b0e 100644 --- a/torch/_inductor/comm_analysis.py +++ b/torch/_inductor/comm_analysis.py @@ -61,7 +61,6 @@ def get_collective_type(node: ir.IRNode) -> NCCL_COLL: def get_collective_input_size_bytes(node: ir.IRNode...
2.41.0
216068559b723b795d63723dd32fc0b82243bdf
Sat, 13 Apr 2024 03:31:56 +0000
[PATCH 0128/1000] Enable UFMT on test/test_ops* (#123935)
Part of https://github.com/pytorch/pytorch/issues/123062 Pull Request resolved: https://github.com/pytorch/pytorch/pull/123935 Approved by: https://github.com/ezyang
diff --git a/.lintrunner.toml b/.lintrunner.toml index a0a74c86ad..4073f1bbff 100644 --- a/.lintrunner.toml +++ b/.lintrunner.toml @@ -1476,10 +1476,6 @@ exclude_patterns = [ 'test/test_nvfuser_dynamo.py', 'test/test_nvfuser_frontend.py', 'test/test_openmp.py', - 'test/test_ops.py', - 'test/test_op...
2.41.0
39e6b3156f91cd0250c5e9625d1235c7dd276f5
Sat, 13 Apr 2024 04:04:09 +0000
[PATCH 0130/1000] Cleanup: Remove redundant `inference_patterns` PatternMatcherPass (#121602)
## Summary Removes a redundant `PatternMatcherPass` in Inductor post-grad passes Pull Request resolved: https://github.com/pytorch/pytorch/pull/121602 Approved by: https://github.com/jgong5, https://github.com/eellison
diff --git a/torch/_inductor/fx_passes/post_grad.py b/torch/_inductor/fx_passes/post_grad.py index ff09069a2e..89a3978845 100644 --- a/torch/_inductor/fx_passes/post_grad.py +++ b/torch/_inductor/fx_passes/post_grad.py @@ -61,8 +61,6 @@ pass_patterns = [ PatternMatcherPass(), PatternMatcherPass(), ] -# patte...
2.41.0
e3f80f00f3995e335bd6313b8c4be998cc4e2cd
Sat, 13 Apr 2024 04:18:42 +0000
[PATCH 0131/1000] accelerate `binary_cross_entropy_with_logits` (#122789)
Following https://github.com/pytorch/pytorch/pull/115539. Same benchmark as in #115539:

|avg time (ms)|with `pos_weight`|no `pos_weight`|
|-|-|-|
|before #115539|2049|1736|
|after #115539|1320|1049|
|this PR|907|801|

This PR is 24-31% faster than the version after #115539. Pull Request resolved: ht...
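For context on what this function computes, here is a minimal pure-Python sketch of the numerically stable formulation that PyTorch documents for `binary_cross_entropy_with_logits` (the log-sum-exp trick); this illustrates the math only, not the specific kernel change in this PR:

```python
import math

def bce_with_logits(x, z, pos_weight=1.0):
    """Stable BCE on a raw logit x against target z in [0, 1].

    loss = (1 - z) * x + l * log(1 + exp(-x)),  l = 1 + (pos_weight - 1) * z,
    where log(1 + exp(-x)) is computed as log1p(exp(-|x|)) + max(-x, 0)
    to avoid overflow for large |x|.
    """
    log_weight = 1.0 + (pos_weight - 1.0) * z
    return (1.0 - z) * x + log_weight * (math.log1p(math.exp(-abs(x))) + max(-x, 0.0))

# Agrees with the naive -log(sigmoid(x)) for a positive target and moderate logit.
naive = -math.log(1.0 / (1.0 + math.exp(-2.0)))  # z = 1, x = 2
assert abs(bce_with_logits(2.0, 1.0) - naive) < 1e-12

# pos_weight scales the positive-class term: for x = 0, z = 1 the loss is
# pos_weight * log(2).
assert abs(bce_with_logits(0.0, 1.0, pos_weight=2.0) - 2.0 * math.log(2.0)) < 1e-12
```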
diff --git a/aten/src/ATen/native/Activation.cpp b/aten/src/ATen/native/Activation.cpp index 75f373811c..533bc32216 100644 --- a/aten/src/ATen/native/Activation.cpp +++ b/aten/src/ATen/native/Activation.cpp @@ -76,7 +76,6 @@ #include <ATen/ops/tanh.h> #include <ATen/ops/threshold_backward_native.h> #include <ATen/op...
2.41.0
91736f115cfb8414bb46df2821693a7cee140bf
Sat, 13 Apr 2024 01:51:52 +0000
[PATCH 0132/1000] Fix links rendering when surrounding code in Dynamo deepdive (#123427)
I thought the RST was rendering correctly, but here we are. Pull Request resolved: https://github.com/pytorch/pytorch/pull/123427 Approved by: https://github.com/peterbell10
diff --git a/docs/source/export.rst b/docs/source/export.rst index f3278e2721..2ebf780944 100644 --- a/docs/source/export.rst +++ b/docs/source/export.rst @@ -668,7 +668,8 @@ Read More :caption: Deep Dive for PyTorch Developers :maxdepth: 1 - torch.compiler_deepdive + torch.compiler_dynamo_overview + to...
2.41.0
961e23e76a83640170916b5cecd5090512c0b74
Sat, 13 Apr 2024 05:27:52 +0000
[PATCH 0133/1000] primitive attribute assignment (#123898)
This PR ensures that assignment of attributes of primitive type works without needing any code changes in non-strict mode. (In a previous PR we banned attribute assignments of tensor type unless such attributes are registered as buffers.) While strict mode errors on (all) attribute assignments, non-strict doesn't care,...
diff --git a/test/export/test_export.py b/test/export/test_export.py index 5b2f920739..4bc660b916 100644 --- a/test/export/test_export.py +++ b/test/export/test_export.py @@ -424,7 +424,7 @@ class TestExport(TestCase): foo, bad_example_inp, dynamic_shapes=dynamic_shapes, strict=False ) -...
2.41.0