commitId      stringlengths   40 – 40
datetime      stringlengths   30 – 31
subject       stringlengths   37 – 266
comment       stringlengths   109 – 15.2k
diff          stringlengths   238 – 914k
gitVersion    stringclasses   9 values
5a8e9ee7776ba53a768df32a86c1b8a45049fdf
Tue, 30 Apr 2024 16:59:20 -0700
[PATCH 0877/1000] [inductor] better cache clearing in fx graph cache tests (#125280)
Summary: There's a shortcoming in the FX graph cache tests in that they don't fully clear all inductor in-memory caches when testing the cache-hit path: We were previously accessing the FX graph cache correctly, but when loading the source object using the PyCodeCache.load_by_key_path() method, _that_ path was serving ...
diff --git a/test/inductor/test_codecache.py b/test/inductor/test_codecache.py index 96ed0d7022..28403bf0df 100644 --- a/test/inductor/test_codecache.py +++ b/test/inductor/test_codecache.py @@ -21,7 +21,7 @@ from torch._inductor.codecache import ( ) from torch._inductor.runtime.runtime_utils import cache_dir from t...
2.41.0
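The cache-test pitfall described in this commit — an in-memory layer keeps serving results even when the persistent cache path is what you meant to exercise — can be sketched generically with stdlib tools. This is an illustration only; the names below are stand-ins, not inductor's real PyCodeCache API.

```python
# A stale in-memory cache can mask whether the cache-hit path really
# works: the second call never re-runs the load, so a test "passes"
# even if the underlying (e.g. on-disk) entry is broken or gone.
import functools

calls = {"count": 0}

@functools.lru_cache(maxsize=None)
def load_by_key(key: str) -> str:
    calls["count"] += 1          # stands in for real compile/load work
    return f"compiled::{key}"

load_by_key("k")                 # miss: does the work
load_by_key("k")                 # served from memory, work skipped
assert calls["count"] == 1

# The fix pattern: clear the in-memory layer too, so the test exercises
# a genuine reload instead of a memoized result.
load_by_key.cache_clear()
load_by_key("k")
assert calls["count"] == 2
```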
4857e71c261b64cb5265c58ce8eab5b69f87af6
Wed, 1 May 2024 05:38:01 +0000
[PATCH 0878/1000] Export `torch.jit.interface` from `torch.jit` package (#125209)
Seems like this symbol was overlooked when other symbols were exported from `torch.jit`. Pull Request resolved: https://github.com/pytorch/pytorch/pull/125209 Approved by: https://github.com/ezyang
diff --git a/torch/jit/__init__.py b/torch/jit/__init__.py index 8ed5c0727e..a5b9f5627e 100644 --- a/torch/jit/__init__.py +++ b/torch/jit/__init__.py @@ -81,6 +81,7 @@ __all__ = [ "export_opnames", "fork", "freeze", + "interface", "ignore", "isinstance", "load",
2.41.0
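The fix above is a one-line addition to `__all__`. As a minimal sketch of why that matters (using throwaway module names, not the real `torch.jit`), a name omitted from `__all__` is not picked up by `from module import *`:

```python
# Build a fake module in-process to show the effect of __all__ on
# star-imports; "fake_jit" and its attributes are illustrative only.
import sys
import types

mod = types.ModuleType("fake_jit")
mod.interface = lambda cls: cls   # stand-in for torch.jit.interface
mod.fork = lambda *a: None
mod.__all__ = ["fork"]            # 'interface' not yet exported
sys.modules["fake_jit"] = mod

ns = {}
exec("from fake_jit import *", ns)
assert "interface" not in ns      # overlooked symbol is invisible

mod.__all__.append("interface")   # the gist of the one-line fix
ns = {}
exec("from fake_jit import *", ns)
assert "interface" in ns and "fork" in ns
```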
81f41a920114d766d303fc4234d05103f1d4d89
Wed, 1 May 2024 05:43:35 +0000
[PATCH 0879/1000] Use BFloat16 in distributed quantization when supported by NCCL (#125113)
This PR enables BFloat16 in torch/csrc/distributed/c10d/quantization/quantization_gpu.cu . Pull Request resolved: https://github.com/pytorch/pytorch/pull/125113 Approved by: https://github.com/kwen2501
diff --git a/torch/csrc/distributed/c10d/quantization/quantization_gpu.cu b/torch/csrc/distributed/c10d/quantization/quantization_gpu.cu index c9b6185a40..48cc7cfc4f 100644 --- a/torch/csrc/distributed/c10d/quantization/quantization_gpu.cu +++ b/torch/csrc/distributed/c10d/quantization/quantization_gpu.cu @@ -69,15 +69...
2.41.0
3c4465f504282f61f962027adf2bb9ea196b976
Tue, 30 Apr 2024 16:55:24 -0700
[PATCH 0880/1000] Add has_guarded_code to CompilationMetrics (#125279)
While studying some tlparse output, I noticed that CompilationMetrics was reporting no error for frames that have no nodes. I'm pretty sure we don't actually install a frame in this situation. has_guarded_code will tell us if that's the case, because it records whether the GuardedCode object is None. Actually, ...
diff --git a/torch/_dynamo/convert_frame.py b/torch/_dynamo/convert_frame.py index f2355ca66d..574c56454d 100644 --- a/torch/_dynamo/convert_frame.py +++ b/torch/_dynamo/convert_frame.py @@ -696,6 +696,7 @@ def _compile( fail_reason: Optional[str] = None fail_user_frame_filename: Optional[str] = None ...
2.41.0
ead440c62c94d0a5b9055bd34e961be4aabafdb
Wed, 1 May 2024 07:34:04 +0000
[PATCH 0882/1000] [Inductor] Further tune block size for templated attention on H100 (#125286)
Run a script to enumerate and get the best default block size for templated attention. A100 -> no change, check numbers at #125139 H100 ## torch.bfloat16 Before: ``` | Type | Speedup | batch_size | num_heads | q_seq_len | k_seq_len | head_dim | score_mod | dtype | |---------|-----------|--...
diff --git a/torch/_inductor/kernel/flex_attention.py b/torch/_inductor/kernel/flex_attention.py index e31dfe0977..635af59f8c 100644 --- a/torch/_inductor/kernel/flex_attention.py +++ b/torch/_inductor/kernel/flex_attention.py @@ -3,7 +3,7 @@ import logging from typing import Any, List import torch -from .. import ...
2.41.0
2142192d488f983bdbc6a65de77a0859e21d66d
Tue, 30 Apr 2024 14:32:02 -0700
[PATCH 0883/1000] [pipelining] Add stage backward function (#124958)
This is a helper function which: 1. computes the gradients for the stage inputs, and 2. accumulates gradients for the stage module's parameters. A unit test for this function is also added. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124958 Approved by: https://github.com/wconstab ghstack dependenc...
diff --git a/docs/source/distributed.pipelining.rst b/docs/source/distributed.pipelining.rst new file mode 100644 index 0000000000..ec7423f261 --- /dev/null +++ b/docs/source/distributed.pipelining.rst @@ -0,0 +1,178 @@ +.. role:: hidden + :class: hidden-section + +Pipeline Parallelism +#################### + +.. no...
2.41.0
59cce38a9b389885c545a6f1b5090aa86539250
Wed, 1 May 2024 10:39:13 +0000
[PATCH 0884/1000] [MacOS][CPUInductor] Fix includes to system Python (#125285)
On MacOS 14.4, system Python is configured to point to a non-existing include dir ``` % /usr/bin/python3 -c "import sysconfig;print(sysconfig.get_path('include'))" /Library/Python/3.9/include ``` Workaround the issue by composing path to include folder from `stlib` config, which points to ``` % /usr/bin/python3 -c "im...
diff --git a/torch/_inductor/codecache.py b/torch/_inductor/codecache.py index 16af022a21..5d884ee62b 100644 --- a/torch/_inductor/codecache.py +++ b/torch/_inductor/codecache.py @@ -1456,6 +1456,19 @@ def _set_gpu_runtime_env() -> None: os.environ["CUDA_HOME"] = os.path.dirname(build_paths.cuda()) +def _g...
2.41.0
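A hedged sketch of the workaround this commit describes: when `sysconfig.get_path('include')` points at a directory that does not exist, derive a candidate include directory from the `stdlib` path instead. The function name and fallback layout here are illustrative; the real fix lives in torch/_inductor/codecache.py.

```python
import os
import sysconfig

def python_include_dir() -> str:
    """Return the configured include dir, falling back to a path
    composed from the stdlib location (the workaround's idea)."""
    include = sysconfig.get_path("include")
    if os.path.isdir(include):
        return include
    # stdlib is e.g. .../lib/python3.9; hop over to .../include/python3.9
    stdlib = sysconfig.get_path("stdlib")
    version = "python" + sysconfig.get_python_version()
    prefix = os.path.dirname(os.path.dirname(stdlib))
    return os.path.join(prefix, "include", version)

print(python_include_dir())
```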
7ba7a76e230c9a1355cd16d254a9516d406f95a
Wed, 1 May 2024 10:57:10 +0000
[PATCH 0885/1000] [ATen][CUDA][AMP] Fix dtype mismatch in linalg_vector_norm (#125175)
Fixes #125174 Pull Request resolved: https://github.com/pytorch/pytorch/pull/125175 Approved by: https://github.com/eqy, https://github.com/lezcano
diff --git a/aten/src/ATen/native/LinearAlgebra.cpp b/aten/src/ATen/native/LinearAlgebra.cpp index 1849228a63..81f461f6c9 100644 --- a/aten/src/ATen/native/LinearAlgebra.cpp +++ b/aten/src/ATen/native/LinearAlgebra.cpp @@ -2839,10 +2839,16 @@ TORCH_IMPL_FUNC(linalg_vector_norm_out)(const Tensor& self, const Scalar& sca...
2.41.0
fbb4dfc125943547df5d484dbb2be4a8b48c0f3
Wed, 1 May 2024 12:08:02 +0000
[PATCH 0886/1000] Fix AttributeError when doing mock patch for FileTimerServerTest.test_expired_timers (#125144)
Fix the patch failure: we should patch the function where it is used, not where it is defined. Failure info: ```bash root@cambricon-PowerEdge-C4140:/workspace# python file_based_timer_test.py -k test_expired_timers /opt/conda/lib/python3.10/site-packages/torch/_custom_ops.py:253: DeprecationWarning: torch.library.i...
diff --git a/test/distributed/elastic/timer/file_based_local_timer_test.py b/test/distributed/elastic/timer/file_based_local_timer_test.py index 4616ae061b..490e4a9ce3 100644 --- a/test/distributed/elastic/timer/file_based_local_timer_test.py +++ b/test/distributed/elastic/timer/file_based_local_timer_test.py @@ -264,7...
2.41.0
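The rule stated in this commit — patch the name in the module that *uses* it, not the module that defines it — can be demonstrated with stdlib `unittest.mock` alone. The module names below are stand-ins built on the fly.

```python
import sys
import types
from unittest import mock

# "defs" defines the function; "user" imports it by name at import time.
defs = types.ModuleType("defs")
defs.now = lambda: "real"
sys.modules["defs"] = defs

user = types.ModuleType("user")
exec("from defs import now\ndef report(): return now()", user.__dict__)
sys.modules["user"] = user

# Patching where it's *defined* does not touch the copy of the name
# already bound into the using module:
with mock.patch("defs.now", return_value="fake"):
    assert user.report() == "real"

# Patching where it's *used* intercepts the call, as intended:
with mock.patch("user.now", return_value="fake"):
    assert user.report() == "fake"
```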
2715144c332a3b3e9032b0c9ddee592a1b572da
Wed, 1 May 2024 14:04:46 +0000
[PATCH 0887/1000] Add NEON-accelerated int8mm for bfloat16 (#125290)
As apparently `vshlq_u32` is faster than `vcvt_f32_f16`, refactor the NEON `tinygemm_kernel` to rely on `load_as_float32x4` and `load_as_float32x4x2`, and implement them for float16 (using vcvt), bfloat16 (using left shift) and plain float32 (not using anything). As a result, stories110M runs at 60 tokens/sec with f16, but at 6...
diff --git a/aten/src/ATen/native/cpu/int8mm_kernel.cpp b/aten/src/ATen/native/cpu/int8mm_kernel.cpp index 4ef6cde4a8..bd266030b2 100644 --- a/aten/src/ATen/native/cpu/int8mm_kernel.cpp +++ b/aten/src/ATen/native/cpu/int8mm_kernel.cpp @@ -185,17 +185,50 @@ inline void tinygemm_kernel( #if !defined(C10_MOBILE) && defin...
2.41.0
421f1b4a86ccd0305ac5b3343f8b18097c0adc8
Wed, 1 May 2024 14:27:37 +0000
[PATCH 0888/1000] docs: `torch.nn.utils.rnn`: docs improve (#123559)
docs: `torch.nn.utils.rnn`: docs improve Pull Request resolved: https://github.com/pytorch/pytorch/pull/123559 Approved by: https://github.com/mikaylagawarecki
diff --git a/torch/nn/utils/rnn.py b/torch/nn/utils/rnn.py index 27c902e9aa..1a62254f08 100644 --- a/torch/nn/utils/rnn.py +++ b/torch/nn/utils/rnn.py @@ -212,10 +212,10 @@ def pack_padded_sequence( ) -> PackedSequence: r"""Packs a Tensor containing padded sequences of variable length. - :attr:`input` can be...
2.41.0
e09f98c705e4851414cd8ddf21949177af2b13aa
Wed, 1 May 2024 14:31:31 +0000
[PATCH 0889/1000] Include support for the scatter gather cuda kernels to allow for comp… (#124809)
Fixes #121965 This PR hopes to add support for complex numbers in the scatter/gather related kernels. For brevity, I will only include `complex<float>` for now, as `complex<double>`, for example, will be more complicated. C++ unit tests are currently passing alongside tests in `test_scatter_gather_ops.py`. Python test sui...
diff --git a/aten/src/ATen/cuda/Atomic.cuh b/aten/src/ATen/cuda/Atomic.cuh index 56ee8f87e2..2fa55902f9 100644 --- a/aten/src/ATen/cuda/Atomic.cuh +++ b/aten/src/ATen/cuda/Atomic.cuh @@ -35,6 +35,26 @@ struct AtomicFPOp<at::Half> { } }; +template <> +struct AtomicFPOp<c10::complex<float>> { + template <typename ...
2.41.0
fde9a988c2b6ba5cd88ee884227563a5f669912
Wed, 1 May 2024 15:37:48 +0000
[PATCH 0891/1000] CI: Extending unit test coverage for aarch64 linux (#125255)
Adding core, dynamo and inductor unit tests for aarch64 linux CI runs. Pull Request resolved: https://github.com/pytorch/pytorch/pull/125255 Approved by: https://github.com/malfet, https://github.com/atalman
diff --git a/.ci/pytorch/test.sh b/.ci/pytorch/test.sh index 5a1e098636..c903a26998 100755 --- a/.ci/pytorch/test.sh +++ b/.ci/pytorch/test.sh @@ -1158,8 +1158,23 @@ test_executorch() { } test_linux_aarch64(){ - # TODO: extend unit tests list - python test/run_test.py --include test_modules test_mkldnn test_mkldn...
2.41.0
16f1ee4cc3e58dbd4755021e4b4b87a16b2aac2
Wed, 1 May 2024 15:48:48 +0000
[PATCH 0892/1000] [ez][CI] Move test_modules and test_schema_check off CI_SERIAL_LIST (#125193)
* Related https://github.com/pytorch/pytorch/pull/124085 As in title, move test_modules and test_schema_check off CI_SERIAL_LIST If things fail, they can get the serialTest decorator instead Pull Request resolved: https://github.com/pytorch/pytorch/pull/125193 Approved by: https://github.com/huydhn
diff --git a/test/run_test.py b/test/run_test.py index 9945859987..1b95bcb465 100755 --- a/test/run_test.py +++ b/test/run_test.py @@ -227,11 +227,9 @@ CI_SERIAL_LIST = [ "nn/test_pooling", "nn/test_convolution", # Doesn't respect set_per_process_memory_fraction, results in OOM for other tests in slow gradch...
2.41.0
d410155b241334b32872ff593d3480d10f57d5e
Wed, 1 May 2024 16:02:02 +0000
[PATCH 0893/1000] Revert "Include support for the scatter gather cuda kernels to allow for comp… (#124809)"
This reverts commit e09f98c705e4851414cd8ddf21949177af2b13aa. Reverted https://github.com/pytorch/pytorch/pull/124809 on behalf of https://github.com/clee2000 due to windows build failure is real, https://github.com/pytorch/pytorch/actions/runs/8910674030/job/24470387612#step:11:11236 is the correct failure line, igno...
diff --git a/aten/src/ATen/cuda/Atomic.cuh b/aten/src/ATen/cuda/Atomic.cuh index 2fa55902f9..56ee8f87e2 100644 --- a/aten/src/ATen/cuda/Atomic.cuh +++ b/aten/src/ATen/cuda/Atomic.cuh @@ -35,26 +35,6 @@ struct AtomicFPOp<at::Half> { } }; -template <> -struct AtomicFPOp<c10::complex<float>> { - template <typename ...
2.41.0
9eb5d4fa4a09cd462b51f460e71e16768188a44
Wed, 1 May 2024 16:59:35 +0000
[PATCH 0894/1000] Add Sanity Testing to Pytorch Profiler (#124773)
Summary: In the recent weeks, we have encountered bugs in both the normal synchronous trace and on-demand tracing. This diff on its own does sanity checking to make sure the profiler does not have spans that extend past the boundaries that we expect. It also checks some basic properties of the tracings we expect to see...
diff --git a/test/profiler/test_profiler.py b/test/profiler/test_profiler.py index 2ec04c447f..ff6dc640dd 100644 --- a/test/profiler/test_profiler.py +++ b/test/profiler/test_profiler.py @@ -22,6 +22,7 @@ import re import subprocess import sys import threading +import time import unittest from dataclasses import d...
2.41.0
3627d05e789c6cd20eeccce89dd733f02a808a9
Wed, 1 May 2024 17:26:28 +0000
[PATCH 0895/1000] [CMake] Add NVPL BLAS/LAPACK option (#125268)
This PR add a [NVPL](https://docs.nvidia.com/nvpl/introduction.html) BLAS/LAPACK option to CMake for `aarch64` (ARM) machines. Pull Request resolved: https://github.com/pytorch/pytorch/pull/125268 Approved by: https://github.com/albanD
diff --git a/cmake/Dependencies.cmake b/cmake/Dependencies.cmake index 5b30bef4fc..a8313f206f 100644 --- a/cmake/Dependencies.cmake +++ b/cmake/Dependencies.cmake @@ -237,6 +237,12 @@ elseif(BLAS STREQUAL "MKL") set(CAFFE2_USE_EIGEN_FOR_BLAS ON) set(CAFFE2_USE_MKL OFF) endif() +elseif(BLAS STREQUAL "NVPL")...
2.41.0
8d2a55273757c90989fde7c6f05e957aba9a238
Wed, 1 May 2024 17:45:11 +0000
[PATCH 0896/1000] Intel GPU: specify the tolerance for torchbench models (#125213)
We encountered some model accuracy failures as the tolerance is critical. In general, we align with CUDA practice. This PR intends to adjust the tolerance for Torchbench models for training mode on Intel GPU devices and aligns with CUDA. Pull Request resolved: https://github.com/pytorch/pytorch/pull/125213 Approved by...
diff --git a/benchmarks/dynamo/torchbench.py b/benchmarks/dynamo/torchbench.py index a6b4edb3a4..274d04da15 100755 --- a/benchmarks/dynamo/torchbench.py +++ b/benchmarks/dynamo/torchbench.py @@ -402,7 +402,7 @@ class TorchBenchmarkRunner(BenchmarkRunner): if name in self._tolerance["higher_bf16"]: ...
2.41.0
f6acf9add0f52a483c9ea073a7adfe4ab1233dd
Wed, 1 May 2024 18:30:57 +0000
[PATCH 0897/1000] [ROCm] Add extra cuda_to_hip_mappings.py (#125108)
Adding extra mappings discovered when hipifying the backward CUDA kernel of the Mamba model (https://github.com/state-spaces/mamba/). Co-authored-by: Jeff Daily <jeff.daily@amd.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/125108 Approved by: https://github.com/Skylion007, https://github.com/jeff...
diff --git a/torch/utils/hipify/cuda_to_hip_mappings.py b/torch/utils/hipify/cuda_to_hip_mappings.py index e48c595928..d07292a22b 100644 --- a/torch/utils/hipify/cuda_to_hip_mappings.py +++ b/torch/utils/hipify/cuda_to_hip_mappings.py @@ -643,7 +643,11 @@ CUDA_INCLUDE_MAP = collections.OrderedDict( ("thrust/sy...
2.41.0
7422fd0b9ad21eb9e44c68172d4a85b3d03769e
Wed, 1 May 2024 18:35:49 +0000
[PATCH 0898/1000] add missing space to first cmake append (#125294)
the first append not having a space incorrectly merges it to any previous arguments, like `-allow-unsupported-compiler` in my case which results in a silly error: `unrecognized command-line option '-allow-unsupported-compiler-DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS'` full log: ``` python setup.py develop Buil...
diff --git a/CMakeLists.txt b/CMakeLists.txt index 215ec7a81a..9ce1e06bea 100644 --- a/CMakeLists.txt +++ b/CMakeLists.txt @@ -56,7 +56,7 @@ endif() # This define is needed to preserve behavior given anticpated changes to cccl/thrust # https://nvidia.github.io/libcudacxx/standard_api/numerics_library/complex.html -...
2.41.0
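The failure mode behind this one-character CMake fix is generic: appending a flag string without a leading separator fuses it onto the previous argument, producing a single unrecognized token. A language-neutral illustration:

```python
# Without the separator, two valid flags collapse into one bogus one --
# exactly the "-allow-unsupported-compiler-DLIBCUDACXX_..." error above.
flags = "-allow-unsupported-compiler"
flags += "-DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS"   # missing space
assert flags.split() == [
    "-allow-unsupported-compiler-DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS"
]

# The fix: lead with a space so the appended flag stays a separate argument.
flags = "-allow-unsupported-compiler"
flags += " -DLIBCUDACXX_ENABLE_SIMPLIFIED_COMPLEX_OPERATIONS"
assert len(flags.split()) == 2
```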
97612c84c37eb99ef6d18f579e7fd8a52a31d28
Wed, 1 May 2024 19:59:51 +0000
[PATCH 0902/1000] ProcessGroupWrapper support custom backend (#124447)
Fixes #ISSUE_NUMBER In the current code, ProcessGroupWrapper works only for `GLOO, NCCL, UCC` when `TORCH_DISTRIBUTED_DEBUG=DETAIL`. I read the ProcessGroupWrapper code and found that a communication_op in ProcessGroupWrapper is just the communication_op of the origin backend plus runCollectiveChecks in gloo, like allreduce: https://github....
diff --git a/torch/distributed/distributed_c10d.py b/torch/distributed/distributed_c10d.py index 74f2ed5845..c006fbc08c 100644 --- a/torch/distributed/distributed_c10d.py +++ b/torch/distributed/distributed_c10d.py @@ -1602,7 +1602,7 @@ def _new_process_group_helper( break # Process group wrappe...
2.41.0
bcf123105a3f11d02f04067ca0cb377ed09e88c
Wed, 1 May 2024 20:59:17 +0000
[PATCH 0904/1000] Upgrade submodule oneDNN to v3.4 (#122472)
## Improvements This upgrade fixes the following issues: - https://github.com/pytorch/pytorch/issues/120982 This upgrade brings the following new features: - Introduced memory descriptor serialization API. This API is needed to support freezing on CPU in AOTInductor (https://github.com/pytorch/pytorch/issues/114450) ...
diff --git a/third_party/ideep b/third_party/ideep index 8a6cc4e09d..420c9806c8 160000 --- a/third_party/ideep +++ b/third_party/ideep @@ -1 +1 @@ -Subproject commit 8a6cc4e09dc509f04f83c085e38786b1fb44e14d +Subproject commit 420c9806c870711668a4bbc469f1d40839532fea diff --git a/third_party/mkl-dnn.BUILD b/third_party/...
2.41.0
46da8755c45c331dc4c7513ff89b531f978c2cd
Wed, 1 May 2024 21:01:26 +0000
[PATCH 0905/1000] switch tests from constrain_as* to torch._check* (#125253)
To fix data-dependent errors we want to recommend that people use `torch._check*` APIs. The `constrain_as*` APIs should be fully subsumed by them, and in the future we should kill them entirely. Differential Revision: D56774333 Pull Request resolved: https://github.com/pytorch/pytorch/pull/125253 Approved by: https:/...
diff --git a/test/distributed/test_c10d_functional_native.py b/test/distributed/test_c10d_functional_native.py index 99062d1bab..54030d1f1d 100644 --- a/test/distributed/test_c10d_functional_native.py +++ b/test/distributed/test_c10d_functional_native.py @@ -703,7 +703,7 @@ class CompileTest(TestCase): def _to...
2.41.0
d794bcb8a9e7dec578b1e8587dc6513cd24c2e7
Wed, 1 May 2024 13:00:06 -0400
[PATCH 0906/1000] Delete NegateSource handling, I think it's dead (#125311)
Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/125311 Approved by: https://github.com/Skylion007
diff --git a/torch/fx/experimental/symbolic_shapes.py b/torch/fx/experimental/symbolic_shapes.py index 748737a9e1..fdaeb3a060 100644 --- a/torch/fx/experimental/symbolic_shapes.py +++ b/torch/fx/experimental/symbolic_shapes.py @@ -3398,7 +3398,7 @@ class ShapeEnv: # TODO: Make this more efficient by binding al...
2.41.0
058563078e707b848b061a222d0195e7472be7d
Wed, 1 May 2024 09:51:21 -0700
[PATCH 0907/1000] support as_python_constant on PlacementClassVariable (#124398)
Fixes an error for torchtitan + internal Pull Request resolved: https://github.com/pytorch/pytorch/pull/124398 Approved by: https://github.com/ezyang, https://github.com/wanchaol, https://github.com/yoyoyocmu
diff --git a/torch/_dynamo/variables/distributed.py b/torch/_dynamo/variables/distributed.py index 2f4a2eb91e..6816ea9b1e 100644 --- a/torch/_dynamo/variables/distributed.py +++ b/torch/_dynamo/variables/distributed.py @@ -107,6 +107,9 @@ class PlacementClassVariable(DistributedVariable): return type(value) ...
2.41.0
173cbe260e8cfc9173da005a7b272c31511b05a
Wed, 1 May 2024 09:51:21 -0700
[PATCH 0908/1000] fix FakeTensor creation on noncontiguous subclasses (#124399)
Fixes https://github.com/pytorch/pytorch/issues/125287 Fixes https://github.com/pytorch/pytorch/issues/124090, context on the issue Pull Request resolved: https://github.com/pytorch/pytorch/pull/124399 Approved by: https://github.com/soulitzer ghstack dependencies: #124398
diff --git a/test/distributed/_tensor/test_dtensor_compile.py b/test/distributed/_tensor/test_dtensor_compile.py index f9ad0278d7..21ca8ae8f0 100644 --- a/test/distributed/_tensor/test_dtensor_compile.py +++ b/test/distributed/_tensor/test_dtensor_compile.py @@ -252,6 +252,43 @@ class TestDTensorCompile(torch._dynamo.t...
2.41.0
e9ba61fde8f97fc05ceaee6f4ccbf23fe6bf9e8
Wed, 1 May 2024 09:51:22 -0700
[PATCH 0909/1000] AOTAutograd: force tangents to be contiguous when subclass inner tensor is noncontiguous (#124400)
Fixes https://github.com/pytorch/pytorch/issues/124397 Pull Request resolved: https://github.com/pytorch/pytorch/pull/124400 Approved by: https://github.com/ezyang, https://github.com/yoyoyocmu ghstack dependencies: #124398, #124399
diff --git a/test/distributed/_tensor/test_dtensor_compile.py b/test/distributed/_tensor/test_dtensor_compile.py index 21ca8ae8f0..f726e35153 100644 --- a/test/distributed/_tensor/test_dtensor_compile.py +++ b/test/distributed/_tensor/test_dtensor_compile.py @@ -289,6 +289,41 @@ class TestDTensorCompile(torch._dynamo.t...
2.41.0
99a2e25f132b6ad7cc060d1dad4a4f374eb100d
Wed, 1 May 2024 09:56:32 -0700
[PATCH 0910/1000] Reland "make sure dynamo doesn't inline DTensor __new__ or __torch_dispatch__ (#123347)" (#125288)
Re-land of https://github.com/pytorch/pytorch/pull/123347. The original PR broke internal because of a circular import due to importing dynamo in the DTensor code. The new version uses `torch._dynamo_disable` to work around This reverts commit 9d88339b535f57cd0e2926c9ac4c2542e4490aac. Pull Request resolved: https://...
diff --git a/test/distributed/_tensor/test_dtensor_compile.py b/test/distributed/_tensor/test_dtensor_compile.py index f726e35153..b26c8900dd 100644 --- a/test/distributed/_tensor/test_dtensor_compile.py +++ b/test/distributed/_tensor/test_dtensor_compile.py @@ -191,6 +191,47 @@ class TestDTensorCompile(torch._dynamo.t...
2.41.0
06eda538b85c9fec7a35a4c8684cdef05fc5e9d
Wed, 1 May 2024 22:06:47 +0000
[PATCH 0911/1000] Fix windows build error not propagating (#125306)
* Fixes https://github.com/pytorch/pytorch/issues/124886 * Kind of similar to https://github.com/pytorch/pytorch/pull/109393 I think what happens is `exit` and `exit /b` propagate the errorlevel correctly, but `exit /b` only exits the currently running batch script and not the entire cmd.exe (or whatever program is r...
diff --git a/.ci/pytorch/win-test-helpers/build_pytorch.bat b/.ci/pytorch/win-test-helpers/build_pytorch.bat index 4b7bbad744..28bd083f98 100644 --- a/.ci/pytorch/win-test-helpers/build_pytorch.bat +++ b/.ci/pytorch/win-test-helpers/build_pytorch.bat @@ -17,22 +17,22 @@ set PATH=C:\Program Files\CMake\bin;C:\Program Fi...
2.41.0
451d108da6b903e5a4d21d55be7988080cac684
Wed, 1 May 2024 23:14:05 +0000
[PATCH 0912/1000] Implemented isin_Tensor_Tensor_out for MPS backend (#124896)
Addresses issue #124518, adds isin_Tensor_Tensor_out. Tests added to test_mps.py. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124896 Approved by: https://github.com/malfet, https://github.com/kulinseth
diff --git a/aten/src/ATen/native/mps/operations/TensorCompare.mm b/aten/src/ATen/native/mps/operations/TensorCompare.mm index 78fa67e57d..f378af1326 100644 --- a/aten/src/ATen/native/mps/operations/TensorCompare.mm +++ b/aten/src/ATen/native/mps/operations/TensorCompare.mm @@ -12,7 +12,9 @@ #include <ATen/ops/clamp_m...
2.41.0
cfb55dd5dfcffc06e039c9af2342f455dfd1007
Wed, 1 May 2024 23:19:07 +0000
[PATCH 0913/1000] Add a variable for some testcases. (#124708)
Some test cases can use 'TEST_PRIVATEUSE1_DEVICE_TYPE' to make adapting them to other devices more convenient. Fixes #ISSUE_NUMBER Pull Request resolved: https://github.com/pytorch/pytorch/pull/124708 Approved by: https://github.com/albanD
diff --git a/test/test_shape_ops.py b/test/test_shape_ops.py index 189187b582..47acfff9c6 100644 --- a/test/test_shape_ops.py +++ b/test/test_shape_ops.py @@ -12,7 +12,7 @@ import unittest from torch import nan from torch.testing import make_tensor from torch.testing._internal.common_utils import ( - TestCase, ru...
2.41.0
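The pattern this commit introduces — reading the target device type from an environment variable so tests can be retargeted to other backends — can be sketched in a few lines. The variable name comes from the commit; the default value is an assumption for illustration.

```python
import os

def test_device_type() -> str:
    """Device type tests should target, overridable via the environment
    (default here is illustrative)."""
    return os.environ.get("TEST_PRIVATEUSE1_DEVICE_TYPE", "cpu")

# A custom-backend CI run would export the variable before running tests:
os.environ["TEST_PRIVATEUSE1_DEVICE_TYPE"] = "npu"
assert test_device_type() == "npu"
```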
f5f405b057c7de0f5fce0b1432cb74468f96f95
Wed, 1 May 2024 23:29:55 +0000
[PATCH 0914/1000] [ncclx] Rename NCCL-EXP to NCCLX (#125238)
Reviewed By: kryanchun Differential Revision: D56534548 Pull Request resolved: https://github.com/pytorch/pytorch/pull/125238 Approved by: https://github.com/kwen2501
diff --git a/test/cpp/c10d/ProcessGroupNCCLTest.cpp b/test/cpp/c10d/ProcessGroupNCCLTest.cpp index d2dc02d323..d1c2380274 100644 --- a/test/cpp/c10d/ProcessGroupNCCLTest.cpp +++ b/test/cpp/c10d/ProcessGroupNCCLTest.cpp @@ -840,7 +840,7 @@ TEST_F(ProcessGroupNCCLTest, testSplittingCommunicator) { multiThreadRun(testS...
2.41.0
6f326eff56250d97e0df1f5e75943a72ebaa5c2
Wed, 1 May 2024 11:38:49 -0700
[PATCH 0915/1000] explicitly reset stderr/stdout in precompilation (#125289)
I was seeing a weird bug where after running max-autotune my stdout would be misdirected. other people have not been able to repro this. Pull Request resolved: https://github.com/pytorch/pytorch/pull/125289 Approved by: https://github.com/shunting314, https://github.com/mlazos
diff --git a/torch/_inductor/select_algorithm.py b/torch/_inductor/select_algorithm.py index 577a1c318b..8fcb441ed8 100644 --- a/torch/_inductor/select_algorithm.py +++ b/torch/_inductor/select_algorithm.py @@ -40,6 +40,7 @@ from .runtime.runtime_utils import do_bench from .utils import ( get_dtype_size, Pla...
2.41.0
043ccafdf133002ce6c6993429c419054ee9ad7
Tue, 30 Apr 2024 22:55:30 +0300
[PATCH 0916/1000] Require nnz==0 in sparse meta tensors (#125221)
As in the title and per discussion starting at https://github.com/pytorch/pytorch/pull/117907#issuecomment-2082426468 Pull Request resolved: https://github.com/pytorch/pytorch/pull/125221 Approved by: https://github.com/amjames, https://github.com/ezyang
diff --git a/aten/src/ATen/native/TensorConversions.cpp b/aten/src/ATen/native/TensorConversions.cpp index a6c1118c4e..c70da8334a 100644 --- a/aten/src/ATen/native/TensorConversions.cpp +++ b/aten/src/ATen/native/TensorConversions.cpp @@ -259,6 +259,9 @@ Tensor _to_copy( memory_format == MemoryFormat::Preser...
2.41.0
281d3a0cb435792c88f5015fafd33af40e2004e
Wed, 1 May 2024 23:44:53 +0000
[PATCH 0917/1000] Enable UFMT on test_indexing&test_view_ops (#125112)
Part of https://github.com/pytorch/pytorch/issues/123062 Pull Request resolved: https://github.com/pytorch/pytorch/pull/125112 Approved by: https://github.com/ezyang
diff --git a/.lintrunner.toml b/.lintrunner.toml index a945814b23..21dfdf5b3f 100644 --- a/.lintrunner.toml +++ b/.lintrunner.toml @@ -1071,7 +1071,6 @@ exclude_patterns = [ 'test/test_fx_reinplace_pass.py', 'test/test_hub.py', 'test/test_import_stats.py', - 'test/test_indexing.py', 'test/test_it...
2.41.0
022f131b54065ffbfd8e17b4be73a5228f1139a
Sun, 28 Apr 2024 12:27:29 -0700
[PATCH 0918/1000] [inductor] switch assume_aligned_inputs to False (#124336)
In #123319, we guard some behavior behind the `assume_aligned_inputs` config option. If we set this to `False`, then the behavior added in #123319 becomes the default behavior. See the referenced PR for more details about the behavior affected. Side effects: * It's possible that this will hurt performance in some scen...
diff --git a/test/inductor/test_cudagraph_trees.py b/test/inductor/test_cudagraph_trees.py index 33ee5247bd..c8877d4a8e 100644 --- a/test/inductor/test_cudagraph_trees.py +++ b/test/inductor/test_cudagraph_trees.py @@ -634,7 +634,7 @@ if HAS_CUDA and not TEST_WITH_ASAN: new_id = self.get_manager().new_grap...
2.41.0
1f142c44f81384afbdba5e451fc15744868bf26
Wed, 1 May 2024 23:56:00 +0000
[PATCH 0919/1000] Revert "Fakify script object inputs and attributes for non-strict export (#124239)"
This reverts commit ecc2e034f7e55bf9ff7f4e5df4e9086a5c92caaa. Reverted https://github.com/pytorch/pytorch/pull/124239 on behalf of https://github.com/kit1980 due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/124239#issuecomment-2089305447))
diff --git a/test/export/test_passes.py b/test/export/test_passes.py index e2724ead88..41597a6030 100644 --- a/test/export/test_passes.py +++ b/test/export/test_passes.py @@ -13,10 +13,6 @@ from typing import List, Set import torch from functorch.experimental.control_flow import cond from torch._dynamo.eval_frame im...
2.41.0
e24c263f998819f849bb8293323213101e9aefc
Wed, 1 May 2024 23:58:33 +0000
[PATCH 0920/1000] Include support for the scatter gather cuda kernels to allow for comp… (#124809)
Fixes #121965 This PR hopes to add support for complex numbers in the scatter/gather related kernels. For brevity, I will only include `complex<float>` for now, as `complex<double>`, for example, will be more complicated. C++ unit tests are currently passing alongside tests in `test_scatter_gather_ops.py`. Python test sui...
diff --git a/aten/src/ATen/NumericUtils.h b/aten/src/ATen/NumericUtils.h index 788da64b4e..421fe0efab 100644 --- a/aten/src/ATen/NumericUtils.h +++ b/aten/src/ATen/NumericUtils.h @@ -38,7 +38,11 @@ inline C10_HOST_DEVICE bool _isnan(T val) { template <typename T, std::enable_if_t<c10::is_complex<T>::value, int> = 0>...
2.41.0
731130ea8a83a33b14e5b566130b2f6888c2e6b
Tue, 30 Apr 2024 20:09:36 -0700
[PATCH 0923/1000] Add a code comment about torch._check_is_size in tensor_split (#125292)
Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/125292 Approved by: https://github.com/albanD
diff --git a/torch/_decomp/decompositions.py b/torch/_decomp/decompositions.py index 124ed8fb72..6cfccbab0d 100644 --- a/torch/_decomp/decompositions.py +++ b/torch/_decomp/decompositions.py @@ -1413,6 +1413,15 @@ def tensor_split_tensor_indices_or_sections_py_impl( return self.tensor_split(sections, dim) ...
2.41.0
119e1bcc2aa7e7d25bbad46cdcdfa06e8d3cc20
Thu, 2 May 2024 02:34:30 +0000
[PATCH 0924/1000] Fix refcount handling for dtype, layout and memory format (#125271)
Finish fixing https://github.com/pytorch/pytorch/issues/124868 re-use our wrap() utils as much as possible and NewRef in other places. Pull Request resolved: https://github.com/pytorch/pytorch/pull/125271 Approved by: https://github.com/colesbury
diff --git a/tools/autograd/templates/python_nn_functions.cpp b/tools/autograd/templates/python_nn_functions.cpp index f311cfebe4..4877df6584 100644 --- a/tools/autograd/templates/python_nn_functions.cpp +++ b/tools/autograd/templates/python_nn_functions.cpp @@ -60,14 +60,14 @@ static PyObject * THPVariable__parse_to(P...
2.41.0
ea54839c90896d9ba1fcf1f26472f8b477954f0
Wed, 1 May 2024 07:02:48 -0700
[PATCH 0925/1000] Make min(stride, strides[idx]) in collapse_view_helper size oblivious (#125301)
Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/125301 Approved by: https://github.com/albanD
diff --git a/torch/_prims/__init__.py b/torch/_prims/__init__.py index 43ba508175..116671703e 100644 --- a/torch/_prims/__init__.py +++ b/torch/_prims/__init__.py @@ -1366,7 +1366,10 @@ def _collapse_view_helper( continue length = length * shape[idx] - stride = min(stride, strides[idx]) +...
2.41.0
ff7a31800296669d0dc880282934efa45d4c339
Thu, 2 May 2024 03:38:32 +0000
[PATCH 0927/1000] fix torchdeploy issue on sharddim_alltoall op (#125344)
Summary: fix torchdeploy issues when registering the distributed op, similar to what functional collective did Differential Revision: D56850434 Pull Request resolved: https://github.com/pytorch/pytorch/pull/125344 Approved by: https://github.com/XilunWu, https://github.com/fegin
diff --git a/torch/distributed/_tensor/_collective_utils.py b/torch/distributed/_tensor/_collective_utils.py index ce4809d996..fe712f3cb2 100644 --- a/torch/distributed/_tensor/_collective_utils.py +++ b/torch/distributed/_tensor/_collective_utils.py @@ -23,11 +23,20 @@ from torch.distributed.distributed_c10d import ( ...
2.41.0
a5d2d9b3ed8c61fb463226854ba92cd0fb682bd
Wed, 1 May 2024 16:41:09 -0700
[PATCH 0928/1000] Hotfix: restore CPP guard string in structured trace (#125303)
Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/125303 Approved by: https://github.com/albanD
diff --git a/test/dynamo/test_structured_trace.py b/test/dynamo/test_structured_trace.py index 07f541edbe..1213a48ffc 100644 --- a/test/dynamo/test_structured_trace.py +++ b/test/dynamo/test_structured_trace.py @@ -141,6 +141,7 @@ class StructuredTraceTest(TestCase): {"inductor_post_grad_graph": {}, "frame_id": 0, "fr...
2.41.0
b70026d3b549782ee194c85a0a0f6805cceaed7
Wed, 1 May 2024 16:55:30 -0700
[PATCH 0930/1000] Do not pass none to has_pending_mutation (#125359)
Fixes https://github.com/pytorch/pytorch/issues/125315. Several failures when inlining nn modules is enabled are due to the previous code passing None to has_pending_mutation. It sounds like the variable is expected to be None when not found; in that case we should skip it and not call has_pending_mutation. this is test...
diff --git a/torch/_dynamo/variables/optimizer.py b/torch/_dynamo/variables/optimizer.py index e183f7a5e5..f7d2056aa2 100644 --- a/torch/_dynamo/variables/optimizer.py +++ b/torch/_dynamo/variables/optimizer.py @@ -112,9 +112,8 @@ class OptimizerVariable(UserDefinedObjectVariable): for g in self.value.param_gr...
2.41.0
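The guard-against-None pattern described in the commit above can be sketched in plain Python. The names `pending_mutation_names` and the lookup table are hypothetical, purely for illustration; the point is skipping untracked entries instead of passing None into the mutation check:

```python
def pending_mutation_names(params, lookup):
    """Collect names of params whose tracked variable has a pending mutation.

    `lookup` may return None when a param has no tracked variable yet; in
    that case we skip it instead of calling into the mutation check.
    """
    names = []
    for p in params:
        var = lookup(p)
        if var is None:
            # not found: expected, per the fix above -- just skip
            continue
        if var.get("pending_mutation"):
            names.append(var["name"])
    return names

table = {"w": {"name": "w", "pending_mutation": True}}
assert pending_mutation_names(["w", "b"], table.get) == ["w"]
```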
13a0a247906941438717e74b7990c26b2246e0c
Thu, 2 May 2024 00:22:14 -0700
[PATCH 0931/1000] [dynamo][easy] Simple fixes to prepare for nn module guards (#125316)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125316 Approved by: https://github.com/williamwen42 ghstack dependencies: #125275
diff --git a/test/dynamo/test_modules.py b/test/dynamo/test_modules.py index de0e66e59f..b46844f219 100644 --- a/test/dynamo/test_modules.py +++ b/test/dynamo/test_modules.py @@ -1961,6 +1961,7 @@ class OptimizedModuleTest(torch._dynamo.test_case.TestCase): self.assertEqual(compiled_func(inp).item(), 16) ...
2.41.0
b1bfe115693d4dd3cc2be68a2ea62717bc7f18e
Thu, 2 May 2024 15:17:10 +0000
[PATCH 0933/1000] Get cutlass_library import working under fbcode (#125257)
Differential Revision: D56764089 Pull Request resolved: https://github.com/pytorch/pytorch/pull/125257 Approved by: https://github.com/chenyang78
diff --git a/torch/_inductor/codegen/cuda/cutlass_utils.py b/torch/_inductor/codegen/cuda/cutlass_utils.py index ff60525548..62465b0883 100644 --- a/torch/_inductor/codegen/cuda/cutlass_utils.py +++ b/torch/_inductor/codegen/cuda/cutlass_utils.py @@ -10,6 +10,7 @@ from typing import Any, List, Optional import sympy ...
2.41.0
93b57a57031b893139a6b334cf64b6201dde8c4
Tue, 30 Apr 2024 17:08:35 -0700
[PATCH 0934/1000] Add propagate_real_tensors mode for unbacked (#125115)
A common complaint when working with data-dependent code in PyTorch is that it's hard to tell how far you are from the finish line: every time a GuardOnDataDependentSymNode error is hit, you have to somehow fix or workaround it to see the next one. This PR adds a new mode `torch._functorch.config.fake_tensor_propagate...
diff --git a/benchmarks/dynamo/common.py b/benchmarks/dynamo/common.py index 2ecadef60e..48ea0496da 100644 --- a/benchmarks/dynamo/common.py +++ b/benchmarks/dynamo/common.py @@ -75,6 +75,7 @@ except ImportError: graph_break_reasons, maybe_enable_compiled_autograd, ) +import torch._functorch.conf...
2.41.0
eb7b8eb6085322c96ba0bbacb039ef6f3b2875b
Wed, 1 May 2024 23:19:07 -0700
[PATCH 0935/1000] [PT2D] Ensure the trace rules are correct with distributed (#125333)
Summary: 1. Avoid using `torch._dynamo.disable`. 2. Clear the LRU cache of the trace rules. This won't do anything if rules are not evaluated before PG initialization. Pull Request resolved: https://github.com/pytorch/pytorch/pull/125333 Approved by: https://github.com/yanboliang
diff --git a/torch/_dynamo/trace_rules.py b/torch/_dynamo/trace_rules.py index ab6f52c7fb..954f3997e2 100644 --- a/torch/_dynamo/trace_rules.py +++ b/torch/_dynamo/trace_rules.py @@ -3541,3 +3541,11 @@ def lookup_inner( return SkipFunctionVariable else: return UserFunctionVariable + + +def clear_...
2.41.0
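Why clearing the LRU cache matters can be shown with a minimal stand-in (the `is_traceable`/`ALLOWED` names are illustrative, not the real trace-rules API): a cached verdict computed before process-group initialization stays stale until the cache is cleared.

```python
import functools

ALLOWED = set()

@functools.lru_cache(maxsize=None)
def is_traceable(name):
    # the verdict is memoized; if ALLOWED changes later, entries go stale
    return name in ALLOWED

assert not is_traceable("init_process_group")
ALLOWED.add("init_process_group")
assert not is_traceable("init_process_group")  # stale cached verdict
is_traceable.cache_clear()                     # analogous to the PR's fix
assert is_traceable("init_process_group")
```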
a991fac221e8c03ed8924498e58c0ea48416a5c
Thu, 2 May 2024 17:16:02 +0000
[PATCH 0936/1000] [ROCm][CI] upgrade CI to ROCm 6.1 (#124300)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124300 Approved by: https://github.com/malfet
diff --git a/.ci/docker/build.sh b/.ci/docker/build.sh index 1b8ed8df93..a45516f46a 100755 --- a/.ci/docker/build.sh +++ b/.ci/docker/build.sh @@ -204,7 +204,7 @@ case "$image" in PROTOBUF=yes DB=yes VISION=yes - ROCM_VERSION=5.7 + ROCM_VERSION=6.0 NINJA_VERSION=1.9.0 CONDA_CMAKE=yes ...
2.41.0
741fb36803febc7e5db28e722746a872b3a4884
Thu, 2 May 2024 08:53:52 -0700
[PATCH 0937/1000] [DCP] Introduce async staging extension points (#122965)
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #124944 * #124939 * __->__ #122965 Differential Revision: [D55493240](https://our.internmc.facebook.com/intern/diff/D55493240/) *This PR is now ready for merge and is not an RFC* Major choices are: -- the introduction of the AsyncStager p...
diff --git a/docs/source/distributed.checkpoint.rst b/docs/source/distributed.checkpoint.rst index 86f69f9d22..e948b66e1e 100644 --- a/docs/source/distributed.checkpoint.rst +++ b/docs/source/distributed.checkpoint.rst @@ -29,6 +29,12 @@ The entrypoints to load and save a checkpoint are the following: .. autofunction:...
2.41.0
99f1460af159a1417bb3ef1f088347cb8d89434
Thu, 2 May 2024 08:53:54 -0700
[PATCH 0938/1000] [DCP] Provides default AsyncStager (#124939)
Differential Revision: [D56575987](https://our.internmc.facebook.com/intern/diff/D56575987/) Pull Request resolved: https://github.com/pytorch/pytorch/pull/124939 Approved by: https://github.com/fegin ghstack dependencies: #122965
diff --git a/docs/source/distributed.checkpoint.rst b/docs/source/distributed.checkpoint.rst index e948b66e1e..573faa429b 100644 --- a/docs/source/distributed.checkpoint.rst +++ b/docs/source/distributed.checkpoint.rst @@ -36,6 +36,9 @@ The following module is also useful for additional customization of the staging .....
2.41.0
bbfb708315137ad709656f7df76dc7871346f59
Thu, 2 May 2024 20:30:49 +0000
[PATCH 0939/1000] ignore unsupported module from flop counter (#125346)
Summary: TorchScript modules do not support forward hooks and thus can't work with the flop_counter context manager for hierarchical output by passing a module to FlopCounterMode on construction. Currently, any module that includes a script module causes an exception to be thrown, so this adds a try/catch to ignore any script ...
diff --git a/torch/utils/flop_counter.py b/torch/utils/flop_counter.py index 42868dc359..09bb039ea6 100644 --- a/torch/utils/flop_counter.py +++ b/torch/utils/flop_counter.py @@ -388,11 +388,19 @@ class FlopCounterMode(TorchDispatchMode): else: name = ".".join([prefix, name]) - ...
2.41.0
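The try/catch approach from the commit above can be sketched without torch; `ScriptedModule`, `PlainModule`, and `install_hooks` below are illustrative stand-ins, not the actual FlopCounterMode internals:

```python
class ScriptedModule:
    def register_forward_hook(self, hook):
        # mirrors TorchScript modules, which reject forward hooks
        raise RuntimeError("forward hooks are not supported")

class PlainModule:
    def register_forward_hook(self, hook):
        self.hook = hook

def install_hooks(named_modules):
    installed = []
    for name, mod in named_modules:
        try:
            mod.register_forward_hook(lambda *args: None)
            installed.append(name)
        except RuntimeError:
            # ignore modules that cannot take hooks instead of aborting
            pass
    return installed

assert install_hooks([("plain", PlainModule()), ("scripted", ScriptedModule())]) == ["plain"]
```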
f62494bf9c6915189ffd2ec1dbef0c5dd8b91ae
Thu, 2 May 2024 08:53:56 -0700
[PATCH 0940/1000] [DCP] Move async logic into filesystem for better encapsulation (#124944)
This logic is specific to FilesystemWriter, and now has a better place to live due to the new AsyncStager class Differential Revision: [D56578436](https://our.internmc.facebook.com/intern/diff/D56578436/) Pull Request resolved: https://github.com/pytorch/pytorch/pull/124944 Approved by: https://github.com/fegin ghsta...
diff --git a/torch/distributed/checkpoint/filesystem.py b/torch/distributed/checkpoint/filesystem.py index 3672e2401b..07bc85560e 100644 --- a/torch/distributed/checkpoint/filesystem.py +++ b/torch/distributed/checkpoint/filesystem.py @@ -29,8 +29,8 @@ import torch from torch import Tensor from torch._utils import _g...
2.41.0
ae574c7133de4ee36b1802f1ac113be6d644c1e
Thu, 2 May 2024 11:22:54 -0400
[PATCH 0941/1000] Don't make replacements for i variables (#125398)
This was introduced in https://github.com/pytorch/pytorch/pull/110262 but actually it looks like they were trying to hit unbacked SymInt. Now that unbacked SymInt is renamed to u, this code is no longer necessary Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: https://github.com/pytorch/pytorch...
diff --git a/torch/_inductor/codegen/common.py b/torch/_inductor/codegen/common.py index 7126d565cf..a044fd3210 100644 --- a/torch/_inductor/codegen/common.py +++ b/torch/_inductor/codegen/common.py @@ -1683,7 +1683,6 @@ class Kernel(CodeGen): x: self.args.size(x) for x in sorted_symbols ...
2.41.0
c76764a567c3e2425432671de04c4cfc705b745
Thu, 2 May 2024 20:43:17 +0000
[PATCH 0942/1000] Always pass down kernel_file and grid as string (#125384)
From my tests with an Ads production workload, I found that sometimes kernel_file is None and grid is a tuple. It will crash since ExecutionTraceObserver expects strings for both kernel_file and grid. This PR makes sure kernel_file and grid are always passed down as strings. Need to find the root cause why kernel_file is non...
diff --git a/torch/_inductor/runtime/triton_heuristics.py b/torch/_inductor/runtime/triton_heuristics.py index 6d49c60b0d..b66bdbf393 100644 --- a/torch/_inductor/runtime/triton_heuristics.py +++ b/torch/_inductor/runtime/triton_heuristics.py @@ -794,14 +794,15 @@ class CachingAutotuner(KernelInterface): # man...
2.41.0
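The coercion described above amounts to a small normalization step. This sketch uses a hypothetical `normalize_trace_args` helper (not the actual PR code) to show the idea of defensively stringifying before handing values to the observer:

```python
def normalize_trace_args(kernel_file, grid):
    # the observer expects strings; coerce None and tuples defensively
    kernel_file = "" if kernel_file is None else str(kernel_file)
    grid = grid if isinstance(grid, str) else str(grid)
    return kernel_file, grid

assert normalize_trace_args(None, (1, 2, 3)) == ("", "(1, 2, 3)")
assert normalize_trace_args("k.py", "grid_1") == ("k.py", "grid_1")
```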
199ce8d6c7149fb4ba9e1d92cb133abe4ac0667
Tue, 30 Apr 2024 17:04:50 -0700
[PATCH 0944/1000] [pipelining] Add microbatch split and merge utils (#125273)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125273 Approved by: https://github.com/H-Huang ghstack dependencies: #124776, #124875, #124958
diff --git a/docs/source/distributed.pipelining.rst b/docs/source/distributed.pipelining.rst index ec7423f261..0083fe4226 100644 --- a/docs/source/distributed.pipelining.rst +++ b/docs/source/distributed.pipelining.rst @@ -176,3 +176,16 @@ Note that since we split our model into three stages, we must run this script wi...
2.41.0
dad82fc906f7e738bca1f918a97ccbc534c43ab
Thu, 2 May 2024 21:28:23 +0000
[PATCH 0945/1000] Add private helper for determining which version of FA2 closest matches kernel version (#123653)
Fixes #ISSUE_NUMBER Pull Request resolved: https://github.com/pytorch/pytorch/pull/123653 Approved by: https://github.com/mikaylagawarecki
diff --git a/torch/nn/attention/__init__.py b/torch/nn/attention/__init__.py index fca8055ad2..fc4835f046 100644 --- a/torch/nn/attention/__init__.py +++ b/torch/nn/attention/__init__.py @@ -115,3 +115,8 @@ def sdpa_kernel(backends: Union[List[SDPBackend], SDPBackend]): enable_flash_sdp(previous_flash) ...
2.41.0
1b03992d02911ac59f8320ba08eed8bf002ad84
Thu, 2 May 2024 21:29:28 +0000
[PATCH 0946/1000] Merge the pyi files into py files of optimizer (#125153)
Merge the interfaces in pyi files into py files in `torch/optim`. Pull Request resolved: https://github.com/pytorch/pytorch/pull/125153 Approved by: https://github.com/janeyx99
diff --git a/torch/optim/adadelta.py b/torch/optim/adadelta.py index 4061a6b68f..d0cf2b32dd 100644 --- a/torch/optim/adadelta.py +++ b/torch/optim/adadelta.py @@ -1,4 +1,4 @@ -from typing import List, Optional +from typing import Any, Dict, List, Optional import torch from torch import Tensor @@ -13,6 +13,7 @@ from...
2.41.0
0e2f62edd79ebdc21c3b8c16bf480db1af168a0
Thu, 2 May 2024 21:36:18 +0000
[PATCH 0947/1000] Revert "Include support for the scatter gather cuda kernels to allow for comp… (#124809)"
This reverts commit 9e24c263f998819f849bb8293323213101e9aefc. Reverted https://github.com/pytorch/pytorch/pull/124809 on behalf of https://github.com/kit1980 due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/124809#issuecomment-2091751002))
diff --git a/aten/src/ATen/NumericUtils.h b/aten/src/ATen/NumericUtils.h index 421fe0efab..788da64b4e 100644 --- a/aten/src/ATen/NumericUtils.h +++ b/aten/src/ATen/NumericUtils.h @@ -38,11 +38,7 @@ inline C10_HOST_DEVICE bool _isnan(T val) { template <typename T, std::enable_if_t<c10::is_complex<T>::value, int> = 0>...
2.41.0
c8237c6aa32ab7470e76bde03f2d3dcb9dd42a1
Thu, 2 May 2024 22:26:39 +0000
[PATCH 0949/1000] [ATen-VK] Resolve compiler_flags to allow Mac build (#125361)
Summary: ## `-Wmissing-prototypes` In ATen-Vulkan, we often define functions in `.cpp` files without declaring them in `.h` files without hiding them in an anonymous namespace. Example: [`Packing.cpp`'s channel_image_repacking()](https://github.com/pytorch/pytorch/blob/f1f142c44f81384afbdba5e451fc15744868bf26/aten/sr...
diff --git a/aten/src/ATen/native/vulkan/api/Adapter.cpp b/aten/src/ATen/native/vulkan/api/Adapter.cpp index ff1ea41cf6..173479a0c2 100644 --- a/aten/src/ATen/native/vulkan/api/Adapter.cpp +++ b/aten/src/ATen/native/vulkan/api/Adapter.cpp @@ -44,10 +44,10 @@ PhysicalDevice::PhysicalDevice(VkPhysicalDevice physical_devi...
2.41.0
b5f6b10add7515064f20dace8f80b6226d26ada
Thu, 2 May 2024 22:51:03 +0000
[PATCH 0951/1000] [Inductor] default block size for head_dim = 256 for flex attention (#125380)
## H100 ### torch.bfloat16 No major change, as expected. ``` | Type | Speedup | batch_size | num_heads | q_seq_len | k_seq_len | head_dim | score_mod | dtype | |---------|-----------|--------------|-------------|-------------|-------------|------------|-------------|----------------| | Average...
diff --git a/benchmarks/transformer/score_mod.py b/benchmarks/transformer/score_mod.py index e337049707..0e9e8d11a3 100644 --- a/benchmarks/transformer/score_mod.py +++ b/benchmarks/transformer/score_mod.py @@ -211,7 +211,7 @@ def generate_experiment_configs() -> List[ExperimentConfig]: batch_sizes = [1, 8, 16] ...
2.41.0
551755cec99adef3b9210fd85996fc304bed349
Thu, 2 May 2024 23:34:18 +0000
[PATCH 0952/1000] Update tolerance for flex fp32 (#125444)
# Summary Updates the tolerances to account for internal failure Pull Request resolved: https://github.com/pytorch/pytorch/pull/125444 Approved by: https://github.com/kit1980
diff --git a/test/inductor/test_flex_attention.py b/test/inductor/test_flex_attention.py index e5f31e7bcb..e5aa2aeb86 100644 --- a/test/inductor/test_flex_attention.py +++ b/test/inductor/test_flex_attention.py @@ -143,7 +143,7 @@ class TestTemplatedSDPA(InductorTestCase): # Note, it seems like we really are l...
2.41.0
440d0755ab4659c450de1bc901839e675f253ad
Thu, 2 May 2024 23:44:09 +0000
[PATCH 0953/1000] Support custom layout call under torch dispatch mode (#125379)
Fixes #ISSUE_NUMBER Pull Request resolved: https://github.com/pytorch/pytorch/pull/125379 Approved by: https://github.com/jbschlosser
diff --git a/test/test_nestedtensor.py b/test/test_nestedtensor.py index 3df4b63801..31ea10a930 100644 --- a/test/test_nestedtensor.py +++ b/test/test_nestedtensor.py @@ -3878,6 +3878,14 @@ class TestNestedTensorSubclass(TestCase): self.assertTrue(not nt_noncontiguous.is_contiguous(memory_format=torch.contiguo...
2.41.0
18a6f46d01db665cf092d501fb9597cdac91a33
Fri, 3 May 2024 00:50:49 +0000
[PATCH 0954/1000] Adding Compare in torch.utils.benchmark documentation (#125009)
`torch.utils.benchmark.Compare` is not directly exposed in the torch.utils.benchmark documentation. I think this is a valuable resource to add since it can help people embrace the torch benchmark way of doing things, and help people build documentation towards it. Pull Request resolved: https://github.com/pytorch/py...
diff --git a/docs/source/benchmark_utils.rst b/docs/source/benchmark_utils.rst index c93fbfd66c..7546179c50 100644 --- a/docs/source/benchmark_utils.rst +++ b/docs/source/benchmark_utils.rst @@ -19,6 +19,9 @@ Benchmark Utils - torch.utils.benchmark .. autoclass:: FunctionCounts :members: +.. autoclass:: Compare...
2.41.0
cac7aa70ffc62090ab12211c36a3ac37e73ed0b
Fri, 3 May 2024 01:18:52 +0000
[PATCH 0955/1000] [CI] Unskip Linalg tests on ARM (#125377)
Removes obscure "Issue with numpy version on arm" added by https://github.com/pytorch/pytorch/pull/82213 And replaces it with 4 targeted skips: - test_addmv for `float16` - test_vector_norm for `float16`, `bfloat16` and `float32` Followups to fix them are tracked in https://github.com/pytorch/pytorch/issues/125438 Pul...
diff --git a/test/test_linalg.py b/test/test_linalg.py index 8976d81c5a..0a5577d54c 100644 --- a/test/test_linalg.py +++ b/test/test_linalg.py @@ -53,7 +53,6 @@ def blaslt_supported_device(): return True return False -@unittest.skipIf(IS_ARM64, "Issue with numpy version on arm") class TestLinalg(Te...
2.41.0
15da7856caf2deacbf0d9adabd53e5e3f0d9a1d
Fri, 3 May 2024 01:19:21 +0000
[PATCH 0956/1000] [MPS] Fix overflow in cumsum when dtype is bool (#125318)
`cumsum` and `cumprod` were (are?) buggy for MPS: https://github.com/pytorch/pytorch/blob/c8d2a55273757c90989fde7c6f05e957aba9a238/aten/src/ATen/native/mps/operations/UnaryOps.mm#L435-L436 A workaround casts the input to int32 prior to performing the op to prevent overflow for certain numeric types. It turns out this i...
diff --git a/aten/src/ATen/native/mps/operations/UnaryOps.mm b/aten/src/ATen/native/mps/operations/UnaryOps.mm index c76a60e4a4..7709c79db6 100644 --- a/aten/src/ATen/native/mps/operations/UnaryOps.mm +++ b/aten/src/ATen/native/mps/operations/UnaryOps.mm @@ -434,7 +434,7 @@ static void cumulative_op_impl(const Tensor& ...
2.41.0
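The cast-before-accumulate workaround mentioned above is easy to illustrate in pure Python. The `cumsum_bool` helper is a hypothetical analogue, not the MPS kernel itself; the point is widening the element type before the running sum so it cannot overflow a one-bit type:

```python
from itertools import accumulate

def cumsum_bool(values):
    # cast to a wider integer type before accumulating, mirroring the
    # cast-to-int32 workaround for the MPS cumsum overflow
    return list(accumulate(int(v) for v in values))

assert cumsum_bool([True, True, False, True]) == [1, 2, 2, 3]
```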
40d6df448de1acb263ed8f6ff9e7d26f5a1a161
Fri, 3 May 2024 03:50:55 +0000
[PATCH 0957/1000] [MPS] Native nonzero implementation (#125355)
Fixes https://github.com/pytorch/pytorch/issues/124850 Replace previous MPSGraph nonzero construction with native nonzero op. For older OSes, fallback to CPU (previous implementation was not reliable and was comparable to CPU in speed). Pull Request resolved: https://github.com/pytorch/pytorch/pull/125355 Approved by...
diff --git a/aten/src/ATen/native/mps/operations/Indexing.mm b/aten/src/ATen/native/mps/operations/Indexing.mm index d86f57c49f..38c3212500 100644 --- a/aten/src/ATen/native/mps/operations/Indexing.mm +++ b/aten/src/ATen/native/mps/operations/Indexing.mm @@ -241,14 +241,20 @@ static void index_put_kernel_mps(TensorIter...
2.41.0
156cb2e1279bd44d140fdb469d41dda5d0c40c2
Fri, 3 May 2024 04:42:38 +0000
[PATCH 0958/1000] Fix mem size mismatch from split/chunk in const folding (#125199)
Summary: The chunk/split ops on the weights/constants are folded in an fx pass, and each output tensor has the same storage size as the original tensor (which is 3x its actual size for chunk(3)). However, the backend calculates the mem size on device from tensor shape/stride/dtype. This causes the mismatch when copying weig...
diff --git a/torch/fx/experimental/const_fold.py b/torch/fx/experimental/const_fold.py index 548d1d3852..8176ccb562 100644 --- a/torch/fx/experimental/const_fold.py +++ b/torch/fx/experimental/const_fold.py @@ -60,7 +60,7 @@ class FoldedGraphModule(torch.fx.GraphModule): def _create_param(i): re...
2.41.0
706da2bad835f521a1062b80aa6c631730a2064
Thu, 2 May 2024 16:29:05 -0700
[PATCH 0959/1000] [dynamo][cpp-guards] Improve recompilation reason logic for NO_TENSOR_ALIASING guard (#125439)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125439 Approved by: https://github.com/williamwen42
diff --git a/torch/_dynamo/guards.py b/torch/_dynamo/guards.py index fb4cb7a039..c04d114336 100644 --- a/torch/_dynamo/guards.py +++ b/torch/_dynamo/guards.py @@ -132,6 +132,7 @@ class GuardManager: self.cache_entry = None self.extra_state = None self.id_matched_objs = None + self.no_t...
2.41.0
f757a5c004641c7ef4781aad4a20196e7dd8b9e
Fri, 3 May 2024 04:59:17 +0000
[PATCH 0961/1000] [export] use tree_map for _flatten_dynamic_shapes (#125415)
Summary: Fixing the implementation of `_flatten_dynamic_shapes()`, to follow how `_process_dynamic_shapes()` does it. The previous implementation would misinterpret some nested dynamic shapes specs, causing it to miss out on some shapes specs, for example with nested inputs/constant input tuples: ``` inputs = ( (2, 1)...
diff --git a/test/export/test_export.py b/test/export/test_export.py index c7b6f53aaf..8aede91977 100644 --- a/test/export/test_export.py +++ b/test/export/test_export.py @@ -4561,6 +4561,35 @@ def forward(self, x): self.assertEqual(div_spec.arg.name, "div") self.assertEqual(div_spec.arg.value, "floor...
2.41.0
71ee40793c21858cc7ccb9855f930549c35777d
Thu, 2 May 2024 16:29:05 -0700
[PATCH 0962/1000] [dynamo][nn module] Check for duplicate tensors in register_attr_or_module (#125421)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125421 Approved by: https://github.com/mlazos ghstack dependencies: #125439
diff --git a/torch/_dynamo/output_graph.py b/torch/_dynamo/output_graph.py index 0be89d59e2..7e0d2e6197 100644 --- a/torch/_dynamo/output_graph.py +++ b/torch/_dynamo/output_graph.py @@ -767,22 +767,32 @@ class OutputGraph: # are registered as get_attr nodes in the root graph. tracer =...
2.41.0
c8478974367753eb1ff3c82dbd8922193ab8416
Fri, 3 May 2024 05:55:25 +0000
[PATCH 0963/1000] [vision hash update] update the pinned vision hash (#123227)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml). Update the pinned vision hash. Pull Request resolved: https://github.com/pytorch/pytorch/pull/123227 Approved by: https://github.com/pytorchbot
diff --git a/.github/ci_commit_pins/vision.txt b/.github/ci_commit_pins/vision.txt index 15cb85caca..50412178ab 100644 --- a/.github/ci_commit_pins/vision.txt +++ b/.github/ci_commit_pins/vision.txt @@ -1 +1 @@ -2c4665ffbb64f03f5d18016d3398af4ac4da5f03 +06ad737628abc3a1e617571dc03cbdd5b36ea96a
2.41.0
d92637f445d2787f83829079276f71b1ad1fc7c
Thu, 2 May 2024 21:19:48 -0700
[PATCH 0964/1000] Add `write_record_metadata` to PyTorchFileWriter (#125184)
Add `PyTorchFileWriter.write_record_metadata(record_name, num_bytes)` that - writes the zipfile header/end of central directory metadata for an entry* - reserves `num_bytes` in the zipfile for the payload. *Since the payload is not provided, the CRC32 computation is skipped and 0s are written in the corresponding entr...
diff --git a/caffe2/serialize/inline_container.cc b/caffe2/serialize/inline_container.cc index 533fd42a04..173153e805 100644 --- a/caffe2/serialize/inline_container.cc +++ b/caffe2/serialize/inline_container.cc @@ -612,15 +612,35 @@ size_t ostream_write_func( return ret; } +// This func will not update combined_u...
2.41.0
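A stdlib analogy for the reserve-then-patch idea above (this is not the actual `PyTorchFileWriter` C++ API; the archive name and sizes are made up): write a zeroed placeholder of the desired length now, so the entry's header and reserved bytes exist at a known offset, with the CRC being meaningless until the real payload lands.

```python
import io
import zipfile

# reserve num_bytes for a record up front by writing zeros; the real
# payload can be patched in later at the entry's known offset (the CRC32
# is then not meaningful, which is why the real API skips computing it)
num_bytes = 16
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_STORED) as zf:
    zf.writestr("archive/data/0", b"\x00" * num_bytes)
with zipfile.ZipFile(buf) as zf:
    assert zf.getinfo("archive/data/0").file_size == num_bytes
```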
5cc7ada6706f737959d8488d96028b3eb29aeea
Thu, 2 May 2024 20:50:18 -0700
[PATCH 0965/1000] skip triton template precompilation in 311.0-3.11.7 to workaround 311 cpython bug (#125446)
Fix for https://github.com/pytorch/pytorch/issues/125374. We don't have CI for these specific versions, but I verified locally. There is a cpython bug from 3.11.0->3.11.7 where the ast parsing state is global, and errors with multiple threads. When the dust settles a little around the new process-based compilation we can loo...
diff --git a/torch/_inductor/select_algorithm.py b/torch/_inductor/select_algorithm.py index 8fcb441ed8..23f73c6207 100644 --- a/torch/_inductor/select_algorithm.py +++ b/torch/_inductor/select_algorithm.py @@ -976,6 +976,14 @@ class AlgorithmSelectorCache(PersistentCache): if num_workers <= 0: ...
2.41.0
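The version gate implied by the commit above can be sketched as a pure function over a version tuple (`parallel_precompile_ok` is a hypothetical name, not the actual code in `select_algorithm.py`):

```python
def parallel_precompile_ok(version_info):
    # CPython 3.11.0-3.11.7 has global parser state that is unsafe to
    # share across threads, so template precompilation is skipped there
    major, minor, micro = version_info[:3]
    return not (major == 3 and minor == 11 and micro <= 7)

assert not parallel_precompile_ok((3, 11, 5))
assert parallel_precompile_ok((3, 11, 8))
assert parallel_precompile_ok((3, 12, 0))
```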
ebefcf84520e46e4b156a9426ef76c6e110668d
Fri, 3 May 2024 09:10:11 +0000
[PATCH 0966/1000] Driver folder check (#117548)
Added an extra check for driver folders in Libtorch, as the stat struct does not recognize driver folders; with this, torch.save works for them as well (e.g. saving model.pt directly under C:). Fixes [#111121](https://github.com/pytorch/pytorch/issues/111121) and #105488 Co-authored-by: Ozan Aydin <148207261+ozanMSFT@users.no...
diff --git a/caffe2/serialize/inline_container.cc b/caffe2/serialize/inline_container.cc index 173153e805..d3bba2c797 100644 --- a/caffe2/serialize/inline_container.cc +++ b/caffe2/serialize/inline_container.cc @@ -93,7 +93,15 @@ static std::string parentdir(const std::string& name) { end = name.find_last_of('\\')...
2.41.0
89b4586e95752dc65a1821a4383b9679ccd5b6b
Fri, 3 May 2024 13:57:32 +0800
[PATCH 0967/1000] [optim]fix ut and sgd kernel (#124904)
- The original `test_grad_scaling_autocast_fused_optimizers` does not work since there is no "fused" in `optim_inputs` - We should use different `grad_scaler`s; they should not share one `scale`. There is no issue exposed here because the default `_growth_interval` is 2000, so it will not grow, and there is also no inf is fou...
diff --git a/aten/src/ATen/native/cpu/FusedSGDKernel.cpp b/aten/src/ATen/native/cpu/FusedSGDKernel.cpp index 3383585675..c19aa249a1 100644 --- a/aten/src/ATen/native/cpu/FusedSGDKernel.cpp +++ b/aten/src/ATen/native/cpu/FusedSGDKernel.cpp @@ -52,8 +52,8 @@ typename std::enable_if< grad_vec2 = grad_vec2 * fVec(op...
2.41.0
a98c2a932132e49559bf777c02798633d585e66
Fri, 3 May 2024 10:24:14 +0000
[PATCH 0968/1000] inductor: Add Conv3d support (#124361)
This PR adds Conv3d support in inductor. Basically, it reuses and expands the Conv2d logic and unit tests to Conv3d. Conv3d inductor support will improve the performance of C2D_R50, I3D_R50, I3D_R101, Slow and SlowFast-R50 from OOB models. | C2D_R50 | I3D_R50 | I3D_R101 | Slow | SlowFast-R50 -- | -- | -- | -- | -- | -- eage...
diff --git a/aten/src/ATen/native/mkldnn/MKLDNNConversions.cpp b/aten/src/ATen/native/mkldnn/MKLDNNConversions.cpp index ea5464f285..b2901bc522 100644 --- a/aten/src/ATen/native/mkldnn/MKLDNNConversions.cpp +++ b/aten/src/ATen/native/mkldnn/MKLDNNConversions.cpp @@ -198,24 +198,40 @@ Tensor mkldnn_reorder_conv3d_weight...
+- func: mkldnn_reorder_conv3d_weight(Tensor self, SymInt[3] padding=0, SymInt[3] stride=1, SymInt[3] dilation=1, SymInt groups=1, SymInt[]? input_size=None) -> Tensor
9af8143692571c7dec07465fe93605f82815ce8
Thu, 2 May 2024 18:04:01 -0700
[PATCH 0969/1000] [FSDP] Added private `_unshard` API (#124304)
Some toy example: <img width="998" alt="Screenshot 2024-04-17 at 2 00 05 PM" src="https://github.com/pytorch/pytorch/assets/31054793/b5665a63-beb0-4ca1-92c6-c57a052812fd"> We define `FullyShardedDataParallel._unshard(async_op: bool = False)` that can be used to prefetch all-gathers. The user should make sure: 1. Run l...
diff --git a/test/distributed/fsdp/test_fsdp_comm.py b/test/distributed/fsdp/test_fsdp_comm.py index 2e5993b23e..c20637061d 100644 --- a/test/distributed/fsdp/test_fsdp_comm.py +++ b/test/distributed/fsdp/test_fsdp_comm.py @@ -3,18 +3,23 @@ import sys from contextlib import nullcontext from enum import auto, Enum -f...
2.41.0
99ada5b27a1032e4bda4849b34411031d893b5e
Fri, 3 May 2024 14:54:15 +0000
[PATCH 0970/1000] call `super().__post_init__` in `ForeachFuncInfo.__post_init__` (#125457)
The current main branch's `ForeachFuncInfo.__post_init__` does not call `super().__post_init__()`, which does some setup, including setting `dtypesIfCUDA` and `dtypesIfROCM`. Fixes #125295 related: #125001 Pull Request resolved: https://github.com/pytorch/pytorch/pull/125457 Approved by: https://github.com/jan...
diff --git a/torch/testing/_internal/common_methods_invocations.py b/torch/testing/_internal/common_methods_invocations.py index 182bf42cce..001d93de18 100644 --- a/torch/testing/_internal/common_methods_invocations.py +++ b/torch/testing/_internal/common_methods_invocations.py @@ -9479,7 +9479,6 @@ foreach_unary_op_db...
2.41.0
6052a35d462ed783a32d37f5278cab2f5cfa17d
Thu, 2 May 2024 07:38:49 -0700
[PATCH 0975/1000] [RFC][FSDP2] Added `register_fsdp_forward_method` for user fwd methods (#125394)
FSDP only runs its pre/post-forward hooks on `nn.Module.forward`. This means that if the user runs a custom method meant as a forward pass, then FSDP will not all-gather the parameters. Examples include HuggingFace models' `generate()` (https://github.com/pytorch/pytorch/issues/123962, https://github.com/pytorch/pytorc...
diff --git a/test/distributed/_composable/fsdp/test_fully_shard_training.py b/test/distributed/_composable/fsdp/test_fully_shard_training.py index bf90bd165c..8d1e1dbfb6 100644 --- a/test/distributed/_composable/fsdp/test_fully_shard_training.py +++ b/test/distributed/_composable/fsdp/test_fully_shard_training.py @@ -1...
2.41.0
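The mechanism behind a `register_fsdp_forward_method`-style API can be sketched without torch. Everything below (`FakeFSDPModule`, `register_forward_method`, the event names) is an illustrative stand-in: the idea is wrapping a user-designated method so the pre/post-forward hooks (all-gather and reshard in real FSDP) run around it.

```python
class FakeFSDPModule:
    def __init__(self):
        self.events = []
    def _pre_forward(self):
        self.events.append("unshard")   # stands in for the all-gather
    def _post_forward(self):
        self.events.append("reshard")   # stands in for freeing params
    def generate(self, x):
        return x + 1

def register_forward_method(module, name):
    # wrap the method so the pre/post-forward hooks fire around it,
    # even though it is not nn.Module.forward
    fn = getattr(module, name)
    def wrapped(*args, **kwargs):
        module._pre_forward()
        try:
            return fn(*args, **kwargs)
        finally:
            module._post_forward()
    setattr(module, name, wrapped)

m = FakeFSDPModule()
register_forward_method(m, "generate")
assert m.generate(1) == 2
assert m.events == ["unshard", "reshard"]
```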
84a5b6cc0ecf0d75dbdbe438f2dee4909dc8db4
Wed, 1 May 2024 13:02:42 -0700
[PATCH 0976/1000] [AOTI] Add missing std::move for constant args (#125329)
Summary: fix https://github.com/pytorch/pytorch/issues/123187 Pull Request resolved: https://github.com/pytorch/pytorch/pull/125329 Approved by: https://github.com/angelayi, https://github.com/chenyang78
diff --git a/test/inductor/test_cpu_cpp_wrapper.py b/test/inductor/test_cpu_cpp_wrapper.py index 828ed8eb0e..c28451613c 100644 --- a/test/inductor/test_cpu_cpp_wrapper.py +++ b/test/inductor/test_cpu_cpp_wrapper.py @@ -88,7 +88,6 @@ if config.abi_compatible: "test_qlinear_cpu", "test_qlinear_dequant_p...
2.41.0
1a7455b996a2bed2f716f54315509b6b9a6dcce
Fri, 3 May 2024 17:19:43 +0200
[PATCH 0978/1000] [Inductor cutlass backend] Fix cutlass_utils.get_max_alignment() for strided layouts. (#124930)
Fixes cutlass_utils.get_max_alignment() which was so far not checking the alignment properly. Basically the method so far assumed that the passed layout is contiguous and row-major, which does not have to be true. Test Plan: CI - test_cutlass_backend.py to prevent regressions Added unit test Pull Request resolved: ht...
diff --git a/test/inductor/test_cutlass_backend.py b/test/inductor/test_cutlass_backend.py index 7da02e7077..aed07919a8 100644 --- a/test/inductor/test_cutlass_backend.py +++ b/test/inductor/test_cutlass_backend.py @@ -9,7 +9,8 @@ import torch from torch._dynamo.utils import counters from torch._inductor import confi...
2.41.0
45baef05da3b2624382b9c3242233bba2a61c83
Fri, 3 May 2024 20:52:01 +0000
[PATCH 0979/1000] s390x: remove workaround for sleef issue (#124730)
This workaround is no longer needed since sleef was updated. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124730 Approved by: https://github.com/soulitzer
diff --git a/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h b/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h index 9b53745b03..b70b494649 100644 --- a/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h +++ b/aten/src/ATen/cpu/vec/vec256/zarch/vec256_zarch.h @@ -13,8 +13,6 @@ #include <ATen/cpu/vec/vec_base.h> #i...
2.41.0
941fee7ea6d069cd79c420dbe5528decb86ac3e
Fri, 3 May 2024 21:10:36 +0000
[PATCH 0980/1000] [CPP extention] Baton lock is called regardless the code version (#125404)
Greetings! Fixes #125403 Please assist me with the testing, as it is possible for my reproducer to miss the error in the code. Several (at least two) threads should enter the same part of the code at the same time to check that the file lock is actually working Pull Request resolved: https://github.com/pytorch/pytorch/pull/12...
diff --git a/torch/utils/cpp_extension.py b/torch/utils/cpp_extension.py index 8961dfcae3..ee5584315e 100644 --- a/torch/utils/cpp_extension.py +++ b/torch/utils/cpp_extension.py @@ -1691,10 +1691,10 @@ def _jit_compile(name, file=sys.stderr) name = f'{name}_v{version}' - if version != ...
2.41.0
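A baton lock of the kind the commit above is about can be sketched with an exclusive-create lock file. `FileBaton` here is loosely modeled on the idea behind torch's cpp_extension file baton, not its actual implementation:

```python
import os
import tempfile

class FileBaton:
    """Exclusive-create lock file: only one holder at a time."""
    def __init__(self, path):
        self.path = path
        self.fd = None
    def try_acquire(self):
        try:
            # O_EXCL makes creation atomic: exactly one caller wins
            self.fd = os.open(self.path, os.O_CREAT | os.O_EXCL)
            return True
        except FileExistsError:
            return False
    def release(self):
        os.close(self.fd)
        os.remove(self.path)

lock_path = os.path.join(tempfile.mkdtemp(), "lock")
first, second = FileBaton(lock_path), FileBaton(lock_path)
assert first.try_acquire()
assert not second.try_acquire()   # second builder must wait
first.release()
assert second.try_acquire()
second.release()
```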
a578df57cc0f417f671634e564c62ef5d9a97e2
Fri, 3 May 2024 11:19:09 -0700
[PATCH 0981/1000] [FSDP2] Added test to show rank 0 broadcast for HSDP replicas (#125431)
This PR shows a simple utility to broadcast the parameters across replicas for HSDP: ``` replicate_group = mesh.get_group("replicate") for param in model.parameters(): # E.g. for mesh [[0, 1, 2, 3], [4, 5, 6, 7]] sharding on dim-1 and # replicating on dim-0, broadcast with sources 0, 1, 2, 3 src_rank = dist.get_process...
diff --git a/test/distributed/_composable/fsdp/test_fully_shard_init.py b/test/distributed/_composable/fsdp/test_fully_shard_init.py index 2e2e97e3a3..958f375fe2 100644 --- a/test/distributed/_composable/fsdp/test_fully_shard_init.py +++ b/test/distributed/_composable/fsdp/test_fully_shard_init.py @@ -725,5 +725,72 @@ ...
2.41.0
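The source-rank logic in the broadcast snippet above can be illustrated with a small pure-Python helper (a hypothetical name, not part of the PR): with dim-0 as the replicate dimension, every rank receives from the rank in the same shard position of the first replica row.

```python
def broadcast_src_for_rank(mesh, rank):
    """For a 2D HSDP mesh with dim-0 = replicate and dim-1 = shard,
    return the rank that `rank` should receive the broadcast from:
    the rank in the same column of the first replica row."""
    for row in mesh:
        if rank in row:
            return mesh[0][row.index(rank)]
    raise ValueError(f"rank {rank} not in mesh")


# For mesh [[0, 1, 2, 3], [4, 5, 6, 7]], the broadcast sources are
# 0, 1, 2, 3 repeated across both replica rows.
mesh = [[0, 1, 2, 3], [4, 5, 6, 7]]
sources = [broadcast_src_for_rank(mesh, r) for r in range(8)]
```

In the real utility the broadcast itself would go through `dist.broadcast` over the replicate process group; this helper only shows where the sources come from.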
503c29357f8ec7a5f97779a83f3a752eeb1654e
Fri, 3 May 2024 11:13:52 -0700
[PATCH 0982/1000] Introduce torch.utils._sympy.symbol (#125395)
This provides utilities for creating and querying properties on sympy.Symbol. I want to use this refactor to get a better handle on how the 's' prefix is being used in Inductor. To start, I only do symbolic_shapes code because that's what I'm familiar with. Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Reque...
diff --git a/test/inductor/test_compiled_autograd.py b/test/inductor/test_compiled_autograd.py index d0e6949450..4c52630f8f 100644 --- a/test/inductor/test_compiled_autograd.py +++ b/test/inductor/test_compiled_autograd.py @@ -1516,11 +1516,13 @@ def wrap_test_class(orig_cls): elif name.startswith("test_"): ...
2.41.0
b5ae2611e22d992565f202df9267fe66469efaa
Fri, 3 May 2024 21:34:34 +0000
[PATCH 0983/1000] s390x: use runtime detection for vectorization support (#123936)
s390x: use runtime detection for vectorization support Pull Request resolved: https://github.com/pytorch/pytorch/pull/123936 Approved by: https://github.com/malfet, https://github.com/jansel, https://github.com/xuhancn
diff --git a/aten/src/ATen/native/DispatchStub.cpp b/aten/src/ATen/native/DispatchStub.cpp index c7db94889c..93c004acdc 100644 --- a/aten/src/ATen/native/DispatchStub.cpp +++ b/aten/src/ATen/native/DispatchStub.cpp @@ -10,8 +10,19 @@ #include <cstdlib> #include <cstring> +#ifdef HAVE_ZVECTOR_CPU_DEFINITION +#includ...
2.41.0
783fef9904ea785ed1489cdcc0d4c7f55af4a83
Fri, 3 May 2024 14:58:27 -0700
[PATCH 0984/1000] [AOTI] Add a missing mypy ignore (#125508)
Summary: Caused by https://github.com/pytorch/pytorch/pull/125397, but somehow was not caught by CI. Pull Request resolved: https://github.com/pytorch/pytorch/pull/125508 Approved by: https://github.com/izaitsevfb
diff --git a/torch/_inductor/ir.py b/torch/_inductor/ir.py index f240efa7ae..49261cfd58 100644 --- a/torch/_inductor/ir.py +++ b/torch/_inductor/ir.py @@ -5547,7 +5547,7 @@ class FallbackKernel(ExternKernelAlloc): self.set_cpp_kernel(kernel) else: self.cpp_kern...
2.41.0
9abd1dccbbf6a20c76f82a2e9d7670126299a99
Fri, 3 May 2024 22:58:20 +0000
[PATCH 0986/1000] Fix lint after PR 122611 (#125512)
Fix lint after https://github.com/pytorch/pytorch/pull/122611 Pull Request resolved: https://github.com/pytorch/pytorch/pull/125512 Approved by: https://github.com/clee2000
diff --git a/c10/util/Exception.h b/c10/util/Exception.h index 55de996683..bc01ad8d4e 100644 --- a/c10/util/Exception.h +++ b/c10/util/Exception.h @@ -68,7 +68,10 @@ class C10_API Error : public std::exception { const void* caller = nullptr); // Base constructor - Error(std::string msg, std::string backtra...
2.41.0
2ab96a57e5b19c72cf11bc098fd690d238c1d86
Wed, 1 May 2024 14:45:19 -0700
[PATCH 0987/1000] [dynamo] fix crash when context manager is passed to a function (#125321)
Fix https://github.com/pytorch/pytorch/issues/125274. Main change was to reconstruct `ContextWrappingVariables` as objects in general, but we can replace them with the class on the caller side when generating the resume function. Pull Request resolved: https://github.com/pytorch/pytorch/pull/125321 Approved by: https:...
diff --git a/test/dynamo/test_ctx_manager.py b/test/dynamo/test_ctx_manager.py index cc6e39de4d..eab8fdb41a 100644 --- a/test/dynamo/test_ctx_manager.py +++ b/test/dynamo/test_ctx_manager.py @@ -1304,7 +1304,7 @@ class GraphModule(torch.nn.Module): self.assertEqual(fn(x), opt_fn(x)) self.asser...
2.41.0
96bb74077ddc4349ba928e2f6b011d7ed35058c
Fri, 3 May 2024 11:19:09 -0700
[PATCH 0989/1000] [FSDP2] Added HSDP grad acc tests and some minor changes (#125479)
This adds HSDP to the existing gradient accumulation tests and includes some minor changes to simplify things a tiny bit. Pull Request resolved: https://github.com/pytorch/pytorch/pull/125479 Approved by: https://github.com/wanchaol ghstack dependencies: #125431
diff --git a/.ci/pytorch/test.sh b/.ci/pytorch/test.sh index c903a26998..72a3943039 100755 --- a/.ci/pytorch/test.sh +++ b/.ci/pytorch/test.sh @@ -322,6 +322,7 @@ test_inductor_distributed() { pytest test/distributed/_composable/fsdp/test_fully_shard_training.py -k test_train_parity_2d_mlp pytest test/distributed...
2.41.0
aa7699185e4ec39077e3046dfd63244dffa9ddb
Fri, 3 May 2024 12:09:18 -0700
[PATCH 0990/1000] [FSDP2] Computed grad divide factors at runtime (#125484)
**Context** We are interested in supporting the case where HSDP reduce-scatters but does not all-reduce in a microbatch backward. This saves communication while still saving memory. Only on the last microbatch do we need to both reduce-scatter and all-reduce. This is not implemented yet and will hopefully come in a fut...
diff --git a/test/distributed/_composable/fsdp/test_fully_shard_comm.py b/test/distributed/_composable/fsdp/test_fully_shard_comm.py index 33c6e61ac4..115c1f9322 100644 --- a/test/distributed/_composable/fsdp/test_fully_shard_comm.py +++ b/test/distributed/_composable/fsdp/test_fully_shard_comm.py @@ -18,6 +18,8 @@ fro...
2.41.0
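On the grad divide factors mentioned above: FSDP has historically split the division by world size into a pre-reduce and a post-reduce factor whose product is the world size, dividing by roughly sqrt(N) on each side to reduce overflow/underflow risk in low-precision dtypes. A hedged sketch of that heuristic (illustrative names, not the PR's exact runtime code):

```python
def grad_divide_factors(world_size):
    """Split division by world_size into (pre, post) with pre * post ==
    world_size. Gradients are divided by `pre` before the reduction and
    by `post` after it, keeping intermediate values in a safer range
    for low-precision reduce dtypes."""
    pre = 1.0
    # Grow the pre-divide factor by powers of two while it still evenly
    # divides world_size and stays below sqrt(world_size).
    while world_size % pre == 0 and world_size / pre > pre:
        pre *= 2.0
    post = world_size / pre
    return pre, post
```

For example, with 8 ranks the gradients are divided by 4 before the reduce-scatter and by 2 after; the product always recovers the full world size.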
a1af95b0979d85c4fe32a75e797323ad81f298d
Sat, 4 May 2024 00:10:53 +0000
[PATCH 0991/1000] [Inductor] Properly package target info for triton.compile (#125241)
Triton updated the interface for `triton.compile` https://github.com/openai/triton/commit/5162346487b3e3ebc062d9697429bafad25f22f6 The `target` argument to compile needs to be wrapped in a `GPUTarget` object. Without proper wrapping, we hit an assert in `compile`. If that assert is removed, Triton attempts to read dev...
diff --git a/torch/_inductor/runtime/triton_heuristics.py b/torch/_inductor/runtime/triton_heuristics.py index b66bdbf393..0dc43f0649 100644 --- a/torch/_inductor/runtime/triton_heuristics.py +++ b/torch/_inductor/runtime/triton_heuristics.py @@ -55,11 +55,17 @@ if triton is not None: from triton.compiler.comp...
2.41.0
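The Triton interface change described above can be handled with a small compatibility shim: newer Triton expects the `target` passed to `triton.compile` to be a `GPUTarget`, while older versions accepted a plain tuple. This is a hypothetical helper (not the PR's code) showing the wrapping, with a fallback when the new class is unavailable:

```python
def make_compile_target(backend, arch, warp_size=32):
    """Wrap target info for triton.compile. Newer Triton wants a
    GPUTarget(backend, arch, warp_size); older Triton (or an
    environment without Triton) falls back to the legacy tuple form."""
    try:
        from triton.backends.compiler import GPUTarget  # newer Triton
        return GPUTarget(backend, arch, warp_size)
    except ImportError:
        # Older Triton accepted (backend, arch) directly; this branch
        # also covers environments where Triton is not installed.
        return (backend, arch)


target = make_compile_target("cuda", 80)
```

Without this wrapping, newer Triton hits an assert inside `compile`, which is exactly the failure mode the PR fixes.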
325c55896648ccecc5cfec11d3220dd93f75f23
Sat, 4 May 2024 00:29:35 +0000
[PATCH 0992/1000] Add CUDA paths to `CODEOWNERS` (#125409)
CC @ptrblck @albanD Pull Request resolved: https://github.com/pytorch/pytorch/pull/125409 Approved by: https://github.com/albanD
diff --git a/CODEOWNERS b/CODEOWNERS index 6999f8553b..e481e66112 100644 --- a/CODEOWNERS +++ b/CODEOWNERS @@ -144,3 +144,14 @@ caffe2/utils/hip @jeffdaily @jithunnair-amd /torch/csrc/Storage* @mikaylagawarecki # subscribing for PyTorchFileWriter/PyTorchFileReader changes /torch/csrc/jit/python/init.cpp @mikaylagawa...
2.41.0
fd5bb0c44dfaef36961eaa67008df4d6134199d
Fri, 3 May 2024 13:38:04 -0700
[PATCH 0993/1000] [c10d] only PG0 should dump when monitoring thread timed out (#125356)
Summary: We found that some dumps are missing when the monitoring thread times out. This is likely because multiple PGs could still dump the same records at the same time, so we should allow only PG0 to actually dump. Test Plan: unit test python test/run_test.py --cpp --verbose -i cpp/ProcessGroupNCCLErrorsTest Tags: Pull Requ...
diff --git a/test/cpp/c10d/ProcessGroupNCCLErrorsTest.cpp b/test/cpp/c10d/ProcessGroupNCCLErrorsTest.cpp index 991a8c5dc7..aef97daae2 100644 --- a/test/cpp/c10d/ProcessGroupNCCLErrorsTest.cpp +++ b/test/cpp/c10d/ProcessGroupNCCLErrorsTest.cpp @@ -389,6 +389,7 @@ TEST_F(ProcessGroupNCCLErrorsTest, testNCCLErrorsNoHeartb...
2.41.0
302dc68bf76a0af6dd4bb0488aaf22998374a0e
Sat, 4 May 2024 02:39:39 +0000
[PATCH 0994/1000] [Reland] Fakify script object inputs and attributes for non-strict ex… (#125490)
A re-land of #124239. This PR fakifies ScriptObject inputs and attributes in export non-strict mode by default. The basic idea is to only fakify the script object during tracing (i.e. aot_export). After we get the traced graph module, eagerly executing, serializing, or running more passes will use the real script objec...
diff --git a/test/export/test_passes.py b/test/export/test_passes.py index 41597a6030..e2724ead88 100644 --- a/test/export/test_passes.py +++ b/test/export/test_passes.py @@ -13,6 +13,10 @@ from typing import List, Set import torch from functorch.experimental.control_flow import cond from torch._dynamo.eval_frame im...
2.41.0
4727fd4ebd42936a1bae7b7f44ee9a038fd643e
Sat, 4 May 2024 03:08:44 +0000
[PATCH 0995/1000] [TD][ez] Better check for is pr or not (#125485)
You can trigger ciflow tags on main branch commits, so we should be more conservative when checking whether a workflow is a PR or on the main branch. get_pr_number checks for the PR number based on the PR_NUMBER env var or a tag of the form `ciflow/workflow/pr number`. If we fail to find something like this, then assume...
diff --git a/test/run_test.py b/test/run_test.py index 1b95bcb465..5120ac6513 100755 --- a/test/run_test.py +++ b/test/run_test.py @@ -59,6 +59,7 @@ from tools.testing.discover_tests import ( ) from tools.testing.do_target_determination_for_s3 import import_results from tools.testing.target_determination.gen_artifac...
2.41.0
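The conservative check described above can be sketched as a small helper (hypothetical signature, not the PR's exact code): a workflow counts as a PR run only if `PR_NUMBER` is set or a ciflow tag carries a numeric PR suffix; anything else is treated as a main-branch run.

```python
import os
import re


def get_pr_number(tag=None):
    """Return the PR number for this workflow run, or None when the run
    should be treated as a main-branch run. Sources checked: the
    PR_NUMBER env var, then a tag of the form ciflow/<workflow>/<pr>."""
    pr = os.environ.get("PR_NUMBER")
    if pr and pr.isdigit():
        return int(pr)
    if tag:
        # Only a purely numeric suffix counts; ciflow tags pushed onto
        # main-branch commits (e.g. a sha suffix) do not match.
        match = re.fullmatch(r"ciflow/\w+/(\d+)", tag)
        if match:
            return int(match.group(1))
    return None
```

Requiring the suffix to be all digits is what makes the check conservative: a ciflow tag on a main-branch commit falls through to `None`.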
f061baa94f437e54c5f4a12f48bcb1dfa662918
Fri, 3 May 2024 16:03:08 -0700
[PATCH 0996/1000] [comm_mode] adding some initial c10d ops to CommDebugMode (#125475)
looks like we can make it work :) Pull Request resolved: https://github.com/pytorch/pytorch/pull/125475 Approved by: https://github.com/awgu
diff --git a/test/distributed/_tensor/debug/test_comm_mode.py b/test/distributed/_tensor/debug/test_comm_mode.py index 893131fe4e..d674905f2b 100644 --- a/test/distributed/_tensor/debug/test_comm_mode.py +++ b/test/distributed/_tensor/debug/test_comm_mode.py @@ -9,11 +9,13 @@ from torch.distributed._tensor import Devic...
2.41.0
62e89c1b8b27e848f30991773d689db3af24532
Fri, 3 May 2024 11:47:00 -0700
[PATCH 0997/1000] [dynamo] Do not turn on record relay with TORCH_COMPILE_DEBUG (#125488)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125488 Approved by: https://github.com/yanboliang, https://github.com/mlazos
diff --git a/torch/_dynamo/config.py b/torch/_dynamo/config.py index 1360782db1..6a870a6d12 100644 --- a/torch/_dynamo/config.py +++ b/torch/_dynamo/config.py @@ -130,7 +130,7 @@ suppress_errors = bool(os.environ.get("TORCHDYNAMO_SUPPRESS_ERRORS", False)) # Record and write an execution record of the current frame to ...
2.41.0
b41e1d6fc05428008875e3cfe8be17184e57491
Fri, 3 May 2024 16:22:16 -0700
[PATCH 0998/1000] try to fix the warning in distribute_tensor (#125476)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/125476 Approved by: https://github.com/albanD, https://github.com/awgu ghstack dependencies: #125475
diff --git a/test/distributed/_tensor/test_api.py b/test/distributed/_tensor/test_api.py index 196bd6407b..4318054117 100644 --- a/test/distributed/_tensor/test_api.py +++ b/test/distributed/_tensor/test_api.py @@ -70,6 +70,7 @@ class DTensorAPITest(DTensorTestBase): ) tensor_shape = [3 * self.world_s...
2.41.0