Dataset schema (as reported by the dataset viewer):
  commitId    string, length 40
  datetime    string, length 30 to 31
  subject     string, length 37 to 266
  comment     string, length 109 to 15.2k
  diff        string, length 238 to 914k
  gitVersion  string, 9 distinct values
85c93d64db23b5cf495c52458d9b3120b96ee05
Fri, 12 Apr 2024 20:11:59 -0700
[PATCH 0134/1000] [inductor] Write generated files from parent process (#123409)
Before this PR we would pass generated source code over a pipe to the compile worker, and the compile worker would write out the file. Writing the files from the parent process instead is faster and results in smaller messages to the workers (and lets us skip creating the workers in the warm start case). Pull Request resolved: https://github.com/py...
diff --git a/torch/_inductor/codecache.py b/torch/_inductor/codecache.py index cf8f8f6e53..88d37823a7 100644 --- a/torch/_inductor/codecache.py +++ b/torch/_inductor/codecache.py @@ -59,7 +59,7 @@ from torch._dynamo.device_interface import ( from torch._dynamo.utils import counters, dynamo_timed from torch._inductor ...
2.41.0
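The parent-process write described above can be sketched as a content-addressed cache: hash the generated source, write it once in the parent, and send only the short path to the workers. This is a minimal sketch; the function name and layout are hypothetical, not Inductor's actual API.

```python
import hashlib
import os
import tempfile

def write_source_in_parent(source: str, cache_dir: str) -> str:
    """Write generated source to a content-addressed path in the parent
    process, so only the (short) path needs to be sent to compile workers."""
    key = hashlib.sha256(source.encode("utf-8")).hexdigest()
    path = os.path.join(cache_dir, f"{key}.py")
    # Warm start: the file may already exist on disk from a previous run.
    if not os.path.exists(path):
        with open(path, "w") as f:
            f.write(source)
    return path
```

Because the path is derived from the content hash, repeated calls with the same source are idempotent, which is what makes skipping worker creation on warm start safe.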
ac8fe46dde91cd17f4a271751113deb5763de3f
Sat, 13 Apr 2024 06:36:07 +0000
[PATCH 0135/1000] Enable UFMT on all of test/quantization/ao_migration &bc (#123994)
Partially addresses #123062 Ran lintrunner on: - test/quantization/ao_migration - test/quantization/bc Detail: ``` $ lintrunner -a --take UFMT --all-files ok No lint issues. Successfully applied all patches. ``` @ezyang Pull Request resolved: https://github.com/pytorch/pytorch/pull/123994 Approved by: https://github...
diff --git a/.lintrunner.toml b/.lintrunner.toml index 4073f1bbff..872b6452ab 100644 --- a/.lintrunner.toml +++ b/.lintrunner.toml @@ -1334,13 +1334,6 @@ exclude_patterns = [ 'test/profiler/test_profiler.py', 'test/profiler/test_profiler_tree.py', 'test/quantization/__init__.py', - 'test/quantization/...
2.41.0
7261be0a8f09bed9ab95d0cee82e75eebd249c3
Sat, 13 Apr 2024 07:47:32 +0000
[PATCH 0136/1000] Revert "Simplify ATen sparse semi-structured operators based on CUTLASS (#123473)"
This reverts commit b2a0b8c446234f0b35a66aff87501c4596ea5d51. Reverted https://github.com/pytorch/pytorch/pull/123473 on behalf of https://github.com/DanilBaibak due to Break internal build ([comment](https://github.com/pytorch/pytorch/pull/123473#issuecomment-2053561077))
diff --git a/aten/src/ATen/native/native_functions.yaml b/aten/src/ATen/native/native_functions.yaml index 6e96a8a6aa..02e7c5caa2 100644 --- a/aten/src/ATen/native/native_functions.yaml +++ b/aten/src/ATen/native/native_functions.yaml @@ -3342,19 +3342,10 @@ dispatch: CUDA: _cslt_sparse_mm_search -# DEPRECATE...
60af92c175799c11afce4fb948748f399955e3a
Sat, 13 Apr 2024 11:45:00 +0000
[PATCH 0137/1000] [Distributed] [3/N] Fix clang-tidy warnings in torch/csrc/distributed/c10d (#123312)
This PR continues to fix some clang-tidy warnings in distributed code, following https://github.com/pytorch/pytorch/pull/122892. Pull Request resolved: https://github.com/pytorch/pytorch/pull/123312 Approved by: https://github.com/Skylion007
diff --git a/torch/csrc/distributed/c10d/HashStore.cpp b/torch/csrc/distributed/c10d/HashStore.cpp index cde8585ca5..50ed5ca5eb 100644 --- a/torch/csrc/distributed/c10d/HashStore.cpp +++ b/torch/csrc/distributed/c10d/HashStore.cpp @@ -1,12 +1,9 @@ #include <torch/csrc/distributed/c10d/HashStore.hpp> #include <unist...
2.41.0
a7db5d345a10ffb5092b26c5159f56faec1d0ea
Sat, 13 Apr 2024 12:54:14 +0000
[PATCH 0138/1000] [BE] migrate import sorter configurations to `pyproject.toml` (#123846)
Migrate import sorter configurations to `pyproject.toml` and delete `.isort.cfg`. Also, set the line length to 88 (which is the default of `black`). Pull Request resolved: https://github.com/pytorch/pytorch/pull/123846 Approved by: https://github.com/Skylion007
diff --git a/.isort.cfg b/.isort.cfg deleted file mode 100644 index d14d9bf207..0000000000 --- a/.isort.cfg +++ /dev/null @@ -1,6 +0,0 @@ -[settings] -include_trailing_comma=True -multi_line_output=3 -skip=third_party -skip_gitignore=True -use_parentheses=True diff --git a/pyproject.toml b/pyproject.toml index 089864d2...
2.41.0
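Based on the deleted `.isort.cfg` shown in the diff plus the stated line length of 88, the migrated `pyproject.toml` section plausibly looks like this (a sketch; the exact keys in the merged PR may differ):

```toml
[tool.isort]
include_trailing_comma = true
multi_line_output = 3
skip = ["third_party"]
skip_gitignore = true
use_parentheses = true
line_length = 88
```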
440d1baa6cf7b32b47f14ffd9349eea9c39f3c4
Sat, 13 Apr 2024 07:42:48 +0000
[PATCH 0139/1000] [dynamo, 3.12] fix the block stack... again (#123978)
Some changes to how we handle blocks in 3.11+: - We only keep track of with blocks that are not enclosed in a try block - We do not compile partial graphs if we are in a block that is not in a tracked with block - i.e. any block enclosed in some non-with try/except/etc. block Pull Request resolved: https://github.com/...
diff --git a/test/dynamo/test_misc.py b/test/dynamo/test_misc.py index 19fc38c2f9..e633e88e60 100644 --- a/test/dynamo/test_misc.py +++ b/test/dynamo/test_misc.py @@ -7025,7 +7025,7 @@ def fn(): def test_variable_access_in_exception(self): def fn(): - x = torch.ones(3, 3) + x = tor...
2.41.0
8a71594933b2464d9d8b6b3533c5a945a4ac2ff
Sat, 13 Apr 2024 19:55:57 +0000
[PATCH 0141/1000] [NT] Fix typo in declared strides variable (#123856)
Summary: Looks like it's missing an s in the declaration so pyre is throwing an error {F1484357040} Test Plan: expect no pyre errors Differential Revision: D56023743 Pull Request resolved: https://github.com/pytorch/pytorch/pull/123856 Approved by: https://github.com/cpuhrsch, https://github.com/soulitzer
diff --git a/torch/nested/_internal/nested_tensor.py b/torch/nested/_internal/nested_tensor.py index b5c9354ad2..5494633b09 100644 --- a/torch/nested/_internal/nested_tensor.py +++ b/torch/nested/_internal/nested_tensor.py @@ -45,7 +45,7 @@ class NestedTensor(torch.Tensor): # For example, a jagged tensor with shap...
2.41.0
35c238bad6b1629b54f863b75b541e24ff57452
Sun, 14 Apr 2024 06:07:21 +0000
[PATCH 0143/1000] Enable UFMT on all of test/quantization/jit &pt2e (#124010)
Partially addresses #123062 Ran lintrunner on: - test/quantization/jit - test/quantization/pt2e Detail: ``` $ lintrunner -a --take UFMT --all-files ok No lint issues. Successfully applied all patches. ``` cc, please @ezyang Pull Request resolved: https://github.com/pytorch/pytorch/pull/124010 Approved by: https://git...
diff --git a/.lintrunner.toml b/.lintrunner.toml index 872b6452ab..65a4c936e2 100644 --- a/.lintrunner.toml +++ b/.lintrunner.toml @@ -1368,14 +1368,6 @@ exclude_patterns = [ 'test/quantization/fx/test_numeric_suite_fx.py', 'test/quantization/fx/test_quantize_fx.py', 'test/quantization/fx/test_subgraph_r...
2.41.0
5331aade57725b03c36d5cc6c683f6a6bc0692d
Sat, 13 Apr 2024 18:35:02 +0000
[PATCH 0144/1000] Simplify ATen sparse semi-structured operators based on CUTLASS (#123473)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123473 Approved by: https://github.com/cpuhrsch
diff --git a/aten/src/ATen/native/native_functions.yaml b/aten/src/ATen/native/native_functions.yaml index 02e7c5caa2..6e96a8a6aa 100644 --- a/aten/src/ATen/native/native_functions.yaml +++ b/aten/src/ATen/native/native_functions.yaml @@ -3342,10 +3342,19 @@ dispatch: CUDA: _cslt_sparse_mm_search +# DEPRECATE...
2.41.0
096e99a5d59a9b22bfaa7ea15492fd454592c48
Sun, 14 Apr 2024 08:13:52 -0700
[PATCH 0146/1000] Enable int8mm kernel for float16 (#124022)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124022 Approved by: https://github.com/mikekgfb
diff --git a/aten/src/ATen/native/LinearAlgebra.cpp b/aten/src/ATen/native/LinearAlgebra.cpp index 5824cfa23d..26242b4b70 100644 --- a/aten/src/ATen/native/LinearAlgebra.cpp +++ b/aten/src/ATen/native/LinearAlgebra.cpp @@ -3516,8 +3516,8 @@ Tensor _weight_int8pack_mm_cpu( auto N = B.size(0); auto K = A.size(1); ...
2.41.0
9f50333e91e9e8b20a78517becd74bca70c7d46
Fri, 12 Apr 2024 15:34:17 -0400
[PATCH 0147/1000] Improve assert message for unbacked symint not written out (#123965)
Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/123965 Approved by: https://github.com/Skylion007
diff --git a/torch/_inductor/ir.py b/torch/_inductor/ir.py index 860b71c545..ba0a726a26 100644 --- a/torch/_inductor/ir.py +++ b/torch/_inductor/ir.py @@ -3069,7 +3069,7 @@ class Buffer(IRNode): symbols_to_define.remove(s) assert ( not symbols_to_define - ), f"unbacked symint {...
2.41.0
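The improved assertion pattern, naming the offending symbols in the failure message, can be sketched like so (a hypothetical helper; the real check lives in `torch/_inductor/ir.py`):

```python
def check_all_defined(symbols_to_define, buffer_name="buf0"):
    """Raise with a descriptive message naming the undefined unbacked
    symints, instead of a bare `assert not symbols_to_define`."""
    assert not symbols_to_define, (
        f"unbacked symint(s) {sorted(map(str, symbols_to_define))} "
        f"not written out when lowering {buffer_name}"
    )
```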
3a05e791aabe4f2e3938626bff849641dc101ea
Sat, 13 Apr 2024 10:54:01 -0700
[PATCH 0148/1000] Don't add non-integer Triton kernel arg 1 to equal_to_1 (#123886)
Summary: Triton compiler adds constant argument 1 to `equal_to_1` [only when it's an int](https://github.com/openai/triton/blob/8c5e33c77ef83e0cb99c744e58842930e602df31/python/triton/runtime/jit.py#L275). Here we restrict Inductor's `equal_to_1` in the same way. Test Plan: ``` $ python test/inductor/test_triton_kerne...
diff --git a/test/inductor/test_aot_inductor.py b/test/inductor/test_aot_inductor.py index 5de6d91a0b..2a41afbbe4 100644 --- a/test/inductor/test_aot_inductor.py +++ b/test/inductor/test_aot_inductor.py @@ -43,6 +43,7 @@ if HAS_CUDA: add_kernel_2d_autotuned, add_kernel_autotuned, add_kernel_w...
2.41.0
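Restricting `equal_to_1` to integer constants, mirroring Triton's `isinstance(arg, int)` check, might look like the standalone sketch below (not Inductor's actual code; the bool exclusion is an extra illustration here, since `bool` is a subclass of `int` in Python):

```python
def compute_equal_to_1(constant_args):
    """Return indices of constant args that are the *integer* 1.
    Floats like 1.0 and strings like "1" must not be specialized."""
    return {
        i
        for i, arg in enumerate(constant_args)
        if isinstance(arg, int) and not isinstance(arg, bool) and arg == 1
    }
```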
3ac61587aa368c613ef01df1f328a396b64cd5d
Mon, 15 Apr 2024 06:21:52 +0000
[PATCH 0149/1000] Enable UFMT on `test/functorch` (#123541)
Partially addresses #123062 Ran lintrunner on: - `test/functorch` Co-authored-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/123541 Approved by: https://github.com/zou3519, https://github.com/ezyang
diff --git a/.lintrunner.toml b/.lintrunner.toml index 65a4c936e2..e223d1a069 100644 --- a/.lintrunner.toml +++ b/.lintrunner.toml @@ -1144,24 +1144,6 @@ exclude_patterns = [ 'test/distributed/test_pg_wrapper.py', 'test/distributed/test_store.py', 'test/expect/__init__.py', - 'test/functorch/attn_ft.p...
2.41.0
cd7a7aa8e0942da627095b23b94dc89f5a54943
Mon, 15 Apr 2024 10:50:54 +0000
[PATCH 0150/1000] [xla hash update] update the pinned xla hash (#124042)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml). Update the pinned xla hash. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124042 Approved by: https://github.com/pytorchbot
diff --git a/.github/ci_commit_pins/xla.txt b/.github/ci_commit_pins/xla.txt index 4b92ed5afe..259a97684d 100644 --- a/.github/ci_commit_pins/xla.txt +++ b/.github/ci_commit_pins/xla.txt @@ -1 +1 @@ -5c48be19e6ded305bb524b3d1231fd4ce4d46208 +58a412cb271a3f98ae2e01fd1d24bdbb66645d4e
2.41.0
f9a70723374acd26ee2ce1fb5b37d513c8e8a17
Sun, 14 Apr 2024 23:32:09 -0400
[PATCH 0151/1000] Use uv in lintrunner init when it is available. (#124033)
Before, a no-op lintrunner init takes 12s. After, it takes 1s; a full order of magnitude improvement. Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/124033 Approved by: https://github.com/cyyever, https://github.com/Skylion007
diff --git a/tools/linter/adapters/pip_init.py b/tools/linter/adapters/pip_init.py index f177a920d0..0c1551f5e3 100644 --- a/tools/linter/adapters/pip_init.py +++ b/tools/linter/adapters/pip_init.py @@ -4,6 +4,7 @@ Initializer script that installs stuff to pip. import argparse import logging import os +import shutil...
2.41.0
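The uv fallback can be sketched as below (a hypothetical helper; the real logic lives in `tools/linter/adapters/pip_init.py`, and the injectable `which` parameter exists here only to make the sketch testable):

```python
import shutil

def pip_install_command(packages, which=shutil.which):
    """Build an install command, preferring `uv pip install` when the uv
    binary is on PATH; uv resolves and installs far faster than pip."""
    if which("uv") is not None:
        return ["uv", "pip", "install", *packages]
    return ["python", "-m", "pip", "install", *packages]
```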
ea1b99d89204989db64d0d63f5e46fce60d1962
Mon, 15 Apr 2024 15:36:55 +0000
[PATCH 0152/1000] Remove warning from LazyModuleMixin constructor. (#123968)
Remove warning from `LazyModuleMixin` about lazy modules being a new feature under heavy development. The last nontrivial change to the code happened more than three years ago. Fixes #123928 Pull Request resolved: https://github.com/pytorch/pytorch/pull/123968 Approved by: https://github.com/mikaylagawarecki
diff --git a/torch/nn/modules/lazy.py b/torch/nn/modules/lazy.py index 52784ae511..c4b7459c4a 100644 --- a/torch/nn/modules/lazy.py +++ b/torch/nn/modules/lazy.py @@ -1,5 +1,4 @@ import itertools -import warnings from typing import Protocol, Optional, Type, Any import torch @@ -178,8 +177,6 @@ class LazyModuleMixi...
2.41.0
c4fc5fa348eec9a93ec97f95971279797dfbf6f
Mon, 15 Apr 2024 16:51:40 +0000
[PATCH 0153/1000] [BE][Ez]: Fix minor potential perf regression from #123960 (#124013)
The `non_blocking` arg here is useless if the values are all eagerly consumed, so revert the change. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124013 Approved by: https://github.com/ezyang
diff --git a/torch/amp/grad_scaler.py b/torch/amp/grad_scaler.py index f2fae37142..a72c6246c9 100644 --- a/torch/amp/grad_scaler.py +++ b/torch/amp/grad_scaler.py @@ -426,8 +426,10 @@ class GradScaler: found_inf = cast( torch.Tensor, sum( - ...
2.41.0
3dcb5b0f2ef3578e81841fd8a2166e732c0ca99
Fri, 12 Apr 2024 06:23:04 -0700
[PATCH 0154/1000] make sure dynamo doesn't inline DTensor __new__ or __torch_dispatch__ (#123347)
Fixes https://github.com/pytorch/pytorch/issues/122459, https://github.com/pytorch/torchtrain/issues/61 Even with the previous PR ("support DTensor/subclass constructors directly in the graph"), I still see some errors when running the repro above that start some logs showing that dynamo is inlining `__new__`. I noti...
diff --git a/test/distributed/_tensor/test_dtensor_compile.py b/test/distributed/_tensor/test_dtensor_compile.py index f9ad0278d7..5f98050e83 100644 --- a/test/distributed/_tensor/test_dtensor_compile.py +++ b/test/distributed/_tensor/test_dtensor_compile.py @@ -191,6 +191,47 @@ class TestDTensorCompile(torch._dynamo.t...
2.41.0
ce29f1416f6a3852dc10bfe9326afc6cca4c2f0
Mon, 15 Apr 2024 17:46:54 +0000
[PATCH 0155/1000] Enable UFMT on `test/onnx_caffe2`, `test/optim`, `test/package` and `test/profiler` (#123901)
Part of: #123062 Ran lintrunner on: - `test/onnx_caffe2` - `test/optim` - `test/package` - `test/profiler` Pull Request resolved: https://github.com/pytorch/pytorch/pull/123901 Approved by: https://github.com/ezyang
diff --git a/.lintrunner.toml b/.lintrunner.toml index e223d1a069..73e225e9ee 100644 --- a/.lintrunner.toml +++ b/.lintrunner.toml @@ -1256,65 +1256,6 @@ exclude_patterns = [ 'test/nn/test_parametrization.py', 'test/nn/test_pooling.py', 'test/nn/test_pruning.py', - 'test/onnx_caffe2/export_onnx_tests_...
2.41.0
21c397e2ecc14c09bf57892eb55dd6f7dca80b8
Sun, 14 Apr 2024 08:20:04 -0700
[PATCH 0156/1000] Use NEON to speedup `int8pack_mm` on aarch64 (#124023)
Just vectorizing the inner loop as follows: ```cpp float32x4_t c_val = vdupq_n_f32(0.0); for (int k = 0; k < K; k += 8) { float16x8_t a_val = vld1q_f16(reinterpret_cast<const float16_t *>(A) + m * lda + k); int16x8_t b_val = vmovl_s8(vld1_s8(B + n * ldb + k)); auto a_val_low = vcvt_f32_f16(vget_low_f16(a_val)); auto a_val...
diff --git a/aten/src/ATen/native/cpu/int8mm_kernel.cpp b/aten/src/ATen/native/cpu/int8mm_kernel.cpp index 3645bae3a6..37495a08f4 100644 --- a/aten/src/ATen/native/cpu/int8mm_kernel.cpp +++ b/aten/src/ATen/native/cpu/int8mm_kernel.cpp @@ -182,6 +182,51 @@ inline void tinygemm_kernel( #endif +#if !defined(C10_MOBIL...
2.41.0
b6f6270d6b77726ef0d17c01486593058d12453
Sun, 14 Apr 2024 23:13:28 -0700
[PATCH 0157/1000] [inductor] comprehensive padding (#120758)
This PR adds the ability to pad tensor strides during lowering. The goal is to make sure (if possible) tensors with bad shape can have aligned strides so GPU can access the memory more efficiently. By testing BlenderbotSmallForConditionalGeneration I already see 2.5ms speedup. Pull Request resolved: https://github.co...
diff --git a/test/inductor/test_padding.py b/test/inductor/test_padding.py new file mode 100644 index 0000000000..2270c33291 --- /dev/null +++ b/test/inductor/test_padding.py @@ -0,0 +1,639 @@ +# Owner(s): ["module: inductor"] +import copy + +import functools +import os +import unittest + +import torch +from torch impo...
2.41.0
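Padding a stride so the corresponding byte offset becomes a multiple of the target alignment can be sketched as follows (assuming a 128-byte alignment for illustration; Inductor's actual padding heuristics are more involved):

```python
def pad_stride(stride: int, itemsize: int, alignment: int = 128) -> int:
    """Round a stride (in elements) up so stride * itemsize is a multiple
    of `alignment` bytes, enabling aligned GPU memory access.
    Assumes itemsize divides alignment."""
    stride_bytes = stride * itemsize
    padded_bytes = (stride_bytes + alignment - 1) // alignment * alignment
    return padded_bytes // itemsize
```

For fp32 (`itemsize=4`), a stride of 1000 elements (4000 bytes) would be padded to 1024 elements (4096 bytes).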
0d1720861833e7e2d74ca69c4b056c01b0ee1d0
Mon, 15 Apr 2024 19:09:41 +0000
[PATCH 0158/1000] [export] Restore original placeholder names (part 3: constant input de/serialization) (#123590)
Summary: note: breaking the original diff D55225818 into 3 parts (top-level renaming, higher-order-op subgraphs, constant input de/serialization) because of its size. Stacked PR to restore original names to placeholder nodes, replacing the default names arg0_1, arg1_1, ... This PR supports constant argument placehold...
diff --git a/test/export/test_export.py b/test/export/test_export.py index 4bc660b916..73795fe0eb 100644 --- a/test/export/test_export.py +++ b/test/export/test_export.py @@ -1262,10 +1262,6 @@ class TestExport(TestCase): } self._test_export_same_as_eager(kw_func, args, kwargs) - # TODO(pianpwk):...
9059affb97a4d69c2c6b82e274f2fefb7ab765b
Mon, 15 Apr 2024 09:42:10 -0700
[PATCH 0160/1000] Use packed metadata from triton to reduce launch latency (#123842)
https://github.com/openai/triton/pull/3633 converts some kernel launch metadata from a namedtuple to a regular tuple, which is faster to parse. Using it here shaves off a microsecond or so from the apparently extremely-sensitive launch path. Fixes #123597 Pull Request resolved: https://github.com/pytorch/pytorch/pul...
diff --git a/torch/_inductor/triton_heuristics.py b/torch/_inductor/triton_heuristics.py index 71fbdaa052..7a4ea40b5f 100644 --- a/torch/_inductor/triton_heuristics.py +++ b/torch/_inductor/triton_heuristics.py @@ -387,7 +387,9 @@ class CachingAutotuner(KernelInterface): "bin": binary, "launch...
2.41.0
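The namedtuple-vs-tuple distinction can be illustrated as follows (the field names are hypothetical, not Triton's actual metadata): plain tuple unpacking is a single operation, whereas namedtuple access performs an attribute lookup per field, which adds up on a latency path measured in microseconds.

```python
from collections import namedtuple

# Hypothetical metadata layout for illustration only.
KernelMetadata = namedtuple("KernelMetadata", ["num_warps", "num_stages", "shared"])

def parse_named(md):
    # One attribute lookup per field on the hot path.
    return md.num_warps, md.num_stages, md.shared

def parse_packed(md):
    # A single tuple unpack, which is cheaper to execute.
    num_warps, num_stages, shared = md
    return num_warps, num_stages, shared
```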
5b404b809f231c766e3015e728340730204424a
Mon, 15 Apr 2024 09:25:11 -0700
[PATCH 0161/1000] [inductor] Fix fresh_inductor_cache() (#122661)
Summary: Modify fresh_inductor_cache() to clear cached state before mocking the toplevel cache_dir directory. Any lru_caches (or otherwise) can use the @clear_on_fresh_inductor_cache decorator to register the cache for clearing. Also change the base inductor TestCase class to use fresh_inductor_cache(). Previously that...
diff --git a/test/inductor/test_codecache.py b/test/inductor/test_codecache.py index 89380bee53..04ab69debc 100644 --- a/test/inductor/test_codecache.py +++ b/test/inductor/test_codecache.py @@ -14,10 +14,12 @@ from torch._inductor.codecache import ( CUDACodeCache, FxGraphCachePickler, FxGraphHashDetails...
2.41.0
5a090fb56b5a9cd018caf1dc5169a3a2b5d7468
Mon, 15 Apr 2024 20:39:50 +0000
[PATCH 0163/1000] [CI] Update bazel deps (#124076)
- Update `WORKSPACE` to actually use Python-3.10 as job name claims it is - Get rid of unneeded `future` and `six` dependencies (Removed long time ago) - Update `requests`, `typing-extensions` and `setuptools` to the latest releases - Mark `tools/build/bazel/requirements.txt` as a generated file This also updates idna...
diff --git a/.gitattributes b/.gitattributes index 8bccf04bbb..e904301752 100644 --- a/.gitattributes +++ b/.gitattributes @@ -4,3 +4,4 @@ .github/generated-* linguist-generated=true .github/scripts/gql_mocks.json linguist-generated=true third_party/LICENSES_BUNDLED.txt linguist-generated=true +tools/build/bazel/req...
2.41.0
75f77784f01d7dfa6dc74a704f9d390bbde339e
Mon, 15 Apr 2024 18:05:22 +0000
[PATCH 0164/1000] Fix CUDA out of memory error message formatting (#123984)
We need a string instead of an integer here. With device 0, the string was getting NULL terminated leading to a truncated error message Pull Request resolved: https://github.com/pytorch/pytorch/pull/123984 Approved by: https://github.com/eqy, https://github.com/peterbell10
diff --git a/c10/cuda/CUDACachingAllocator.cpp b/c10/cuda/CUDACachingAllocator.cpp index 704a011c3e..c472e82ce2 100644 --- a/c10/cuda/CUDACachingAllocator.cpp +++ b/c10/cuda/CUDACachingAllocator.cpp @@ -1149,7 +1149,7 @@ class DeviceCachingAllocator { "CUDA out of memory. Tried to allocate ", form...
2.41.0
a52918e816879fe6cafbb1cc7c737119cfe6fad
Mon, 15 Apr 2024 10:31:01 -0700
[PATCH 0165/1000] [FSDP2] Generalized all-gather outputs to >1 per parameter (#119302)
This PR is part of the FSDP extensions work. For subclasses such as for QLoRA's `NF4Tensor` (using block-wise quantization) that have multiple inner tensors per parameter, we must generalize to allow each parameter to contribute >1 all-gather inputs and hence have >1 all-gather outputs. This PR does this generalizatio...
diff --git a/torch/distributed/_composable/fsdp/_fsdp_collectives.py b/torch/distributed/_composable/fsdp/_fsdp_collectives.py index feba4a606b..7a72ce0a13 100644 --- a/torch/distributed/_composable/fsdp/_fsdp_collectives.py +++ b/torch/distributed/_composable/fsdp/_fsdp_collectives.py @@ -15,7 +15,13 @@ class AllGathe...
1a0821e7ecc83bf06b6ad03a6038e5404ce0ec0
Mon, 15 Apr 2024 10:31:01 -0700
[PATCH 0166/1000] [FSDP2] Added pre/post-all-gather extensions (subclass) (#122908)
**Overview** This PR adds pre/post-all-gather extensions to FSDP2. - The pre/post-all-gather extensions are specified at the tensor-level on the `sharded_param._local_tensor` (i.e. the tensor wrapped by the sharded `DTensor`). If the user has a tensor-subclass parameter on the module passed to FSDP that preserves the s...
diff --git a/test/distributed/_composable/fsdp/test_fully_shard_extensions.py b/test/distributed/_composable/fsdp/test_fully_shard_extensions.py new file mode 100644 index 0000000000..655bff78f0 --- /dev/null +++ b/test/distributed/_composable/fsdp/test_fully_shard_extensions.py @@ -0,0 +1,235 @@ +# Owner(s): ["oncall:...
2.41.0
95a4d4a4227b9f17b71707945a97451f7fb62a5
Mon, 15 Apr 2024 10:31:02 -0700
[PATCH 0167/1000] [FSDP2] Added `mesh` arg to `fsdp_pre_all_gather` (#123953)
This PR adds a `mesh: DeviceMesh` argument to `fsdp_pre_all_gather()` so that the extension can know over which mesh the all-gather is happening. This can be useful in recovering the post-all-gather tensor size in the `fsdp_post_all_gather()` (e.g. for `NF4Tensor`). Pull Request resolved: https://github.com/pytorch/py...
diff --git a/test/distributed/_composable/fsdp/test_fully_shard_extensions.py b/test/distributed/_composable/fsdp/test_fully_shard_extensions.py index 655bff78f0..5ce0f272c3 100644 --- a/test/distributed/_composable/fsdp/test_fully_shard_extensions.py +++ b/test/distributed/_composable/fsdp/test_fully_shard_extensions....
2.41.0
aba918bd83c5524cf19672a7aeec96faa99997a
Mon, 15 Apr 2024 21:41:42 +0000
[PATCH 0168/1000] Support Accelerator OOM Error (#121200) (#121702)
Fixes #121200 This PR introduces AcceleratorOutOfMemoryError for all privateuse1 backends. For Python, there is a PyError object which will be set only when privateuse1 is registered. All privateuse1 backends can then use this error for memory errors. More error types may follow in the future. Pull Request resolved: https://...
diff --git a/test/test_public_bindings.py b/test/test_public_bindings.py index 2f9aad141e..65aa339aff 100644 --- a/test/test_public_bindings.py +++ b/test/test_public_bindings.py @@ -188,6 +188,7 @@ class TestPublicBindings(TestCase): "NumberType", "OperatorInfo", "OptionalType", ...
2.41.0
d222473fc69db8a1258973f94534a181c8d8681
Mon, 15 Apr 2024 21:44:26 +0000
[PATCH 0169/1000] [EZ][BE] Fix unknown pragma warning (#124086)
By using `C10_DIAGNOSTIC_` macros instead of `#pragma clang diagnostic`; the macros expand to the appropriate pragmas for the compiler in use. Fixes the following warning during the bazel build ``` INFO: From Compiling aten/src/ATen/native/TensorFactories.cpp: aten/src/ATen/native/TensorFactories.cpp:372: warning: ignoring #pragma clang diagno...
diff --git a/aten/src/ATen/native/TensorFactories.cpp b/aten/src/ATen/native/TensorFactories.cpp index 2416d015ee..c8fddc3756 100644 --- a/aten/src/ATen/native/TensorFactories.cpp +++ b/aten/src/ATen/native/TensorFactories.cpp @@ -369,10 +369,9 @@ Tensor& empty_out(IntArrayRef size, // Some scalar types in CAST_OP h...
2.41.0
0ad64e8a644d478228aa740e03f65d6153c4074
Mon, 15 Apr 2024 22:31:12 +0000
[PATCH 0171/1000] update docs for separate context and forward functions (#121955)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121955 Approved by: https://github.com/soulitzer
diff --git a/torch/autograd/function.py b/torch/autograd/function.py index a15ef9e88d..3ff96953b2 100644 --- a/torch/autograd/function.py +++ b/torch/autograd/function.py @@ -32,8 +32,8 @@ class FunctionCtx: def save_for_backward(self, *tensors: torch.Tensor): r"""Save given tensors for a future call to :...
2.41.0
c25b18d76feacc913851e513ce75956350bd5e8
Mon, 15 Apr 2024 09:08:11 -0700
[PATCH 0174/1000] Excise old custom ops prototype from custom_op_db (#124062)
Test Plan: - tests Pull Request resolved: https://github.com/pytorch/pytorch/pull/124062 Approved by: https://github.com/albanD ghstack dependencies: #123615
diff --git a/torch/testing/_internal/custom_op_db.py b/torch/testing/_internal/custom_op_db.py index f78777e5c5..7567d59269 100644 --- a/torch/testing/_internal/custom_op_db.py +++ b/torch/testing/_internal/custom_op_db.py @@ -16,7 +16,6 @@ from torch.testing._internal.autograd_function_db import ( from torch import T...
2.41.0
079c766892e873b5691185b3b76569bad030090
Tue, 16 Apr 2024 00:24:32 +0000
[PATCH 0177/1000] Fix Asynchronous PyTorch Profiler Trace (#124080)
Summary: With the merge of D55925068, we have introduced an overflow issue when recording a trace using dyno gputrace. This is because it is possible for TorchOPs to be enumerated but not have an end time since they were running as the recording ended. By default these events have an end time set to INT_MIN. When findi...
diff --git a/torch/csrc/profiler/collection.cpp b/torch/csrc/profiler/collection.cpp index 104657ec19..d64df0bd04 100644 --- a/torch/csrc/profiler/collection.cpp +++ b/torch/csrc/profiler/collection.cpp @@ -813,13 +813,18 @@ void passEventsToKineto( // Generate Kineto events for each event recorded by the PyTorch pr...
2.41.0
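Clamping sentinel end times so duration arithmetic stays sane can be sketched like this (a simplified Python analogue of the C++ fix; the sentinel value and field names are assumptions):

```python
INT_MIN = -(2**63)  # stand-in for the C++ INT_MIN-style sentinel

def effective_end_time(start_us: int, end_us: int, trace_end_us: int) -> int:
    """Events still running when recording stops carry a sentinel end time;
    clamp them to the end of the trace so `end - start` cannot overflow
    or go negative."""
    if end_us == INT_MIN or end_us < start_us:
        return trace_end_us
    return end_us
```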
2596fd3e0371c3568315bcabf9664157e06c457
Tue, 16 Apr 2024 00:42:18 +0000
[PATCH 0178/1000] [Distributed] [4/N] Fix clang-tidy warnings in torch/csrc/distributed/c10d (#124032)
This PR continues to fix some clang-tidy warnings in distributed/c10d code, following https://github.com/pytorch/pytorch/pull/123312. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124032 Approved by: https://github.com/Skylion007
diff --git a/torch/csrc/distributed/c10d/FileStore.cpp b/torch/csrc/distributed/c10d/FileStore.cpp index 98b90297a6..df74adeb6d 100644 --- a/torch/csrc/distributed/c10d/FileStore.cpp +++ b/torch/csrc/distributed/c10d/FileStore.cpp @@ -90,6 +90,7 @@ class Lock { flock(operation); } + // NOLINTNEXTLINE(bugpron...
2.41.0
aad0554b4d5e64f828eaceeaf5d74d8348caed4
Mon, 15 Apr 2024 23:17:18 +0200
[PATCH 0179/1000] [Inductor] Fix endless recursion in codecache.DLLWrapper.__getattr__ (#123931)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123931 Approved by: https://github.com/peterbell10
diff --git a/torch/_inductor/codecache.py b/torch/_inductor/codecache.py index 4e84838504..ede59ab828 100644 --- a/torch/_inductor/codecache.py +++ b/torch/_inductor/codecache.py @@ -2569,6 +2569,7 @@ class DLLWrapper: lib_path: str, ): self.lib_path = lib_path + self.is_open = False ...
2.41.0
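The recursion hazard and its fix can be illustrated with a stripped-down wrapper (a sketch modeled on the described change; the real class loads the shared library via ctypes):

```python
class DLLWrapper:
    """Sketch of the hazard: __getattr__ must not read attributes that may
    not exist yet, or the lookup re-enters __getattr__ forever. Setting
    `is_open` first, and guarding on it via __dict__, breaks the cycle."""

    def __init__(self, lib_path):
        self.is_open = False   # set FIRST, before any other attribute
        self.lib_path = lib_path
        self.DLL = None        # stand-in for ctypes.CDLL(lib_path)
        self.is_open = True

    def __getattr__(self, name):
        # Called only for attributes not found normally. Reading via
        # __dict__ avoids re-entering __getattr__ on a half-built instance.
        if not self.__dict__.get("is_open", False):
            raise AttributeError(name)
        return getattr(self.DLL, name)
```

A lookup of a missing symbol now raises `AttributeError` instead of `RecursionError`.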
cf62e86a47ee575b5fbb997fd00f60ef0163130
Tue, 16 Apr 2024 01:26:22 +0000
[PATCH 0180/1000] skip various unit tests for Jetson (#122531)
skip multiprocessing, cuda expandable segments, mem eff and flash attention tests on Jetson due to hanging / sigkill issues from nvidia internal testing Pull Request resolved: https://github.com/pytorch/pytorch/pull/122531 Approved by: https://github.com/eqy, https://github.com/malfet
diff --git a/test/test_cuda_expandable_segments.py b/test/test_cuda_expandable_segments.py index 123d2d2fe8..8b634b774e 100644 --- a/test/test_cuda_expandable_segments.py +++ b/test/test_cuda_expandable_segments.py @@ -4,9 +4,11 @@ import os import torch -if torch.cuda.is_available(): +from torch.testing._internal....
2.41.0
eb626b46a4d606d6900aec8c690feb8a9d6e88b
Tue, 16 Apr 2024 01:39:42 +0000
[PATCH 0181/1000] [BE] Do not use `using namespace` in mps headers (#124117)
- Remove `using namespace std` from `MPSDevice.h` - Add `std::` prefix to 1st argument of `MPSProfiler::StartTrace` - Do the same in front of `numeric_limits` template instantiation in `ReduceOps.mm` Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com> Pull Request resolved: https://github.com/pytorc...
diff --git a/aten/src/ATen/mps/MPSDevice.h b/aten/src/ATen/mps/MPSDevice.h index 40ab070772..084820ab42 100644 --- a/aten/src/ATen/mps/MPSDevice.h +++ b/aten/src/ATen/mps/MPSDevice.h @@ -22,8 +22,6 @@ typedef void* MTLComputePipelineState_t; typedef void* MTLLibrary_t; #endif -using namespace std; - namespace at::...
2.41.0
4c8002ee0714db7d19d98dc0c84108521c5ce6f
Tue, 16 Apr 2024 02:02:37 +0000
[PATCH 0182/1000] MPS FFT implementation bug (#123274)
The current implementation drops the negative frequency components even when the user doesn't ask for the one-sided transform. The tests for the negative frequency components seem to have worked by accident due to internal implementation details, but the issue becomes evident in macOS 14.4. Co-authored-by: Nikita Shulga <24...
diff --git a/aten/src/ATen/native/mps/operations/FastFourierTransform.mm b/aten/src/ATen/native/mps/operations/FastFourierTransform.mm index 697ab12d42..21fb75bb21 100644 --- a/aten/src/ATen/native/mps/operations/FastFourierTransform.mm +++ b/aten/src/ATen/native/mps/operations/FastFourierTransform.mm @@ -95,10 +95,23 ...
2.41.0
f1e3ff5a503a520c1a310c8e72a383657f9a4bc
Tue, 16 Apr 2024 02:20:58 +0000
[PATCH 0183/1000] [reland] `_foreach_copy` with different src/dst dtypes (#123844)
Attempt to reland https://github.com/pytorch/pytorch/pull/121717. The change is the array bounds check. Pull Request resolved: https://github.com/pytorch/pytorch/pull/123844 Approved by: https://github.com/janeyx99
diff --git a/aten/src/ATen/native/ForeachUtils.h b/aten/src/ATen/native/ForeachUtils.h index 9c22c35ee9..d7a1449463 100644 --- a/aten/src/ATen/native/ForeachUtils.h +++ b/aten/src/ATen/native/ForeachUtils.h @@ -102,12 +102,13 @@ inline void check_foreach_api_restrictions( // corresponding tensors (aligning in index ac...
2.41.0
babf00014d160b29fea4f00435754e677af33a3
Mon, 15 Apr 2024 10:56:16 -0700
[PATCH 0184/1000] [inductor] Bypass FX graph cache when we have HigherOrderOperators (#123325)
Summary: The initial motivation was to avoid caching when we have triton higher order ops, but it's probably safer to avoid the cache for all higher order ops and allow/implement if/when we find it necessary. Test Plan: Unit test cribbed from: https://docs-preview.pytorch.org/pytorch/tutorials/2783/recipes/torch_compi...
diff --git a/test/inductor/test_codecache.py b/test/inductor/test_codecache.py index 04ab69debc..e8ace5cd86 100644 --- a/test/inductor/test_codecache.py +++ b/test/inductor/test_codecache.py @@ -37,6 +37,10 @@ from torch.utils._triton import has_triton HAS_TRITON = has_triton() +if HAS_TRITON: + import triton +...
2.41.0
60efaa471f5b53f45ff67cc18fc67e02ea41b3a
Mon, 15 Apr 2024 17:32:39 +0800
[PATCH 0186/1000] Part 1: UFMT partial files in torch/optim due to the pr-sanity-checks (#124053)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124053 Approved by: https://github.com/ezyang ghstack dependencies: #124048
diff --git a/.lintrunner.toml b/.lintrunner.toml index 73e225e9ee..d94ce427b7 100644 --- a/.lintrunner.toml +++ b/.lintrunner.toml @@ -2112,16 +2112,6 @@ exclude_patterns = [ 'torch/nn/utils/rnn.py', 'torch/nn/utils/spectral_norm.py', 'torch/nn/utils/weight_norm.py', - 'torch/optim/__init__.py', - ...
2.41.0
c74a6783b6dc92b74d8d49dce951b469764abd2
Mon, 15 Apr 2024 17:32:40 +0800
[PATCH 0187/1000] Part 2: UFMT fix 2 files in torch/optim due to the pr-sanity-checks (#124054)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124054 Approved by: https://github.com/ezyang ghstack dependencies: #124048, #124053
diff --git a/.lintrunner.toml b/.lintrunner.toml index d94ce427b7..9ddc5def54 100644 --- a/.lintrunner.toml +++ b/.lintrunner.toml @@ -2112,8 +2112,6 @@ exclude_patterns = [ 'torch/nn/utils/rnn.py', 'torch/nn/utils/spectral_norm.py', 'torch/nn/utils/weight_norm.py', - 'torch/optim/lr_scheduler.py', - ...
2.41.0
91e5db7055337a9ebb9d817af099588e1976847
Mon, 15 Apr 2024 17:32:41 +0800
[PATCH 0188/1000] Part 3: UFMT fix the rest files in torch/optim due to the pr-sanity-checks (#124055)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124055 Approved by: https://github.com/ezyang ghstack dependencies: #124048, #124053, #124054
diff --git a/.lintrunner.toml b/.lintrunner.toml index 9ddc5def54..e29854e0d7 100644 --- a/.lintrunner.toml +++ b/.lintrunner.toml @@ -2112,13 +2112,6 @@ exclude_patterns = [ 'torch/nn/utils/rnn.py', 'torch/nn/utils/spectral_norm.py', 'torch/nn/utils/weight_norm.py', - 'torch/optim/optimizer.py', - ...
2.41.0
4efa311f1c692813befc45142d668f35a66392b
Mon, 15 Apr 2024 14:59:51 -0700
[PATCH 0189/1000] Refactor test_tensor_set_data to be parametrized (#124105)
Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/124105 Approved by: https://github.com/albanD
diff --git a/test/dynamo/test_repros.py b/test/dynamo/test_repros.py index b5ae3c9c83..0f8561f6fb 100644 --- a/test/dynamo/test_repros.py +++ b/test/dynamo/test_repros.py @@ -38,6 +38,8 @@ from torch.nn import functional as F from torch.testing._internal.common_cuda import PLATFORM_SUPPORTS_FLASH_ATTENTION from torch...
2.41.0
f5829d0babaefc6e271897d6fffd40073d8b723
Mon, 15 Apr 2024 17:04:16 -0700
[PATCH 0191/1000] [inductor] let rand_strided support fp8 (#124120)
I'm working on https://fb.workplace.com/groups/1075192433118967/posts/1411161629522044/ (this is a Meta-internal link about an inefficient inner/persistent reduction kernel generated by inductor). I found the generated benchmark code for a kernel ( https://gist.github.com/shunting314/13a0105f72a1c54d9c220370c7fd3845 ) c...
diff --git a/test/inductor/test_torchinductor.py b/test/inductor/test_torchinductor.py index c477e36b84..786428e690 100644 --- a/test/inductor/test_torchinductor.py +++ b/test/inductor/test_torchinductor.py @@ -9558,6 +9558,17 @@ class CommonTemplate: t = rand_strided((8, 1500, 1), (1504, 1, 1), device=self.de...
2.41.0
3ef3bb12831f7e68f65276d5ba240db508db6e9
Tue, 16 Apr 2024 04:35:25 +0000
[PATCH 0192/1000] Fix AVX512 int4pack_mm_kernel crash if weights are unaligned (#124128)
By replacing `_mm256_load_si256` with `_mm256_loadu_si256`, as there is no guarantee that the tensor is aligned. Fixes the crash reported in https://github.com/pytorch/pytorch/issues/124034, though I'm unsure about the perf implications when tensors are properly aligned. Pull Request resolved: https://github.com/pytorch/pytor...
diff --git a/aten/src/ATen/native/cpu/int4mm_kernel.cpp b/aten/src/ATen/native/cpu/int4mm_kernel.cpp index 436cd16c99..1e7a7c0bcb 100644 --- a/aten/src/ATen/native/cpu/int4mm_kernel.cpp +++ b/aten/src/ATen/native/cpu/int4mm_kernel.cpp @@ -139,7 +139,7 @@ inline void tinygemm_kernel( // when BLOCK_N = 64, handl...
2.41.0
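The fix above swaps an aligned 256-bit load for its unaligned counterpart. As a hedged pure-Python sketch (the real code is C++ AVX2 intrinsics in `int4mm_kernel.cpp`), the distinction comes down to whether an address satisfies the 32-byte alignment that `_mm256_load_si256` requires; the unaligned variant tolerates any address, which is why it is the safe choice when the tensor's storage alignment is not guaranteed:

```python
def pick_avx2_load(addr: int) -> str:
    """Return which 256-bit load intrinsic is safe for a given address.

    _mm256_load_si256 requires 32-byte alignment and faults otherwise;
    _mm256_loadu_si256 tolerates any address. The intrinsic names are
    real; the address arithmetic here is purely illustrative.
    """
    if addr % 32 == 0:
        return "_mm256_load_si256"   # aligned load is safe here
    return "_mm256_loadu_si256"      # must fall back to unaligned load

# A tensor's storage is not guaranteed to be 32-byte aligned, so the
# kernel must use the unaligned variant unconditionally.
assert pick_avx2_load(64) == "_mm256_load_si256"
assert pick_avx2_load(68) == "_mm256_loadu_si256"
```

On modern x86 the unaligned load costs essentially nothing extra when the data happens to be aligned, which is consistent with the commit's note that the perf impact is expected to be small.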
bef127c2ea49280e7fda4f9fa7cad6fa4078e7d
Tue, 16 Apr 2024 04:39:20 +0000
[PATCH 0193/1000] [Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449)
This PR is the beginning of attempts to wrap thread-unsafe getenv and set_env functions inside a RW mutex. Pull Request resolved: https://github.com/pytorch/pytorch/pull/119449 Approved by: https://github.com/albanD
diff --git a/c10/core/impl/alloc_cpu.cpp b/c10/core/impl/alloc_cpu.cpp index 9b7ae22f9f..def4c3a3a9 100644 --- a/c10/core/impl/alloc_cpu.cpp +++ b/c10/core/impl/alloc_cpu.cpp @@ -3,6 +3,7 @@ #include <c10/core/alignment.h> #include <c10/util/Flags.h> #include <c10/util/Logging.h> +#include <c10/util/env.h> #include...
2.41.0
530c5a85d1b4621d61e49336894cf667d698c6a
Tue, 16 Apr 2024 05:38:20 +0000
[PATCH 0194/1000] [DOC] Fix example and typo (#123959)
Fixes #123554 and fixes #123053 Pull Request resolved: https://github.com/pytorch/pytorch/pull/123959 Approved by: https://github.com/mikaylagawarecki
diff --git a/torch/nn/modules/activation.py b/torch/nn/modules/activation.py index 2302ec5ea5..bf15c3342d 100644 --- a/torch/nn/modules/activation.py +++ b/torch/nn/modules/activation.py @@ -1333,7 +1333,7 @@ class PReLU(Module): .. math:: \text{PReLU}(x) = \begin{cases} - x, & \text{ if }...
2.41.0
eab740db3e158eb7c8529684d588c9fb6b1aacb
Tue, 16 Apr 2024 05:51:53 +0000
[PATCH 0195/1000] [Docs][Distributed] Add migration notes for `--local-rank` option style change for `torchrun` in PyTorch 2.0 (#109480)
Fixes https://github.com/pytorch/pytorch/pull/94505#issuecomment-1722777767 Pull Request resolved: https://github.com/pytorch/pytorch/pull/109480 Approved by: https://github.com/ezyang
diff --git a/torch/distributed/launch.py b/torch/distributed/launch.py index 5dbeeb998c..c95804b8e8 100644 --- a/torch/distributed/launch.py +++ b/torch/distributed/launch.py @@ -83,7 +83,7 @@ Parsing the local_rank argument >>> # xdoctest: +SKIP >>> import argparse >>> parser = argparse.ArgumentParser()...
2.41.0
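The migration note above concerns `torchrun` switching from the `--local_rank` to the `--local-rank` option style in PyTorch 2.0. A common way to sidestep the change, sketched below under the assumption that the launcher exports `LOCAL_RANK` (which `torchrun` does), is to accept both spellings and fall back to the environment variable:

```python
import argparse
import os

def get_local_rank(argv=None) -> int:
    # Accept both the pre-2.0 "--local_rank" and the 2.0 "--local-rank"
    # spellings; argparse maps both option strings to the same dest.
    parser = argparse.ArgumentParser()
    parser.add_argument("--local-rank", "--local_rank", type=int,
                        dest="local_rank", default=None)
    args, _ = parser.parse_known_args(argv)
    if args.local_rank is not None:
        return args.local_rank
    # torchrun also exports LOCAL_RANK, which avoids the flag entirely
    # and is the recommended migration path.
    return int(os.environ.get("LOCAL_RANK", 0))

assert get_local_rank(["--local-rank", "3"]) == 3
assert get_local_rank(["--local_rank", "2"]) == 2
```

Reading `LOCAL_RANK` from the environment is the forward-compatible option, since it works under both `torch.distributed.launch` and `torchrun`.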
e48f7b0443451b291cc527e1473699deec3bc54
Sat, 13 Apr 2024 13:22:48 +0000
[PATCH 0196/1000] [pytree] add `tree_iter` function (#123913)
- Add a new `tree_iter` function. - Bump `optree` version to `0.11.0` for C++ version of `tree_iter`. This PR is split from #120300. - #120300 Pull Request resolved: https://github.com/pytorch/pytorch/pull/123913 Approved by: https://github.com/zou3519
diff --git a/.ci/docker/requirements-ci.txt b/.ci/docker/requirements-ci.txt index 0b5a7d4ff1..a636d70e32 100644 --- a/.ci/docker/requirements-ci.txt +++ b/.ci/docker/requirements-ci.txt @@ -134,9 +134,9 @@ opt-einsum==3.3 #Pinned versions: 3.3 #test that import: test_linalg.py -optree==0.9.1 +optree==0.11.0 #Desc...
2.41.0
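`tree_iter` yields the leaves of a pytree lazily instead of materializing a full list. A minimal pure-Python sketch of the semantics (the real implementation handles arbitrary registered pytree node types and is backed by `optree`'s C++ `tree_iter`):

```python
def tree_iter(tree):
    """Lazily yield the leaves of a nested container of lists/tuples/dicts.

    Simplified sketch: real pytrees support arbitrary registered node
    types; dicts here are traversed in insertion order.
    """
    if isinstance(tree, (list, tuple)):
        for child in tree:
            yield from tree_iter(child)
    elif isinstance(tree, dict):
        for child in tree.values():
            yield from tree_iter(child)
    else:
        yield tree  # a leaf

leaves = list(tree_iter({"a": [1, (2, 3)], "b": 4}))
assert leaves == [1, 2, 3, 4]
```

Because it is a generator, callers can stop early (e.g. `next(tree_iter(tree))` for the first leaf) without flattening the whole structure.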
2be63eb2cb3947c00a8c4506eb961603da7d564
Tue, 16 Apr 2024 06:33:21 +0000
[PATCH 0197/1000] Revert "Enable UFMT on all of `test/distributed` (#123539)"
This reverts commit 89ac37fe919997e844f0baa6e28965d0d52b0682. Reverted https://github.com/pytorch/pytorch/pull/123539 on behalf of https://github.com/DanilBaibak due to Broken trunk ([comment](https://github.com/pytorch/pytorch/pull/123539#issuecomment-2058329471))
diff --git a/.lintrunner.toml b/.lintrunner.toml index 9e83a8b96e..817d35f34f 100644 --- a/.lintrunner.toml +++ b/.lintrunner.toml @@ -1015,6 +1015,134 @@ exclude_patterns = [ 'test/_nvfuser/test_python_frontend.py', 'test/_nvfuser/test_torchscript.py', 'test/delete.py', + 'test/distributed/_shard/sha...
2.41.0
b0c768c5b4d6c76c77f287ed5e4b6c7be43d1a9
Mon, 15 Apr 2024 15:39:05 -0700
[PATCH 0199/1000] [dynamo][refactor] Move LazyGraphModule handling (#124113)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124113 Approved by: https://github.com/jansel ghstack dependencies: #124078
diff --git a/torch/_dynamo/eval_frame.py b/torch/_dynamo/eval_frame.py index 52b610be89..20f8675c75 100644 --- a/torch/_dynamo/eval_frame.py +++ b/torch/_dynamo/eval_frame.py @@ -315,32 +315,8 @@ class _TorchDynamoContext: fn = innermost_fn(fn) # add context containing GraphModule to any GraphModule...
2.41.0
b3594f90e77f3e3855f6c41c3d3a916c9a07b81
Mon, 15 Apr 2024 17:40:53 -0700
[PATCH 0201/1000] [dynamo] fix call_finally issue in Python 3.8 (#124122)
Fix https://github.com/pytorch/pytorch/issues/97811 again... Pull Request resolved: https://github.com/pytorch/pytorch/pull/124122 Approved by: https://github.com/jansel
diff --git a/test/dynamo/test_repros.py b/test/dynamo/test_repros.py index 0f8561f6fb..899cebcac7 100644 --- a/test/dynamo/test_repros.py +++ b/test/dynamo/test_repros.py @@ -3875,6 +3875,16 @@ class ReproTests(torch._dynamo.test_case.TestCase): make_fn(None)() + def test_call_finally_python_3_8_2(self)...
2.41.0
309580d69a1fbbe73785785d011d6cf44113103
Mon, 15 Apr 2024 17:49:52 -0700
[PATCH 0202/1000] [dynamo, 3.12] handle possibility of NULL local variables during graph breaks (#124095)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124095 Approved by: https://github.com/jansel
diff --git a/test/dynamo/test_misc.py b/test/dynamo/test_misc.py index 9457c63f8c..14f43ec37d 100644 --- a/test/dynamo/test_misc.py +++ b/test/dynamo/test_misc.py @@ -10161,6 +10161,17 @@ fn opt_fn = torch.compile(fn, backend="eager") opt_fn(torch.randn(3, 3)) + def test_load_fast_and_clear_graph...
2.41.0
e17f62d103bdb6683f9b93b19ca60751192c45a
Mon, 15 Apr 2024 17:49:53 -0700
[PATCH 0203/1000] [dynamo, 3.12] move functorch/test_aotdispatch.py::TestAOTAutograd::test_view_detach from dynamo xfail to skip (#124100)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124100 Approved by: https://github.com/zou3519, https://github.com/jansel ghstack dependencies: #124095
diff --git a/test/dynamo_expected_failures/TestAOTAutograd.test_view_detach b/test/dynamo_skips/TestAOTAutograd.test_view_detach similarity index 100% rename from test/dynamo_expected_failures/TestAOTAutograd.test_view_detach rename to test/dynamo_skips/TestAOTAutograd.test_view_detach
2.41.0
62096bce6715a51bd1dd3f006920be8e7ca6cbc
Mon, 15 Apr 2024 17:49:59 -0700
[PATCH 0204/1000] [dynamo, 3.12] skip some failing profiler dynamo-wrapped tests (#124124)
The dynamo wrapped tests and normal tests give the same results locally. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124124 Approved by: https://github.com/jansel, https://github.com/aaronenyeshi ghstack dependencies: #124095, #124100
diff --git a/test/dynamo_skips/TestExperimentalUtils.test_profiler_for_loop_indexing_pattern b/test/dynamo_skips/TestExperimentalUtils.test_profiler_for_loop_indexing_pattern new file mode 100644 index 0000000000..e69de29bb2 diff --git a/test/dynamo_skips/TestProfiler.test_tensorboard_trace_handler b/test/dynamo_skips/...
2.41.0
dc160864b3aac3c845bd7e2fa565899ab4732f6
Mon, 15 Apr 2024 17:49:59 -0700
[PATCH 0205/1000] [dynamo, 3.12] enable dynamo-wrapped tests in CI (#123307)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/123307 Approved by: https://github.com/jansel, https://github.com/malfet ghstack dependencies: #124095, #124100, #124124
diff --git a/.github/workflows/pull.yml b/.github/workflows/pull.yml index d287b78d70..8f54248101 100644 --- a/.github/workflows/pull.yml +++ b/.github/workflows/pull.yml @@ -215,6 +215,9 @@ jobs: { config: "default", shard: 1, num_shards: 3, runner: "linux.2xlarge" }, { config: "default", shard: ...
2.41.0
f6ce45bcbd7026c00da43db0317ede10830378b
Tue, 16 Apr 2024 11:07:12 +0000
[PATCH 0206/1000] [Inductor] handle AMD special launch options (#124146)
Summary: `matrix_instr_nonkdim` and `waves_per_eu` are AMD-specific launch configs that can't be treated as fn input args Test Plan: HIP_VISIBLE_DEVICES=7 numactl --cpunodebind=1 --membind=1 buck2 run mode/{opt,amd-gpu} -c fbcode.triton_backend=amd -c fbcode.enable_gpu_sections=true -c fbcode.rocm_arch=mi300 //hammer/...
diff --git a/torch/_inductor/triton_heuristics.py b/torch/_inductor/triton_heuristics.py index 7a4ea40b5f..aae26554c9 100644 --- a/torch/_inductor/triton_heuristics.py +++ b/torch/_inductor/triton_heuristics.py @@ -300,6 +300,13 @@ class CachingAutotuner(KernelInterface): """Ahead of time compile a given autot...
2.41.0
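The fix separates the AMD-only launch configs (`matrix_instr_nonkdim`, `waves_per_eu`) from the arguments actually passed to the kernel function. A hedged sketch of that filtering (the helper name is illustrative, not the real `triton_heuristics` code):

```python
# Launch-config keys that steer the AMD backend but are not kernel args.
AMD_LAUNCH_ONLY = {"matrix_instr_nonkdim", "waves_per_eu"}

def split_launch_config(config: dict):
    """Split a tuning config into (kernel kwargs, backend launch options).

    Illustrative helper: the real CachingAutotuner pops these keys before
    building the kernel call, since passing them as fn input args fails.
    """
    launch_opts = {k: v for k, v in config.items() if k in AMD_LAUNCH_ONLY}
    kernel_kwargs = {k: v for k, v in config.items() if k not in AMD_LAUNCH_ONLY}
    return kernel_kwargs, launch_opts

kwargs, opts = split_launch_config(
    {"BLOCK_M": 64, "waves_per_eu": 2, "matrix_instr_nonkdim": 16}
)
assert kwargs == {"BLOCK_M": 64}
assert opts == {"waves_per_eu": 2, "matrix_instr_nonkdim": 16}
```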
7bd43b5106bed6365f1986229d7c32079653e76
Tue, 16 Apr 2024 00:12:17 -0700
[PATCH 0208/1000] [compiled autograd][dynamo] use aliases for stack restore when partial graphs steal inputs (#124127)
same idea as https://github.com/pytorch/pytorch/pull/123359, but for when we restore stack variables after calling a partial graph: Illustrated by the test case: before: ```python def function(inputs): graph_out_0 = __compiled_fn_2(inputs) getitem_1 = graph_out_0[0] add = inputs[1] <---- error inputs is already clea...
diff --git a/test/inductor/test_compiled_autograd.py b/test/inductor/test_compiled_autograd.py index 7992ce8c40..27c1ccd3e7 100644 --- a/test/inductor/test_compiled_autograd.py +++ b/test/inductor/test_compiled_autograd.py @@ -257,7 +257,7 @@ main() self.assertNotEqual(grads[1], None) self.assertNotEq...
2.41.0
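The bug the commit message illustrates: a partial graph steals (clears) its input list for eager memory reuse, so reading `inputs[1]` after the call fails; the fix aliases the needed element into a local before the call. A self-contained sketch of both shapes:

```python
def compiled_fn(inputs):
    # Mimics __compiled_fn_2 stealing its inputs: it consumes the list
    # and clears it so the underlying tensors can be freed eagerly.
    out = [x * 2 for x in inputs]
    inputs.clear()
    return out

def broken(inputs):
    graph_out = compiled_fn(inputs)
    return graph_out[0] + inputs[1]  # inputs already cleared -> IndexError

def fixed(inputs):
    inputs_1 = inputs[1]             # alias before the graph steals the list
    graph_out = compiled_fn(inputs)
    return graph_out[0] + inputs_1

try:
    broken([1, 2])
    raised = False
except IndexError:
    raised = True
assert raised
assert fixed([1, 2]) == 4  # 1*2 + 2
```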
2271fb07ef88c33a5cb928eec6d8af8bf365493
Tue, 16 Apr 2024 18:49:52 +0000
[PATCH 0209/1000] Add NEON ISA support on aarch64 (#123584)
Fixes #104729 This improves the compiled mode performance of Softmax (by 20%) and other operations (like batchnorm) that invoke the reduce_all function. Thereby also improves BERT inference by around 8%. Tested on a graviton 3 instance (c7g.4xl). Tests were run in a single-threaded manner. Script attached below. Com...
diff --git a/aten/src/ATen/cpu/vec/functional_base.h b/aten/src/ATen/cpu/vec/functional_base.h index 3b183ad965..48d44dc42c 100644 --- a/aten/src/ATen/cpu/vec/functional_base.h +++ b/aten/src/ATen/cpu/vec/functional_base.h @@ -78,6 +78,35 @@ struct VecReduceAllSIMD<float, Op> { #endif // defined(CPU_CAPABILITY_AVX512)...
2.41.0
2e22bb4448402ceef5146451863fbebbbe653ec
Tue, 16 Apr 2024 20:08:07 +0000
[PATCH 0210/1000] [nccl-pg] Pass pg name and desc to NCCL communicator (#124149)
Summary: Pass Process Group Name and Desc to NCCL communicator in order to access pg information in NCCL layer. The information is passed as commDesc string(i.e. "<pg_desc>:<pg_name>") Function only valid when NCCL_COMM_DESCRIPTION is defined. Differential Revision: D55703310 Pull Request resolved: https://github.com...
diff --git a/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp b/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp index 96c6ce2e34..29e4616be9 100644 --- a/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp +++ b/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp @@ -1923,6 +1923,12 @@ std::shared_ptr<NCCLComm> ProcessGroup...
2.41.0
f89bf4188e35e8596252d8b75c7672394073b4a
Tue, 16 Apr 2024 20:23:14 +0000
[PATCH 0211/1000] Revert "[reland] `_foreach_copy` with different src/dst dtypes (#123844)"
This reverts commit ff1e3ff5a503a520c1a310c8e72a383657f9a4bc. Reverted https://github.com/pytorch/pytorch/pull/123844 on behalf of https://github.com/malfet due to Perhaps it enabled it for different dtype, but broke for the same ([comment](https://github.com/pytorch/pytorch/pull/123844#issuecomment-2059861767))
diff --git a/aten/src/ATen/native/ForeachUtils.h b/aten/src/ATen/native/ForeachUtils.h index d7a1449463..9c22c35ee9 100644 --- a/aten/src/ATen/native/ForeachUtils.h +++ b/aten/src/ATen/native/ForeachUtils.h @@ -102,13 +102,12 @@ inline void check_foreach_api_restrictions( // corresponding tensors (aligning in index ac...
2.41.0
6a25cc0db0a47aec560f732a2228c319ca8a589
Tue, 16 Apr 2024 10:22:34 -0700
[PATCH 0214/1000] [DCP] Adds support for non-primitives in async_save by deep copying during cpu offloading (#123941)
Adds support for non-primitives in async_save by deep copying during cpu offloading. If users are not type checking, the expectation in async is likely that the object is copied. Differential Revision: [D56065237](https://our.internmc.facebook.com/intern/diff/D56065237/) Pull Request resolved: https://github.com/pyto...
diff --git a/torch/distributed/_state_dict_utils.py b/torch/distributed/_state_dict_utils.py index 074bd5375a..b69be53bb2 100644 --- a/torch/distributed/_state_dict_utils.py +++ b/torch/distributed/_state_dict_utils.py @@ -1,3 +1,4 @@ +import copy import io import math from typing import Any, Callable, Dict, Optiona...
2.41.0
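Deep copying during CPU offload gives the async save a stable snapshot: if the caller mutates a non-primitive object after `async_save` returns, the copy is unaffected. A minimal sketch of that invariant with plain Python objects (the real code offloads tensors in `_state_dict_utils.py`; the helper name here is illustrative):

```python
import copy

def offload_state_dict(state_dict: dict) -> dict:
    """Snapshot a state dict for async saving.

    Sketch: the real implementation moves tensors to CPU; non-primitive
    Python objects are deep-copied so later mutation by the trainer does
    not race with the background save.
    """
    return copy.deepcopy(state_dict)

live = {"step": 3, "schedule": {"milestones": [10, 20]}}
snapshot = offload_state_dict(live)
live["schedule"]["milestones"].append(30)  # training continues mutating

assert snapshot["schedule"]["milestones"] == [10, 20]
```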
06c4f1367c7eb4a49aa4a9538dd2b1eb92485d6
Tue, 16 Apr 2024 21:18:18 +0000
[PATCH 0215/1000] [PT] [ST] fix test_sharded_tensor (#124103)
Summary: https://github.com/pytorch/pytorch/pull/123230 formalizes the rank validation to support sub groups. It broke a few UTs, some of which got fixed in https://github.com/pytorch/pytorch/pull/123778 This is to fix the remaining one reported by DanilBaibak Test Plan: CI Differential Revision: D56155076 Pull Re...
diff --git a/test/distributed/_shard/sharded_tensor/test_sharded_tensor.py b/test/distributed/_shard/sharded_tensor/test_sharded_tensor.py index e0a71a06d6..da894062ac 100644 --- a/test/distributed/_shard/sharded_tensor/test_sharded_tensor.py +++ b/test/distributed/_shard/sharded_tensor/test_sharded_tensor.py @@ -840,8...
2.41.0
d88339b535f57cd0e2926c9ac4c2542e4490aac
Tue, 16 Apr 2024 22:08:24 +0000
[PATCH 0217/1000] Revert "make sure dynamo doesn't inline DTensor __new__ or __torch_dispatch__ (#123347)"
This reverts commit 63dcb5b0f2ef3578e81841fd8a2166e732c0ca99. Reverted https://github.com/pytorch/pytorch/pull/123347 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/123347#issuecomment-2059994989))
diff --git a/test/distributed/_tensor/test_dtensor_compile.py b/test/distributed/_tensor/test_dtensor_compile.py index 5f98050e83..f9ad0278d7 100644 --- a/test/distributed/_tensor/test_dtensor_compile.py +++ b/test/distributed/_tensor/test_dtensor_compile.py @@ -191,47 +191,6 @@ class TestDTensorCompile(torch._dynamo.t...
2.41.0
d7ac7baa0685557906c730202a96d554b7fbae5
Thu, 11 Apr 2024 17:35:39 -0700
[PATCH 0219/1000] [dtensor][3/N] add torchrec row-wise uneven sharding example (#121392)
**Test** `torchrun --standalone --nnodes=1 --nproc-per-node=4 torch/distributed/_tensor/examples/torchrec_sharding_example.py -e row-wise-uneven` Pull Request resolved: https://github.com/pytorch/pytorch/pull/121392 Approved by: https://github.com/wanchaol ghstack dependencies: #120265
diff --git a/torch/distributed/_tensor/examples/torchrec_sharding_example.py b/torch/distributed/_tensor/examples/torchrec_sharding_example.py index 7e2b249b85..e64a9f215d 100644 --- a/torch/distributed/_tensor/examples/torchrec_sharding_example.py +++ b/torch/distributed/_tensor/examples/torchrec_sharding_example.py @...
2.41.0
419fcd19f30351b2fecfd476445a92be291d808
Thu, 11 Apr 2024 17:35:39 -0700
[PATCH 0220/1000] [dtensor][4/N] have row-wise sharding always use LocalShardsWrapper (#122843)
**Summary** Always wrap the local tensor into a `LocalShardsWrapper`. This is for uniformity, and it eases the adoption of DTensor as a wrapper for local shard(s) representation. To support more tensor ops over `LocalShardsWrapper`, users need to extend its `__torch_dispatch__`. **Test** `torchrun --standalone -...
diff --git a/torch/distributed/_tensor/examples/torchrec_sharding_example.py b/torch/distributed/_tensor/examples/torchrec_sharding_example.py index e64a9f215d..523e874bbe 100644 --- a/torch/distributed/_tensor/examples/torchrec_sharding_example.py +++ b/torch/distributed/_tensor/examples/torchrec_sharding_example.py @...
2.41.0
e90e93a78e07618279b3521d265ad157b3fb742
Tue, 16 Apr 2024 11:16:47 -0700
[PATCH 0222/1000] [inductor] disable comprehensive padding in fbcode (#124191)
Comprehensive padding causes a small NE change and fails an internal test. Disable it for the internal use case to mitigate. Differential Revision: [D56197430](https://our.internmc.facebook.com/intern/diff/D56197430) Pull Request resolved: https://github.com/pytorch/pytorch/pull/124191 Approved by: https://github.com/jansel
diff --git a/torch/_inductor/config.py b/torch/_inductor/config.py index 61af8b070c..d7141639d0 100644 --- a/torch/_inductor/config.py +++ b/torch/_inductor/config.py @@ -421,7 +421,8 @@ shape_padding = os.environ.get("TORCHINDUCTOR_SHAPE_PADDING", "1") == "1" # Control if we will do padding for pointwise/reductions...
2.41.0
eea300680113f540e9f5c825e346bcc9510168c
Tue, 16 Apr 2024 08:10:52 -0700
[PATCH 0223/1000] [quant] Do not decompose choose_qparams_per_token_asymmetric (#124178)
Summary: https://github.com/pytorch/pytorch/pull/123452 added backward support to this op by turning it into CompositeImplicitAutograd, which meant it gets decomposed during export/compile. However, this is not desirable behavior for the PTQ case when we try to lower the model. This commit enables QAT without breaking ...
diff --git a/test/quantization/core/test_quantized_tensor.py b/test/quantization/core/test_quantized_tensor.py index 228f1f8ee7..a3488b06bd 100644 --- a/test/quantization/core/test_quantized_tensor.py +++ b/test/quantization/core/test_quantized_tensor.py @@ -1606,7 +1606,7 @@ class TestQuantizedTensor(TestCase): ...
2.41.0
ebf65126c71cdb14ac27b9fb483629500d4c5cf
Tue, 16 Apr 2024 12:41:36 -0700
[PATCH 0225/1000] FakeTensorProp assert consistency of sizes when metadata previously existed (#124059)
Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/124059 Approved by: https://github.com/bdhirsh, https://github.com/thiagocrepaldi ghstack dependencies: #124105
diff --git a/docs/source/fx.experimental.rst b/docs/source/fx.experimental.rst index db08f69291..93974c0819 100644 --- a/docs/source/fx.experimental.rst +++ b/docs/source/fx.experimental.rst @@ -42,3 +42,4 @@ torch.fx.experimental.symbolic_shapes canonicalize_bool_expr statically_known_true lru_cache + ...
2.41.0
7cf6f81ea304644e1fdcad0661d28552dfc84a0
Tue, 16 Apr 2024 13:17:42 -0700
[PATCH 0226/1000] [sym_shapes][perf] Skip assert in check_is_size (#124209)
Differential Revision: [D56207943](https://our.internmc.facebook.com/intern/diff/D56207943) Pull Request resolved: https://github.com/pytorch/pytorch/pull/124209 Approved by: https://github.com/ezyang
diff --git a/torch/fx/experimental/symbolic_shapes.py b/torch/fx/experimental/symbolic_shapes.py index 345e8626ab..aa6d9c1956 100644 --- a/torch/fx/experimental/symbolic_shapes.py +++ b/torch/fx/experimental/symbolic_shapes.py @@ -545,7 +545,8 @@ def _advise_is_size(a): # This must always succeed, because the so...
2.41.0
46b50c788bb1dabbee0e3602f662df6634aabc3
Wed, 17 Apr 2024 00:18:24 +0000
[PATCH 0227/1000] [ez][TD] Increase logging (#124082)
Increase logging during TD. Generate an artifact that says which tests got excluded. Fix a minor bug where filter_test_configs couldn't get commit messages. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124082 Approved by: https://github.com/seemethere, https://github.com/malfet
diff --git a/.github/scripts/filter_test_configs.py b/.github/scripts/filter_test_configs.py index dd9553e947..ebeccaeb16 100755 --- a/.github/scripts/filter_test_configs.py +++ b/.github/scripts/filter_test_configs.py @@ -449,7 +449,7 @@ def parse_reenabled_issues(s: Optional[str]) -> List[str]: def get_reenabled...
2.41.0
abd3f60fd6bd388c5bbbccd318ab4ad938cb781
Wed, 17 Apr 2024 00:23:42 +0000
[PATCH 0228/1000] [CI] Reduce CI_SERIAL_LIST list (#124085)
Add a serial marker for individual tests so the test file can be removed from the CI serial list. Run serial-marked tests first in serial. Run all other tests afterwards in parallel. Slowly reduce the list and mark individual tests as serial instead. Hope the # of serial tests is small so sharding evenness doesn't get too messed u...
diff --git a/pytest.ini b/pytest.ini index b7e7608169..e2ab2ebd0c 100644 --- a/pytest.ini +++ b/pytest.ini @@ -19,3 +19,6 @@ filterwarnings = ignore:Module already imported so cannot be rewritten.*hypothesis:pytest.PytestAssertRewriteWarning xfail_strict = True + +markers = + serial: marks tests as needs to ...
2.41.0
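With the `serial` marker registered in `pytest.ini` (as in the diff above), tests that need exclusive resources can be tagged and the two phases selected with `-m serial` / `-m "not serial"`. A sketch of the usage (the marker name comes from the diff; the test bodies are illustrative):

```python
import pytest

@pytest.mark.serial
def test_big_memory_allocation():
    # Tagged serial: run first, one at a time (pytest -m serial).
    assert sum(range(10)) == 45

def test_small_and_parallel_safe():
    # Unmarked: run afterwards in parallel (pytest -m "not serial").
    assert 2 + 2 == 4
```

Registering the marker in `pytest.ini` is what keeps `--strict-markers` (and `PytestUnknownMarkWarning`) from flagging the tag as a typo.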
f45ac8c986934610c0fc94808db9131dc0dfd7d
Tue, 16 Apr 2024 14:45:29 -0700
[PATCH 0229/1000] [FSDP2] Added explicit `unshard(async_op)` API (#120952)
This PR adds an `unshard(async_op: bool = False)` API to manually unshard the parameters via all-gather. This can be used for reordering the all-gather with other collectives (e.g. all-to-all). This currently requires the user to set `TORCH_NCCL_AVOID_RECORD_STREAMS=1` to avoid `recordStream` from `ProcessGroupNCCL` a...
diff --git a/test/distributed/_composable/fsdp/test_fully_shard_comm.py b/test/distributed/_composable/fsdp/test_fully_shard_comm.py index 59b1048df8..f41f4f2009 100644 --- a/test/distributed/_composable/fsdp/test_fully_shard_comm.py +++ b/test/distributed/_composable/fsdp/test_fully_shard_comm.py @@ -8,9 +8,10 @@ from...
2.41.0
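The shape of the API above is a handle pattern: `unshard(async_op=True)` kicks off the all-gather and returns a handle whose `wait()` blocks until the parameters are unsharded, letting the user overlap other collectives (e.g. an all-to-all) in between. A toy sketch of that pattern, with a thread standing in for the communication (purely illustrative, not the FSDP2 implementation):

```python
import threading
import time

class UnshardHandle:
    """Toy stand-in for the handle returned by unshard(async_op=True).

    A background thread plays the role of the all-gather; wait() blocks
    until the "parameters" are materialized. The real FSDP2 handle waits
    on a NCCL all-gather, not a thread.
    """
    def __init__(self, module_state):
        self._state = module_state
        self._thread = threading.Thread(target=self._all_gather)
        self._thread.start()

    def _all_gather(self):
        time.sleep(0.01)                 # pretend communication latency
        self._state["unsharded"] = True  # parameters now gathered

    def wait(self):
        self._thread.join()

state = {"unsharded": False}
handle = UnshardHandle(state)
# ... user could issue an all-to-all here, overlapped with the gather ...
handle.wait()
assert state["unsharded"] is True
```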
56c4572a64dfdf669cd1dd25f4161b5b7f90f78
Wed, 17 Apr 2024 00:46:04 +0000
[PATCH 0230/1000] Fix typos in docs (#124218)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/124218 Approved by: https://github.com/albanD
diff --git a/torch/nn/modules/loss.py b/torch/nn/modules/loss.py index 6cd9088349..637e7a7d45 100644 --- a/torch/nn/modules/loss.py +++ b/torch/nn/modules/loss.py @@ -1678,7 +1678,7 @@ class CTCLoss(_Loss): :math:`(\operatorname{sum}(\text{target\_lengths}))`, where :math:`N = \text{batch size}` a...
2.41.0
fd9e320ea415c4b221799863a334c842bdb6ff2
Tue, 16 Apr 2024 13:43:30 -0700
[PATCH 0231/1000] Remove unnecessary FileLock in Fx Graph Cache (#124212)
Writing to file happens via `write_atomic`, there's no need to take a global lock on the file system. This is likely creating unnecessary waits. Differential Revision: [D56208628](https://our.internmc.facebook.com/intern/diff/D56208628/) Pull Request resolved: https://github.com/pytorch/pytorch/pull/124212 Approved b...
diff --git a/torch/_inductor/codecache.py b/torch/_inductor/codecache.py index 72786779c0..baf5869b09 100644 --- a/torch/_inductor/codecache.py +++ b/torch/_inductor/codecache.py @@ -868,29 +868,25 @@ class FxGraphCache: Load a compiled graph from the cache. If a cached entry does not exist, compile t...
2.41.0
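The rationale above: `write_atomic` writes to a temporary file and renames it into place, and since a same-filesystem rename is atomic, concurrent writers can never expose a partially written cache entry, which is what made the global `FileLock` unnecessary. A hedged, generic sketch of that pattern (the real helper lives in `codecache.py`; this is not its exact code):

```python
import os
import tempfile

def write_atomic(path: str, content: str) -> None:
    """Write content to path so readers never observe a partial file.

    The temp file is created in the destination directory so os.replace
    is a same-filesystem rename, which POSIX guarantees to be atomic.
    """
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
        os.replace(tmp_path, path)  # atomic rename; clobbers existing file
    except BaseException:
        os.unlink(tmp_path)
        raise

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "entry.py")
    write_atomic(target, "print('cached kernel')\n")
    with open(target) as f:
        round_trip = f.read()

assert round_trip == "print('cached kernel')\n"
```

The last writer wins, but every reader sees either the old file or the complete new one, which is the invariant a cache needs.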
e4c4e93b66729145ac87ef384632b1abacaf26d
Wed, 17 Apr 2024 02:12:20 +0000
[PATCH 0232/1000] [Inductor] add contiguous layout optm for bmm input (#122599)
Fixes #117743. Add contiguous layout optimization for `bmm` input, to avoid additional copies. Pull Request resolved: https://github.com/pytorch/pytorch/pull/122599 Approved by: https://github.com/jgong5, https://github.com/leslie-fang-intel, https://github.com/eellison
diff --git a/torch/_inductor/kernel/bmm.py b/torch/_inductor/kernel/bmm.py index 1878cef79f..1bb8c9d820 100644 --- a/torch/_inductor/kernel/bmm.py +++ b/torch/_inductor/kernel/bmm.py @@ -1,5 +1,6 @@ import torch +from .. import ir from ..lowering import register_lowering from ..select_algorithm import ( autot...
2.41.0
2ca18ea3bac636d42310b90c0b899d50eea7689
Wed, 17 Apr 2024 02:18:32 +0000
[PATCH 0233/1000] Handle the case when one of the output of forward pass is None (#123988)
Summary: When applying FSDP-2 to the FM-FB benchmark with the FullModel model, we ran into an error where one of the output tensors of a forward pass is None. I double-checked that the same output tensor is also None in FSDP-1. So we just need to handle the None properly here. Test Plan: See that in the internal diff. Differ...
diff --git a/torch/distributed/_composable/fsdp/_fsdp_state.py b/torch/distributed/_composable/fsdp/_fsdp_state.py index 547b8e8d9f..88421d1a11 100644 --- a/torch/distributed/_composable/fsdp/_fsdp_state.py +++ b/torch/distributed/_composable/fsdp/_fsdp_state.py @@ -235,7 +235,7 @@ class FSDPState(_State): ...
2.41.0
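Handling the None output amounts to skipping non-tensor outputs when post-forward hooks are attached. A minimal sketch of the guard (the helper name is illustrative, and floats stand in for tensors):

```python
def collect_hookable_outputs(outputs):
    """Keep only real outputs for hook registration.

    Sketch of the FSDP fix: a model's forward may legitimately return
    None alongside tensors, and registering a hook on None would fail,
    so None entries are simply passed through untouched.
    """
    return [out for out in outputs if out is not None]

forward_outputs = (3.14, None, 2.72)   # floats stand in for tensors
hookable = collect_hookable_outputs(forward_outputs)
assert hookable == [3.14, 2.72]
```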
1cc808ac7b0adf40bc71858c5e1924b9e41934e
Tue, 16 Apr 2024 16:27:20 -0700
[PATCH 0235/1000] [dynamo][cpp-guards] Missing decref on early returns in DictSubclassGuardManager (#124230)
I am sad that I missed this earlier. Good thing is that CI caught it. Will be more careful next time. This was the reason https://github.com/pytorch/pytorch/pull/123547 is reverted - https://github.com/pytorch/pytorch/pull/123547#issuecomment-2058350245 Pull Request resolved: https://github.com/pytorch/pytorch/pull/1...
diff --git a/torch/csrc/dynamo/guards.cpp b/torch/csrc/dynamo/guards.cpp index 578f823d64..f1d8020fb3 100644 --- a/torch/csrc/dynamo/guards.cpp +++ b/torch/csrc/dynamo/guards.cpp @@ -2184,12 +2184,16 @@ class DictSubclassGuardManager : public DictGuardManager { KeyValueManager& key_value_manager = _key_value_m...
2.41.0
50051f412e50d98d506adf0d05aa6e4ceab54bd
Mon, 15 Apr 2024 14:02:35 -0700
[PATCH 0236/1000] Dont precompile already seen keys, limit epilogue choices (#122642)
Two changes: - in epilogue benchmark fusion, only take top 6 choices. There were basically no choices taken after this in HF. - Share a single precompilation function among matmuls with same key. Pull Request resolved: https://github.com/pytorch/pytorch/pull/122642 Approved by: https://github.com/shunting314 ghstack d...
diff --git a/test/hi.py b/test/hi.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/test/inductor/test_max_autotune.py b/test/inductor/test_max_autotune.py index 37b5d84cad..5745f4a405 100644 --- a/test/inductor/test_max_autotune.py +++ b/test/inductor/test_max_autotune.py @@ -445,6 +445,22 @@ class Tes...
2.41.0
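The two changes above can be sketched in plain Python: keep only the top-k candidate choices by benchmark time, and memoize the precompilation function by key so matmuls sharing a key share a single compile. A hedged sketch (names and the key format are illustrative, not the inductor code):

```python
import heapq

_precompile_cache = {}  # key -> compiled artifact, shared across matmuls

def precompile(key, compile_fn):
    """Compile at most once per key (illustrative memoization)."""
    if key not in _precompile_cache:
        _precompile_cache[key] = compile_fn()
    return _precompile_cache[key]

def top_choices(timings, k=6):
    """Keep the k fastest epilogue-fusion candidates by benchmark time."""
    return heapq.nsmallest(k, timings, key=lambda c: c[1])

calls = []
for _ in range(3):  # three matmuls with the same layout key
    precompile(("mm", 64, 64), lambda: calls.append(1) or "kernel")
assert len(calls) == 1  # compiled a single time, reused twice

timings = [(f"choice{i}", t)
           for i, t in enumerate([9.0, 1.0, 5.0, 3.0, 7.0, 2.0, 8.0])]
assert len(top_choices(timings, k=6)) == 6
assert top_choices(timings, k=1)[0][0] == "choice1"
```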
36b0d12fa68c62d6af5a9110a23eca4f0a7c4df
Tue, 16 Apr 2024 06:36:22 -0700
[PATCH 0237/1000] Don't clamp slices generated from cat kernel (#124139)
Fixes https://github.com/pytorch/pytorch/issues/123793 Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/124139 Approved by: https://github.com/Microve, https://github.com/peterbell10, https://github.com/Skylion007
diff --git a/test/inductor/test_torchinductor_dynamic_shapes.py b/test/inductor/test_torchinductor_dynamic_shapes.py index b7f80471cb..84fc95a2c6 100644 --- a/test/inductor/test_torchinductor_dynamic_shapes.py +++ b/test/inductor/test_torchinductor_dynamic_shapes.py @@ -376,6 +376,21 @@ class TestInductorDynamic(TestCa...
2.41.0
d3cea3291346e66a870bbec51e4d1a3550300db
Tue, 16 Apr 2024 14:09:59 -0700
[PATCH 0238/1000] Fix derived dim bugs in ep.run_decomp (#123326)
Differential Revision: [D55730289](https://our.internmc.facebook.com/intern/diff/D55730289) Pull Request resolved: https://github.com/pytorch/pytorch/pull/123326 Approved by: https://github.com/avikchaudhuri
diff --git a/test/export/test_export.py b/test/export/test_export.py index 73795fe0eb..b870fdb58e 100644 --- a/test/export/test_export.py +++ b/test/export/test_export.py @@ -690,7 +690,6 @@ class TestExport(TestCase): self.assertEqual(ep.module()(torch.randn(4), torch.randn(5)).size()[0], 4) - @testing...
2.41.0
d22dde877f49c68dd2213a059b49447f5c56ad7
Wed, 17 Apr 2024 06:15:32 +0000
[PATCH 0239/1000] Pointer to the nonzero limit ticket (#124244)
For the nonzero impl limits, we are still asking at runtime to file a new ticket, but we already had more than one. So I am pointing to the current open ticket. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124244 Approved by: https://github.com/ezyang
diff --git a/aten/src/ATen/native/cuda/Nonzero.cu b/aten/src/ATen/native/cuda/Nonzero.cu index 5d62f7711d..e87f46cd84 100644 --- a/aten/src/ATen/native/cuda/Nonzero.cu +++ b/aten/src/ATen/native/cuda/Nonzero.cu @@ -112,7 +112,7 @@ void nonzero_cuda_out_impl(const Tensor& self, Tensor& out){ Tensor& nonzero_out_cuda(...
2.41.0
3effa585510f32bc4ebc94b6b7300d7d99e078d
Wed, 17 Apr 2024 06:45:58 +0000
[PATCH 0240/1000] Enable UFMT on all of `test/distributed` (#123539)
Partially addresses #123062 Ran lintrunner on: - `test/distributed` Pull Request resolved: https://github.com/pytorch/pytorch/pull/123539 Approved by: https://github.com/ezyang
diff --git a/.lintrunner.toml b/.lintrunner.toml index 817d35f34f..9e83a8b96e 100644 --- a/.lintrunner.toml +++ b/.lintrunner.toml @@ -1015,134 +1015,6 @@ exclude_patterns = [ 'test/_nvfuser/test_python_frontend.py', 'test/_nvfuser/test_torchscript.py', 'test/delete.py', - 'test/distributed/_shard/sha...
2.41.0
0211e207c78fafac2edaf2e14954f668e898b4a
Tue, 16 Apr 2024 00:56:35 +0000
[PATCH 0241/1000] inductor cpp wrapper: add GIL release back (#123897)
Fixes https://github.com/pytorch/pytorch/issues/123517. This PR adds the GIL release (originally added in https://github.com/pytorch/pytorch/pull/111888) back following the suggestion here: https://github.com/pytorch/pytorch/pull/123897#discussion_r1562509705. We added a default constructor and an assignment operator f...
diff --git a/test/inductor/test_cpu_cpp_wrapper.py b/test/inductor/test_cpu_cpp_wrapper.py index 1380de42af..bb7e6770a6 100644 --- a/test/inductor/test_cpu_cpp_wrapper.py +++ b/test/inductor/test_cpu_cpp_wrapper.py @@ -240,7 +240,10 @@ if RUN_CPU: ), BaseTest("test_mm_views"), BaseTest("test_...
2.41.0
4878abab0c8be0f27eda6c991d0aa453493a2e7
Wed, 17 Apr 2024 09:24:59 +0000
[PATCH 0242/1000] Fix Setup Linux for ARC (#124171)
We can't get information about `ami-id`, `instance-id`, `instance-type` for the ARC runners: ``` 2024-04-16T11:10:17.0098276Z curl: (22) The requested URL returned error: 401 2024-04-16T11:10:17.0110775Z ami-id: 2024-04-16T11:10:17.0159131Z curl: (22) The requested URL returned error: 401 2024-04-16T11:10:17.0167378Z ...
diff --git a/.github/actions/setup-linux/action.yml b/.github/actions/setup-linux/action.yml index b1c9081fe5..98c796e0ca 100644 --- a/.github/actions/setup-linux/action.yml +++ b/.github/actions/setup-linux/action.yml @@ -15,10 +15,12 @@ runs: category=$1 # If it is GCP runner (runner name contai...
2.41.0
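The ARC-runner fix above boils down to making metadata lookups non-fatal: when the EC2 metadata endpoint answers 401, the step should report an empty value instead of failing. A minimal Python sketch of that behavior (the real fix is shell `curl ... || true` logic in the workflow YAML; the fetcher interface here is hypothetical):

```python
from typing import Callable

def get_metadata(fetch: Callable[[str], str], category: str) -> str:
    """Return the metadata value for `category`, or "" if the lookup fails.

    On ARC (Kubernetes-hosted) runners the EC2 metadata service rejects
    requests with 401, so any failure must be swallowed rather than
    propagated -- mirroring `curl ... || echo ""` in the workflow step.
    """
    try:
        return fetch(category)
    except Exception:
        return ""
```

With a working fetcher the value comes through; with a 401-style failure the caller just sees an empty string and the job continues.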
cc466751b2723eb913fd3148b4f054189bbf1ab
Wed, 17 Apr 2024 09:44:07 +0000
[PATCH 0243/1000] Add bfloat16 support to binary_cross_entropy for CPU (#123823)
Fixes #123715 As the title states. But maybe we should pay attention to this https://github.com/pytorch/pytorch/pull/33206, which removed the half support for cpu about 4 years ago. Pull Request resolved: https://github.com/pytorch/pytorch/pull/123823 Approved by: https://github.com/Skylion007, https://github.com/m...
diff --git a/aten/src/ATen/native/Loss.cpp b/aten/src/ATen/native/Loss.cpp index ea0cb5419a..e21d9f6008 100644 --- a/aten/src/ATen/native/Loss.cpp +++ b/aten/src/ATen/native/Loss.cpp @@ -273,27 +273,30 @@ Tensor& binary_cross_entropy_out_cpu(const Tensor& input, const Tensor& target, .add_owned_const_input(at::s...
2.41.0
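For reference, the loss the commit above extends to bfloat16 is elementwise binary cross entropy. A plain-Python sketch of the math (the actual CPU kernel is the C++ in `Loss.cpp`; doing the arithmetic in float even for reduced-precision inputs is an assumption about such kernels, not taken from the diff):

```python
import math

def binary_cross_entropy(probs, targets, eps=1e-12):
    """Mean BCE loss: -(y*log(p) + (1-y)*log(1-p)), averaged over elements.

    For reduced-precision inputs (e.g. bfloat16) kernels typically carry
    out this arithmetic in float32 for accuracy; here everything is plain
    Python floats.
    """
    total = 0.0
    for p, y in zip(probs, targets):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(y * math.log(p) + (1.0 - y) * math.log(1.0 - p))
    return total / len(probs)
```

For a perfectly uncertain predictor (p = 0.5 everywhere) the loss is log 2 regardless of the targets.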
7ad630f5d404858dd19d6e7f79ab3573dbb7c0a
Wed, 17 Apr 2024 11:31:32 +0000
[PATCH 0244/1000] Revert "Dont precompile already seen keys, limit epilogue choices (#122642)"
This reverts commit 050051f412e50d98d506adf0d05aa6e4ceab54bd. Reverted https://github.com/pytorch/pytorch/pull/122642 on behalf of https://github.com/DanilBaibak due to Broken trunk ([comment](https://github.com/pytorch/pytorch/pull/124030#issuecomment-2061044960))
diff --git a/test/hi.py b/test/hi.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/test/inductor/test_max_autotune.py b/test/inductor/test_max_autotune.py index 5745f4a405..37b5d84cad 100644 --- a/test/inductor/test_max_autotune.py +++ b/test/inductor/test_max_autotune.py @@ -445,22 +445,6 @@ class...
2.41.0
f89f565bb17bcb70cb6938540af0cd154c8344a
Wed, 17 Apr 2024 11:31:33 +0000
[PATCH 0245/1000] Revert "Re-land precompile triton templates (#124030)"
This reverts commit d68196e7ef5eb8f62064ef70c75032f4d8b4a4fa. Reverted https://github.com/pytorch/pytorch/pull/124030 on behalf of https://github.com/DanilBaibak due to Broken trunk ([comment](https://github.com/pytorch/pytorch/pull/124030#issuecomment-2061044960))
diff --git a/test/inductor/test_max_autotune.py b/test/inductor/test_max_autotune.py index 37b5d84cad..d1f074de51 100644 --- a/test/inductor/test_max_autotune.py +++ b/test/inductor/test_max_autotune.py @@ -328,8 +328,7 @@ class TestMaxAutotune(TestCase): inputs: str, benchmark: Callable[[Any]...
2.41.0
dc15b684980c5b7f964a797dbee7915399a089c
Wed, 17 Apr 2024 11:47:02 +0000
[PATCH 0246/1000] Revert "[sparse] Add fast semi-structured spasification kernels (#122350)"
This reverts commit 14b2273b0c58b4000e10b2e441341eeafb7dd2f6. Reverted https://github.com/pytorch/pytorch/pull/122350 on behalf of https://github.com/DanilBaibak due to Broken trunk ([comment](https://github.com/pytorch/pytorch/pull/122350#issuecomment-2061070350))
diff --git a/aten/src/ATen/native/native_functions.yaml b/aten/src/ATen/native/native_functions.yaml index 3a14d19ee5..6e96a8a6aa 100644 --- a/aten/src/ATen/native/native_functions.yaml +++ b/aten/src/ATen/native/native_functions.yaml @@ -3342,18 +3342,6 @@ dispatch: CUDA: _cslt_sparse_mm_search - dispatch:
efcb6c718c6954df2157d93989ec7ed1821aafe
Wed, 17 Apr 2024 12:22:50 +0000
[PATCH 0247/1000] Fix wrong ufmt exclusions in `.lintrunner.toml` (#124135)
Part of: #123062 In this pull request(#123809), there were some exclusions that should have been removed, but weren't. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124135 Approved by: https://github.com/ezyang
diff --git a/.lintrunner.toml b/.lintrunner.toml index 9e83a8b96e..0bbac77322 100644 --- a/.lintrunner.toml +++ b/.lintrunner.toml @@ -1016,118 +1016,6 @@ exclude_patterns = [ 'test/_nvfuser/test_torchscript.py', 'test/delete.py', 'test/expect/__init__.py', - 'test/jit/__init__.py', - 'test/jit/_im...
2.41.0
7dbfecd374a5b4363dab27bc90aa563285ce50c
Tue, 16 Apr 2024 10:57:00 -0700
[PATCH 0248/1000] Rename impl_abstract to register_fake, part 1/2 (#123937)
This PR: - adds a new torch.library.register_fake and deprecates torch.library.impl_abstract. The motivation is that we have a lot of confusion around the naming so we are going to align the naming with the actual subsystem (FakeTensor). - renames `m.impl_abstract_pystub("fbgemm_gpu.sparse_ops")` to `m.has_python_regis...
diff --git a/aten/src/ATen/core/MetaFallbackKernel.cpp b/aten/src/ATen/core/MetaFallbackKernel.cpp index fe56568bbb..8523a55878 100644 --- a/aten/src/ATen/core/MetaFallbackKernel.cpp +++ b/aten/src/ATen/core/MetaFallbackKernel.cpp @@ -8,14 +8,14 @@ static void metaFallback( const c10::OperatorHandle& op, c10:...
2.41.0
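The rename above (deprecating `torch.library.impl_abstract` in favor of `torch.library.register_fake`) follows a standard deprecation-alias pattern: the old name keeps working but emits a warning and forwards to the new one. A generic sketch of that pattern — not the actual `torch.library` implementation, and the `register_fake` body here is a stand-in:

```python
import functools
import warnings

def deprecated_alias(new_fn, old_name):
    """Wrap `new_fn` so calls under the old name warn, then forward."""
    @functools.wraps(new_fn)
    def wrapper(*args, **kwargs):
        warnings.warn(
            f"{old_name} is deprecated; use {new_fn.__name__} instead",
            DeprecationWarning,
            stacklevel=2,
        )
        return new_fn(*args, **kwargs)
    return wrapper

def register_fake(op):
    # Stand-in for the real registration logic.
    return f"registered fake for {op}"

impl_abstract = deprecated_alias(register_fake, "impl_abstract")
```

Callers of the old name keep working unchanged, which is what makes a two-part rename (part 1/2 here, removal later) possible.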
f378e1853558d603028c2062373869c6f192885
Tue, 16 Apr 2024 17:47:00 +0200
[PATCH 0249/1000] [Inductor cutlass backend] Fix flaky test ( CUDA IMA ) (#124106)
A unit test within test_cutlass_backend.py can fail with CUDA illegal memory accesses because some CUTLASS kernels contain bugs. With autotuning in subprocesses, such an illegal memory access simply leads to the buggy CUTLASS kernels being filtered out, instead of bringing down the entire...
diff --git a/test/inductor/test_cutlass_backend.py b/test/inductor/test_cutlass_backend.py index bc411ad9ce..0bcd895b00 100644 --- a/test/inductor/test_cutlass_backend.py +++ b/test/inductor/test_cutlass_backend.py @@ -385,10 +385,12 @@ class TestCutlassBackend(TestCase): with config.patch( { ...
2.41.0
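The isolation idea in the commit above — benchmark each autotune candidate in a child process so a crashing kernel only kills the child — can be sketched in a few lines of Python. This is an illustrative stand-in, not inductor's actual autotune-in-subprocess machinery; the "candidates" here are just code snippets to execute:

```python
import subprocess
import sys

def filter_healthy(candidates):
    """Run each candidate snippet in a child process; drop the ones that crash.

    An illegal memory access (or any abnormal exit) terminates only the
    child, so the parent simply skips that choice and keeps autotuning.
    """
    healthy = []
    for name, snippet in candidates.items():
        proc = subprocess.run([sys.executable, "-c", snippet])
        if proc.returncode == 0:
            healthy.append(name)
    return healthy
```

A candidate that aborts hard (here simulated with `os._exit(1)`) is silently filtered out while well-behaved candidates survive.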
880a71010acbe30e5e2a63f3b216aa635413df9
Wed, 17 Apr 2024 14:12:29 +0000
[PATCH 0250/1000] [BE] Add missing `std::` prefix to `Unique.mm` (#124232)
Follow up after https://github.com/pytorch/pytorch/pull/124117 fixes following warning ``` /Users/malfet/git/pytorch/pytorch/aten/src/ATen/native/mps/operations/Unique.mm:282:26: warning: use of function template name with no prior declaration in function call with explicit template arguments is a C++20 extension [-Wc+...
diff --git a/aten/src/ATen/native/mps/operations/Unique.mm b/aten/src/ATen/native/mps/operations/Unique.mm index a1241d5395..fc30c2d0b7 100644 --- a/aten/src/ATen/native/mps/operations/Unique.mm +++ b/aten/src/ATen/native/mps/operations/Unique.mm @@ -279,7 +279,7 @@ static std::tuple<Tensor, Tensor, Tensor> _unique_imp...
2.41.0
a735ece6b46248b6bb224bae4d0d7df24a335f0
Wed, 17 Apr 2024 14:13:51 +0000
[PATCH 0251/1000] Remove @abock from ONNX approvers/codeowners (#124259)
As he is no longer interested in the project Pull Request resolved: https://github.com/pytorch/pytorch/pull/124259 Approved by: https://github.com/kit1980, https://github.com/BowenBao
diff --git a/.github/merge_rules.yaml b/.github/merge_rules.yaml index 8e9f05051c..cd1f5cb7c4 100644 --- a/.github/merge_rules.yaml +++ b/.github/merge_rules.yaml @@ -28,7 +28,6 @@ - caffe2/python/onnx/** approved_by: - BowenBao - - abock - justinchuby - shubhambhokare1 - thiagocrepaldi diff --git a/...
2.41.0
2b0c0a34e0c5daba1e7bb3b9e4dece5020c9414
Wed, 17 Apr 2024 14:30:26 +0300
[PATCH 0252/1000] Fix index_reduce sampler filter when op_info.variant_test_name is specified (#123375)
As in the title: an `index_reduce` sample must correspond to the reduction type specified by `variant_test_name`. Pull Request resolved: https://github.com/pytorch/pytorch/pull/123375 Approved by: https://github.com/zou3519, https://github.com/peterbell10
diff --git a/test/distributed/_tensor/test_dtensor_ops.py b/test/distributed/_tensor/test_dtensor_ops.py index 9a43cf2e78..d14d2d851f 100644 --- a/test/distributed/_tensor/test_dtensor_ops.py +++ b/test/distributed/_tensor/test_dtensor_ops.py @@ -190,7 +190,10 @@ dtensor_fails = { xfail("index_copy"), xfail("...
2.41.0
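The intent of the sampler fix above is simple: when an OpInfo's `variant_test_name` pins a reduction type (e.g. "prod"), samples built with a different `reduce=` argument must be filtered out. A minimal sketch with illustrative field names (the real sample objects are OpInfo `SampleInput`s, not dicts):

```python
def samples_for_variant(samples, variant_test_name):
    """Keep only samples whose reduce kwarg matches the test variant.

    If no variant is specified, every sample qualifies.
    """
    if not variant_test_name:
        return list(samples)
    return [s for s in samples if s.get("reduce") == variant_test_name]
```

So a "prod" variant never sees a `reduce="mean"` sample, which is exactly the mismatch the commit fixes.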
e1c98c171d3e7e8ea2ad5e740e3f9c29bd88b07
Wed, 17 Apr 2024 11:49:08 +0000
[PATCH 0254/1000] [dynamo] support `object.__setattr__(obj, name, value)` (#124068)
Resolves #114964 and #114966. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124068 Approved by: https://github.com/jansel
diff --git a/test/dynamo/test_misc.py b/test/dynamo/test_misc.py index 14f43ec37d..8ee9037e4c 100644 --- a/test/dynamo/test_misc.py +++ b/test/dynamo/test_misc.py @@ -3058,6 +3058,66 @@ utils_device.CURRENT_DEVICE == None""".split( self.assertEqual(cnts.frame_count, 1) self.assertEqual(cnts.op_count, ...
2.41.0
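The pattern the dynamo commit above learns to trace is plain Python: `object.__setattr__(obj, name, value)` bypasses a class's own `__setattr__` override, which is exactly how frozen dataclasses initialize their fields. A self-contained example of that pattern:

```python
class Frozen:
    """Instances reject normal attribute assignment.

    Mutation must go through object.__setattr__, which skips the
    override below -- the same trick frozen dataclasses use in __init__.
    """
    def __setattr__(self, name, value):
        raise AttributeError("instance is read-only")

    def __init__(self, x):
        object.__setattr__(self, "x", x)  # bypasses the override above
```

Supporting this in dynamo means such classes can be initialized inside a compiled region without a graph break.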
b1d6c8d98e1014ca5c9827b67d5a29eb2f14741
Wed, 17 Apr 2024 16:04:09 +0000
[PATCH 0255/1000] improve F.adaptive_avg_pool2d error messages on mps (#124143)
Gives better error messages on mps. Partially fixes #123725 in the case of `F.adaptive_avg_pool2d`. This also relates to #96056. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124143 Approved by: https://github.com/albanD, https://github.com/malfet
diff --git a/aten/src/ATen/native/mps/operations/AdaptivePooling.mm b/aten/src/ATen/native/mps/operations/AdaptivePooling.mm index c88d468f7e..c38d5faec6 100644 --- a/aten/src/ATen/native/mps/operations/AdaptivePooling.mm +++ b/aten/src/ATen/native/mps/operations/AdaptivePooling.mm @@ -37,8 +37,9 @@ static void set_ker...
2.41.0
6324fe0732c58578c4623a84c49922ee76626d9
Wed, 17 Apr 2024 16:04:22 +0000
[PATCH 0256/1000] Speedup int4mm_kernel with NEON (#124257)
By unrolling the middle loop by 16 elements and using NEON to decode packed int4 to float32. Unrolling the entire `n` loop actually makes it a tad slower, probably because ARM has a smaller register file than x86. Before/after performance running stories110M on M2Pro: | eager (before) | eager (after) | compile(before) | compile (...
diff --git a/aten/src/ATen/native/cpu/int4mm_kernel.cpp b/aten/src/ATen/native/cpu/int4mm_kernel.cpp index 1e7a7c0bcb..28dfd84c0d 100644 --- a/aten/src/ATen/native/cpu/int4mm_kernel.cpp +++ b/aten/src/ATen/native/cpu/int4mm_kernel.cpp @@ -339,6 +339,56 @@ inline void tinygemm_kernel( #endif +#if !defined(C10_MOBIL...
2.41.0
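The decode step the NEON path above vectorizes is unpacking two 4-bit values from each byte into floats. A scalar Python analogue — the nibble order, the recenter-by-8, and the scale/zero handling are simplified assumptions here, not the exact layout of the int4mm kernel (which decodes 16 values per NEON iteration):

```python
def unpack_int4(packed, scale=1.0, zero=0.0):
    """Decode bytes holding two packed 4-bit values each into floats.

    Each nibble is recentered from [0, 15] to [-8, 7] and then affinely
    mapped with the given scale and zero point.
    """
    out = []
    for byte in packed:
        for nibble in (byte >> 4, byte & 0xF):
            out.append((nibble - 8) * scale + zero)
    return out
```

The vectorized kernel does the same arithmetic on 16 lanes at once, which is where the speedup in the table above comes from.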