| commitId | datetime | subject | comment | diff | gitVersion |
|---|---|---|---|---|---|
43351718165579d908150e785815d1e65d439ab | Tue, 16 Apr 2024 09:24:14 -0700 | [PATCH 0257/1000] [dynamo][decorator] Support disable on nn modules (#124185) | Fixes https://github.com/pytorch/pytorch/issues/123979 Pull Request resolved: https://github.com/pytorch/pytorch/pull/124185 Approved by: https://github.com/weifengpy, https://github.com/yoyoyocmu | diff --git a/test/dynamo/test_decorators.py b/test/dynamo/test_decorators.py
index ab38b26ff9..3bff8b7177 100644
--- a/test/dynamo/test_decorators.py
+++ b/test/dynamo/test_decorators.py
@@ -84,6 +84,56 @@ class DecoratorTests(torch._dynamo.test_case.TestCase):
# to callsites of eval_frame.innermost_fn. A warn... | 2.41.0 |
4cecf06d7c3d2f38870cdb01f1e322678abea9c | Tue, 16 Apr 2024 14:03:00 -0700 | [PATCH 0258/1000] Update autotune jk knobs (#124214) | Differential Revision: [D56201145](https://our.internmc.facebook.com/intern/diff/D56201145/) Pull Request resolved: https://github.com/pytorch/pytorch/pull/124214 Approved by: https://github.com/aakhundov | diff --git a/torch/_inductor/triton_heuristics.py b/torch/_inductor/triton_heuristics.py
index aae26554c9..f5c6dae70d 100644
--- a/torch/_inductor/triton_heuristics.py
+++ b/torch/_inductor/triton_heuristics.py
@@ -979,10 +979,8 @@ def should_use_remote_autotune_cache():
from triton.runtime.fb_memcache import ME... | 2.41.0 |
efdf9a6a673b2d70befe826ff8c009997efbef0 | Wed, 17 Apr 2024 18:05:11 +0000 | [PATCH 0259/1000] fix pytorch version for onnx in doc (#124182) | Fixes [ 123845](https://github.com/pytorch/pytorch/issues/123845) Pull Request resolved: https://github.com/pytorch/pytorch/pull/124182 Approved by: https://github.com/albanD | diff --git a/docs/source/onnx.rst b/docs/source/onnx.rst
index 3a72c726bb..ffaa8ef836 100644
--- a/docs/source/onnx.rst
+++ b/docs/source/onnx.rst
@@ -18,7 +18,7 @@ Microsoft's `ONNX Runtime <https://www.onnxruntime.ai>`_.
TorchDynamo-based ONNX Exporter
-------------------------------
-*The TorchDynamo-based ONNX ... | 2.41.0 |
726a23d4edc43db072747582031257f5f7016e7 | Wed, 17 Apr 2024 19:16:32 +0000 | [PATCH 0260/1000] change `tf32` thresholds for `test_per_sample_grads_embeddingnet` (#124104) | TF32 causes issues with the tolerances here; we might also consider migrating some of the `with_tf32_off` tests in this file to `tf32_on_and_off` in case it would be useful to get signal for TF32. CC @malfet @atalman Pull Request resolved: https://github.com/pytorch/pytorch/pull/124104 Approved by: https://github.com/... | diff --git a/test/functorch/test_eager_transforms.py b/test/functorch/test_eager_transforms.py
index ce8aa84bf3..b54b9762fb 100644
--- a/test/functorch/test_eager_transforms.py
+++ b/test/functorch/test_eager_transforms.py
@@ -51,7 +51,12 @@ from torch._ops import HigherOrderOperator
from torch._subclasses.fake_tensor... | 2.41.0 |
3e249969bd0f0625a2b06fd23f3c1ef4aa6b16b | Wed, 17 Apr 2024 19:29:30 +0000 | [PATCH 0261/1000] [BE] enable `ruff` rule `RSE` and remove useless parentheses in `raise` statements (#124261) | Remove useless parentheses in `raise` statements if the exception type is raised with no argument. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124261 Approved by: https://github.com/albanD | diff --git a/benchmarks/dynamo/common.py b/benchmarks/dynamo/common.py
index 91a8aeba39..0d6965c148 100644
--- a/benchmarks/dynamo/common.py
+++ b/benchmarks/dynamo/common.py
@@ -1858,7 +1858,7 @@ class TimeOutException(Exception):
def alarm_handler(signum, frame):
- raise TimeOutException()
+ raise TimeOutE... | 2.41.0 |
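The `ruff` rule in patch 0261 flags redundant parentheses when a `raise` statement instantiates an exception with no arguments. A torch-free sketch of why the two spellings are interchangeable (the `TimeOutException` name is borrowed from the diff fragment above):

```python
class TimeOutException(Exception):
    pass

# `raise TimeOutException` and `raise TimeOutException()` behave identically
# when the exception takes no arguments: raising a class instantiates it.
def raise_class():
    raise TimeOutException

def raise_instance():
    raise TimeOutException()

for fn in (raise_class, raise_instance):
    try:
        fn()
    except TimeOutException as exc:
        print(type(exc).__name__)
```

Both calls print `TimeOutException`, which is why dropping the parentheses is a pure style cleanup with no behavior change.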
403757913689d200683a4158c565bc3dbade74b | Wed, 17 Apr 2024 20:45:35 +0000 | [PATCH 0263/1000] [DeviceMesh][Test] Add 3d unit test for `get_local_rank()` (#124142) | Fixes #ISSUE_NUMBER Pull Request resolved: https://github.com/pytorch/pytorch/pull/124142 Approved by: https://github.com/xunnanxu, https://github.com/fegin, https://github.com/XilunWu | diff --git a/test/distributed/test_device_mesh.py b/test/distributed/test_device_mesh.py
index a98916a922..697cc7fec0 100644
--- a/test/distributed/test_device_mesh.py
+++ b/test/distributed/test_device_mesh.py
@@ -195,6 +195,45 @@ class DeviceMeshTestNDim(DTensorTestBase):
self.assertNotEqual(hash(mesh), hash... | 2.41.0 |
ec05c769b7e1c6ab5ba75f86b4ae6d43d77ac96 | Wed, 17 Apr 2024 21:32:18 +0000 | [PATCH 0264/1000] all_gather and reduce_scatter autograd (#123989) | This adds `all_gather_tensor_autograd` and `reduce_scatter_tensor_autograd` to the functional_collectives library. This only supports `sum` mode for `reduce_scatter` but should be easy to extend in the future. The backwards implementations match the behavior in https://github.com/pytorch/torchrec/blob/main/torchrec/d... | diff --git a/test/distributed/test_functional_api.py b/test/distributed/test_functional_api.py
index 90f750d400..d26dcf970a 100644
--- a/test/distributed/test_functional_api.py
+++ b/test/distributed/test_functional_api.py
@@ -294,7 +294,7 @@ class TestTraceableCollectives(MultiThreadedTestCase):
for dim i... | 2.41.0 |
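The autograd pairing that patch 0264 adds can be modeled without torch: `all_gather` concatenates per-rank shards, `reduce_scatter` (with `sum`) sums per-rank buffers and hands each rank its slice, and each operation serves as the backward of the other. A minimal single-process sketch (all names here are illustrative, not the `functional_collectives` API):

```python
def all_gather(shards):
    # every rank contributes its shard; every rank sees the concatenation
    return [x for shard in shards for x in shard]

def reduce_scatter(per_rank_full, world_size):
    # sum the full buffers across ranks, then split back into shards
    shard_len = len(per_rank_full[0]) // world_size
    summed = [sum(vals) for vals in zip(*per_rank_full)]
    return [summed[i * shard_len:(i + 1) * shard_len]
            for i in range(world_size)]

shards = [[1, 2], [3, 4]]              # world_size = 2, one shard per rank
gathered = all_gather(shards)          # forward: [1, 2, 3, 4] on every rank
grads = [[1, 1, 1, 1], [1, 1, 1, 1]]   # upstream grad of the gathered result, per rank
grad_shards = reduce_scatter(grads, 2) # backward: each rank's grad for its own shard
print(gathered)     # [1, 2, 3, 4]
print(grad_shards)  # [[2, 2], [2, 2]]
```

The `sum`-only restriction in the commit message corresponds to the fixed `sum(vals)` reduction here; other reductions would need their own backward rules.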
44d0466451be755f226c379201220d989f45d7e | Wed, 17 Apr 2024 22:31:30 +0000 | [PATCH 0265/1000] Revert "[DeviceMesh][Test] Add 3d unit test for `get_local_rank()` (#124142)" | This reverts commit a403757913689d200683a4158c565bc3dbade74b. Reverted https://github.com/pytorch/pytorch/pull/124142 on behalf of https://github.com/malfet due to Broke lint ([comment](https://github.com/pytorch/pytorch/pull/124142#issuecomment-2062587289)) | diff --git a/test/distributed/test_device_mesh.py b/test/distributed/test_device_mesh.py
index 697cc7fec0..a98916a922 100644
--- a/test/distributed/test_device_mesh.py
+++ b/test/distributed/test_device_mesh.py
@@ -195,45 +195,6 @@ class DeviceMeshTestNDim(DTensorTestBase):
self.assertNotEqual(hash(mesh), hash... | 2.41.0 |
c18afa25f81aeeb6e817254e06cf537a20c656c | Tue, 16 Apr 2024 06:58:14 +0000 | [PATCH 0266/1000] Intel GPU oneDNN upstreaming for primitive integration (#117112) | # Motivation As proposed in https://github.com/pytorch/pytorch/issues/114848 and https://github.com/pytorch/pytorch/issues/114723, oneDNN library is an important component for Intel GPU software ecosystem. Current PR is based on #117098, where oneDNN library for Intel GPU should be ready. This PR is the integration ... | diff --git a/aten/src/ATen/native/mkldnn/xpu/detail/Attr.h b/aten/src/ATen/native/mkldnn/xpu/detail/Attr.h
new file mode 100644
index 0000000000..56e5870849
--- /dev/null
+++ b/aten/src/ATen/native/mkldnn/xpu/detail/Attr.h
@@ -0,0 +1,365 @@
+#pragma once
+
+#include <ATen/ATen.h>
+#include <oneapi/dnnl/dnnl.hpp>
+#incl... | 2.41.0 |
23bf9cef0ed582bd17801c9e36faba499adc993 | Wed, 17 Apr 2024 11:59:20 -0700 | [PATCH 0267/1000] Add fake impl for aten.unique2 (#124306) | Reapply of: https://github.com/pytorch/pytorch/pull/121571 Differential Revision: [D56258431](https://our.internmc.facebook.com/intern/diff/D56258431) Pull Request resolved: https://github.com/pytorch/pytorch/pull/124306 Approved by: https://github.com/gmagogsfm | diff --git a/test/test_ops.py b/test/test_ops.py
index 3be010f83f..3aa36bae6d 100644
--- a/test/test_ops.py
+++ b/test/test_ops.py
@@ -2374,6 +2374,15 @@ dynamic_output_op_tests = (
"linalg.lstsq.grad_oriented",
)
+# Ops that have dynamic output shapes that we can handle when
+# allow_dynamic_shape_ops is True ... | 2.41.0 |
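Patch 0267's fake impl for `aten.unique2` deals with a data-dependent output shape: two inputs of the same size can produce results of different sizes, so a fake/meta implementation cannot use a fixed output size. A plain-Python illustration of the property:

```python
# Same input *length*, different output *length*: unique's result size
# depends on the values, not just the shape.
a = [1, 1, 2, 3, 3, 3]
b = [1, 2, 3, 4, 5, 6]
print(sorted(set(a)))  # [1, 2, 3]
print(sorted(set(b)))  # [1, 2, 3, 4, 5, 6]
```

This is why such ops sit in a dedicated dynamic-output-shape list in `test_ops.py` rather than the standard fake-tensor path.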
ebdbb63ceaea0bba3643307cd3328c470c9d28c | Wed, 17 Apr 2024 12:44:54 -0700 | [PATCH 0268/1000] Introduce set_example_value and use it throughout Dynamo (#124176) | I'm going to setup some extra behavior when we set example value, so I need a convenient place to interpose. I cannot easily do it on meta itself because its a generic dict with no interposition point. Signed-off-by: Edward Z. Yang <ezyang@meta.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/1241... | diff --git a/torch/_dynamo/eval_frame.py b/torch/_dynamo/eval_frame.py
index 20f8675c75..17249d8cc8 100644
--- a/torch/_dynamo/eval_frame.py
+++ b/torch/_dynamo/eval_frame.py
@@ -58,7 +58,7 @@ from .code_context import code_context
from .exc import CondOpArgsMismatchError, UserError, UserErrorType
from .mutation_guar... | 2.41.0 |
330acae768273bae28e2df38046992da5d49937 | Tue, 16 Apr 2024 16:08:36 +0000 | [PATCH 0269/1000] Refactored implementation for upsample_nearest decompostions (#122783) | Pull Request resolved: https://github.com/pytorch/pytorch/pull/122783 Approved by: https://github.com/peterbell10 | diff --git a/torch/_decomp/decompositions.py b/torch/_decomp/decompositions.py
index 56f63952f4..3ef43ad4b1 100644
--- a/torch/_decomp/decompositions.py
+++ b/torch/_decomp/decompositions.py
@@ -2647,71 +2647,45 @@ def get_scale_value(scales, idx):
@register_decomposition(aten.upsample_nearest1d.vec)
+@register_de... | 2.41.0 |
4162eecfcb5cc11139260c034c653e972a9073a | Wed, 17 Apr 2024 23:08:45 +0000 | [PATCH 0271/1000] Update Security Policy to provide Security Guidance for users (#120531) | Fixes #120530 Co-authored-by: albanD <desmaison.alban@gmail.com> Pull Request resolved: https://github.com/pytorch/pytorch/pull/120531 Approved by: https://github.com/malfet, https://github.com/albanD | diff --git a/SECURITY.md b/SECURITY.md
index 0651f82b70..e8e0249fc8 100644
--- a/SECURITY.md
+++ b/SECURITY.md
@@ -1,9 +1,56 @@
-# Reporting Security Issues
+# Security Policy
-If you believe you have found a security vulnerability in PyTorch, we encourage you to let us know right away. We will investigate all legiti... | 2.41.0 |
d7aeedb72f8a96d0f168308292e0d41c095f01b | Wed, 17 Apr 2024 23:26:55 +0000 | [PATCH 0273/1000] [Dynamo] Check for __bool__ attribute before accessing it (#120943) | This PR checks if __bool__ attribute is available before accessing it when handling a UserDefinedObjectVariable Fixes #119782 Pull Request resolved: https://github.com/pytorch/pytorch/pull/120943 Approved by: https://github.com/zou3519 | diff --git a/test/dynamo_expected_failures/TestComposability.test_convert_without_squash_mask b/test/dynamo_expected_failures/TestComposability.test_convert_without_squash_mask
deleted file mode 100644
index e69de29bb2..0000000000
diff --git a/test/dynamo_expected_failures/TestComposability.test_fusion_before_s_prep b/... | 2.41.0 |
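The `__bool__` guard from patch 0273 can be sketched without torch: look the dunder up on the *type* and fall back to default truthiness when it is absent, instead of calling it unconditionally. All names below are illustrative, not Dynamo's actual variable-tracker code:

```python
class NoBool:
    pass

class WithBool:
    def __bool__(self):
        return False

def truthiness_via_dunder(obj):
    # check for __bool__ before accessing it, as the patch does for
    # UserDefinedObjectVariable
    boolmethod = getattr(type(obj), "__bool__", None)
    if boolmethod is not None:
        return boolmethod(obj)
    return True  # default truthiness for plain objects (ignoring __len__)

print(truthiness_via_dunder(NoBool()))   # True
print(truthiness_via_dunder(WithBool())) # False
```

Without the `None` check, handling a `NoBool`-style object would raise `AttributeError`, which matches the failure mode the linked issue describes.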
4f42bfd528ccb8cf1ac5c4debc6fa101ca5edb7 | Tue, 16 Apr 2024 20:26:48 +0000 | [PATCH 0274/1000] [dynamo] Support list.reverse (#124210) | fixes #123974 Pull Request resolved: https://github.com/pytorch/pytorch/pull/124210 Approved by: https://github.com/peterbell10 | diff --git a/test/dynamo/test_repros.py b/test/dynamo/test_repros.py
index 50b42b6557..754aef6d34 100644
--- a/test/dynamo/test_repros.py
+++ b/test/dynamo/test_repros.py
@@ -4777,6 +4777,29 @@ def forward(self, s0 : torch.SymInt, s1 : torch.SymInt, L_x_ : torch.Tensor):
global_fn = new_fn
self.assert... | 2.41.0 |
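The `list.reverse` support added in patch 0274 has to model an in-place mutation, since the method reverses the list it is called on and returns `None` rather than producing a new value:

```python
# list.reverse mutates in place and returns None — the semantics a tracer
# must model as a mutation of the traced list, not a pure function call.
xs = [1, 2, 3]
ret = xs.reverse()
print(xs)   # [3, 2, 1]
print(ret)  # None
```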
5235694f467e75003164a5078f5ec67406146b0 | Wed, 17 Apr 2024 08:53:03 -0700 | [PATCH 0275/1000] [FSDP2] Made `unshard` return type consistent (#124293) | We can always return an `UnshardHandle` if `async_op=True` even if the FSDP module does not manage any parameters and hence does not have an `FSDPParamGroup`. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124293 Approved by: https://github.com/weifengpy ghstack dependencies: #120952 | diff --git a/test/distributed/_composable/fsdp/test_fully_shard_comm.py b/test/distributed/_composable/fsdp/test_fully_shard_comm.py
index f41f4f2009..d5e3393f82 100644
--- a/test/distributed/_composable/fsdp/test_fully_shard_comm.py
+++ b/test/distributed/_composable/fsdp/test_fully_shard_comm.py
@@ -577,7 +577,7 @@ c... | 2.41.0 |
5049de24298a229fc133f8ed272e558ca81c73a | Wed, 17 Apr 2024 23:44:00 +0000 | [PATCH 0277/1000] Revert "[Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449)" | This reverts commit 5bef127c2ea49280e7fda4f9fa7cad6fa4078e7d. Reverted https://github.com/pytorch/pytorch/pull/119449 on behalf of https://github.com/PaliC due to your using TORCH_INTERNAL_ASSERT incorrectly ([comment](https://github.com/pytorch/pytorch/pull/119449#issuecomment-2062696010)) | diff --git a/c10/core/impl/alloc_cpu.cpp b/c10/core/impl/alloc_cpu.cpp
index def4c3a3a9..9b7ae22f9f 100644
--- a/c10/core/impl/alloc_cpu.cpp
+++ b/c10/core/impl/alloc_cpu.cpp
@@ -3,7 +3,6 @@
#include <c10/core/alignment.h>
#include <c10/util/Flags.h>
#include <c10/util/Logging.h>
-#include <c10/util/env.h>
#include... | 2.41.0 |
1e1d671ef893070c7a527fc729fa8757385d062 | Wed, 17 Apr 2024 05:48:37 -0700 | [PATCH 0278/1000] Stop requiring a pystub for register_fake by default (#124064) | Previously, if someone used `register_fake` to add a fake impl for an operator defined in C++, we would require them to add a `m.set_python_module(<module>)` call to C++. This was to avoid situations where a user imported the C++ operator without importing the fake impl. This "breaks" open registration: there's no way... | diff --git a/test/custom_operator/test_custom_ops.py b/test/custom_operator/test_custom_ops.py
index 5badf01d07..1b490c49e9 100644
--- a/test/custom_operator/test_custom_ops.py
+++ b/test/custom_operator/test_custom_ops.py
@@ -6,6 +6,7 @@ import tempfile
import unittest
import torch
+import torch._library.utils as ... | 2.41.0 |
25387f4dd00de337f53d897bd0d30983f1a8f49 | Thu, 18 Apr 2024 00:13:32 +0000 | [PATCH 0281/1000] [ez][CI] Reduce CI_SERIAL_LIST pt2 (#124298) | #124085 Add @serialTest() to some tests slow gradcheck already runs serially Doing this slowly so its easier to check flaky issues that might get made Pull Request resolved: https://github.com/pytorch/pytorch/pull/124298 Approved by: https://github.com/kit1980 | diff --git a/test/profiler/test_profiler.py b/test/profiler/test_profiler.py
index a4269d84d3..8e4e31718d 100644
--- a/test/profiler/test_profiler.py
+++ b/test/profiler/test_profiler.py
@@ -60,6 +60,7 @@ from torch.testing._internal.common_utils import (
IS_WINDOWS,
parametrize,
run_tests,
+ serialTe... | 2.41.0 |
7a3dc56d4cd33c670c629aa94174e1f31225eb7 | Wed, 17 Apr 2024 13:54:21 -0700 | [PATCH 0283/1000] Small Adamax fix (#123498) | Pull Request resolved: https://github.com/pytorch/pytorch/pull/123498 Approved by: https://github.com/janeyx99 | diff --git a/torch/optim/adamax.py b/torch/optim/adamax.py
index 2269c8484d..443fbd2458 100644
--- a/torch/optim/adamax.py
+++ b/torch/optim/adamax.py
@@ -240,7 +240,9 @@ def adamax(
See :class:`~torch.optim.Adamax` for details.
"""
- if not all(isinstance(t, torch.Tensor) for t in state_steps):
+ if ... | 2.41.0 |
c66c43d51e5cba234e1f84a1155c12ac4c86a3f | Tue, 16 Apr 2024 15:08:44 +0000 | [PATCH 0286/1000] Make macro with AMP more generic (#124050) | # Motivation According to [[RFC] Intel GPU Upstreaming](https://github.com/pytorch/pytorch/issues/114723), we would like to upstream amp autocast policy to facilitate the functionality and accuracy of `torch.compile` on e2e benchmarks. # Solution The first PR aims to make macro `KERNEL` to be generic. It accepts two t... | diff --git a/aten/src/ATen/autocast_mode.cpp b/aten/src/ATen/autocast_mode.cpp
index 7282bba9e6..d7c5a9b768 100644
--- a/aten/src/ATen/autocast_mode.cpp
+++ b/aten/src/ATen/autocast_mode.cpp
@@ -247,15 +247,15 @@ TORCH_LIBRARY_IMPL(_, Autocast, m) {
TORCH_LIBRARY_IMPL(aten, Autocast, m) {
// lower_precision_fp
- ... | 2.41.0 |
ae31495ff36b587fbde6eb509efd8584d238aff | Thu, 18 Apr 2024 01:17:44 +0000 | [PATCH 0287/1000] Try to speed up lintrunner in CI (#124311) | Before timing: clang is 19min and noclang is 16min After timing: clang is 17min and noclang is 15min This is still crazy slow so most likely more could be done but didn't check the logs in details. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124311 Approved by: https://github.com/ezyang, https://git... | diff --git a/.github/scripts/lintrunner.sh b/.github/scripts/lintrunner.sh
index e1403d13d6..82f472b0f1 100755
--- a/.github/scripts/lintrunner.sh
+++ b/.github/scripts/lintrunner.sh
@@ -6,6 +6,9 @@ CONDA_ENV=$(conda env list --json | jq -r ".envs | .[-1]")
eval "$(command conda 'shell.bash' 'hook' 2> /dev/null)"
con... | 2.41.0 |
30bb13fe84c88ab5c988351543362b60fefb556 | Wed, 17 Apr 2024 09:19:38 -0700 | [PATCH 0288/1000] Re-land precompile triton templates (#124030) | Re-land precompile triton templates. This got reverted because we were precompiling templates without checking the cache. I have since added logic and a test to ensure we do not precompile if there is a cache hit. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124030 Approved by: https://github.com/shu... | diff --git a/test/inductor/test_max_autotune.py b/test/inductor/test_max_autotune.py
index d1f074de51..af87aba112 100644
--- a/test/inductor/test_max_autotune.py
+++ b/test/inductor/test_max_autotune.py
@@ -328,7 +328,8 @@ class TestMaxAutotune(TestCase):
inputs: str,
benchmark: Callable[[Any]... | 2.41.0 |
3b4ac956eabb34c9f7e52b8c7fa50b085f667e5 | Wed, 17 Apr 2024 20:20:18 +0300 | [PATCH 0289/1000] Add index_reduce decomposition (#122579) | As in the title. Pull Request resolved: https://github.com/pytorch/pytorch/pull/122579 Approved by: https://github.com/peterbell10 ghstack dependencies: #123375 | diff --git a/test/inductor/test_torchinductor_opinfo.py b/test/inductor/test_torchinductor_opinfo.py
index 75c4f61ca3..7319656905 100644
--- a/test/inductor/test_torchinductor_opinfo.py
+++ b/test/inductor/test_torchinductor_opinfo.py
@@ -168,7 +168,7 @@ inductor_skips["cpu"] = {
"linalg.ldl_factor": {f32, f64}, ... | 2.41.0 |
71423c2e4881bb8dbe4dc302e77030ef53058fb | Wed, 17 Apr 2024 14:52:47 -0700 | [PATCH 0290/1000] [inductor] let coordesc tuner respect max RBLOCK (#124325) | Fix https://github.com/pytorch/pytorch/issues/124251 . Coordesc tuner need respect max RBLOCK. When rnumel is a multiple of max-RBLOCK, inductor codegen will skip rmask. If coordesc tuner does not consider max-RBLOCK and pick a RBLOCK larger than that, we would get CUDA IMA (illegal memory access) error. Pull Request... | diff --git a/test/inductor/test_coordinate_descent_tuner.py b/test/inductor/test_coordinate_descent_tuner.py
index 86212e891f..5b9f35fa9c 100644
--- a/test/inductor/test_coordinate_descent_tuner.py
+++ b/test/inductor/test_coordinate_descent_tuner.py
@@ -98,6 +98,18 @@ class TestCoordinateDescentTuner(TestCase):
... | 2.41.0 |
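The invariant behind patch 0290 can be sketched abstractly (this is a simplified, torch-free model, not Inductor's actual codegen logic): when `rnumel` is a multiple of the codegen-time max `RBLOCK`, the reduction mask is omitted, so a tuner that later picks a larger `RBLOCK` makes the final block overrun the iteration space:

```python
import math

def last_block_overruns(rnumel, rblock, mask_emitted):
    # without a mask, the last block's end index must land exactly on rnumel
    nblocks = math.ceil(rnumel / rblock)
    return (not mask_emitted) and nblocks * rblock != rnumel

MAX_RBLOCK = 256
rnumel = 1024   # multiple of MAX_RBLOCK, so codegen omitted the rmask
print(last_block_overruns(rnumel, MAX_RBLOCK, mask_emitted=False))  # False: in bounds
print(last_block_overruns(rnumel, 768, mask_emitted=False))         # True: out of bounds
```

The fix described in the commit message amounts to clamping the tuner's candidate block sizes to the max `RBLOCK` the kernel was generated for, so the second case can never be proposed.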
8f3b99a94659d1d14e28f6271e123cc48dcef71 | Wed, 17 Apr 2024 09:42:57 +0800 | [PATCH 0291/1000] [inductor] Modify the rules for freezing the layout of x.unwrap_view() in convert_to_reinterpret_view (#122760) | Fix https://github.com/pytorch/pytorch/issues/121607 Modify the rules for freezing the layout of `x.unwrap_view()` in `convert_to_reinterpret_view`: If any read of `x.unwrap_view()` is in channels_last format, freeze the layout of `x.unwrap_view()` to channels_last format. Pull Request resolved: https://github.com/py... | diff --git a/torch/_inductor/ir.py b/torch/_inductor/ir.py
index 2a66ea69d3..ea80192422 100644
--- a/torch/_inductor/ir.py
+++ b/torch/_inductor/ir.py
@@ -4138,7 +4138,30 @@ class ExternKernel(InputsKernel):
# NOTE: Don't use extract_read_writes here as it fails when
# make_loader() inlines the comp... | 2.41.0 |
1a56efbb91377237eb1b81d72c7d598ad61b14e | Mon, 15 Apr 2024 20:48:42 +0800 | [PATCH 0292/1000] [inductor] modify the output_stride of ConcatKernel (#122761) | Fix https://github.com/pytorch/pytorch/issues/121613. Modify the `output_stride` of `ConcatKernel`: If any input to `Concat` is `Pointwise`, check the layout of all inputs to `Pointwise`, if any of the inputs is in channels_last format, set channels_last strides for the `output_stride`. Pull Request resolved: https://... | diff --git a/torch/_inductor/ir.py b/torch/_inductor/ir.py
index ea80192422..4cfb582e22 100644
--- a/torch/_inductor/ir.py
+++ b/torch/_inductor/ir.py
@@ -3856,6 +3856,20 @@ class ConcatKernel(NopKernel):
# use CL stride for the output
output_stride = make_channels_last_strides... | 2.41.0 |
f04c29be5bcd5670fb9c05e0cf8f4650559052f | Mon, 15 Apr 2024 13:58:03 +0800 | [PATCH 0293/1000] [inductor] Freeze the layout of the conv input to channels_last (#122765) | Fix https://github.com/pytorch/pytorch/issues/118082. Pull Request resolved: https://github.com/pytorch/pytorch/pull/122765 Approved by: https://github.com/jgong5, https://github.com/leslie-fang-intel, https://github.com/jansel | diff --git a/torch/_inductor/graph.py b/torch/_inductor/graph.py
index 404c49edde..aaaee95887 100644
--- a/torch/_inductor/graph.py
+++ b/torch/_inductor/graph.py
@@ -18,6 +18,7 @@ from torch._decomp import get_decompositions
from torch._dynamo.utils import defake, dynamo_timed
from torch._higher_order_ops.effects im... | 2.41.0 |
6b757701e6564621463ab9da0b25778d9326043 | Tue, 16 Apr 2024 00:12:23 -0700 | [PATCH 0294/1000] [aot] trim refcount for subclass runtime wrapper (#124155) | On torchtrain, before <img width="1218" alt="image" src="https://github.com/pytorch/pytorch/assets/9547562/b340c114-071a-440c-904c-c042de4d92c5"> after  Pull Request resolved: https://github.com/pytorch/pytorch/pull/12415... | diff --git a/test/inductor/test_compiled_autograd.py b/test/inductor/test_compiled_autograd.py
index 27c1ccd3e7..5f9dd9b84d 100644
--- a/test/inductor/test_compiled_autograd.py
+++ b/test/inductor/test_compiled_autograd.py
@@ -1287,6 +1287,61 @@ TORCH_LIBRARY(test_autograd_cpp_node_data_dependent, m) {
out... | 2.41.0 |
b4b857a60150205fb53e08f7b09e791b0723ace | Wed, 17 Apr 2024 15:31:35 -0700 | [PATCH 0295/1000] [dynamo][nn_module] Enable torch.compile/disable as decorators on the class (#124187) | Support something like. This is UI change, so please review carefully. ~~~ @torch._dynamo.disable class SimpleLinear(torch.nn.Module): def __init__(self): super().__init__() self.layer0 = torch.nn.Linear(4, 4) def forward(self, inp): return self.layer0(torch.sigmoid(inp)) @torch.compile(backend=cnts) class SimpleMod... | diff --git a/test/dynamo/test_decorators.py b/test/dynamo/test_decorators.py
index 3bff8b7177..890edca40c 100644
--- a/test/dynamo/test_decorators.py
+++ b/test/dynamo/test_decorators.py
@@ -134,6 +134,55 @@ class DecoratorTests(torch._dynamo.test_case.TestCase):
all(node.target is not torch.sigmoid for no... | 2.41.0 |
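Patch 0295 extends the decorator UI so `torch._dynamo.disable` and `torch.compile` can wrap a module *class*, not just a function. A torch-free analogy of that pattern — a class decorator that rebinds `forward` (purely illustrative; Dynamo's real machinery is far more involved):

```python
def disable(cls):
    # sketch of decorating a class: wrap its forward and mark the class
    original = cls.forward

    def forward(self, *args, **kwargs):
        # in Dynamo, this path would run the original forward untraced
        return original(self, *args, **kwargs)

    cls.forward = forward
    cls._dynamo_disabled = True  # hypothetical marker for illustration
    return cls

@disable
class SimpleLinear:
    def forward(self, inp):
        return inp * 2

m = SimpleLinear()
print(m.forward(3))                   # 6 — behavior is unchanged
print(SimpleLinear._dynamo_disabled)  # True — the class carries the marker
```

The point of the UI change is that the decoration happens once at class-definition time and applies to every instance, mirroring the `@torch._dynamo.disable` / `@torch.compile` class usage shown in the commit message.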
ddd17bdc6c69a5765c39ab91b9acae74b58e8d3 | Tue, 16 Apr 2024 15:02:26 -0700 | [PATCH 0296/1000] [benchmarks] Add --snapshot-memory to get memory pickles for eager vs compiled (#119411) | creates memory snapshot pickles e.g. ``` inductor_no_cudagraphs_torchbench_amp_training_cuda_performance_compiled_pytorch_stargan.pickle inductor_no_cudagraphs_torchbench_amp_training_cuda_performance_eager_pytorch_stargan.pickle ``` Pull Request resolved: https://github.com/pytorch/pytorch/pull/119411 Approved by: ht... | diff --git a/benchmarks/dynamo/common.py b/benchmarks/dynamo/common.py
index 0d6965c148..9908b44508 100644
--- a/benchmarks/dynamo/common.py
+++ b/benchmarks/dynamo/common.py
@@ -1994,6 +1994,29 @@ def maybe_init_distributed(should_init_distributed, rank, world_size, port="6789
torch.distributed.destroy_pr... | 2.41.0 |
c94652d7d3a465fd17125d4f07eb7ce33da0b4b | Wed, 17 Apr 2024 15:03:22 -0700 | [PATCH 0297/1000] [benchmarks] Add --use-warm-peak-memory (#124326) | Measuring peak memory on the first run can capture cases where compiled artifacts leak into runtime, but it also introduces a lot of noise from cudnn/triton autotuning which generally uses as much memory as it can. Setting this flag as a default will need some discussion, so I will only add it to unblock compiled backw... | diff --git a/benchmarks/dynamo/common.py b/benchmarks/dynamo/common.py
index 9908b44508..9e946df174 100644
--- a/benchmarks/dynamo/common.py
+++ b/benchmarks/dynamo/common.py
@@ -2724,6 +2724,10 @@ class BenchmarkRunner:
eager_latency, eager_peak_mem, _ = warmup(
self.model_iter_fn... | 2.41.0 |
12bae09bec34a724bf03c37a73d466c4a9d7e38 | Wed, 17 Apr 2024 11:55:22 -0700 | [PATCH 0298/1000] [dynamo] fix 3.11+ refleak (#124238) | Fixes https://github.com/pytorch/pytorch/issues/119607 for 3.11+. In 3.11+, `_PyFrame_FastToLocalsWithError` could implicity run `COPY_FREE_VARS` on the original frame, leading to double incref's since the dynamo shadow frame can rerun `COPY_FREE_VARS`. So the solution is to skip the first `COPY_FREE_VARS` instruction... | diff --git a/test/dynamo/test_misc.py b/test/dynamo/test_misc.py
index 8ee9037e4c..36e5a32a6f 100644
--- a/test/dynamo/test_misc.py
+++ b/test/dynamo/test_misc.py
@@ -44,7 +44,6 @@ from torch._dynamo.testing import (
same,
skipIfNotPy311,
unsupported,
- xfailIfPy311,
)
from torch._dynamo.utils impor... | 2.41.0 |
fed2e826beef1498aecd56ee2260046fabd08ab | Thu, 18 Apr 2024 03:29:16 +0000 | [PATCH 0300/1000] [DTensor][Test] Add unit tests to keep track of DTensor sharding for 2D (#123687) | Fixes #ISSUE_NUMBER Pull Request resolved: https://github.com/pytorch/pytorch/pull/123687 Approved by: https://github.com/wanchaol | diff --git a/test/distributed/_tensor/test_utils.py b/test/distributed/_tensor/test_utils.py
index a19e6e4753..7ba49ae520 100644
--- a/test/distributed/_tensor/test_utils.py
+++ b/test/distributed/_tensor/test_utils.py
@@ -3,13 +3,15 @@
import itertools
import torch
-from torch.distributed._tensor import distribute... | 2.41.0 |
213f262af77d4e2b8fc2333cb4391996296eca4 | Wed, 17 Apr 2024 10:09:27 -0700 | [PATCH 0301/1000] [dynamo][cpp-guards] Improve when to use Dict vs DictSubclassGuardManager (#124237) | Pull Request resolved: https://github.com/pytorch/pytorch/pull/124237 Approved by: https://github.com/jansel, https://github.com/mlazos ghstack dependencies: #124230 | diff --git a/test/dynamo/test_guard_manager.py b/test/dynamo/test_guard_manager.py
index 7f161ec008..88cb5e5968 100644
--- a/test/dynamo/test_guard_manager.py
+++ b/test/dynamo/test_guard_manager.py
@@ -11,6 +11,7 @@ from torch.testing._internal.common_utils import set_default_dtype
RootGuardManager = guards.RootGua... | 2.41.0 |
7daa110c8629398ebe6d6c3ccc1d9d692eb9872 | Thu, 18 Apr 2024 03:33:51 +0000 | [PATCH 0302/1000] Back out "Refresh OpOverloadPacket if a new OpOverload gets added (#123578)" (#124324) | Summary: Original commit changeset: 528276bc8a92 Original Phabricator Diff: D56057952 Differential Revision: D56271240 Pull Request resolved: https://github.com/pytorch/pytorch/pull/124324 Approved by: https://github.com/davidberard98 | diff --git a/test/test_custom_ops.py b/test/test_custom_ops.py
index 6e04e8890d..69a83eddde 100644
--- a/test/test_custom_ops.py
+++ b/test/test_custom_ops.py
@@ -2458,30 +2458,6 @@ Please use `add.register_fake` to add an fake impl.""",
y = f(x)
self.assertEqual(y, x.sin())
- @skipIfTorchDynamo(... | 2.41.0 |
9ab9248ce0391c233e7189c0ee71f2aca6a786c | Tue, 16 Apr 2024 19:35:36 -0700 | [PATCH 0303/1000] [Inductor Intel GPU backend Upstream] Generalize device-bias code in (#124249) | Generalize device-bias code in tirton_utils.py Pull Request resolved: https://github.com/pytorch/pytorch/pull/124249 Approved by: https://github.com/EikanWang, https://github.com/guangyey, https://github.com/jansel | diff --git a/torch/testing/_internal/triton_utils.py b/torch/testing/_internal/triton_utils.py
index dd1ab9e6d6..301c3cd472 100644
--- a/torch/testing/_internal/triton_utils.py
+++ b/torch/testing/_internal/triton_utils.py
@@ -3,11 +3,11 @@
import unittest
from torch.testing._internal.inductor_utils import HAS_CUDA... | 2.41.0 |
e1c0d2497811bdb1855202747c7708154ab4d7e | Thu, 18 Apr 2024 04:17:38 +0000 | [PATCH 0304/1000] [cublas] Keep explicit workspace creation to avoid OOM (#124250) | Summary: We explicitly set the cublas workspace even though CUDA 12.2+ fixed the issue where memory usage increased during graph capture. Original issue: https://github.com/pytorch/pytorch/pull/83461 This is because in CUDA 12.2+, the use of cudaMallocAsync in cublas will allocate memory dynamically (even if they're c... | diff --git a/aten/src/ATen/cuda/CublasHandlePool.cpp b/aten/src/ATen/cuda/CublasHandlePool.cpp
index ba10cc01c3..95d1ba2fb4 100644
--- a/aten/src/ATen/cuda/CublasHandlePool.cpp
+++ b/aten/src/ATen/cuda/CublasHandlePool.cpp
@@ -77,7 +77,7 @@ using CuBlasPoolType = DeviceThreadHandlePool<cublasHandle_t, createCublasHandl... | 2.41.0 |
ad66e05d2c7618e0fb752455cdc6144aeb4c3d4 | Thu, 18 Apr 2024 04:19:37 +0000 | [PATCH 0305/1000] [4/x][AMD][Lowering Enablement] Enabling meta internal AOTInductor compilation on ROCM (#124123) | Summary: as title Test Plan: CI & unit test Differential Revision: D56163334 Pull Request resolved: https://github.com/pytorch/pytorch/pull/124123 Approved by: https://github.com/chenyang78, https://github.com/jansel | diff --git a/torch/_inductor/codecache.py b/torch/_inductor/codecache.py
index d4f2805773..4e3c24cf96 100644
--- a/torch/_inductor/codecache.py
+++ b/torch/_inductor/codecache.py
@@ -962,7 +962,7 @@ class CompiledFxGraph:
def cpp_compiler() -> str:
if config.is_fbcode():
- return build_paths.cc()
+ ... | 2.41.0 |
ff85b42f9c6a2bf3b7ae597a0abb9fd56204a98 | Thu, 18 Apr 2024 06:12:54 +0000 | [PATCH 0307/1000] Revert "Add swap_tensors path to nn parametrizations (#124130)" | This reverts commit 64f6ddf12c11738c3f4b1ed01cf4f699541496bf. Reverted https://github.com/pytorch/pytorch/pull/124130 on behalf of https://github.com/DanilBaibak due to Broken trunk ([comment](https://github.com/pytorch/pytorch/pull/124130#issuecomment-2063074856)) | diff --git a/test/dynamo_expected_failures/TestNNParametrization.test_deepcopy_after_parametrization_swap_False b/test/dynamo_expected_failures/TestNNParametrization.test_deepcopy_after_parametrization
similarity index 100%
rename from test/dynamo_expected_failures/TestNNParametrization.test_deepcopy_after_parametrizat... | 2.41.0 |
e86a40694f03c66680201c01c13347eff38a951 | Thu, 18 Apr 2024 06:34:32 +0000 | [PATCH 0308/1000] Revert "[Dynamo] Check for __bool__ attribute before accessing it (#120943)" | This reverts commit dd7aeedb72f8a96d0f168308292e0d41c095f01b. Reverted https://github.com/pytorch/pytorch/pull/120943 on behalf of https://github.com/DanilBaibak due to Broken trunk ([comment](https://github.com/pytorch/pytorch/pull/120943#issuecomment-2063098295)) | diff --git a/test/dynamo_expected_failures/TestComposability.test_convert_without_squash_mask b/test/dynamo_expected_failures/TestComposability.test_convert_without_squash_mask
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/test/dynamo_expected_failures/TestComposability.test_fusion_before_s_prep b/test... | 2.41.0 |
04fac5618dbefcf6b29832c0215ae5f5e0fa2c3 | Wed, 17 Apr 2024 10:09:27 -0700 | [PATCH 0309/1000] [dynamo][cpp-guard] Reland Attempt 1 - Enable cpp guard manager (#124231) | Pull Request resolved: https://github.com/pytorch/pytorch/pull/124231 Approved by: https://github.com/jansel ghstack dependencies: #124230, #124237 | diff --git a/torch/_dynamo/config.py b/torch/_dynamo/config.py
index d95e7986b4..9482cfabcc 100644
--- a/torch/_dynamo/config.py
+++ b/torch/_dynamo/config.py
@@ -341,7 +341,7 @@ numpy_default_int = "int64"
use_numpy_random_stream = False
# Use C++ guard manager
-enable_cpp_guard_manager = os.environ.get("TORCHDYNA... | 2.41.0 |
b82345e487a7f1a6ba3a663201e0e0235d87b23 | Thu, 18 Apr 2024 07:21:41 +0000 | [PATCH 0310/1000] Revert "Re-land precompile triton templates (#124030)" | This reverts commit 030bb13fe84c88ab5c988351543362b60fefb556. Reverted https://github.com/pytorch/pytorch/pull/124030 on behalf of https://github.com/DanilBaibak due to Broken trunk ([comment](https://github.com/pytorch/pytorch/pull/124030#issuecomment-2063191117)) | diff --git a/test/inductor/test_max_autotune.py b/test/inductor/test_max_autotune.py
index af87aba112..d1f074de51 100644
--- a/test/inductor/test_max_autotune.py
+++ b/test/inductor/test_max_autotune.py
@@ -328,8 +328,7 @@ class TestMaxAutotune(TestCase):
inputs: str,
benchmark: Callable[[Any]... | 2.41.0 |
59f1da62f6a74d815dede5ac5513b37c49733a3 | Wed, 17 Apr 2024 02:15:59 -0700 | [PATCH 0312/1000] [sym_shapes][perf] _find not update unchanged replacements (#124274) | Differential Revision: [D56236380](https://our.internmc.facebook.com/intern/diff/D56236380) Pull Request resolved: https://github.com/pytorch/pytorch/pull/124274 Approved by: https://github.com/ezyang | diff --git a/torch/fx/experimental/symbolic_shapes.py b/torch/fx/experimental/symbolic_shapes.py
index 1a9df2466f..37fa27b44b 100644
--- a/torch/fx/experimental/symbolic_shapes.py
+++ b/torch/fx/experimental/symbolic_shapes.py
@@ -3976,7 +3976,9 @@ class ShapeEnv:
return a
res = self.replacements[... | 2.41.0 |
fcbeb34894288fa10ec1b561cd0b58b5407e0a7 | Thu, 18 Apr 2024 01:03:38 -0700 | [PATCH 0313/1000] [ATen] Add CPU fp16 support for nll_loss and cross_entropy_loss (#123256) | Add CPU FP16 support for nll_loss and cross_entropy_loss. Resolve issue #123328. Pull Request resolved: https://github.com/pytorch/pytorch/pull/123256 Approved by: https://github.com/jgong5, https://github.com/EikanWang, https://github.com/malfet | diff --git a/aten/src/ATen/native/LossNLL.cpp b/aten/src/ATen/native/LossNLL.cpp
index 0bf787b99b..0e7de9c272 100644
--- a/aten/src/ATen/native/LossNLL.cpp
+++ b/aten/src/ATen/native/LossNLL.cpp
@@ -304,8 +304,12 @@ void nll_loss_forward_out_cpu_template(
const Tensor& weight,
int64_t reduction,
int64_t ... | 2.41.0 |
032a780080646828bdda15f3af0277288b2fa34 | Thu, 18 Apr 2024 12:06:24 +0000 | [PATCH 0314/1000] Migrate linux-focal-cuda11_8-py3_10-gcc9-build to ARC (#123721) | Migrate linux-focal-cuda11_8-py3_10-gcc9-build to ARC Pull Request resolved: https://github.com/pytorch/pytorch/pull/123721 Approved by: https://github.com/jeanschmidt | diff --git a/.github/workflows/pull.yml b/.github/workflows/pull.yml
index 8f54248101..15c9e35587 100644
--- a/.github/workflows/pull.yml
+++ b/.github/workflows/pull.yml
@@ -232,7 +232,7 @@ jobs:
linux-focal-cuda11_8-py3_10-gcc9-build:
name: linux-focal-cuda11.8-py3.10-gcc9
- uses: ./.github/workflows/_li... | 2.41.0 |
5d4ebe9aeabc1fc46ca39dee2d446f9b5e9e114 | Thu, 18 Apr 2024 12:06:53 +0000 | [PATCH 0315/1000] Migrate linux-focal-cuda12_1-py3_10-gcc9-build to ARC (#123722) | Migrate linux-focal-cuda12_1-py3_10-gcc9-build to ARC Pull Request resolved: https://github.com/pytorch/pytorch/pull/123722 Approved by: https://github.com/jeanschmidt | diff --git a/.github/workflows/pull.yml b/.github/workflows/pull.yml
index 15c9e35587..56ce145913 100644
--- a/.github/workflows/pull.yml
+++ b/.github/workflows/pull.yml
@@ -257,7 +257,7 @@ jobs:
linux-focal-cuda12_1-py3_10-gcc9-build:
name: linux-focal-cuda12.1-py3.10-gcc9
- uses: ./.github/workflows/_li... | 2.41.0 |
135c4b921dadf90dcacd68bc54cb03e66166703 | Wed, 17 Apr 2024 12:28:30 -0700 | [PATCH 0317/1000] torch.library.register_fake now accepts more types (#124066) | We allow it to accept: - a string with the op name - an opoverload - a new-style custom op If any of these are referring to a new-style custom op (created with the custom_op decorator), then we dispatch to CustomOpDef.register_fake. Otherwise, we do what we previously did. Test Plan: - new tests Pull Request resolved... | diff --git a/test/test_custom_ops.py b/test/test_custom_ops.py
index 69a83eddde..037be095f6 100644
--- a/test/test_custom_ops.py
+++ b/test/test_custom_ops.py
@@ -2283,6 +2283,73 @@ class TestCustomOpAPI(TestCase):
continue
self.assertGreater(after, prev)
+ @skipIfTorchDynamo("Expecte... | 2.41.0 |
45173a0b58c9fb23d862a6a4c170b78b9807718 | Wed, 17 Apr 2024 16:49:34 -0700 | [PATCH 0318/1000] Add torch.library.register_autograd (#124071) | Allows registering autograd for all custom op entry points: - the new-style custom op API (custom_op) - the old-style torch.library APIs - C++ operator registration Test Plan: - tests Pull Request resolved: https://github.com/pytorch/pytorch/pull/124071 Approved by: https://github.com/albanD ghstack dependencies: #123... | diff --git a/docs/source/autograd.rst b/docs/source/autograd.rst
index 046ee42717..195e96cd39 100644
--- a/docs/source/autograd.rst
+++ b/docs/source/autograd.rst
@@ -199,6 +199,8 @@ Tensor autograd functions
Function.jvp
Function.vmap
+.. _context_method_mixins:
+
Context method mixins
^^^^^^^^^^^^^^^^^^... | 2.41.0 |
48c39c47d159add4f6217182812fbef31406244 | Wed, 17 Apr 2024 16:49:35 -0700 | [PATCH 0319/1000] Add OpOverload.redispatch; use it in new custom ops API (#124089) | A kernel has "dispatcher convention" if there is an additional keyset arg at the beginning of the argument list. This PR: - adds a way to register kernels with dispatcher_convention using Library.impl (pass dispatcher_convention = True) - adds OpOverload.redispatch We use both of the above in the new custom ops API: w... | diff --git a/c10/core/impl/PyInterpreter.cpp b/c10/core/impl/PyInterpreter.cpp
index 04b95d14ba..04f9d7c972 100644
--- a/c10/core/impl/PyInterpreter.cpp
+++ b/c10/core/impl/PyInterpreter.cpp
@@ -34,7 +34,9 @@ struct NoopPyInterpreterVTable final : public PyInterpreterVTable {
void python_op_registration_trampoline(
... | 2.41.0 |
542874311e82787abf4cee82ca3a1ca5e582d49 | Wed, 17 Apr 2024 16:49:35 -0700 | [PATCH 0320/1000] Delete qualname from custom_op decorator (#124092) | I forgot to delete this in an earlier PR. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124092 Approved by: https://github.com/albanD ghstack dependencies: #123937, #124064, #124065, #124066, #124071, #124089 | diff --git a/torch/_library/custom_ops.py b/torch/_library/custom_ops.py
index 508cca9ef9..f36d9e3393 100644
--- a/torch/_library/custom_ops.py
+++ b/torch/_library/custom_ops.py
@@ -28,7 +28,6 @@ def custom_op(
*,
mutates_args: Iterable[str],
device_types: device_types_t = None,
- qualname: Optional[... | 2.41.0 |
51f66c1950a582dd18d1b2ee67df840a8c4dbbe | Thu, 18 Apr 2024 13:35:48 +0000 | [PATCH 0321/1000] [Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449) | This PR is the beginning of attempts to wrap thread-unsafe getenv and set_env functions inside a RW mutex. Pull Request resolved: https://github.com/pytorch/pytorch/pull/119449 Approved by: https://github.com/albanD | diff --git a/c10/core/impl/alloc_cpu.cpp b/c10/core/impl/alloc_cpu.cpp
index 9b7ae22f9f..def4c3a3a9 100644
--- a/c10/core/impl/alloc_cpu.cpp
+++ b/c10/core/impl/alloc_cpu.cpp
@@ -3,6 +3,7 @@
#include <c10/core/alignment.h>
#include <c10/util/Flags.h>
#include <c10/util/Logging.h>
+#include <c10/util/env.h>
#include... | 2.41.0 |
325fd94a4927af5f08dcb063711097f1b034e38 | Thu, 18 Apr 2024 18:41:37 +0000 | [PATCH 0322/1000] Support xpu autocast policy (#124052) | Pull Request resolved: https://github.com/pytorch/pytorch/pull/124052 Approved by: https://github.com/jgong5, https://github.com/EikanWang, https://github.com/gujinghui, https://github.com/albanD | diff --git a/aten/src/ATen/autocast_mode.cpp b/aten/src/ATen/autocast_mode.cpp
index 14b8a36199..0b1dac55f3 100644
--- a/aten/src/ATen/autocast_mode.cpp
+++ b/aten/src/ATen/autocast_mode.cpp
@@ -569,5 +569,42 @@ TORCH_LIBRARY_IMPL(aten, AutocastCPU, m) {
}
+TORCH_LIBRARY_IMPL(_, AutocastXPU, m) {
+ m.fallback(tor... | 2.41.0 |
385ef2a5dbd62cb877e863c91ff29a43c340456 | Thu, 18 Apr 2024 14:07:00 +0000 | [PATCH 0323/1000] Revert "Skip workspace permission change for ROCm CI (#123816)" | This reverts commit 4322a0e782119f870ba1a17aec2be8a0ef1103d7. Reverted https://github.com/pytorch/pytorch/pull/123816 on behalf of https://github.com/DanilBaibak due to Broken trunk ([comment](https://github.com/pytorch/pytorch/pull/123816#issuecomment-2063949316)) | diff --git a/.ci/pytorch/build.sh b/.ci/pytorch/build.sh
index 13069482ae..3a51f255fe 100755
--- a/.ci/pytorch/build.sh
+++ b/.ci/pytorch/build.sh
@@ -223,23 +223,19 @@ if [[ "${BUILD_ENVIRONMENT}" != *android* && "${BUILD_ENVIRONMENT}" != *cuda* ]]
export BUILD_STATIC_RUNTIME_BENCHMARK=ON
fi
-# Do not change wor... | 2.41.0 |
f93402f619f58d651845981ccd1eba1d68da077 | Wed, 17 Apr 2024 20:45:43 -0400 | [PATCH 0325/1000] [NJT] Inline through torch.nested.nested_tensor_from_jagged instead of graph break (#124343) | Pull Request resolved: https://github.com/pytorch/pytorch/pull/124343 Approved by: https://github.com/jbschlosser | diff --git a/test/dynamo/test_subclasses.py b/test/dynamo/test_subclasses.py
index 387b6bf59b..8005d6e3a2 100644
--- a/test/dynamo/test_subclasses.py
+++ b/test/dynamo/test_subclasses.py
@@ -1361,6 +1361,14 @@ class TestNestedTensor(torch._dynamo.test_case.TestCase):
self._check_recompiles(fn, (nt,), (nt2,), F... | 2.41.0 |
677128cb892d17fe2281beae9e394fd6f89e455 | Thu, 18 Apr 2024 15:21:01 +0000 | [PATCH 0326/1000] [MPS] Fix crash with binary_cross_entropy is invoked for half dtypes (#124258) | By creating constants using input tensors dtype One line reproducer: ``` python -c "import torch; x=torch.arange(3, dtype=torch.float16,device='mps');print(torch.nn.functional.binary_cross_entropy(x, x))" ``` Before the change ``` loc("mps_subtract"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/ce725a5f-c761-11ee-... | diff --git a/aten/src/ATen/native/mps/operations/LossOps.mm b/aten/src/ATen/native/mps/operations/LossOps.mm
index 7b10476106..77727cb197 100644
--- a/aten/src/ATen/native/mps/operations/LossOps.mm
+++ b/aten/src/ATen/native/mps/operations/LossOps.mm
@@ -76,7 +76,7 @@ static Tensor& mse_loss_backward_out_impl(const Ten... | 2.41.0 |
15a8f6398818854b782221737e288d50f4903d9 | Thu, 18 Apr 2024 16:30:05 +0000 | [PATCH 0328/1000] Fixed issue in affine_grid_backward when grad_grid is non-contiguous (#124370) | Description: - replaced .view with .reshape to fix the problem when grad_grid is channels last 2d/3d - added a consistency test Fixes #124154 Pull Request resolved: https://github.com/pytorch/pytorch/pull/124370 Approved by: https://github.com/lezcano | diff --git a/aten/src/ATen/native/AffineGridGenerator.cpp b/aten/src/ATen/native/AffineGridGenerator.cpp
index 17e45acb1b..315027d706 100644
--- a/aten/src/ATen/native/AffineGridGenerator.cpp
+++ b/aten/src/ATen/native/AffineGridGenerator.cpp
@@ -110,7 +110,7 @@ static Tensor affine_grid_generator_4D_backward(
AT_AS... | 2.41.0 |
1062f57382980cbfa123df76556159c21561244 | Thu, 18 Apr 2024 16:35:51 +0000 | [PATCH 0329/1000] [export] Add a printer to unflattened module. (#124315) | Summary: add a helper method to print graph in every level of unflattened module. Test Plan: {F1489609684} Differential Revision: D56263195 Pull Request resolved: https://github.com/pytorch/pytorch/pull/124315 Approved by: https://github.com/tugsbayasgalan | diff --git a/torch/export/unflatten.py b/torch/export/unflatten.py
index 0ec948e513..33fd10a4e9 100644
--- a/torch/export/unflatten.py
+++ b/torch/export/unflatten.py
@@ -237,6 +237,12 @@ class UnflattenedModule(torch.nn.Module):
fqn_order.keys()
)
+ def _print_graph(self):
+ for fqn, ... | 2.41.0 |
8cf91c39533bb5223b983e309188d346aaa17c2 | Thu, 18 Apr 2024 17:02:38 +0000 | [PATCH 0330/1000] Fix predispatch tracing for aten::lift_fresh_copy (#124198) | Differential Revision: D56200666 Previously, when we hit the Functionalize kernel for lift_fresh_copy, we directly dispatch self.clone() to proxy dispatch. As a result, we end up receiving a functional tensor at proxy dispatch. As a work around, I unwrap self manually. Not sure, why it works ok in aot-dispatch tho Pu... | diff --git a/aten/src/ATen/FunctionalizeFallbackKernel.cpp b/aten/src/ATen/FunctionalizeFallbackKernel.cpp
index 52de86935a..594f627e17 100644
--- a/aten/src/ATen/FunctionalizeFallbackKernel.cpp
+++ b/aten/src/ATen/FunctionalizeFallbackKernel.cpp
@@ -210,7 +210,13 @@ static at::Tensor lift_fresh_functionalize_copy(cons... | 2.41.0 |
a6edb0b6644eb2b28650ea3be1c806e4a57e351 | Mon, 15 Apr 2024 11:06:09 -0700 | [PATCH 0331/1000] Possible fix for einops warning (#124084) | See https://github.com/arogozhnikov/einops/issues/315 Pull Request resolved: https://github.com/pytorch/pytorch/pull/124084 Approved by: https://github.com/peterbell10 | diff --git a/torch/_dynamo/trace_rules.py b/torch/_dynamo/trace_rules.py
index 0698b5fd8f..daeb8626c1 100644
--- a/torch/_dynamo/trace_rules.py
+++ b/torch/_dynamo/trace_rules.py
@@ -3036,10 +3036,6 @@ def add_module_init_func(name: str, init_func: Callable[[], None]) -> None:
"""Register a module without eagerly ... | 2.41.0 |
b17721899d4d6a55d66d4f7188e36c20a078231 | Wed, 17 Apr 2024 12:05:21 -0700 | [PATCH 0332/1000] Build device generic torch.Stream and torch.Event based on c10::Stream/Event (#123611) | This diff intends to build device generic torch.Stream and torch.Event for newly added accelerators in PyTorch. ------------ **torch.Stream APIs** ``` # Defined in torch/csrc/Stream.cpp class Stream(_StreamBase): stream_id: _int # Stream id device_index: _int device_type: _int device: _device # The device of the str... | diff --git a/build_variables.bzl b/build_variables.bzl
index 571623076d..71b5c5eda8 100644
--- a/build_variables.bzl
+++ b/build_variables.bzl
@@ -795,6 +795,7 @@ libtorch_python_core_sources = [
"torch/csrc/StorageMethods.cpp",
"torch/csrc/StorageSharing.cpp",
"torch/csrc/Stream.cpp",
+ "torch/csrc/E... | 2.41.0 |
7215a4fa2399584163631faf8c6a1bbcbbfe0ae | Wed, 17 Apr 2024 17:48:19 -0700 | [PATCH 0334/1000] Fix memory leak in pattern_matcher (#124345) | #121313 changed precompiled patterns so they are more integrated with the pattern matching code. This resulted with a list of "known" patterns (with their example data) being stored globally. Unfortunately since small FakeTensors store a constant of the original tensor it meant that we leaked cuda tensors in the examp... | diff --git a/torch/_inductor/pattern_matcher.py b/torch/_inductor/pattern_matcher.py
index 177d7c9466..0f2f966185 100644
--- a/torch/_inductor/pattern_matcher.py
+++ b/torch/_inductor/pattern_matcher.py
@@ -1293,6 +1293,9 @@ def gen_register_replacement(
scalar_workaround=(),
exclusive_arg_names=(),
):
+ ... | 2.41.0 |
6f0159db08c1ad55fe57a5e92d8933e21ea543e | Wed, 17 Apr 2024 12:05:23 -0700 | [PATCH 0335/1000] Add test_cpp_extensions tests for stream_and_event and mita_backend (#123614) | Test the generic torch.Stream/Event with fake device gurad and hooks. @exported-using-ghexport Differential Revision: [D55902506](https://our.internmc.facebook.com/intern/diff/D55902506/) Pull Request resolved: https://github.com/pytorch/pytorch/pull/123614 Approved by: https://github.com/albanD ghstack dependencies: ... | diff --git a/test/cpp_extensions/mtia_extension.cpp b/test/cpp_extensions/mtia_extension.cpp
new file mode 100644
index 0000000000..3b02d3968e
--- /dev/null
+++ b/test/cpp_extensions/mtia_extension.cpp
@@ -0,0 +1,219 @@
+#include <ATen/detail/MTIAHooksInterface.h>
+#include <c10/core/Device.h>
+#include <c10/core/Strea... | 2.41.0 |
4bedbb9e10e59e7c3c971944001055fc3cbfdbb | Thu, 18 Apr 2024 18:20:11 +0000 | [PATCH 0336/1000] [export] Serialize rational symint ranges (#123884) | Some symints result in rational ranges like 10/3 which runs into an error ([example](https://www.internalfb.com/intern/everpaste/?handle=GMG2AxkeoFUrh-UDAFcE8pKPgjoUbsIXAAAB)). Ed will eventually get rid(?) of these rational ranges but as a workaround export can just clamp the results during serialization time Pull Re... | diff --git a/test/export/test_serialize.py b/test/export/test_serialize.py
index 8645eff8f6..60c148defb 100644
--- a/test/export/test_serialize.py
+++ b/test/export/test_serialize.py
@@ -28,7 +28,7 @@ from torch._export.serde.serialize import (
from torch._higher_order_ops.torchbind import enable_torchbind_tracing
fr... | 2.41.0 |
9407eca3b0be3c0272b5c583f8e77b9108a71f8 | Thu, 18 Apr 2024 18:38:26 +0000 | [PATCH 0337/1000] Capture triton kernel in execution trace (#124140) | Summary: This DIFF is to capture triton kernels in execution trace. Test Plan: buck test mode/dev-nosan caffe2/test:profiler -- test_execution_trace_with_pt2 Differential Revision: D56162599 Pull Request resolved: https://github.com/pytorch/pytorch/pull/124140 Approved by: https://github.com/briancoutinho | diff --git a/test/profiler/test_profiler.py b/test/profiler/test_profiler.py
index 8e4e31718d..d9012d0e89 100644
--- a/test/profiler/test_profiler.py
+++ b/test/profiler/test_profiler.py
@@ -21,6 +21,7 @@ import torch.nn as nn
import torch.optim
import torch.utils.data
import torch.utils.data.datapipes as dp
+from t... | 2.41.0 |
1bc188f42458786bb7e24a5bb9f5b6ddf05adb8 | Thu, 18 Apr 2024 18:53:59 +0000 | [PATCH 0338/1000] Revert "[Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449)" | This reverts commit b51f66c1950a582dd18d1b2ee67df840a8c4dbbe. Reverted https://github.com/pytorch/pytorch/pull/119449 on behalf of https://github.com/malfet due to Broke gcc9 builds ([comment](https://github.com/pytorch/pytorch/pull/119449#issuecomment-2064936414)) | diff --git a/c10/core/impl/alloc_cpu.cpp b/c10/core/impl/alloc_cpu.cpp
index def4c3a3a9..9b7ae22f9f 100644
--- a/c10/core/impl/alloc_cpu.cpp
+++ b/c10/core/impl/alloc_cpu.cpp
@@ -3,7 +3,6 @@
#include <c10/core/alignment.h>
#include <c10/util/Flags.h>
#include <c10/util/Logging.h>
-#include <c10/util/env.h>
#include... | 2.41.0 |
a0900d04b94831d0bd2b8c0a3051f0be1f5f202 | Thu, 18 Apr 2024 18:55:46 +0000 | [PATCH 0339/1000] Revert "[NJT] Inline through torch.nested.nested_tensor_from_jagged instead of graph break (#124343)" | This reverts commit ef93402f619f58d651845981ccd1eba1d68da077. Reverted https://github.com/pytorch/pytorch/pull/124343 on behalf of https://github.com/DanilBaibak due to Broken trunk ([comment](https://github.com/pytorch/pytorch/pull/124343#issuecomment-2064937192)) | diff --git a/test/dynamo/test_subclasses.py b/test/dynamo/test_subclasses.py
index 8005d6e3a2..387b6bf59b 100644
--- a/test/dynamo/test_subclasses.py
+++ b/test/dynamo/test_subclasses.py
@@ -1361,14 +1361,6 @@ class TestNestedTensor(torch._dynamo.test_case.TestCase):
self._check_recompiles(fn, (nt,), (nt2,), F... | 2.41.0 |
e48b39603411a41c5025efbe52f89560b827825 | Wed, 17 Apr 2024 22:00:41 -0700 | [PATCH 0340/1000] Fix example_value of map (#124203) | Previously, we didn't expand the shape of example_value of map to the same as inputs (edit: the first mapped dimension). This pr fixes this bug. To make this easier, we change _call_function_and_unflatten_output to accept example_values directly instead of retrieving them from the variable trackers. Also remove a redu... | diff --git a/test/dynamo/test_higher_order_ops.py b/test/dynamo/test_higher_order_ops.py
index 3bbfa4d7a3..4b46c568af 100644
--- a/test/dynamo/test_higher_order_ops.py
+++ b/test/dynamo/test_higher_order_ops.py
@@ -1292,6 +1292,59 @@ def forward(self, getitem, const):
return (sin,)""",
)
+ def te... | 2.41.0 |
4f6340f21c173da96a9edc6dfaa1b7ecbdcc568 | Mon, 15 Apr 2024 21:31:51 +0000 | [PATCH 0343/1000] realize inputs to mem bound mm decomposition (#123165) | Differential Revision: [D55639709](https://our.internmc.facebook.com/intern/diff/D55639709) Pull Request resolved: https://github.com/pytorch/pytorch/pull/123165 Approved by: https://github.com/jackiexu1992 | diff --git a/test/inductor/test_decompose_mem_bound_mm.py b/test/inductor/test_decompose_mem_bound_mm.py
index 4ae770bf22..c26233785e 100644
--- a/test/inductor/test_decompose_mem_bound_mm.py
+++ b/test/inductor/test_decompose_mem_bound_mm.py
@@ -7,6 +7,8 @@ import torch
import torch._inductor
from torch._dynamo.util... | 2.41.0 |
0792cf3d6e0542354a87f7548cc3c7890c4defd | Thu, 18 Apr 2024 23:14:55 +0000 | [PATCH 0344/1000] Make copy_cast, softmax and cat_out unranked (#123191) | Fixes #ISSUE_NUMBER This helps performance, as it removes multiple copies of the graphs saved due to their shapes. Pull Request resolved: https://github.com/pytorch/pytorch/pull/123191 Approved by: https://github.com/DenisVieriu97 | diff --git a/aten/src/ATen/native/mps/operations/Copy.mm b/aten/src/ATen/native/mps/operations/Copy.mm

index da1fe731ef..572582f5cb 100644
--- a/aten/src/ATen/native/mps/operations/Copy.mm
+++ b/aten/src/ATen/native/mps/operations/Copy.mm
@@ -48,9 +48,10 @@ static void copy_cast_mps(at::Tensor& dst,
@autoreleasepo... | 2.41.0 |
a6a0e1348ba7dcade1833d983b1b4ca12a5c1e1 | Thu, 18 Apr 2024 10:26:43 -0700 | [PATCH 0346/1000] [c10d] remove the env of TORCH_NCCL_ABORT_IN_DESTROY_PG (#124334) | Summary: This ENV was introduced to safely roll out the behavior change in destroy process group (e.g., call ncclCommsAbort). Now that this behavior change has already been rolled out, we no longer need this env and we should clean it up to keep our code cleaner Test Plan: Modified/existing ut pass Tags: Pull Request reso... | diff --git a/test/distributed/test_c10d_nccl.py b/test/distributed/test_c10d_nccl.py
index bbe4461e0c..8743b37157 100644
--- a/test/distributed/test_c10d_nccl.py
+++ b/test/distributed/test_c10d_nccl.py
@@ -1221,39 +1221,11 @@ class ProcessGroupNCCLTest(MultiProcessTestCase):
# First allreduce to initialize st... | 2.41.0 |
ed9b22ec01d6b56046f483c36a2f13d2d6b4f67 | Thu, 18 Apr 2024 03:06:42 +0000 | [PATCH 0347/1000] Implement efficient_conv_bn_eval_decomp_graph_transform to handle conv and bn fusion after decomp (#123680) | Pull Request resolved: https://github.com/pytorch/pytorch/pull/123680 Approved by: https://github.com/ezyang, https://github.com/youkaichao | diff --git a/test/inductor/test_efficient_conv_bn_eval.py b/test/inductor/test_efficient_conv_bn_eval.py
index ec83bf3a4b..c65b7585f9 100644
--- a/test/inductor/test_efficient_conv_bn_eval.py
+++ b/test/inductor/test_efficient_conv_bn_eval.py
@@ -103,7 +103,12 @@ class MultiUserConvOp(nn.Module):
class EfficientConvBN... | 2.41.0 |
befaf2a37b5f0659395fea3e1301a4e36a40013 | Thu, 11 Apr 2024 08:18:42 -0700 | [PATCH 0348/1000] [AOTI] Move c10/util ostream function implementations to their headers (#123847) | Summary: AOTInductor generated code for CPU models may have direct reference to these c10-implemented data types, see _inductor/codegen/cpp_prefix.h. To make sure the AOTI generated code is ABI backward compatible, we need to change those headers to a header-only implementation. The next PR in this stack will add tests... | diff --git a/c10/util/BFloat16.h b/c10/util/BFloat16.h
index ab65eebedb..95bc5f9183 100644
--- a/c10/util/BFloat16.h
+++ b/c10/util/BFloat16.h
@@ -7,8 +7,8 @@
#include <cmath>
#include <cstdint>
#include <cstring>
-
#include <iosfwd>
+#include <ostream>
#if defined(__CUDACC__) && !defined(USE_ROCM)
#include <cu... | 2.41.0 |
946638f06e5916ea9bd0f790ff620bdb78a92a3 | Thu, 11 Apr 2024 19:55:18 -0700 | [PATCH 0349/1000] [AOTI] Add ABI-compatiblity tests (#123848) | Summary: In AOTInductor generated CPU model code, there can be direct references to some aten/c10 utility functions and data structures, e.g. at::vec and c10::Half. These are performance critical and thus it doesn't make sense to create C shim for them. Instead, we make sure they are implemented in a header-only way, a... | diff --git a/.ci/pytorch/test.sh b/.ci/pytorch/test.sh
index 5408e0f596..23eaf8a2dd 100755
--- a/.ci/pytorch/test.sh
+++ b/.ci/pytorch/test.sh
@@ -334,7 +334,7 @@ test_inductor() {
# TODO: need a faster way to build
if [[ "$BUILD_ENVIRONMENT" != *rocm* ]]; then
BUILD_AOT_INDUCTOR_TEST=1 python setup.py dev... | +- test/cpp/aoti_abi_check/** |
dd0ed1b430f3241490d9bc071036188466b0f22 | Fri, 19 Apr 2024 00:57:05 +0000 | [PATCH 0350/1000] distributed: templated ring attention (#124215) | This adds a templated version of the ring attention forwards function as well as tests it with memory efficient attention. This doesn't add support for memory efficient attention in DTensor. That will be added in a follow up PR. This templating is also a POC of how to support other attention ops such as Jagged/nested ... | diff --git a/test/distributed/_tensor/test_attention.py b/test/distributed/_tensor/test_attention.py
index 3a34af11d9..db5a26d438 100644
--- a/test/distributed/_tensor/test_attention.py
+++ b/test/distributed/_tensor/test_attention.py
@@ -10,6 +10,8 @@ from torch.distributed._tensor.experimental.attention import (
... | 2.41.0 |
affd230140e1cdb5b96e9a4a83c8519c08251fe | Fri, 19 Apr 2024 00:57:16 +0000 | [PATCH 0351/1000] Enable UFMT on test/test_python_dispatch.py (#124373) | Part of https://github.com/pytorch/pytorch/issues/123062 Pull Request resolved: https://github.com/pytorch/pytorch/pull/124373 Approved by: https://github.com/ezyang | diff --git a/.lintrunner.toml b/.lintrunner.toml
index e914794148..6bd56ba7ac 100644
--- a/.lintrunner.toml
+++ b/.lintrunner.toml
@@ -1152,7 +1152,6 @@ exclude_patterns = [
'test/test_proxy_tensor.py',
'test/test_pruning_op.py',
'test/test_public_bindings.py',
- 'test/test_python_dispatch.py',
'... | 2.41.0 |
bde4efa842957ef304aac44e72c6782213a1fea | Fri, 19 Apr 2024 01:54:26 +0000 | [PATCH 0352/1000] Fix broken test in `test_aot_inductor.py` (#124329) | Doesn't seem to run in upstream CI due to sm90 requirement but it is failing on our end due to the extra positional argument Pull Request resolved: https://github.com/pytorch/pytorch/pull/124329 Approved by: https://github.com/chenyang78 | diff --git a/test/inductor/test_aot_inductor.py b/test/inductor/test_aot_inductor.py
index f5e38e48de..4c89d978fd 100644
--- a/test/inductor/test_aot_inductor.py
+++ b/test/inductor/test_aot_inductor.py
@@ -2375,7 +2375,7 @@ class AOTInductorTestsTemplate:
def __init__(self):
super().__ini... | 2.41.0 |
620c3e814ee85f252d62b2ca07090a2b7253131 | Thu, 18 Apr 2024 11:40:41 -0700 | [PATCH 0353/1000] Optimized templated attention to use exp2 (#124356) | 0.705 (vs. FA2) to 0.860 after this change. <img width="1270" alt="image" src="https://github.com/pytorch/pytorch/assets/6355099/d58f57ba-e50e-44ea-8a8a-4f13b8650adf"> to <img width="1277" alt="image" src="https://github.com/pytorch/pytorch/assets/6355099/f1945b67-0cfc-463c-a2f6-5812b90677fe"> Pull Request resolved... | diff --git a/benchmarks/transformer/score_mod.py b/benchmarks/transformer/score_mod.py
index c2fe082f63..067f18f8e3 100644
--- a/benchmarks/transformer/score_mod.py
+++ b/benchmarks/transformer/score_mod.py
@@ -6,9 +6,9 @@ from typing import Callable, List
import numpy as np
import torch
-import torch.utils.benchma... | 2.41.0 |
89e3eeed319ecbceded1cb9b007d2670d80a4ad | Thu, 18 Apr 2024 10:45:17 -0700 | [PATCH 0354/1000] Avoid cuda init to FakeTensorMode (#124413) | Also partially fixes #122109 This PR: - We add a C++ flag (only_lift_cpu_tensors) to toggle the torch.tensor(1, device='cuda') ctor strategy. When false (default), it does the current PyTorch behavior of unconditionally constructing a concrete CUDA tensor then calling lift_fresh on it. When true, we instead construct ... | diff --git a/test/test_fake_tensor.py b/test/test_fake_tensor.py
index bc820153d8..1ffb0a6cb3 100644
--- a/test/test_fake_tensor.py
+++ b/test/test_fake_tensor.py
@@ -1167,6 +1167,8 @@ class FakeTensorOperatorInvariants(TestCase):
torch.ones(10, device='cuda')
torch.zeros(10, device='cuda')
... | 2.41.0 |
90e3e7abb8ad9e1d716b2218147d4c5dfe2049b | Thu, 18 Apr 2024 09:36:51 -0700 | [PATCH 0355/1000] Add ability to save TORCH_COMPILE_DEBUG logs for CI failures (#124408) | Summary: The intent is that we can whitelist certain benchmarks to a) enable TORCH_COMPILE_DEBUG=1, and b) save the generated artifacts in test/debug in case of a failure. Via the rules in action.yml, we can then upload test/debug/ to S3 whenever it exists. I chose to introduce a new directory (test/debug/) rather than... | diff --git a/.github/actions/upload-test-artifacts/action.yml b/.github/actions/upload-test-artifacts/action.yml
index ae27729720..04cb43b20c 100644
--- a/.github/actions/upload-test-artifacts/action.yml
+++ b/.github/actions/upload-test-artifacts/action.yml
@@ -46,7 +46,7 @@ runs:
env:
FILE_SUFFIX: ${{... | 2.41.0 |
03a08f8ae4a60882bdcedd4862e8e6eef81bd47 | Fri, 19 Apr 2024 02:57:12 +0000 | [PATCH 0356/1000] [ROCm] Add cublasGemmAlgo_t -> hipblasGemmAlgo_t (#121030) | This PR is to add cublasGemmAlgo_t -> hipblasGemmAlgo_t to cuda_to_hip_mappings.py. It is required for DeepSpeed transformer extension build on ROCm. Pull Request resolved: https://github.com/pytorch/pytorch/pull/121030 Approved by: https://github.com/jeffdaily, https://github.com/ezyang | diff --git a/torch/utils/hipify/cuda_to_hip_mappings.py b/torch/utils/hipify/cuda_to_hip_mappings.py
index 86ff5daf3e..6652240938 100644
--- a/torch/utils/hipify/cuda_to_hip_mappings.py
+++ b/torch/utils/hipify/cuda_to_hip_mappings.py
@@ -454,6 +454,7 @@ CUDA_TYPE_NAME_MAP = collections.OrderedDict(
("cublasDi... | 2.41.0 |
68ce2cddad2057349d1194274a5f93c47c5ac88 | Fri, 19 Apr 2024 03:31:13 +0000 | [PATCH 0357/1000] [Profiler] Unify the device(CUDA, XPU, PrivateUse1) in torch profiler post processing (#123247) | This PR unifies the CUDA, XPU and PrivateUse1 in the torch profiler. Now CUDA, XPU and PrivateUse1 can together use string object `use_device` to distinguish each other and share one device path for calculating kineto time durations and memory statistics for post processing. #suppress-api-compatibility-check Co-autho... | diff --git a/test/profiler/test_profiler.py b/test/profiler/test_profiler.py
index d9012d0e89..56771eb188 100644
--- a/test/profiler/test_profiler.py
+++ b/test/profiler/test_profiler.py
@@ -1095,7 +1095,7 @@ class TestProfiler(TestCase):
stats = run_profiler(create_cuda_tensor)
check_metrics(... | 2.41.0 |
7f44d70b195d9f4bdfa3f564b47e015d8e2ec8f | Fri, 19 Apr 2024 04:07:00 +0000 | [PATCH 0358/1000] [torch/distributed] Check gloo availability when doing isinstance(pg,… (#124233) | Fixes a bug where a reference to `_ProcessGroupWrapper` is used without first checking whether gloo is available. This fails on pytorch builds that do not include gloo because `_ProcessGroupWrapper` is only pybinded when building with `USE_GLOO=1`. Therefore, creation of a new process group fails with a `NameError` whe... | diff --git a/test/distributed/test_pg_wrapper.py b/test/distributed/test_pg_wrapper.py
index 1305ddd042..d7e59f1c90 100644
--- a/test/distributed/test_pg_wrapper.py
+++ b/test/distributed/test_pg_wrapper.py
@@ -3,15 +3,19 @@
import os
import sys
from datetime import timedelta
+from unittest.mock import patch
impo... | 2.41.0 |
ba85b34dd5b1eafa71c158ef2aec433ebf86e8f | Fri, 19 Apr 2024 04:47:27 +0000 | [PATCH 0359/1000] [AOTI] Enbale mmaped weights when CUDA is used (#124346) | By refactoring the logic that returns the start to constant pointer into `_get_constants_start()` method and call it from both CUDA and CPU readers It has no runtime impact, but export time is down from 10m to 3m if mmaped weights are used on AWS p4d.24xlarge Pull Request resolved: https://github.com/pytorch/pytorch/... | diff --git a/torch/_inductor/codecache.py b/torch/_inductor/codecache.py
index 7c0ee56e19..e120e6446a 100644
--- a/torch/_inductor/codecache.py
+++ b/torch/_inductor/codecache.py
@@ -1827,10 +1827,8 @@ class AotCodeCompiler:
if name not in graph.folded_constants
)
# TODO: Fix ... | 2.41.0 |
20bc1080e3cdf432257178756553f5f34064771 | Fri, 19 Apr 2024 09:09:03 +0000 | [PATCH 0364/1000] Revert "[Profiler] Unify the device(CUDA, XPU, PrivateUse1) in torch profiler post processing (#123247)" | This reverts commit 768ce2cddad2057349d1194274a5f93c47c5ac88. Reverted https://github.com/pytorch/pytorch/pull/123247 on behalf of https://github.com/DanilBaibak due to Broken trunk ([comment](https://github.com/pytorch/pytorch/pull/123247#issuecomment-2066152611)) | diff --git a/test/profiler/test_profiler.py b/test/profiler/test_profiler.py
index 56771eb188..d9012d0e89 100644
--- a/test/profiler/test_profiler.py
+++ b/test/profiler/test_profiler.py
@@ -1095,7 +1095,7 @@ class TestProfiler(TestCase):
stats = run_profiler(create_cuda_tensor)
check_metrics(... | 2.41.0 |
8e403c739e67e3377d6dcb391f4cad86625a196 | Fri, 19 Apr 2024 09:22:58 +0000 | [PATCH 0365/1000] Added a docstring for torch.Size.numel. (#124186) | Fixes #61231. Fixes #124167. This PR documents a rather long-standing issue w.r.t. unexpected behavior of `torch.Size.numel`, first reported almost 5 years ago. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124186 Approved by: https://github.com/janeyx99 | diff --git a/docs/source/index.rst b/docs/source/index.rst
index 2d6d3eea13..a7afe60bc2 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -108,6 +108,7 @@ Features described in this documentation are classified by release status:
torch.random <random>
masked
torch.nested <nested>
+ size
... | 2.41.0 |
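The behavior the new docstring pins down is that `torch.Size.numel` returns the product of the dimensions (the element count a tensor of that shape would hold), not the number of dimensions. A torch-free sketch of the semantics (`torch.Size` itself is a tuple subclass):

```python
import math

# torch.Size behaves like a tuple of ints. Its numel() returns the number
# of elements a tensor of that shape would hold (the product of all
# dimensions), not the number of dimensions -- len() gives that instead.
def numel(size):
    return math.prod(size)

size = (2, 3, 4)
print(numel(size))  # -> 24, while len(size) is 3
```

Note the empty-shape corner case: a 0-dimensional size has `numel() == 1`, matching the one element of a scalar tensor.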
a71d12d920f3cafd552623e82e251d143b0c614 | Fri, 19 Apr 2024 10:32:08 +0000 | [PATCH 0366/1000] [CUDAGraphTree] Support mutated inputs from prior cudagraph pool (#123231) | # PR This PR supports mutating inputs in cudagraph trees when those inputs are outputs from a previous cudagraph. Please check #121861 for more details. # Note on Optimistic Mutation Check To determine whether to apply cudagraphs, we need to check input mutations, which fall into four categories: a) no mutation, b) mutation ... | diff --git a/test/inductor/test_cudagraph_trees.py b/test/inductor/test_cudagraph_trees.py
index 88881d542f..cc701337c7 100644
--- a/test/inductor/test_cudagraph_trees.py
+++ b/test/inductor/test_cudagraph_trees.py
@@ -312,8 +312,9 @@ if HAS_CUDA and not TEST_WITH_ASAN:
).run(captured_output[0])
... | 2.41.0 |
412b75b42b27918c7cccd7a4f1121c8da14cb71 | Fri, 19 Apr 2024 09:54:05 +0000 | [PATCH 0367/1000] [optim] add fused_adam/adamw_kernel support for CPU device (#123074) | On par with the `CUDA` implementation. The `autocast` logic is the same as for `CUDA` + `Fused Adam`: - check for `inf` in `gradscaler.step` - in the fused kernel, if there is an `inf`, do nothing; if not, unscale the grad (also writing it back) and update the param. **TestPlan**: ``` # extend CUDA-only test for CPU fused adagrad python test_opt... | diff --git a/aten/src/ATen/native/FusedAdam.cpp b/aten/src/ATen/native/FusedAdam.cpp
new file mode 100644
index 0000000000..b3be769b24
--- /dev/null
+++ b/aten/src/ATen/native/FusedAdam.cpp
@@ -0,0 +1,175 @@
+#define TORCH_ASSERT_ONLY_METHOD_OPERATORS
+#include <ATen/core/Tensor.h>
+#include <ATen/native/DispatchStub.h... | 2.41.0 |
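For reference, a scalar, torch-free sketch of the standard Adam update that a fused kernel like this computes per parameter element (illustrative only — names and signature here are not the kernel's actual API, and the real kernel fuses these steps over whole tensors):

```python
import math

# One Adam step for a single scalar parameter. The fused kernel performs the
# same math, but vectorized and fused across all elements of each tensor.
def adam_step(param, grad, exp_avg, exp_avg_sq, step, lr=1e-3,
              beta1=0.9, beta2=0.999, eps=1e-8):
    exp_avg = beta1 * exp_avg + (1 - beta1) * grad          # first moment
    exp_avg_sq = beta2 * exp_avg_sq + (1 - beta2) * grad * grad  # second moment
    bias_c1 = 1 - beta1 ** step                              # bias corrections
    bias_c2 = 1 - beta2 ** step
    denom = math.sqrt(exp_avg_sq / bias_c2) + eps
    param = param - lr * (exp_avg / bias_c1) / denom
    return param, exp_avg, exp_avg_sq
```

A positive gradient moves the parameter down by roughly `lr` on the first step, since both bias-corrected moments are built from the same gradient.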
e280862ffa0ff34e73f8763f84e4b77925a3570 | Thu, 18 Apr 2024 22:45:31 -0700 | [PATCH 0368/1000] Add custom joint graph passes (#124443) | Pull Request resolved: https://github.com/pytorch/pytorch/pull/124443 Approved by: https://github.com/aorenste, https://github.com/malfet | diff --git a/test/inductor/test_custom_post_grad_passes.py b/test/inductor/test_custom_post_grad_passes.py
index acb11d4d93..5b1d3031c9 100644
--- a/test/inductor/test_custom_post_grad_passes.py
+++ b/test/inductor/test_custom_post_grad_passes.py
@@ -57,6 +57,12 @@ aten = torch.ops.aten
mkldnn = torch.ops.mkldnn
+... | 2.41.0 |
8fa843e58712c60e5d95fc638f45ce8f1033e23 | Fri, 19 Apr 2024 12:34:00 +0000 | [PATCH 0369/1000] Add vectorized norm fill for ppc64le (#113351) | This patch adds the vectorized norm fill for ppc64le. Pull Request resolved: https://github.com/pytorch/pytorch/pull/113351 Approved by: https://github.com/jgong5 | diff --git a/aten/src/ATen/native/cpu/DistributionTemplates.h b/aten/src/ATen/native/cpu/DistributionTemplates.h
index f4890c2f7e..93a9b33b29 100644
--- a/aten/src/ATen/native/cpu/DistributionTemplates.h
+++ b/aten/src/ATen/native/cpu/DistributionTemplates.h
@@ -15,7 +15,6 @@
#include <c10/util/irange.h>
#endif
-
... | 2.41.0 |
6724a769b88fdc0714a9ae94f0f1ff076590025 | Fri, 19 Apr 2024 13:12:42 +0000 | [PATCH 0370/1000] [ptd] drop ncclGroupStart/end for ncclCommInit (#124363) (#124416) | Summary: ``` ncclGroupStart() ncclCommInit(..) ncclGroupEnd() ``` The pattern above is only needed when a *single thread* manages multiple GPUs. In our case, we always have one process managing one GPU, so we don't need the group operations. Test Plan: CI Differential Revision: D56274975 Co-authored-by: Cen Zhao <cenzhao@... | diff --git a/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp b/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp
index a84647cfc6..bf21fc0dc6 100644
--- a/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp
+++ b/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp
@@ -1953,10 +1953,8 @@ std::shared_ptr<NCCLComm> ProcessGroup... | 2.41.0 |
9db59e9e4d425f9d4e9f55247888c24b0d638e8 | Thu, 18 Apr 2024 11:48:41 -0700 | [PATCH 0371/1000] [sparse] Add fast semi-structured sparsification kernels (#122350) | This PR adds fast semi-structured sparsification kernels to PyTorch, enabling accelerated semi-structured sparsification. The kernels have been added as aten native functions. In particular, three new functions have been added: * `torch._sparse_semi_structured_tile` This functio... | diff --git a/aten/src/ATen/native/native_functions.yaml b/aten/src/ATen/native/native_functions.yaml
index f3f5683350..cbdb998c81 100644
--- a/aten/src/ATen/native/native_functions.yaml
+++ b/aten/src/ATen/native/native_functions.yaml
@@ -3342,6 +3342,18 @@
dispatch:
CUDA: _cslt_sparse_mm_search
+- func: _spa... | 2.41.0 |
56e057814565b2ae33b2106b4d0136179aa18f8 | Fri, 19 Apr 2024 13:39:38 +0000 | [PATCH 0373/1000] [Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449) | This PR is the beginning of attempts to wrap thread-unsafe getenv and set_env functions inside a RW mutex. Pull Request resolved: https://github.com/pytorch/pytorch/pull/119449 Approved by: https://github.com/malfet, https://github.com/albanD | diff --git a/c10/core/impl/alloc_cpu.cpp b/c10/core/impl/alloc_cpu.cpp
index 9b7ae22f9f..def4c3a3a9 100644
--- a/c10/core/impl/alloc_cpu.cpp
+++ b/c10/core/impl/alloc_cpu.cpp
@@ -3,6 +3,7 @@
#include <c10/core/alignment.h>
#include <c10/util/Flags.h>
#include <c10/util/Logging.h>
+#include <c10/util/env.h>
#include... | 2.41.0 |
2a2f676c38367b12ddb94b019921acacece5bba | Thu, 18 Apr 2024 19:44:44 -0700 | [PATCH 0374/1000] [custom_op] add ability to provide manual schema (#124180) | Test Plan: - new tests Pull Request resolved: https://github.com/pytorch/pytorch/pull/124180 Approved by: https://github.com/albanD | diff --git a/test/test_custom_ops.py b/test/test_custom_ops.py
index f06fea8b0a..4fc708ad33 100644
--- a/test/test_custom_ops.py
+++ b/test/test_custom_ops.py
@@ -2135,6 +2135,50 @@ class TestCustomOpAPI(TestCase):
self.assertEqual(z, x + y)
self.assertTrue(cpu_called)
+ @skipIfTorchDynamo("Expec... | 2.41.0 |
918dfedc5b1f33ea8951fe2aaa22d13a05b3704 | Thu, 18 Apr 2024 19:44:44 -0700 | [PATCH 0375/1000] [custom_op] Rename register_impl to register_kernel (#124200) | Motivation: - The API is used for registering an implementation for a specific device type. - "impl" is ambiguous and can be confused with Library.impl. Test Plan: - existing tests Pull Request resolved: https://github.com/pytorch/pytorch/pull/124200 Approved by: https://github.com/albanD ghstack dependencies: #124180... | diff --git a/test/test_custom_ops.py b/test/test_custom_ops.py
index 4fc708ad33..2fd7ecf99d 100644
--- a/test/test_custom_ops.py
+++ b/test/test_custom_ops.py
@@ -2123,7 +2123,7 @@ class TestCustomOpAPI(TestCase):
cpu_called = False
- @add.register_impl("cpu")
+ @add.register_kernel("cpu")
... | 2.41.0 |
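The renamed decorator registers one kernel per device type, as the diff's `@add.register_kernel("cpu")` shows. A torch-free toy sketch of that registration shape (class and method bodies here are illustrative, not `torch.library`'s actual implementation):

```python
# Minimal per-device kernel registry mimicking the decorator's shape:
# op.register_kernel(device) returns a decorator that stores the function
# as the kernel for that device type.
class CustomOp:
    def __init__(self, name):
        self.name = name
        self._kernels = {}

    def register_kernel(self, device):
        def decorator(fn):
            self._kernels[device] = fn
            return fn
        return decorator

    def __call__(self, device, *args):
        # Dispatch to whichever kernel was registered for this device type.
        return self._kernels[device](*args)

add = CustomOp("mylib::add")

@add.register_kernel("cpu")
def _(x, y):
    return x + y
```

In real PyTorch the device is inferred from the input tensors rather than passed explicitly; the explicit argument here just keeps the sketch self-contained.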
ad8d25881d850eaf0b326f6ce5c78305e38c001 | Thu, 18 Apr 2024 19:44:45 -0700 | [PATCH 0376/1000] Add torch.library.register_kernel (#124299) | This mirrors the .register_kernel method on the object produced by the custom_op decorator. Test Plan: - new tests Pull Request resolved: https://github.com/pytorch/pytorch/pull/124299 Approved by: https://github.com/albanD ghstack dependencies: #124180, #124200 | diff --git a/docs/source/library.rst b/docs/source/library.rst
index 2b09e5c9ea..b2ae235414 100644
--- a/docs/source/library.rst
+++ b/docs/source/library.rst
@@ -21,12 +21,12 @@ Use :func:`torch.library.custom_op` to create new custom ops.
Extending custom ops (created from Python or C++)
---------------------------... | 2.41.0 |
36f8378e1b5a8cb7127977b8d068fbf9c3e1247 | Thu, 18 Apr 2024 21:38:18 -0700 | [PATCH 0377/1000] Re-land precompile triton templates (#124030) | Re-land precompile triton templates. This got reverted because we were precompiling templates without checking the cache. I have since added logic and a test to ensure we do not precompile if there is a cache hit. Pull Request resolved: https://github.com/pytorch/pytorch/pull/124030 Approved by: https://github.com/shu... | diff --git a/test/inductor/test_max_autotune.py b/test/inductor/test_max_autotune.py
index d1f074de51..beb1b22df8 100644
--- a/test/inductor/test_max_autotune.py
+++ b/test/inductor/test_max_autotune.py
@@ -294,6 +294,7 @@ class TestMaxAutotune(TestCase):
self.assertEqual(num_get, 3)
self.asse... | 2.41.0 |
62169a8fa84246babe98410bb5600693db62a14 | Thu, 18 Apr 2024 13:15:03 -0700 | [PATCH 0378/1000] Support torchbind op dispatch in python (#123367) | We override the `__call__` method and register fake, functional, and proxy default dispatch mode implementations in its python_key_mode_table. The idea is: 1. when the inputs contain a FakeScriptObject, we dispatch through the _get_dispatch mechanism. We implement the dispatch mode keys automatically in the operator's constructor. ... | diff --git a/aten/src/ATen/core/dispatch/Dispatcher.h b/aten/src/ATen/core/dispatch/Dispatcher.h
index c6d336510c..caf73d7ceb 100644
--- a/aten/src/ATen/core/dispatch/Dispatcher.h
+++ b/aten/src/ATen/core/dispatch/Dispatcher.h
@@ -403,6 +403,10 @@ public:
return operatorDef_->op.hasKernelForDispatchKey(k);
}
... | 2.41.0 |
93f756cdc1a86a65d9572d050ae62387a84dc60 | Thu, 18 Apr 2024 13:15:03 -0700 | [PATCH 0379/1000] Support aot_export torchbind op (#123370) | Pull Request resolved: https://github.com/pytorch/pytorch/pull/123370 Approved by: https://github.com/zou3519 ghstack dependencies: #123367 | diff --git a/test/export/test_torchbind.py b/test/export/test_torchbind.py
index b5312fec19..3ec3e7dfab 100644
--- a/test/export/test_torchbind.py
+++ b/test/export/test_torchbind.py
@@ -4,6 +4,7 @@ import unittest
import torch
import torch.utils._pytree as pytree
+from torch._functorch.aot_autograd import aot_expo... | 2.41.0 |
03d111c54cf7e3c9a5a288d11aa1641d7972a22 | Thu, 18 Apr 2024 22:32:03 -0700 | [PATCH 0380/1000] Enable dynamo test_forloop_goes_right_direction_multi_gpu (#123324) | Pull Request resolved: https://github.com/pytorch/pytorch/pull/123324 Approved by: https://github.com/janeyx99 | diff --git a/torch/testing/_internal/common_optimizers.py b/torch/testing/_internal/common_optimizers.py
index a1bea634ff..da6e9c5407 100644
--- a/torch/testing/_internal/common_optimizers.py
+++ b/torch/testing/_internal/common_optimizers.py
@@ -1127,13 +1127,6 @@ optim_db: List[OptimizerInfo] = [
"te... | 2.41.0 |