Columns: title (string, length 5 to 164), labels (list), bodyText (string, length 0 to 46.7k)
NCCL 2.7.8 error, ncclSystemError: System call (socket, malloc, munmap, etc) failed
[ "bug", "help wanted" ]
πŸ› Bug I use pytorch official image pytorch/pytorch:1.8.0-cuda11.1-cudnn8-runtime, and based that installed pytorch-lightning to use multi-GPU -- 4 3090, and ran into this proble. it seems a pytorch problem, how can I tackle this? Full stack: `/opt/conda/lib/python3.8/site-packages/pytorch_lightning/utilities/distribut...
Inconsistent accuracy with pl.metrics.Accuracy() across PL 1.1.8 and PL 1.2.x
[ "bug", "help wanted" ]
πŸ› Bug I have a simple binary segmentation model and train it to segment objects in an image. I measure the accuracy with pl.metrics.Accuracy(). After I switched from PL 1.1.8 to PL 1.2.x without any code-changes the accuracy-values where different (see also my discussion-topic). I tried to reproduce the problem and ev...
Different training loss behavior between pytorch-lightning and pure pytorch
[ "bug", "help wanted" ]
πŸ› Bug You can see three train loss curves here, actually there are four curves (in the legend below): orange: pl on k80+cu110 red: pure pytorch on k80+cu110,which is fully overlaped under the orange one, so you can't see it. grey: pl on p100+cu101 blue: pure pytorch on p100+cu101 I seed everything in these two mod...
With multiple GPUs, trainer.test errors when ModelCheckpoint specifies the filename.
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug In ddp mode (trainer(accelerator='ddp') with multi-gpus, the trainer could not find the saved custom-named checkpoints when .test(), which would lead to the following error: In addition, the program could hang, not be closed (even with CTRL-C) and the main GPU's Volatile is always 100%: The following modelc...
Standardize API for logging images, histograms, etc.
[ "feature", "help wanted", "refactor" ]
πŸš€ Feature Standardized way to log non-scalar data Motivation Different logging backends supported by PL use different ways to log such data, so it would be very convenient to have a standardized way to do it. I mentioned it in Slack and got approval to make a PR, so I want to discuss details here. Pitch Add methods that...
Support Spawn for DeepSpeed
[ "feature", "help wanted", "won't fix", "distributed", "3rd party" ]
πŸš€ Feature Create a deepspeed_spawn for Notebooks. This will allow users to train within notebooks using DeepSpeed! There will probably be quite a lot of duplication if we go the current route with no mixins, so it's something to consider. Motivation User would like to use DeepSpeed in a notebook minimaxir/aitextgen#10...
Support DDP communication hook for speeding up training
[ "feature", "help wanted", "distributed" ]
πŸš€ Feature Motivation https://pytorch.org/docs/1.8.0/ddp_comm_hooks.html DDP communication hooks control how gradients are communicated across workers during all_reduce in DistributedDataParallel. For example, fp16_compress_hook converts gradients to fp16 before all_reduce, an effective way to improve training speed when using multiple nodes. Pitch In DDPPlug...
Number of steps per epoch doesn't match number of batches in train loader
[ "bug", "help wanted", "working as intended" ]
πŸ› Bug When using trainer, the number of steps don't match the number of batch in the train loader. For example, when using the Boring model, we have 313 batches in the train dataloader, however, when training we get 626 train steps according to the progress bar. Furthermore, when using the val_check_interval parameter...
Progress bar does not present training loss step when Ranger optimizer is used.
[ "bug", "help wanted", "won't fix", "waiting on author", "priority: 1" ]
πŸ› Bug AdamW: Ranger: Please reproduce using the BoringModel To Reproduce Use pytorch_ranger or Less Wright's package https://github.com/lessw2020/Ranger-Deep-Learning-Optimizer The code is in my repo: https://github.com/psandev/PyTorch_Lightning_YOLOv3 Expected behavior Environment Note: Bugs with code are solved...
NeptuneLogger ignores NEPTUNE_PROJECT environment variable
[ "bug", "help wanted", "won't fix", "3rd party" ]
πŸ› Bug If project_name is not given, NeptuneLogger does not fetch the project name from the NEPTUNE_PROJECT environment variable and instead raises a neptune.api_exceptions.ProjectNotFound with the message "Project None not found". To Reproduce https://colab.research.google.com/gist/cifkao/70eb23d9021d8470b3208c7eb3607...
`LightningModule.log(on_epoch, on_step)`: Hard to get same behavior for train and val?
[ "bug", "help wanted", "priority: 0", "logging" ]
πŸ› Bug I see a table of different default behaviors for log(), which leads me to believe that if I want train/eval to have same behavior (e.g. reduction, frequency, etc.), I could just set them. However, that doesn't seem to be the case. Was seeing noisy validation values logged, whereas training values were smoother. ...
[RFC] Introduce a dataloader factory class to better manage data modules
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Create a new class to manage dataloaders Proposal: https://docs.google.com/document/d/1c0dBmASUfQy0kIpliGD7sGmdzC0sgbuOQStkM02UySM/edit Motivation DataModules bundle together training, validation, and testing dataloaders. Oftentimes, we want to configure different dataloader settings for each of these phas...
Validation metrics assumed to be logged within the first training epoch
[ "bug", "help wanted", "won't fix", "priority: 1" ]
πŸ› Bug In TrainLoop.on_train_end a call to check_checkpoint_callback is made. Within that method a call to on_validation_end is performed. As per the docs (and the fact that the ModelCheckpoint fires on on_validation_end), the expectation is to monitor validation metrics. However, if in the Trainer we set num_sanity_va...
BoringModel uses deprecated code
[ "bug", "help wanted", "waiting on author", "priority: 2" ]
πŸ› Bug The BoringModel is out of date. This model is used to help generate bug reports, but it uses syntax that was deprecated as of 1.2.6. This should be a simple fix to update, but I also propose making legacy BoringModels for pre-1.2.6 versions, in case a user has an issue with old code.
LOCAL_RANK not being set in slurm
[ "bug", "help wanted", "priority: 0", "environment: slurm" ]
πŸ› Bug A lot of the PTL tooling around multiprocess depends on a specific environment variable: LOCAL_RANK being set correctly, it seems that when running in slurm this isnt set causing it to return the default of 0 for all processes which makes every process do things that should only be done on rank 0, like log stuff...
Training stalls with DDP and iterable training dataset at validation step for any val_check_interval>1
[ "feature", "help wanted", "waiting on author", "priority: 1" ]
πŸ› Bug Training stalls with DDP and iterable training dataset at validation step for any val_check_interval>1. It works fine for val_check_interval=1. To Reproduce Here is a toy example to reproduce: import numpy as np import pybmi.torch as pt import pytorch_lightning as pl import torch from torch import nn from pytorc...
Some properties of LightningModule were removed from the code, but left in the doc.
[ "docs" ]
Properties use_dp, use_ddp, use_ddp2, use_tpu were removed in #5300 but left in the documentation. (Now we need to set the use_ddp parameter in our modules manually if we want to check whether the trainer uses DDP, am I right?)
When I use manual optimization, Lightning still checks the optimizer_idx argument.
[ "bug", "help wanted" ]
πŸ› Bug I set self.automatic_optimization = False in __init__ like that in official docs, but still got an error: ValueError: Your LightningModule defines 2 optimizers but training_step is missing the "optimizer_idx" argument. This is confusion, since in example there are no optimizer_idx in training_step , also I think...
Handle cases where an IterableDataset doesn't produce a batch for an epoch.
[ "bug", "help wanted", "data handling", "priority: 2" ]
πŸ› Bug If the IterableDataset for training doesn't generate any batch for an epoch, the Trainer raises a rather cryptic exception: Traceback (most recent call last): File "/anaconda/envs/py37_pytorch/lib/python3.7/site-packages/pytorch_lightning/trainer/connectors/data_connector.py", line 47, in _with_is_last la...
Multi-GPU training fails when using GCP Deep Learning image
[ "bug", "help wanted" ]
πŸ› Bug Multi-GPU training fails when using GCP Deep Learning image. Occurs when using terminal. Occurs with dp and ddp_spawn accelerators; does not occur with a ddp accelerator. Does not occur when using the same system for single-GPU training. /opt/conda/lib/python3.7/site-packages/pytorch_lightning/utilities/distribu...
[trainer] Simplify Trainer dependencies by making TrainerTrainingTricksMixin a utils class
[ "feature", "help wanted" ]
πŸš€ Feature Training tricks don't need to be inherited from the core Trainer class. These methods could be utility functions that sit completely outside the Trainer class hierarchy. pytorch-lightning/pytorch_lightning/trainer/training_tricks.py Line 28 in bb9ace4 ...
Adagrad not working with GPU and DDP
[ "bug", "help wanted", "priority: 0", "distributed" ]
πŸ› Bug Adagrad doesn't work with GPUs and DDP as the optimizer is created before the model is moved to CUDA. I believe this issue has been addressed in an earlier version: #554 How to reproduce using the BoringModel https://colab.research.google.com/drive/1HfyL5htoOkPETggTLwYNfh94HrNc6TOS?usp=sharing The error emerged ...
`None` parameters not sanitized during pruning
[ "bug", "help wanted" ]
πŸ› Bug ModelPruning callback fails when a module parameter is None. This can happen, for instance, in a Linear() when bias=False. Please reproduce using the BoringModel https://colab.research.google.com/drive/1UApprg-5htIQbosiSyyLLXm1B8wE8EbN?usp=sharing Expected behavior Unavailable parameters are already taken care ...
Code in Colab notebook is broken due to broken download links
[ "bug", "good first issue", "docs" ]
The "Lightning in 2 steps" page on the docs (docs/source/starter/new-project.rst) points to a notebook (https://colab.research.google.com/drive/1F_RNcHzTfFuQf-LeKvSlud6x7jXYkG31) with examples of systems. On said notebook, the example code for BERT (#scrollTo=yr7eaxkF-djf) is broken due to broken download links. The li...
'No TPU devices were found' continues to exist for v2-32.
[ "bug", "help wanted", "accelerator: tpu", "3rd party", "priority: 1" ]
πŸ› Bug The error is still similar to that previously, as described in #6778. I am running the check code with pytorch-lightning master branch. All the 3 slaves show the same exception. raceback (most recent call last): File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/pytorch_lightning/utilities/xla_devi...
global process count incorrect with elastic, fault tolerant training
[ "bug", "help wanted", "priority: 0", "waiting on author" ]
πŸ› Bug Problem Count of the total number of processes incorrectly set. Context I am trying to run elastic training with torchelastic. I have tried with both gloo and nccl backends. Error message Error coming from gloo backend: Traceback (most recent call last): File "train_hydra.py", line 20, in hydra_main train...
Deepspeed + Auto Select GPUs = CUDA Out of Memory Error
[ "bug", "help wanted", "won't fix", "3rd party", "priority: 1" ]
πŸ› Bug Please reproduce using the BoringModel https://colab.research.google.com/drive/17Bt2m570f4o16iwbEV1fpUhgO04cuCqg?usp=sharing To Reproduce You can see the code on the BoringModel above, but I don't think it'll run on Colab because it's a multigpu issue. Basically, when I have a large-ish model (2M parameters), I...
"TypeError: can't pickle _thread.lock objects" when logging tables to WandB
[ "bug", "help wanted", "won't fix", "waiting on author", "3rd party", "priority: 2" ]
πŸ› Bug To Reproduce Try to log tables using WandLogger, e.g.: def validation_epoch_end(self, outputs: List[Any]) -> None: df = pd.DataFrame( { 'my_stats': [1,2,3] } ) table = wandb.Table(dataframe=df) self.log("examples", table) After the fir...
CI Testing ROCm
[ "feature", "help wanted", "ci" ]
Since PyTorch now ships ROCm binaries for AMD GPUs we should test against it. cc @Borda
Latest FairScale + Sharded Training crashes using default trainer parameters
[ "bug", "help wanted", "3rd party" ]
πŸ› Bug When validation/training is used (as default with the boring model) sharded crashes. This is because internally SDP relies on knowing the training state of the model, and when we run the validation sanity check, we do not set the eval mode correctly on the SDP model itself, so it waits for grads to be reduced si...
PL computes wrong accuracy with drop_last=False in PyTorch Geometric
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug PyTorch Lightning computes wrong accuracy when using a DataLoader with drop_last=False in PyTorch Geometric. There seems to be an issue in which PL cannot determine the correct batch_size of mini-batches. from typing import Optional import torch import torch.nn.functional as F from torch.nn import Linear from p...
[BUG] `BaseFinetuning` not working with `resume_from_checkpoint`
[ "bug", "help wanted", "priority: 0", "callback" ]
πŸ› Bug Using BaseFinetuning will add parameter groups during training, as a result when trying to load from checkpoint you will get the following ValueError: loaded state dict has a different number of parameter groups. Because when loading these groups won't exist yet ! To Reproduce from pl_bolts.models.regression imp...
DDP_SPAWN + Double Precision gives pickle error
[ "bug", "help wanted", "priority: 1" ]
πŸ› Bug Pickle error with double precision + DDP-spawn Can't pickle <function BoringModel.training_step at 0x7f8d684573b0>: it's not the same object as __main__.BoringModel.training_step Fixing general pickle error may also solve discussion #6851 Please reproduce using the BoringModel To Reproduce Use bug_report_model...
IndexError: dimension specified as 0 but tensor has no dimensions
[ "bug", "help wanted", "priority: 0", "accelerator: tpu" ]
πŸ› Bug TPUs training throwing the following error during validation. https://colab.research.google.com/drive/1rHBxrtopwtF8iLpmC_e7yl3TeDGrseJL?usp=sharing Expected behavior Environment Note: Bugs with code are solved faster ! Colab Notebook should be made public ! IDE: Please, use our python bug_report_model.py tem...
Optional alternate logging method for LR monitor callback
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Allow user to pass any callable to be used in addition to trainer.logger.log_metrics() Motivation The LR monitor callback handles many best practices, but it doesn't offer integration with a plain Python logger. As mentioned on this discussions thread, I prefer to use a plain Python logger to output progress...
`Trainer(gradient_clip_algorithm='value')` has no effect (from #6123)
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug I couldn't find anywhere in the code where the gradient_clip_algorithm argument (implemented in #6123) got passed to Accelerator.clip_gradients method and suspected that the default algorithm (GradClipAlgorithmType.NORM) is always used no matter what. After a brief investigation, I believe I've confirmed that it...
Inconsistent `outputs` format between `training_epoch_end` and `on_train_epoch_end`
[ "bug", "help wanted", "priority: 0", "logging" ]
πŸ› Bug The outputs object for on_train_epoch_end should not include the extra field To Reproduce def test_bug(tmpdir): class TestModel(BoringModel): def training_step(self, batch, batch_idx): output = self(batch) loss = self.loss(batch, output) return {"loss": loss, "foo"...
Missing LightningModule datamodule reference
[ "bug", "help wanted", "good first issue", "docs", "data handling", "priority: 1" ]
πŸ› Bug This docs snippet does not work: https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html#datamodule To Reproduce def test_bug(tmpdir): class TestModel(BoringModel): def configure_optimizers(self): # works len(self.trainer.datamodule.train_dataloader()) ...
Fairscale integration not working for me
[ "bug", "help wanted", "priority: 0", "waiting on author", "distributed" ]
πŸ› Bug When I try to train a model with Fairscale (ddp_sharded), I get this error always: File "/home/ubuntu/anaconda3/envs/torch/lib/python3.7/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 500, in _setup_backward_hooks assert p_tmp.grad_fn is not None AssertionError Could anyone please point me ...
Learning rate interval update not working properly
[ "bug", "help wanted" ]
πŸ› Bug When I use Onecycle scheduler, it only works properly if I set steps_per_epoch = 1, even though I have set 'interval': 'step'. scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr, ...
XLA + IterableDataset support should now work on XLA master
[ "bug", "help wanted", "won't fix", "priority: 1" ]
πŸ› Bug Following pytorch/xla#2866 XLA should now support IterableDatasets (or at least not fail on trying to get the length). This means we can edit the check from #6875 to only fail with older XLA versions. Note: the fix from pytorch/xla#2866 has not yet been released Please reproduce using the BoringModel This messa...
Turning SWA on makes scheduler lr change to epoch, instead of batch [with colab ex]
[ "bug", "help wanted", "callback" ]
πŸ› Bug Below is my optimizer/scheduler code. If my trainer has stochastic_weight_avg=True, then my learning rate is shown below, in green, and I get the warning: /home/felipe/anaconda3/envs/ML_38_new/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:68: UserWarning: Swapping lr_scheduler <torch.opt...
Avoid wrapping LightningModule in *DataParallel overrides when not fitting
[ "feature", "help wanted", "let's do it!" ]
πŸš€ Feature For distributed testing or prediction, we don't need to wrap the LightningModule inside of DistributedDataParallel or DataParallel for testing as there are no gradients we need to synchronize. We only need this during the fit stage when model training occurs Motivation This can reduce overhead with distribut...
Why is my gpu-util low?
[ "bug", "help wanted", "distributed" ]
I use one node and 4 GPUs for training, with the DALI dataloader. I don't know why my GPU util is low and training is also slow: about 1:30 per epoch, and I train for 200 epochs, which will cost 5 hours. It's slower than the mmclassification project, which only costs 3.5 hours. Compared to the mmclassification project, which ...
Performance Optimization for DDP sharded
[ "feature", "help wanted" ]
πŸš€ Feature Motivation Experiments show that enabling buckets and compressing to FP16 before broadcasting improves performance for multi-node training: 20% for a 1B-parameter model with buckets enabled; 5% for a 0.6B-parameter model on 2 nodes with FP16 compression before broadcasting. Pitch set smarter default for DDP sharded If single-node:...
Docs wrong callback reference
[ "docs" ]
πŸ“š Documentation Fixed wrong on_fit_start callback reference in documentation. PR: #7001 Currently: on_fit_start ^^^^^^^^^^^^ .. automethod:: pytorch_lightning.callbacks.Callback.on_save_checkpoint :noindex: Fix: on_fit_start ^^^^^^^^^^^^ .. automethod:: pytorch_lightning.callbacks.Callback.on_fit_start :noindex:
PyTorch Lightning ignores traditional WORLD_SIZE/RANK specifications in the environment and doesn't document a replacement
[ "bug", "help wanted", "priority: 0", "distributed" ]
πŸ› Bug Standard torch.distributed environment variables seem to be handled differently. To Reproduce $ MASTER_ADDR=192.168.1.3 MASTER_PORT=1234 WORLD_SIZE=2 RANK=0 python3 boring_model.py Expected behavior Should wait for second job to connect to the MASTER_PORT. Instead, just starts training. No combination of argume...
Training Type Plugin environment related setting from Trainer
[ "feature", "help wanted" ]
πŸš€ Feature Motivation When the user provides a specific training type plugin, they have to pass num_nodes and sync_batchnorm explicitly, for example with DDPPlugin. These parameters are set from the Trainer; it would probably be better to reuse the setting from the Trainer instead of specifying it again. Now we have to specify it like: traine...
seed for DistributedSampler
[ "feature", "help wanted", "distributed" ]
πŸ› Bug torch.utils.data.distributed.DistributedSampler defaults to random seed 0 to shuffle the sampler if shuffle=True. As lightning does not specify a seed, multi-gpu training with DistributedSampler in lightning will use a deterministic order independently from the user-specified seed (seed_everything). ...
RuntimeError: All input tensors must be on the same device. Received cpu and cuda:0
[ "bug", "help wanted", "priority: 2" ]
πŸ› Bug At the end of the epoch, I get the error mentioned in the title. Here is a full stack-trace: File "main.py", line 255, in <module> trainer.fit(trainVQG, data_loader, val_data_loader) File "/data/nv419/anaconda3/envs/blt-vqg/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 473, in fit...
Can't get attribute '_gpus_arg_default'
[ "bug", "help wanted" ]
πŸ› Bug when I use the code model = model.load_from_checkpoint(ckpt_path) It pops out a problem: Can't get attribute '_gpus_arg_default' on <module 'pytorch_lightning.utilities.argparse' from '/opt/conda/envs/lasaft/lib/python3.8/site-packages/pytorch_lightning/utilities/argparse.py'> To Reproduce Very hard to describe ...
Add version suffix to last.ckpt
[ "feature", "callback: model checkpoint" ]
πŸš€ Feature If you use ModelCheckpoint(save_last=True) and you run an experiment twice in the same directory, then this set of checkpoints is generated: file-v0.ckpt file-v1.ckpt ... last.ckpt (the last of the second run) the idea is to add a version also to last.ckpt if it would get overwritten: file-v0.ckpt file-v1.c...
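The requested behavior could apply the same -vN suffixing that named checkpoints already get to last.ckpt. A hypothetical helper (names and behavior are illustrative, not PL's code):

```python
import os

def versioned_name(filename: str, existing: set) -> str:
    """Return filename, or filename with a -vN suffix if already taken.

    Sketches the feature request: instead of overwriting last.ckpt on a
    second run, version it the way file-v0.ckpt, file-v1.ckpt, ... are.
    """
    if filename not in existing:
        return filename
    root, ext = os.path.splitext(filename)
    version = 0
    while f"{root}-v{version}{ext}" in existing:
        version += 1
    return f"{root}-v{version}{ext}"
```

So a second run in the same directory would write last-v0.ckpt rather than clobbering the first run's last.ckpt.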
Dead Links in "Implementing your own DDP" documentation
[ "help wanted", "good first issue", "docs" ]
πŸ“š Documentation In the "Implementing your own DDP section" (https://pytorch-lightning.readthedocs.io/en/stable/multi_gpu.html#implement-your-own-distributed-ddp-training), the text says: If you need your own way to init PyTorch DDP you can override pytorch_lightning.core.LightningModule.(). If you also need to use yo...
metrics from "sanity check" do not appear on wandb / epoch 1 becomes 0
[ "question", "won't fix" ]
❓ Questions and Help What is your question? I'm using so called "sanity check" to run actual full validation before I do any training, using num_sanity_val_steps=-1 however I could not see any metrics displayed on wandb. Eventually I had to implement custom validation_epoch_end where instead of log_dict I'm using self....
ddp opens processes on GPUs (and consumes memory) not assigned to the Trainer
[ "bug", "duplicate", "help wanted", "priority: 0", "distributed" ]
πŸ› Bug When training on multiple GPUs with ddp, it is still starting processes on the unused GPUs (with a lot less memory, but still). It's not running any work there (GPU-Util is 0% in nvidia-smi). How to fix this manually: I can run CUDA_VISIBLE_DEVICES="2,3" python train.py --gpus 0,1 and then it works, it only runs...
train_epoch_end called before epoch_end
[ "bug", "help wanted", "priority: 1" ]
πŸ› Bug Please reproduce using the BoringModel and post here To Reproduce https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/training_loop.py#L858 Expected behavior Environment Note: Bugs with code are solved faster ! Colab Notebook should be made public ! IDE: Please, use ...
Is there a way to override scheduler calls in pytorch_lightning?
[ "question" ]
What I want to do is the following: I have 2 optimizers and 2 lr_schedulers. The 1st optimizer and 1st scheduler will step normally, but the 2nd optimizer and 2nd scheduler will step only after the 4th epoch. I wrote code in which I could modify the stepping of the optimizer; how can I do the same for the sched...
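The epoch-gated stepping the question asks for can be expressed with stand-in scheduler objects; the gating function below is a hypothetical hook, not a PL API.

```python
class DummyScheduler:
    """Stand-in for an LR scheduler; counts how often step() is called."""
    def __init__(self):
        self.steps = 0

    def step(self):
        self.steps += 1

def step_schedulers(schedulers, epoch, start_epochs):
    """Step each scheduler only once its start epoch has been reached.

    start_epochs[i] is the first epoch at which schedulers[i] may step;
    this mimics 'the 2nd scheduler steps only from the 4th epoch on'.
    """
    for sched, start in zip(schedulers, start_epochs):
        if epoch >= start:
            sched.step()

scheds = [DummyScheduler(), DummyScheduler()]
for epoch in range(6):
    step_schedulers(scheds, epoch, start_epochs=[0, 4])
```

After 6 epochs the first scheduler has stepped every epoch while the second stepped only at epochs 4 and 5, which is the asymmetry the question describes.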
`Missing logger folder Warning` due to `isdir` of `fsspec`
[ "bug", "help wanted", "priority: 1" ]
πŸ› Bug I linked a folder to PROJECT_ROOT/lightning_logs, but pytorch-lightning showed the warning Missing logger folder: PROJECT_ROOT/lightning_logs, and then wrote the tensorboard event log file to PROJECT_ROOT/lightning_logs/version_0, instead of creating a new version_xxx. I think this is because in this case, fssp...
Error in Logger on epoch end when using Multiple GPUs
[ "bug", "help wanted", "priority: 0", "strategy: dp" ]
πŸ› Bug When using multiple GPUs with 'dp', the error RuntimeError: All input tensors must be on the same device. Received cuda:1 and cuda:0 occurs. It means the collections on epoch end would be from different device. Expected behavior While they might need to be on the same device, or maybe the aggregating function sh...
Add option for overriding scheduler step
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Motivation Currently there is no way to override scheduler.step(). The scheduler.step() calls for all the schedulers are made sequentially, i.e., for scheduler in schedulers: scheduler.step(). There should be a hook that enables overriding the scheduler.step() call, something similar to the exi...
Incorrect example in `configure_optimizers`
[ "bug", "help wanted" ]
πŸ› Bug Example shows to use scheduler key to pass the scheduler in the dict. Correct key is lr_scheduler instead. The example is: { 'scheduler': lr_scheduler, # The LR schduler 'interval': 'epoch', # The unit of the scheduler's step size 'frequency': 1, # The frequency of the scheduler 'reduce_on_platea...
Trainer parser error: argument --fast_dev_run: invalid int value: 'True'
[ "bug", "help wanted", "priority: 1" ]
πŸ› Bug When generating a parser for Trainer arguments with Trainer.add_argparse_args, the type of fast_dev_run is no longer interpreted correctly (was working correctly on PL 1.08, but no longer works with PL 1.1). When interpreting an argument like follows: python my_script.py --fast_dev_run=True [...] now raises the...
When saving top-k models, does it contain the best model?
[ "question" ]
When saving top-k models, it seems that it saves k models around the best model, but the best model is not contained among them.
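For contrast, a correct top-k tracker built on a size-k min-heap necessarily retains the single best score ever seen: an incoming score only evicts the current minimum, so the overall maximum can never fall out of the kept set. A small sketch:

```python
import heapq

def track_topk(scores, k):
    """Keep the k highest scores seen so far using a size-k min-heap.

    Because a new score only replaces the current minimum of the kept
    set (and only when it is larger), the best score is always retained;
    the best model should likewise always be among the top-k saved.
    """
    heap = []
    for score in scores:
        if len(heap) < k:
            heapq.heappush(heap, score)
        elif score > heap[0]:
            heapq.heapreplace(heap, score)
    return sorted(heap, reverse=True)
```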
Edge case bug when validating on multiple data sets if .log is not called at least once for each data set.
[ "bug", "help wanted", "won't fix", "priority: 2", "logging" ]
πŸ› Bug When validating on multiple data sets, if you call self.log(..) for any data set, self.log(..) also has to have been called for all previous data sets at least once. If not, you get a KeyError when the results are auto reduced. I encountered this because for one of my validation sets I only created and logged im...
ModelCheckpoint fails at garbage collecting checkpoint passed to Trainer.resume_from_checkpoint
[ "bug", "help wanted", "won't fix", "priority: 1" ]
πŸ› Bug When passing a checkpoint to Trainer via resume_from_checkpoint, it is not tracked/garbage collected by ModelCheckpoint class. Instead, a new checkpoint is instantiated and gargabe collected/updated as usual. Please reproduce using the BoringModel and post here https://colab.research.google.com/drive/1QJrLngpOZ...
Optimization docs and override optimizer_step docs
[]
πŸ“š Documentation Optimization docs need to be updated with the correct way of overriding optimizer_step() method, which now takes in a closure as a non-optional parameter. https://pytorch-lightning.readthedocs.io/en/latest/optimizers.html#step-optimizers-at-arbitrary-intervals
pip install pytorch-lightning["extra"] doesn't install FairScale
[ "bug", "help wanted", "won't fix", "docs", "3rd party" ]
πŸ› Bug The documentation [0] states that to enable sharded training one needs to install the extras packages with pip install pytorch-lightning["extra"], in my case only following the second option pip install https://github.com/PyTorchLightning/fairscale/archive/pl_1.1.0.zip actually installed fairscale. [0] https://p...
Using gradient_clip_val only for Discriminator in GAN
[ "bug", "feature", "design", "priority: 1" ]
How should I use gradient_clip_val if I only want to clip the discriminator in a GAN? Currently, I try to clip the discriminator as follows and I get an error: def optimizer_step(self, current_epoch, batch_idx, optimizer, optimizer_idx,*args,**kwargs): optimizer.step() optimizer.zero_grad() if optimizer...
pass a variable instead of string to self.save_hyperparameters()
[ "feature", "help wanted" ]
πŸš€ Feature pass a variable instead of string to function self.save_hyperparameters() Motivation >>> class ManuallyArgsModel(LightningModule): ... def __init__(self, arg1, arg2, arg3): ... super().__init__() ... # manually assign arguments ... self.save_hyperparameters('arg1', 'arg3') I t...
Pretty print tensors in `trainer.test()`
[ "bug", "help wanted", "good first issue", "priority: 2" ]
πŸ› Bug When printing the results of trainer.test() (saying "DATALOADER:0 TEST RESULTS"), it shows "tensor(...)", sometimes also including the device, when just a number should be shown. Please reproduce using the BoringModel and post here I copied the template and didn't modify it, as it already shows the issue: https:...
Question about transfer learning.
[ "question", "won't fix" ]
I have a trained LightningModule named AutoEncoder. When using it in another LightningModule, I want to know whether both AutoEncoder.freeze() and AutoEncoder.eval() are needed.
How to save my model with half precision
[ "question" ]
My model includes 5 resnet18s; if they are saved with the default precision (float32), they occupy about 220MB of disk space. My idea is to reduce the storage to 110MB, so I used model.half() to apply 16-bit precision. I used torch.save(model.state_dict(), 'model.pt') to save my model, however it is still 220MB for the...
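The storage arithmetic behind the question can be checked with the stdlib: float32 takes 4 bytes per value and float16 takes 2, so halving the precision of the parameters should halve their raw storage. Seeing the same 220MB after model.half() suggests the tensors in the saved state dict were still float32. A sketch of the arithmetic (the parameter count is a rough assumption for five resnet18s):

```python
import struct

def raw_param_bytes(num_params: int, fmt: str) -> int:
    """Raw storage for num_params values in the given struct format
    ('f' = float32, 4 bytes; 'e' = float16, 2 bytes)."""
    return num_params * struct.calcsize(fmt)

# roughly 55M parameters, in the ballpark of five resnet18s (assumption)
n = 55_000_000
fp32_bytes = raw_param_bytes(n, "f")  # about 220 MB
fp16_bytes = raw_param_bytes(n, "e")  # about 110 MB
```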
Invalid usage of torch.no_grad Context manager
[ "bug", "help wanted", "distributed" ]
πŸ› Bug Using no_grad context manager in the following line pytorch-lightning/pytorch_lightning/utilities/distributed.py Line 210 in 127454a with torch.no_grad: is incorrect as the context manager is calla...
How to avoid saving the optimizer's state?
[ "question" ]
Sometimes it's time-consuming.
NeptuneObserver raises Neptune.api_exceptions.ChannelsValuesSendBatchError
[ "bug", "help wanted", "logger" ]
πŸ› Bug NeptuneObserver throws Failed to send channel value. Traceback (most recent call last): File "/home/wojciech/miniconda3/envs/ml/lib/python3.7/site-packages/neptune/internal/channels/channels_values_sender.py", line 156, in _send_values self._experiment._send_channels_values(channels_with_values) File "/...
Wrong name in APEX optimization levels
[ "bug", "good first issue", "docs", "priority: 2" ]
APEX optimization levels are β€œO1, O2, O3” and not β€œ01, 02, 03”. We should fix this in code + docs. Make sure it is clear from the docs that Apex is no longer recommended, and recommend that users use upstream native AMP, available since PyTorch 1.6.
Attempting to unscale FP16 gradients
[ "question", "won't fix" ]
I want to train and save my model with 16-bit precision, what should I do ? def train(): model = Model(**config) model.half() data_module = MyDataModule(config['train_root'], config['folds'], config['size'], config['batch_size'], config['num_workers']) for i in range(confi...
Trainer test cannot load from checkpoint when training on multiple GPUs
[ "bug", "help wanted", "waiting on author" ]
πŸ› Bug The Trainer.test() looks for epoch=X-v0.ckpt when only epoch=X.ckpt exists, thus the result is: Traceback (most recent call last): File "/home/wojciech/tmp/pytorch-lightining/main.py", line 16, in <module> result = trainer.test() File "/home/wojciech/miniconda3/envs/ml/lib/python3.8/site-packages/pytorch...
Metrics reduction during logging/checkpointing
[ "bug", "help wanted", "priority: 0", "checkpointing", "logging" ]
πŸ› Bug When logging and checkpointing/early stopping with the metrics like shown in the code below, I get: Traceback (most recent call last): File "pytorch_lightning/trainer/trainer.py", line 521, in train self.train_loop.run_training_epoch() File "pytorch_lightning/trainer/training_loop.py", line 590, in run_...
Allow configuring optimizer_step based on the gradients.
[ "bug", "help wanted", "priority: 1" ]
As of now, there is no way to skip optimizer_step based on the gradients after they are calculated, since the backward pass resides inside the closure and this closure is passed to optimizer.step. Also, if training_step returns None, optimizer_step is still called due to the closure, which I think it should not be if there are ...
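The closure pattern the issue describes can be illustrated with a minimal pure-Python sketch (no real optimizer or Lightning API; `optimizer_step_with_guard` and its arguments are hypothetical names). The point is that the skip decision has to happen *after* the closure runs, because that is when the gradients exist:

```python
import math

def optimizer_step_with_guard(params, grads, lr, closure):
    """Hypothetical sketch: run the closure (which, in Lightning, performs
    the backward pass), then skip the parameter update entirely if the
    closure returned None or any gradient is non-finite."""
    loss = closure()  # gradients are only available after this call
    if loss is None or any(not math.isfinite(g) for g in grads):
        return None  # skip the update
    for i, g in enumerate(grads):
        params[i] -= lr * g  # plain SGD update for illustration
    return loss

# toy usage: one gradient is NaN, so the step is skipped
params = [1.0, 2.0]
grads = [0.1, float("nan")]
result = optimizer_step_with_guard(params, grads, lr=0.5, closure=lambda: 3.0)
# params are left unchanged because the NaN gradient triggered the skip
```

This is only a sketch of the desired behavior, not how Lightning's `optimizer_step` is actually wired today.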
Add train validity check
[ "feature", "help wanted", "design" ]
πŸš€ Feature Motivation Currently, only a validation sanity check is run. In order to fix #4797, we need to introduce a train validity check. Refer to the discussion in #5084. Pitch Alternatives Additional context
performance loss from 1.0.8 to 1.1.* when using 16 bit precision
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug After updating pytorch-lightning from 1.0.8 to 1.1.0/1.1.1 the use of 16 bit precision destroys the performances. In my actual code of object detection losses are by a factor of 4 larger at the beginning than compared to 32 bit or 16 bit with pl 1.08. They converge to a much higher value and the resulting model ...
Reproducibility bug between LightningOptimizer activate / deactivate
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug Please reproduce using the BoringModel and post here import os import torch from torch.utils.data import Dataset from pytorch_lightning import Trainer, LightningModule, seed_everything class RandomDataset(Dataset): def __init__(self, size, length): self.len = length self.data = torch.rand...
How to compute metric in each step without reseting the metric
[ "feature", "question" ]
I want to know a metric during an epoch, i.e. the metric value accumulated up to the current step. However, if I call metric.compute() at each step, it resets my metric, which is not what I expect. I read the source code and didn't find a clean way to do this. Could you tell me how?
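The desired behavior can be sketched in pure Python with a hypothetical accumulator (this is not the `pl.metrics` API, just an illustration of separating "read the running value" from "compute and reset at epoch end"):

```python
class RunningAccuracy:
    """Minimal stand-in for a metric whose running value can be read
    at any step without resetting the accumulated state."""
    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, preds, targets):
        self.correct += sum(p == t for p, t in zip(preds, targets))
        self.total += len(targets)

    def running_value(self):
        # read the metric-so-far; state stays untouched
        return self.correct / self.total if self.total else 0.0

    def compute_and_reset(self):
        # end-of-epoch behavior: return the value, then clear state
        value = self.running_value()
        self.correct = self.total = 0
        return value

acc = RunningAccuracy()
acc.update([1, 0, 1], [1, 1, 1])   # 2/3 correct so far
mid_epoch = acc.running_value()    # ~0.667, state untouched
acc.update([0], [0])               # now 3/4 correct
final = acc.compute_and_reset()    # 0.75, then state is cleared
```

In other words, the ask is for a read-only accessor like `running_value()` alongside the resetting `compute()`.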
Failing computer_vision_fine_tuning example
[ "bug", "good first issue", "example", "priority: 1" ]
Hello, I noticed that in the current master branch, the computer_vision_fine_tuning example fails to run with the following error: TypeError: __init__() got an unexpected keyword argument 'root_data_path' The error is that root_data_path is being passed to pl.LightningModule.__init__ from main. ...
How to inference on GPU?
[ "question" ]
❓ Questions and Help Hi. I have trained a model with Trainer.fit(). Now I want to load the checkpoint somewhere else and perform inference. But I have no idea how to run inference on GPU. Where can I assign a GPU for my inference, just like assigning a GPU before training: trainer = pl.Trainer(max_epochs = cfg['n_epoch...
PyTorch Geometric example removed.
[ "question" ]
I've saved this link https://github.com/PyTorchLightning/pytorch-lightning/tree/master/pl_examples/pytorch_ecosystem/pytorch_geometric a few days ago, but now I get a 404. Did it get moved somewhere else? I tried to Google for it, but I can't find it anywhere. Thanks.
WandbLogger: add more flexibility with logging step
[ "feature", "help wanted", "logger" ]
πŸš€ Feature Let WandbLogger log using: the trainer step by default (current behavior) an auto-incremented step, independent from the trainer Motivation The current default of using the trainer step is a good default and seems to work fine for most people, correctly associating training and validation metric...
ImportError: cannot import name 'invoke_rpc_builtin' from 'torch.distributed'
[ "bug", "help wanted", "priority: 0", "checkpointing" ]
Python 3.7.9 (default, Aug 31 2020, 07:22:35) [Clang 10.0.0 ] :: Anaconda, Inc. on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import pytorch_lightning as pl Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/ericwiener/anaconda3/envs/donkey/lib...
MisconfigurationError: No TPU devices were found even when TPU is connected
[ "won't fix" ]
Have been frustrated over the past few hours over a problem, though it's likely a problem I caused myself, hah. I'm trying to connect to the TPU in Colab. I'm pretty sure I've got all the import stuff down. My code is here. I'm not completely set on everything, so the entire document isn't functional, but you sh...
Cannot use SyncBN with sharded DDP.
[ "bug", "help wanted", "3rd party" ]
πŸ› Bug To Reproduce Using SyncBN with sharded DDP plugin makes this error message. AttributeError: SyncBatchNorm is only supported within torch.nn.parallel.DistributedDataParallel I think this is not a PL bug but a bug of PyTorch and fair-scale. Nevertheless, I think there is a way to support this combination in PL (l...
Add more dataloader options to the trainer for TPU training.
[ "feature", "help wanted", "won't fix", "accelerator: tpu" ]
πŸš€ Feature Add more dataloader options to the trainer for TPU training. Motivation As is described in https://pytorch.org/xla/release/1.5/_modules/torch_xla/distributed/parallel_loader.html, the signature for ParallelLoader is def __init__(self, loader, devices, batchdim=0...
split_batches instead of accumulate_grad_batches
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Add a split_batches argument to the trainer that splits the batches of your dataloaders into smaller mini-batches (inverse of accumulate_grad_batches), without changing any of the other parts of training. Optimizers would then only run every n mini-batches and for all other intents and purposes, it would app...
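Why splitting batches and averaging gradients is (for a mean-reduced loss and equal-sized splits) equivalent to a single step on the full batch can be checked with a small pure-Python sketch; the linear model and `grad_mean_sq` helper here are illustrative, not part of any API:

```python
def grad_mean_sq(w, xs, ys):
    """Gradient of the mean squared error mean((w*x - y)^2) w.r.t. w."""
    return sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = 0.5

# gradient over the full batch in one go
full = grad_mean_sq(w, xs, ys)

# split the batch into two equal mini-batches and average their gradients,
# as split_batches / accumulate_grad_batches effectively would
halves = [grad_mean_sq(w, xs[:2], ys[:2]), grad_mean_sq(w, xs[2:], ys[2:])]
accumulated = sum(halves) / len(halves)
# full == accumulated, so the optimizer sees the same update either way
```

The equivalence relies on the loss being a mean and the splits being equally sized; with sum-reduced losses or ragged splits the two differ by a scaling factor.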
enable_pl_optimizer causes optimizers to not be restored properly
[ "bug", "help wanted", "priority: 1" ]
πŸ› Bug enable_pl_optimizer (default!) causes optimizers to not be restored properly from the checkpoint specified by resume_from_checkpoint. BoringModel Colab Reproduction The model is trained for 3 epochs and saved in a checkpoint. The checkpoint is then restored and further trained for 1 epoch (with different values ...
How to disable automatic SLURM detection / signal handling?
[ "feature", "won't fix", "environment: slurm", "priority: 1" ]
❓ Questions and Help What is your question? I'm running single-GPU jobs on a SLURM cluster. PyTorch Lightning uses environment variables to detect that I'm on SLURM, and automatically intercepts SIGTERM signals. However, when I'm debugging, I don't want the SIGTERM to be bypassed-- I need to know where the signal is or...
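Since the detection (per the question) is driven by environment variables, one workaround while debugging is to strip the `SLURM_*` variables from the environment before constructing the Trainer. A minimal sketch, assuming the detection keys are things like `SLURM_JOB_ID` / `SLURM_NTASKS` (the exact set Lightning inspects may differ by version):

```python
import os

def clear_slurm_env(env=os.environ):
    """Remove SLURM-related variables so libraries that auto-detect SLURM
    fall back to normal single-process behavior. Returns the removed
    entries so they can be restored afterwards if needed."""
    removed = {}
    for key in list(env):
        if key.startswith("SLURM_"):
            removed[key] = env.pop(key)
    return removed

os.environ["SLURM_JOB_ID"] = "12345"  # simulate running under SLURM
saved = clear_slurm_env()
# "SLURM_JOB_ID" is no longer set, so env-based auto-detection would not fire
```

This only disables env-based detection; it is a debugging workaround, not an official Lightning switch.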
How to split my model to different gpu?
[ "question" ]
pytorch-lightning forum The problem is in the title. Want some help, thx!
TPU training freeze when define custom backward operation
[ "bug", "help wanted", "won't fix", "accelerator: tpu" ]
As mentioned in the title. A minimal example is on Colab; I have noted where it doesn't work. https://colab.research.google.com/gist/rwbfd/57dde430bc168505cfe7d5c42a31924e/tpu_freeze.ipynb
Returning None from training_step with multi GPU DDP training
[ "feature", "help wanted", "distributed", "priority: 1" ]
πŸ› Bug Returning None from training_step with multi GPU DDP training freezes the training without exception To Reproduce Starting multi-gpu training with a None-returning training_step function. Example training_step function: def training_step(self, batch, batch_idx): data, target = batch model_out...
Schedule model testing every N training epochs
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature A check_test_every_n_epoch trainer option to schedule model testing every n epochs, just like check_val_every_n_epoch for validation. Motivation Sometimes validation and test tasks are very different. For instance, in unsupervised anomaly detection or segmentation, the training and validation set cannot cont...
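The requested scheduling mirrors `check_val_every_n_epoch`, which can be sketched in plain Python (the function name `epochs_to_test` is hypothetical, chosen to match the proposed option):

```python
def epochs_to_test(max_epochs, check_test_every_n_epoch):
    """Return the 1-based epoch numbers after which a test run would fire,
    mirroring how check_val_every_n_epoch schedules validation."""
    return [
        epoch
        for epoch in range(1, max_epochs + 1)
        if epoch % check_test_every_n_epoch == 0
    ]

epochs_to_test(10, 3)  # -> [3, 6, 9]
```

With `check_test_every_n_epoch=1` this degenerates to testing after every epoch, matching the validation default.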
Why is there a difference in loss value logged using self.log ?
[ "question" ]
What is your question? (New to Lightning) The progress bar shows the loss for each step. Before logging other metrics using self.log, I wanted to sanity-test it by logging the loss again under a different name. Please check the last 2 lines of the code. I'm logging the same loss value under "train_loss" that is being returned so ...
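A common source of such discrepancies is that the progress bar's `loss` is a smoothed running value over recent steps, while `self.log` records the raw per-step value. A pure-Python sketch of a windowed running mean (the window size here is illustrative, not Lightning's actual setting):

```python
from collections import deque

class RunningWindowMean:
    """Sketch of a windowed running mean like the one a progress bar
    might display for `loss`."""
    def __init__(self, window=20):
        self.values = deque(maxlen=window)

    def append(self, value):
        self.values.append(value)

    def mean(self):
        return sum(self.values) / len(self.values)

smoothed = RunningWindowMean(window=3)
raw_losses = [1.0, 0.5, 0.25, 0.125]
for step_loss in raw_losses:
    smoothed.append(step_loss)  # what the bar would display

# raw logged value at the last step: 0.125
# smoothed bar value: mean of the last 3 -> (0.5 + 0.25 + 0.125) / 3
```

So even when both numbers come from the same tensor, the displayed and logged values can legitimately differ at any given step.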