What is the purpose of reload_dataloaders_every_epoch?
I'm looking to train on chunks of my entire dataset at a time per epoch (preferably over every n epochs, but this is not yet implemented officially), since the size of my dataset exceeds my total RAM. I'd therefore like to update the data in the DataLoaders every epoch. From all the examples I've seen, the actual data ...
"since the size of my dataset exceeds my total RAM": that's not unusual. Datasets often don't fit into RAM, and that's fine. DataLoaders are designed to asynchronously load the data from your hard disk into RAM and then onto the GPU. From my understanding, reload_dataloaders_every_epoch=True calls train_dataloader() ...
MDEwOkRpc2N1c3Npb24zMzUxNDcy
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7372#discussioncomment-700062
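A minimal sketch of the chunked-loading pattern the question asks about: with reload_dataloaders_every_epoch=True, Lightning re-invokes train_dataloader() at the start of each epoch, so the hook can return a different slice of an on-disk dataset each time. The class and chunk paths below are hypothetical stand-ins (plain Python, no Lightning dependency) showing only the control flow.

```python
class ChunkedDataModule:
    """Serves one chunk of a too-large-for-RAM dataset per epoch (sketch)."""

    def __init__(self, chunk_paths):
        self.chunk_paths = chunk_paths  # e.g. paths to pre-split files on disk
        self.epoch = 0

    def load_chunk(self, path):
        # Placeholder for the real I/O (np.load, torch.load, ...).
        return list(range(3))  # pretend each chunk holds 3 samples

    def train_dataloader(self):
        # Called once per epoch when reload_dataloaders_every_epoch=True.
        path = self.chunk_paths[self.epoch % len(self.chunk_paths)]
        self.epoch += 1
        return self.load_chunk(path)  # real code: DataLoader(ChunkDataset(path))
```

In a real LightningDataModule the hook would build a fresh torch DataLoader; the cycling index is the only part Lightning needs from you.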
Hyperparameter Tuning in Lightning CLI
I wonder how people do Hyperparameter Tuning with Lightning CLI? Any suggestion of good practices? Thanks!
Personally when I tune hyperparameters (e.g. with optuna or nevergrad), I don't use the Lightning CLI much but use the programmatic way to inject the arguments there (since it's easier for communication across different python libs directly in python and not leaving it to os calls).
MDEwOkRpc2N1c3Npb24zNTM4ODc2
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9108#discussioncomment-1233981
accumulate_grad_batches and DDP
Hi! When I use no multi-GPU settings, just a single GPU, and the following parameters: batch_size = 16, accumulate_grad_batches=2, my effective batch size is 32. My question is how accumulate_grad_batches and DDP interact. If I am using 2 GPUs on the same machine and I use the parameters batch_size = 16 accumu...
Yes: https://pytorch-lightning.readthedocs.io/en/latest/advanced/multi_gpu.html#batch-size
MDEwOkRpc2N1c3Npb24zMzc0OTYx
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7632#discussioncomment-765379
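The linked docs boil down to simple arithmetic; a small helper (hypothetical, not part of Lightning) makes the interaction explicit:

```python
def effective_batch_size(per_gpu_batch, accumulate_grad_batches=1,
                         num_gpus=1, num_nodes=1):
    # Under DDP each process draws its own per-GPU batch and gradients are
    # averaged across processes, so the optimizer effectively steps on the
    # product of all four factors.
    return per_gpu_batch * accumulate_grad_batches * num_gpus * num_nodes

# Single GPU, batch 16, accumulate 2      -> effective 32
# Two DDP GPUs with the same flags        -> effective 64
```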
Trainer flags: amp_level vs. precision
Hi! I'm a bit confused regarding the trainer's flags. According to Lightning's documentation, the default settings are: amp_level='O2' (i.e., "Almost FP16" mixed precision, cf. Nvidia documentation) and precision=32. Isn't that a bit contradictory? Is the default training mode full precision or mixed precision? Thanks in ...
Dear @dianemarquette, precision=32 is the default. However, if you turn on precision=16 and set amp_backend="apex", then amp_level='O2' is used as the default. Best, T.C
MDEwOkRpc2N1c3Npb24zNTIyMjk1
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8923#discussioncomment-1188895
hyper parameters not restored while resuming training
I call save_hyperparameters() in __init__(), and all hyperparameters sent to the PL model are saved to the checkpoint file. However, when I resume training from a checkpoint (calling trainer.fit(..., ckpt_path=checkpoint_file_path)), the hyperparameters are not restored from the checkpoint file and all of them keep their initial values.
Hyperparameters are not restored by default because this allows users to update them, using the checkpoint, while resuming. You can do this: model = LitModel.load_from_checkpoint(checkpoint_file_path); trainer.fit(model, ..., ckpt_path=checkpoint_file_path)
D_kwDOCqWgoM4APOMH
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12639#discussioncomment-2515490
Train on cpu but gives error "You have asked for native AMP on CPU, but AMP is only available on GPU"
hi dear friends, I wanted to train on CPU for debugging purposes, which can help me understand the code better. So in my trainer class I didn't pass the gpus parameter; the parameters look like below: args of trainer Namespace(accelerator=None, accumulate_grad_batches=1, adam_epsilon=1e-08, amp_backend='native', amp_level='...
Setting Trainer(precision=16) is only supported on GPU! If you have them available, you can do: Trainer(gpus=N, precision=16)
MDEwOkRpc2N1c3Npb24zNTEzMDQ5
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8855#discussioncomment-1159101
Is it possible for SLURM auto submit to work on DP?
In my experience, it never works. I looked to the trainer code and saw that the code managing this works only on DDP. def configure_slurm_ddp(self, num_gpu_nodes): self.is_slurm_managing_tasks = False ### !!HERE!! if self.use_ddp: self.num_requested_gpus = self.num_gpus * num_gpu_nodes self...
Actually, Lightning supports SLURM no matter what backend you use... def register_slurm_signal_handlers(self): # see if we're using slurm (not interactive) on_slurm = False try: job_name = os.environ['SLURM_JOB_NAME'] if job_name != 'bash': on_slurm = ...
MDEwOkRpc2N1c3Npb244MjI1MQ==
https://github.com/PyTorchLightning/pytorch-lightning/discussions/1456#discussioncomment-238184
ValueError: `Dataloader` returned 0 length. Please make sure that it returns at least 1 batch
Using this code, I can get test data. But when I use a PL DataModule to fit the train model, I get a "dataloader returned 0 length" error. import os from typing import Optional import PIL import cv2 import json import copy import numpy as np import pytorch_lightning as pl import torch from torchvision import transforms from torch....
Dear @morestart, Would you mind unit-testing your code? Can you check that your DBDataModule train and val ICDARDataset lengths aren't 0? Lightning doesn't manipulate your dataset / dataloaders, so maybe your datasets are empty. Best, T.C
MDEwOkRpc2N1c3Npb24zNTY4NDY4
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9478#discussioncomment-1315018
How to scale learning rate with batch size for DDP training?
When using the LARS optimizer, usually the learning rate is scaled linearly with the batch size. Suppose I set the base_lr to be 0.1 * batch_size / 256. Now for 1-GPU training with batch size 512, the learning rate should be 0.1 * 2 = 0.2. However, when I use 2 GPUs with the DDP backend and a batch size of 512 on each GPU, should m...
As far as I know, learning rate is scaled with the batch size so that the sample variance of the gradients is kept approx. constant. Since DDP averages the gradients from all the devices, I think the LR should be scaled in proportion to the effective batch size, namely, batch_size * num_accumulated_batches * num_gpus *...
MDEwOkRpc2N1c3Npb244MjI3Ng==
https://github.com/PyTorchLightning/pytorch-lightning/discussions/3706#discussioncomment-238302
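The linear-scaling rule described in the answer can be written out; the helper below is a hypothetical sketch following the 0.1 * batch_size / 256 convention from the question:

```python
def linearly_scaled_lr(base_lr, base_batch, per_gpu_batch,
                       num_gpus=1, accumulate=1):
    # Scale the LR by the ratio of the effective (global) batch size to the
    # reference batch size, per the linear-scaling rule commonly used with LARS.
    effective = per_gpu_batch * num_gpus * accumulate
    return base_lr * effective / base_batch

# 1 GPU, batch 512:       0.1 * 512 / 256  = 0.2
# 2 DDP GPUs, 512 each:   0.1 * 1024 / 256 = 0.4
```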
How to stop wandblogger uploading the checkpoint?
I want the checkpoint and the logs to stay in the same place, while only the logs are uploaded to the wandb server.
Yes, it will be fixed with #6231
MDEwOkRpc2N1c3Npb24zMzMzODc0
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7177#discussioncomment-649523
How to remove hp_metric initial -1 point and x=0 points?
Hi, I don't think this is a bug but I'm doing something wrong. I want to use my val_dice as hp_metric tabular AND also see the graph on "show metric" (radio button) under the Tensorboard HPARAMS tab: To achieve this I'm logging using self.log('hp_metric', mean_dice) (for the graph) and self.logger.log_hyperparams(para...
I already use your first way but setting default_hp_metric to False makes hp_metric be removed from "hparams" tab (this tab isn't there at all even if I have set some hyper parameters). Adding the final log_hyperparams creates the hparams tab but the graph of hp_metric gets a final value at iteration 0 instead of final...
MDEwOkRpc2N1c3Npb24zMTc5Njcw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/5890#discussioncomment-354902
Error with loading model checkpoint
Hi everyone. I was recently running a lightning model and saved a checkpoint to store the intermediate results. When I try to open the checkpoint, I get an error that positional arguments (used to initialize the lightning module) are not present. This wouldn't be a big deal but one of the positional arguments is the en...
hey @dmandair ! did you call self.save_hyperparameters() inside your LM.__init__? Otherwise hyperparameters won't be saved inside the checkpoint and you might need to provide them again using LMModel.load_from_checkpoint(..., encoder=encoder, encoder_out_dim=encoder_out_dim, ...). also note that, if you are passing an nn.Mo...
D_kwDOCqWgoM4APFIR
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12399#discussioncomment-2413496
How to implement channels last memory format callback
Hi there. The PyTorch docs recommend using channels-last memory format when training vision models in mixed precision. To enable it, you need to make two changes: Move your model to channels-last format: model = model.to(memory_format=torch.channels_last) # Replace with your model. This can be done in the on_fit_start callback hook, as model.to pe...
You can use any of the following if done inside the LightningModule: batch = self.on_before_batch_transfer(batch, dataloader_idx) batch = self.transfer_batch_to_device(batch, device) batch = self.on_after_batch_transfer(batch, dataloader_idx) If you really need to do it in the Callback, I guess you could use on_train_b...
MDEwOkRpc2N1c3Npb24zMzUwMzU1
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7349#discussioncomment-709247
How to use Accuracy with ignore class?
Please see same question on Stack Overflow. When using Accuracy with a class that should be ignored, meaning it has labels but can never be predicted, the scoring is wrong, because it is calculated with the never predicted labels that should be ignored. How to use Accuracy while ignoring some class? Thanks :)
It is currently not supported in the accuracy metric, but we have an open PR for implementing that exact feature: PyTorchLightning/metrics#155. Currently, what you can do instead is calculate the confusion matrix and then ignore some classes based on that (remember that the true positives/correctly classified are found on the...
MDEwOkRpc2N1c3Npb24zMzExNDI0
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6890#discussioncomment-588405
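The confusion-matrix workaround from the answer can be sketched in plain Python (hypothetical helper names; a real implementation would use torchmetrics tensors): build the matrix, then drop the ignored class's row before counting.

```python
def confusion_matrix(preds, targets, num_classes):
    # cm[t][p] counts samples with true class t predicted as p.
    cm = [[0] * num_classes for _ in range(num_classes)]
    for p, t in zip(preds, targets):
        cm[t][p] += 1
    return cm

def accuracy_ignoring(preds, targets, num_classes, ignore_index):
    # Drop the ignored class's row (i.e. its samples) before counting.
    cm = confusion_matrix(preds, targets, num_classes)
    correct = sum(cm[c][c] for c in range(num_classes) if c != ignore_index)
    total = sum(sum(cm[c]) for c in range(num_classes) if c != ignore_index)
    return correct / total if total else 0.0
```

For targets [0, 1, 2, 2] and preds [0, 1, 0, 0], plain accuracy is 0.5, but ignoring class 2 gives 1.0, since the two misclassified samples belong to the ignored class.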
How to monitor tensorboard logged scalar in modelcheckpoint?
If I save a scalar log like this: self.logger.experiment.add_scalars('loss/nll', {'train': trainloss, 'valid': validloss}) How can I monitor the valid loss in ModelCheckpoint?
You need to call it with self.log instead (we automate the rest for you), so that we are aware of you logging it :)
MDEwOkRpc2N1c3Npb24zMzM0MTk2
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7184#discussioncomment-648824
`init_process_group` not called when training on multiple-GPUs
Hi, I’m trying to train a model on 2 GPUs. I do this by specifying Trainer(..., gpus=2). ddp_spawn should automatically be selected for the method, but I instead get the following message + error: UserWarning: You requested multiple GPUs but did not specify a backend, e.g. `Trainer( accelerator="dp"|"ddp"|"ddp2")`. Set...
The issue comes from the line File "train.py", line 173, in main print(f"Logs for this experiment are being saved to {trainer.log_dir}") which tries to access trainer.log_dir outside of the trainer scope. trainer.log_dir tries to broadcast the directory but fails as DDP hasn’t been initialized yet. File ".../py...
MDEwOkRpc2N1c3Npb24zNDcxODUx
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8517#discussioncomment-1035094
Backward twice in one training_step
I have 2 losses for my model. And I need the grads of the first loss to compute the second one. The pseudocode in pytorch is like: optimizer.zero_grad() hidden_value = model.part1(input) output = model.part2(hidden_value) loss1 = criterion(output, label) loss1.backward(retain_graph=True) loss2 = criterion2(hidden_value...
I don't think this is really possible in 0.6. This version is too old, and manual optimization was introduced to cover your exact use case. I can only recommend the latest version, because manual backward underwent many changes and bugfixes, so I believe it is worth the time to get that code updated.
MDEwOkRpc2N1c3Npb24zMzk4NjAw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7845#discussioncomment-832499
Why MultiGPU dp seems slower?
❓ Questions and Help Having 2 GPUs with DP seems to be slower than using just 1. Is it normal? My intuition is that if you are using 2 GPUs and the batch is being split into 2 batches, this should be faster. But when I tested the same code using 1 vs >1 GPU, my epoch time increased. Code: Minimalist implementation of a BE...
You should double your batch size. DP still has overhead in communication, so it won't be linear scaling. Also try DDP.
MDEwOkRpc2N1c3Npb244MjIyNw==
https://github.com/PyTorchLightning/pytorch-lightning/discussions/1005#discussioncomment-238110
ValueError: Expected positive integer total_steps, but got -1
def configure_optimizers(self): optimizer = torch.optim.SGD(self.parameters(), lr=self.lr) print(self.trainer.max_steps) lr_scheduler = { 'scheduler': torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=self.lr, ...
OK, I know the answer now.
D_kwDOCqWgoM4AOzN_
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11936#discussioncomment-2185929
How to test in low version
There is no trainer.test() in the lower version. How do I evaluate on the test dataset? My version: 0.4.6
Hi, PyTorch Lightning 0.4.6 is extremely old. You should consider upgrading. That being said, you can always test your model as you would in plain PyTorch, because the LightningModule is also just an nn.Module: for inp, target in test_dataloader: pred = model(inp) test_loss = loss(pred, target) ...
MDEwOkRpc2N1c3Npb24zNDI2NzQy
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8103#discussioncomment-914275
Pytorch-lightning CPU-only installation
Hello all - just wanted to discuss a use-case with CPU vs GPU PL installs. We do normal training on GPUs, but when deploying for prediction we use CPUs and would like to keep the Docker container size as small as possible. It's not clear if it's possible to install pytorch-lightning with the CPU-only torch distribution, whi...
Hey, if you install pytorch first (cpu only) and then Lightning it will just use that version.
MDEwOkRpc2N1c3Npb24zNTU1NjYz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9325#discussioncomment-1281291
help defining new training_step() on a callback
Hi! I need to create a callback that once every N training steps performs the forward pass over the PL module and do some calculations. My first approach has been to simply create a callback and then define a new training_step() within that callback that does my needed calculations. The problem is that this is calculat...
hey @malfonsoarquimea ! Callback.training_step is not a hook, so it won't be called automatically. For your use-case you can do something like: class CustomCallback(Callback): def __init__(..., every_n_train_steps): self.every_n_train_steps = every_n_train_steps ... def on_train_batch_end(sel...
D_kwDOCqWgoM4APGyZ
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12442#discussioncomment-2430475
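The truncated answer above hinges on counting steps inside on_train_batch_end. A dependency-free sketch of that counting logic (the class name and simplified hook signature are stand-ins for pl.Callback's real ones, which receive trainer and pl_module):

```python
class EveryNStepsCallback:
    """Runs an expensive check every N training steps (plain-Python sketch)."""

    def __init__(self, every_n_train_steps):
        self.every_n_train_steps = every_n_train_steps
        self.runs = 0

    def on_train_batch_end(self, global_step):
        # Fire once every N steps; global_step is 0-based, hence the +1.
        if (global_step + 1) % self.every_n_train_steps == 0:
            self.run_expensive_check()

    def run_expensive_check(self):
        # In a real callback: pl_module.forward(...) on a held-out batch, etc.
        self.runs += 1
```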
What's the difference between on_fit_start and on_train_start in LightningModule?
What's the difference between on_fit_start and on_train_start hooks in LightningModule?
I think the document here has answered your question very well.
MDEwOkRpc2N1c3Npb24zNDMxMzAw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8142#discussioncomment-930075
Example on training with TPU does not run at all
I am currently trying this Colab notebook https://colab.research.google.com/github/PytorchLightning/lightning-tutorials/blob/publication/.notebooks/lightning_examples/mnist-tpu-training.ipynb#scrollTo=2772a2e1 provided by the PL team to get some experience with TPU training. But when I try to execute the third cell, there is...
Hi @tungts1101! This error is raised when the PyTorch and PyTorch XLA versions do not match. You can verify using pip list | grep torch and install the latest versions of both!
D_kwDOCqWgoM4AN2Gd
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9966#discussioncomment-1670392
LightningCLI: how to configure logger using cmd-line args?
I would like to change the names of the logging directories from the default "version_{n}" to something of my own choosing. How can I do this using command-line arguments to LightningCLI? I know I can set the logger using trainer.logger but setting logger args e.g. trainer.logger.version does not work (unrecognized arg...
See my reply here: #10574 (comment) We'll be adding support for shorthand notation shortly too: #11533
D_kwDOCqWgoM4AOgOw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11491#discussioncomment-1993974
Pytorch Lightning doesn't have CUDA?
🐛 Bug Hello, I'm trying to use Pytorch Lightning in order to speed up my ESR GAN renders on Windows 10. However, when I ran the installation code and attempt to run Cupscale (which I use as a GUI for ESR GAN), I get an error saying "Pytorch compiled without CUDA". Is there a way to choose to install specifically the C...
@TrocelengStudios oh, this seems like you do not have CUDA installed; can you run nvidia-smi? cc: @awaelchli
MDEwOkRpc2N1c3Npb24zMzMyNzMx
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7158#discussioncomment-644560
Odd Performance Using Multi-GPU + Azure
I was wondering if anyone has observed odd performance when training multi-GPU models? I’ve developed a script which trains a toy dataset (in this case the cats and dogs model), using a ResNet or EfficientNet. The script works fine locally on the GPU. However, when I move the script to the cloud and train using multipl...
@deepbakes Could this be because of benchmark=True ?
D_kwDOCqWgoM4AOxzH
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11905#discussioncomment-2168073
Disabling find_unused_parameters
When trying to disable find_unused_parameters in the trainer by doing the following: strategy=DDPStrategy(find_unused_parameters=False) I am getting an import error for from pytorch_lightning.strategies import DDPStrategy Error: No module named 'pytorch_lightning.strategies'
pytorch_lightning.strategies will be available in v1.6 release and is only available in master at the moment. For now, you can use: from pytorch_lightning.plugins import DDPPlugin trainer = pl.Trainer( ..., strategy=DDPPlugin(find_unused_parameters=False), ) See the stable version of docs (not latest) here: h...
D_kwDOCqWgoM4AOqUS
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11664#discussioncomment-2076602
Clarification on reload_dataloaders_every_epoch
With a basic DataModule like: class MyDM(pl.LightningDataModule): def __init__(self,): <init some stuff> def setup(self, stage: typing.Optional[str] = None): .... <sort out dataset etc> def train_dataloader(self): .... etc etc model = MyModel() data = MyDM() trainer pl.Trainer(r...
No, it doesn't call setup() at every reload, just the corresponding *_dataloader hook. You can define the datasets in setup() and access them inside the dataloader hooks, or you can initialize the corresponding dataset inside the dataloader hook itself.
D_kwDOCqWgoM4AO1rw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12023#discussioncomment-2218752
Pretrain some sections of a model, then initialize those parts when training a full model
I need to pretrain the encoder and decoder sections of an autoencoder first, then later attach a transformer between the encoder and decoder. When I load the pretrained weights of the encoder and decoder while initializing the weights of the transformer section, will I get an error abo...
You can use the strict flag of load_from_checkpoint to avoid the missing-layer failure. See pytorch-lightning/pytorch_lightning/core/saving.py, lines 94-95 (at 079fe9b): strict: Whether to strictly enforce that the keys ...
MDEwOkRpc2N1c3Npb24zMjY0NDkw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6473#discussioncomment-469683
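What strict=False does can be mimicked with plain dicts: checkpoint keys that match the model are copied over, while keys the model has but the checkpoint lacks (the freshly initialized transformer, in this question) keep their initial values and are reported as missing. A hypothetical sketch, not the real torch implementation:

```python
def load_state_dict_nonstrict(model_state, checkpoint_state):
    """Mimics load_state_dict(strict=False): copy matching keys, report the rest."""
    missing = [k for k in model_state if k not in checkpoint_state]
    unexpected = [k for k in checkpoint_state if k not in model_state]
    merged = dict(model_state)
    for k, v in checkpoint_state.items():
        if k in model_state:
            merged[k] = v
    return merged, missing, unexpected
```

With strict=True the missing/unexpected keys would instead raise an error, which is exactly the failure the strict flag avoids.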
Optimization in a dual encoder LitModel
Hello, Currently, I am working in a Lit Model, which has two encoders. Each of them has its optimizer, scheduler, and loss, as shown below: import importlib import torch from pytorch_lightning.core.lightning import LightningModule from hydra.utils import instantiate from source.metric.ULMRRMetric import ULMRRMetric ...
I see two ways. I think your example is quite simple, so it does not matter which way you choose in the end: 1) Automatic optimization: def training_step(self, batch, batch_idx, optimizer_idx): x1, x2 = batch["x1"], batch["x2"] if optimizer_idx == 0: x1_repr = self.x1_encoder(x1) ...
D_kwDOCqWgoM4ANwT_
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9807#discussioncomment-1421542
TypeError: __init__() got an unexpected keyword argument 'row_log_interval'
Question: I am trying to run the EpicKitchens code from https://github.com/epic-kitchens/C1-Action-Recognition-TSN-TRN-TSM I am getting this TypeError, which may be related to older and newer versions of the lightning module, and I was not able to resolve it. Error: Traceback (most recent call last): File "src/test.py", line 145, i...
TypeError: __init__() got an unexpected keyword argument 'row_log_interval'. row_log_interval was deprecated and removed; you must have taken this from an old version of the docs. Use log_every_n_steps instead.
MDEwOkRpc2N1c3Npb24zNDA3ODk2
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7943#discussioncomment-863926
KeyError: 'Trying to restore training state but checkpoint contains only the model. This is probably due to `ModelCheckpoint.save_weights_only` being set to `True`.'
this is my code: checkpoint_callback = ModelCheckpoint( monitor='hmean', mode='max', dirpath='../weights', filename='DB-{epoch:02d}-{hmean:.2f}', save_last=True, save_weights_only=True, ) trainer = pl.Trainer( # open this, must drop last benchmark=True, checkpoint_callback=True, ...
If you set save_weights_only=True in ModelCheckpoint, it won't save the optimizer/scheduler states. So using this checkpoint to resume training won't work, because resuming also needs to restore the optimizer/scheduler state.
D_kwDOCqWgoM4ANuaZ
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9745#discussioncomment-1400848
'NeuralNetwork' object has no attribute 'log'
Hello I am trying to train a neural network using pytorch lightning. I have run into an issue with the trainer when I try to run the program. I am getting the following issue: GPU available: False, used: False TPU available: False, using: 0 TPU cores Traceback (most recent call last): File "/home/PytorchLightningGRU...
Did you pass in a LightningModule instance? class YourModel(pl.LightningModule): <- here? ...
D_kwDOCqWgoM4AN6JX
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10096#discussioncomment-1527358
Training based on iterations
Hi, could anyone advise me how to set up the PyTorch Lightning trainer to train based on iterations instead of epochs? Thank you!
Hi @mshooter , you can use the min_steps and max_steps arguments on the Trainer to do training based on iterations instead of epochs. https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#max-steps
D_kwDOCqWgoM4APJ2u
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12511#discussioncomment-2463941
self.manual_backward() vs. loss.backward() when optimizing manually
According to the manual_backward() documentation, it takes care of scaling when using mixed precision. In that case, is it correct to assume one can simply and safely use loss.backward() during manual optimization if not using mixed precision?
hey @MGheini It's not just precision: manual_backward is a common hook to support all the other strategies like DeepSpeed/DDP, and certain hooks like on_after_backward are called through it too. So manual_backward is recommended to make sure no code change is required if, for example, the user switches to another strategy.
D_kwDOCqWgoM4AOa1k
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11318#discussioncomment-1908881
How to show the validation loss in progress bar?
Hi. I'm trying to come up with ways to get my validation loss shown in the progress bar. My model is defined like this: class DummyNet(pl.LightningModule): def __init__(self, batch_size): super().__init__() self.batch_size = batch_size self.fc = nn.Sequential( nn.Dropout(0.5), ...
Hi @FeryET, I believe the below should work as documented in https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html#automatic-logging. self.log(..., prog_bar=True)
D_kwDOCqWgoM4AOd26
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11412#discussioncomment-1945561
Computing expensive metrics less frequently than using validation_step()
Hi I need to compute some metrics that are not quick to compute (eg. Frechet Inception Distance). It is too expensive to compute them every validation epoch. Instead I would like to compute the metric once after every training epoch (or after some arbitrary number of steps). To do so, I need to be able to access the tr...
Solved by using self.trainer.datamodule
MDEwOkRpc2N1c3Npb24zMzUwODY2
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7356#discussioncomment-697581
Issue in fitting model and finding optimal learning rate parameter
following is the error: NotImplementedError: `val_dataloader` must be implemented to be used with the Lightning Trainer LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0] --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) <ipyt...
I think it's a problem with pytorch-lightning==1.5.0. I had this problem too with code that worked before recreating my venv, and the difference was version 1.5.0, released on Nov 2. I switched to 1.4.9 and it worked again. I checked 1.5.0rc1 and it did not work either.
D_kwDOCqWgoM4AN-x1
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10341#discussioncomment-1587190
Multiple Validation Sets
Hello, I'm trying to validate my model on multiple subsets of the initial validation set to compare performance. Reading this page, I got the idea that returning a list containing the multiple DataLoaders would be enough. My val_dataloader method became the following: But this isn't working properly. I get the following...
You must be missing the additional dataloader_idx argument required in validation_step for multiple dataloaders. docs: https://pytorch-lightning.readthedocs.io/en/latest/guides/data.html#multiple-validation-test-predict-dataloaders
D_kwDOCqWgoM4AOUM0
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11135#discussioncomment-1837460
Gradient Clipping with mix precision in case of NaN loss
Greetings. I am getting a NaN val loss (Cannot log infinite or NaN value to attribute training/val_loss/) with a CNN-LSTM network. I am thinking of using gradient clipping, but the docs say gradient clipping should not be used with mixed precision: If using mixed precision, the gradient_clip_val does not need to be changed as the...
"But the docs say gradient clipping should not be used with mixed precision." You totally can; that's saying that any scaling applied by 16-bit precision training will be undone before clipping the gradients, which means you do not need to worry about changing the gradient clipping value with vs. without precision=16. i do...
D_kwDOCqWgoM4AOd6S
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11413#discussioncomment-1952003
Lightning CLI is incompatible with models defined by data
Is there an easy way to make a Lightning CLI work with a Lightning Module defined by data? This seems like a very common design pattern. For example (from the docs) it doesn't appear possible to easily convert the following to a Lightning CLI: # init dm AND call the processing manually dm = ImagenetDataModule() dm.prep...
Resolved: see #9473
MDEwOkRpc2N1c3Npb24zNTY2Nzgy
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9452#discussioncomment-1317965
Why is the default implementation for train_dataloader in DataHooks a warning?
The default implementation for these just logs a warning that nothing is implemented. pytorch-lightning/pytorch_lightning/core/hooks.py, line 529 (at 963c267): rank_zero_warn("`train_dataloader` must be implemented to be used wit...
I believe this is just legacy code. No real reason. It's in the original implementation (to bolts!) PyTorchLightning/lightning-bolts@797464c Feel free to try changing it :)
MDEwOkRpc2N1c3Npb24zNTAzNDg3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8734#discussioncomment-1135763
When doing `fit()`, `self.training` in `forward()` keeps turning into False?
Hi all, I tried to train a model with pl. And I just ran the trainer.fit() as below: trainer.fit(model, train_dataloaders=model.train_dataloader(), val_dataloaders=model.val_dataloader()) And I found that model.training == False, when it gets into forward()... Is there any solution or does anybody know ...
Does it print self.training == False for all the training steps? You might have checked it during the initial steps, where the val sanity check happens.
D_kwDOCqWgoM4AOwQY
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11838#discussioncomment-2148203
How to apply uniform length batching(smart batching)?
Hi all! I have a question about applying a smart batching system like the picture above. To implement a smart batching system, I wrote the code below: Dataset class class ExampleDataset(Dataset): def __init__(self, datas, tokenizer): super(Example...
You can sort the data by length initially, while creating the dataset itself. Then just use a sequential sampler to avoid shuffling, by setting shuffle=False inside the DataLoader. collate_fn looks good, although it can be optimized a little bit. Apart from that, even if you use auto_scale_batch_size, it will work just fine since...
D_kwDOCqWgoM4ANuQ1
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9740#discussioncomment-1398705
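The sort-then-batch idea from the answer, sketched without any framework (hypothetical helper; a real pipeline would feed these buckets to a sampler and pad to the longest sequence inside collate_fn):

```python
def smart_batches(samples, batch_size):
    # Sort by sequence length, then chunk sequentially: every batch holds
    # similar-length samples, so padding-to-longest wastes little compute.
    ordered = sorted(samples, key=len)
    return [ordered[i:i + batch_size]
            for i in range(0, len(ordered), batch_size)]

# smart_batches(["aaaa", "a", "aa", "aaa"], 2)
# -> [["a", "aa"], ["aaa", "aaaa"]]
```

Note that batches come out in length order; shuffling the list of batches (rather than the samples) restores some randomness without mixing lengths.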
How to implement a deep ensemble
I am looking to implement n parallel independent ensembles. My idea is the following: class DeepEnsemble(LightningModule): def __init__(self, cfg): super().__init__(cfg) self.net = nn.ModuleList([configure_network(self.cfg) for _ in range(self.cfg.METHOD.ENSEMBLE)]) def configure_optimizers(sel...
I see two potential options: 1) Cache the forward output for a specific batch idx. Check the automatic optimization flow: https://pytorch-lightning.readthedocs.io/en/latest/common/optimizers.html#automatic-optimization 2) Use manual optimization: https://pytorch-lightning.readthedocs.io/en/latest/common/optimizers.html#m...
MDEwOkRpc2N1c3Npb24zNDcwNzE3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8505#discussioncomment-1035349
Is there an all_gather before training_step_end when using DDP?
From the training_step_end() docs it says: If you later switch to ddp or some other mode, this will still be called so that you don’t have to change your code When using dp or ddp2 the first dimension is equal to the number of GPUs, and it has the per-GPU results like gpu_n_pred = training_step_outputs[n]['pred'] as ...
I'll answer my own question. Short answer is no, there is no barrier / gather before training_step_end() when using DDP. I could be wrong, but it appears these methods just get called using the normal callback mechanism, e.g. PyTorch-Lightning doesn't post-process the output beyond what DP / DDP will do. So in the DP c...
MDEwOkRpc2N1c3Npb24zNDA3MTg3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7934#discussioncomment-863899
Error importing pytorch lighting
Hi, I am getting this weird error; I was able to run my code before, and today I got this: import pytorch_lightning as pl "~/dir/miniconda3/envs/pytorchenv/lib/python3.7/site-packages/pytorch_lightning/__init__.py", line 66, in from pytorch_lightning import metrics ImportError: cannot import name 'metrics' from 'pytorch_li...
@mshooter Hi, could you try reinstalling it and running it again? I didn't experience the issue with the following command on Google Colab: !pip install pytorch-lightning --upgrade from pytorch_lightning import metrics If the problem persists, could you run the following commands and share the output? $ wget https://ra...
MDEwOkRpc2N1c3Npb24zMjQzMTM4
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6240#discussioncomment-412449
how to use pl to process tfrecords data?
How to use LightningDataModule to process tfrecords data; can anyone give a tutorial?
Depends on what data you have inside TFRecord, but you can see usage: https://github.com/vahidk/tfrecord#reading-tfexample-records-in-pytorch def train_dataloader(): # index_path = None # tfrecord_path = "/tmp/data.tfrecord" # description = {"image": "byte", "label": "float"} dataset = TFRecordDataset(t...
MDEwOkRpc2N1c3Npb24zNDMzNzM4
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8161#discussioncomment-974908
How do I set the steps_per_epoch parameter of a lr scheduler in multi-GPU environment?
For some learning rate schedulers, there is a required steps_per_epoch parameter. One example is the OneCycleLR scheduler. On a CPU or single GPU, this parameter should be set to the length of the train dataloader. My question is, how should this parameter be set on a multi-GPU machine using DDP....
After some more investigation, it seems like dividing the dataloader size by the number of GPUs is the correct way. The documentation could be more clear on this, but I'm closing this now.
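To make the arithmetic concrete (dataset size, batch size, and GPU count below are made-up numbers, assuming DistributedSampler's default even split and no gradient accumulation):

```python
import math

# Under DDP each process sees len(dataset) / num_gpus samples, so the
# per-process dataloader length (what OneCycleLR's steps_per_epoch should
# be) is the single-GPU step count divided by the number of GPUs.
dataset_size = 50_000
batch_size = 16
num_gpus = 4

single_gpu_steps = math.ceil(dataset_size / batch_size)
steps_per_epoch = math.ceil(dataset_size / (batch_size * num_gpus))

print(single_gpu_steps, steps_per_epoch)  # 3125 782
```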
MDEwOkRpc2N1c3Npb244MjI3MQ==
https://github.com/PyTorchLightning/pytorch-lightning/discussions/2149#discussioncomment-238274
Multi-GPU Training GPU Usage
Hi, I'm using lightning and ddp as backend to do multi-gpu training, with Apex amp (amp_level = 'O1'). The gpu number is 8. I noticed that during training, most of the time GPU0's utilization is 0%, while others are almost 100%. But their ...
Your CPU usage seems high. It could be that the CPU is the bottleneck here. Try fewer GPUs and then observe the GPU utilization.
MDEwOkRpc2N1c3Npb244MjI2MA==
https://github.com/PyTorchLightning/pytorch-lightning/discussions/2701#discussioncomment-238229
Is it possible to call dist.all_reduce manually in train_step?
In my code, I would like to synchronize a tensor across all the gpus in train_step, which is a temporary variable. Is it allowed to call torch.distributed.all_reduce in this case? Or there is a specific function in pytorch_lightning that does the job?
Hey @sandylaker! You can use torch.distributed.all_reduce. There is also within the LightningModule this function, however it may be better to expose this within lightning to make it easier to access: x = self.trainer.accelerator.training_type_plugin.reduce(x)
MDEwOkRpc2N1c3Npb24zMzgwMjMw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7693#discussioncomment-785393
Attribute is reset per batch in `dp` mode
I don't know whether this is a bug... As shown in the code below, I think the behavior of dp mode is unexpected? (The attribute is reset every batch) When using ddp mode, everything is fine. (The property will be initializ...
oh, this is actually a known problem and comes from DataParallel in PyTorch itself. See #565 and #1649 for reference. @ananyahjha93 has been working on a workaround but it seems to be super non trivial #1895
MDEwOkRpc2N1c3Npb244MjI4OQ==
https://github.com/PyTorchLightning/pytorch-lightning/discussions/3301#discussioncomment-238341
How to correctly apply metrics API in binary use case
How would one correctly apply the Precision metric from v1.2.0 on, with the revised metrics api? I am currently doing something like this: import torch from pytorch_lightning import metrics # example data preds = [0] * 200 + [1] * 30 + [0] * 10 + [1] * 20 targets = [0] * 200 + [1] * 30 + [1] * 10 + [0] * 20 preds = t...
for binary classification, where you are only interested in the positive class you should pass in num_classes=1. Here is your corrected code: def _print_some_metrics(preds, targets, num_classes): precision = metrics.classification.Precision( num_classes=num_classes, is_multiclass=False) recall = metrics...
MDEwOkRpc2N1c3Npb24zMjUwMzc1
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6356#discussioncomment-433217
ddp: how to combine multi-gpus outputs like "training_step_end" which is only used in dp/ddp2?
My question is like title. Thank you!
checkout the example here: https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html#training-with-dataparallel
D_kwDOCqWgoM4AOXqQ
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11265#discussioncomment-1876035
Gradient accumulation + DeepSpeed LR scheduler
How does gradient accumulation interact with DeepSpeed learning rate scheduling (e.g. the per-step warm-up scheduler)? Is the learning rate updated after every iteration, or only after the model weights are ultimately updated?
it considers the accumulation before doing lr_scheduler_step: pytorch-lightning/pytorch_lightning/loops/epoch/training_epoch_loop.py Lines 387 to 390 in 86b177e def update_lr_schedulers(self, interval: str, update_plateau_sched...
D_kwDOCqWgoM4AOrGl
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11686#discussioncomment-2087865
Help understanding data module error
Hi there, I am trying to implement a data module however I keep getting an error that I cannot understand. I normally setup my data module as: class dataModule(pl.LightningDataModule): def __init__(self, batch_size, csv_file, data_dir): super().__init__() self.csv_file = csv_file self.data_d...
The problem is the combination of this line: if stage == 'test' or stage is None: _, _, self.test_dataset = torch.utils.data.random_split(subjList, [train_size, val_size, test_size]) and this line: self.test_set = tio.SubjectsDataset(self.test_dataset, transform=None) as you can see, self.test_dataset is o...
MDEwOkRpc2N1c3Npb24zNDQ2Njgz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8298#discussioncomment-966766
How to reinit wandb in a for loop with PL Trainer
I am training 5-fold CV with PyTorch Lightning in a for loop. I am also logging all the results to wandb. I want wandb to reinitialize the run after each fold, but it seems to continue with the same run and it logs all the results to the same run. I also tried passing kwargs in the WandbLogger as mentioned in the docs h...
@Gladiator07 you could try calling wandb.finish() at the end of every run. This should close the wandb process; a new one will be started when you start the next run.
MDEwOkRpc2N1c3Npb24zNDgwNTAw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8572#discussioncomment-1627086
load checkpoint model error
this is my model __init__ func def __init__(self, num_classes: int, image_channels: int = 3, drop_rate: int = 0.5, filter_config: tuple = (64, 128, 256, 512, 512), attention=False): this is my load code: m = SegNet(num_classes=1) model = m.load_from_checkpoint('checkpoints/epoch=99-step=312499.ckpt') w...
Dear @morestart, great question! You should use the save_hyperparameters function, so Lightning can save your init arguments inside the checkpoint for future reload. Here is the associated doc: https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.html?highlight=save_hyperparameters#save-hyperparamete...
MDEwOkRpc2N1c3Npb24zNDIxMTQ4
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8050#discussioncomment-896565
How to Log Metrics (eg. Validation Loss, Accuracy) To TensorBoard Hparams?
I am using Pytorch Lightning 1.2.6 to train my models using DDP and TensorBoard is the default logger used by Lightning. My code is setup to log the training and validation loss on each training and validation step respectively. class MyLightningModel(pl.LightningModule): def training_step(self, batch): x,...
I think it is explained very well in this section of the documentation: https://pytorch-lightning.readthedocs.io/en/latest/extensions/logging.html#logging-hyperparameters Basically, you just need to overwrite the hp_metric tag with whatever value you want to show up in the HPARAMS tab in tensorboard.
MDEwOkRpc2N1c3Npb24zMzEyMzgz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/6904#discussioncomment-588486
EarlyStopping callback metrics on different devices with a single GPU
The device of the metric return by validation_step is GPU, related code is def validation_step(self, batch, batch_idx): x, y = batch if y.device != self.device: y = y.to(self.device) y_hat = self(x) loss = self.loss(y_hat, y) # loss.device is cuda. self.log('valid loss', loss.item()) ...
Looking into it in #8295
MDEwOkRpc2N1c3Npb24zNDQzODk2
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8267#discussioncomment-966632
How to use predict function to return predictions
How to get predictions class OurModel(LightningModule): def __init__(self): super(OurModel,self).__init__() self.layer = MyModelV3() def forward(self,x): return self.layer(x) def train_dataloader(self): return DataLoader(DataReader(train_df)) def training_step(self,batch,...
Hi @talhaanwarch, in order to get predictions from a data loader you need to implement predict_step in your LightningModule (docs here: https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_module.html#predict-step). You would then be able to call Trainer.predict with the dataloader you want use following...
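For intuition, here is roughly what Trainer.predict does with the per-batch predict_step outputs, sketched in plain PyTorch (the model and shapes are made up):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

model = torch.nn.Linear(4, 2)
loader = DataLoader(TensorDataset(torch.randn(10, 4)), batch_size=4)

model.eval()
preds = []
with torch.no_grad():
    for (x,) in loader:
        preds.append(model(x))  # this line is the body of a predict_step

# Trainer.predict returns the collected per-batch outputs as a list.
print([p.shape[0] for p in preds])  # [4, 4, 2]
```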
MDEwOkRpc2N1c3Npb24zNDE5NjM3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8038#discussioncomment-892907
Why accumulate_grad_batches cannot be used with manual optimization?
I've stumbled upon the problem of not being able to use accumulate_grad_batches argument in the Trainer as I was doing manual optimization in my LightningModule to use adversarial loss functions. However, I think it would be possible to implement something that would "store" calls to the step method for the module's op...
Hey @NathanGodey, manual optimization was built to provide full optimization control to the user while abstracting distributed training and precision. There is no way Lightning can properly automate accumulate_grad_batches for all the possible use cases, and therefore it isn't supported. However, you can easily imp...
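The accumulation pattern itself is only a few lines; here is a hedged plain-PyTorch sketch (inside a LightningModule you would use self.optimizers() and self.manual_backward(loss) instead of the raw calls):

```python
import torch

accumulate = 4  # emulate accumulate_grad_batches=4 by hand
model = torch.nn.Linear(2, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

steps_taken = 0
for batch_idx in range(8):
    x = torch.randn(3, 2)
    loss = model(x).pow(2).mean() / accumulate  # scale so grads average
    loss.backward()                             # grads sum across calls
    if (batch_idx + 1) % accumulate == 0:
        opt.step()
        opt.zero_grad()
        steps_taken += 1

print(steps_taken)  # 2 optimizer steps for 8 batches
```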
D_kwDOCqWgoM4AOOv9
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10998#discussioncomment-1784339
training_epoch_end only returning the last train batch
I have a pytorch lightning module that includes the following section: `def training_step(self, batch, batch_idx): losses, tensors = self.shared_step(batch) return losses def validation_step(self, batch, batch_idx): losses, tensors = self.shared_step(batch) return losses, tensors def training_epoch_end(self, l...
This issue was reported in #8603 - are you able to try Lightning 1.4.1, which contains the fix? And for broader discussion on these hooks, and alternatives you have to access the per-step outputs, see #8731
MDEwOkRpc2N1c3Npb24zNTA0NjA3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8757#discussioncomment-1136286
Hook for Fully Formed Checkpoints
I want to create a hook that uploads checkpoints to cloud storage (e.g. AWS, Azure). I tried using the on_save_checkpoint hook as follows: def on_save_checkpoint(self, trainer: pl.Trainer, pl_module: pl.LightningModule, checkpoint: Dict[str, Any]) -> dict: checkpoint_bytes = io.BytesIO() torch.save(checkpoint, ...
See #11704.
D_kwDOCqWgoM4AOsH5
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11705#discussioncomment-2100641
model inference but self.training stays True
I use the self.training attribute to decide what data to return: return x if self.training else (torch.cat(z, 1), x) but when I load my model, debugging shows that self.training is still True. self.model = CustomModel.load_from_checkpoint(model_path) self.model.training = False I used the above code to change model.training statu...
Fine, I found the answer: I have to call model.eval()...
D_kwDOCqWgoM4AOZMw
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11285#discussioncomment-1890115
LightningCLI - instantiate model from config
So lets say i have a model and i'm using the newest CLI API to train it: The config uses sub modules and can look something like: # config.yaml model: class_path: pl_models_2.ModelPL init_args: margin: 0.3 basemodel: class_path: pl_models_2.ModelBackbone init_args: base_model: resnet50 ...
If you want to use that exact config (not stripping out everything except the model) you can do the following: from jsonargparse import ArgumentParser parser = ArgumentParser() parser.add_argument('--model', type=ModelClass) parser.add_argument('--data', type=dict) # to ignore data config = parser.parse_path('config.y...
D_kwDOCqWgoM4AN_cS
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10363#discussioncomment-1595769
what's the difference between `load_from_checkpoint ` and `resume_from_checkpoint`
I'm confused about two APIs: Module.load_from_checkpoint and trainer.resume_from_checkpoint
resume_from_checkpoint is used to resume training using the checkpointed state_dicts. It will reload the model's state_dict, the optimizer's and schedulers' state_dicts, and the training state as well in the general case. Use-case: to restart training. load_from_checkpoint just reloads the model's state_dict and returns the model ...
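A plain-PyTorch sketch of the difference (the checkpoint keys here are illustrative; Lightning checkpoints store these and more):

```python
import torch

# Train a toy model one step so the optimizer has state.
model = torch.nn.Linear(2, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
model(torch.randn(4, 2)).mean().backward()
opt.step()

ckpt = {"state_dict": model.state_dict(),
        "optimizer": opt.state_dict(),
        "epoch": 3}

# load_from_checkpoint-style: weights only, training state starts fresh.
fresh = torch.nn.Linear(2, 1)
fresh.load_state_dict(ckpt["state_dict"])

# resume_from_checkpoint-style: weights plus optimizer state and progress.
resumed_opt = torch.optim.SGD(fresh.parameters(), lr=0.1, momentum=0.9)
resumed_opt.load_state_dict(ckpt["optimizer"])
start_epoch = ckpt["epoch"]

print(torch.equal(fresh.weight, model.weight), start_epoch)  # True 3
```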
D_kwDOCqWgoM4AOdFs
https://github.com/PyTorchLightning/pytorch-lightning/discussions/11378#discussioncomment-1936061
What is the best practice to share a massive CPU tensor over multiple processes in pytorch-lightning DDP mode (read-only + single machine)?
Hi everyone, I wonder what is the best practice to share a massive CPU tensor over multiple processes in pytorch-lightning DDP mode (read-only + single machine)? I think torch.Storage.from_file with share=True may suit my needs, but I can’t find a way to save storage and read it as a tensor. (see here for details) I al...
I found that torch.Storage.from_file suits my needs and it can reduce the memory usage in my Lightning DDP program. For the way to create a storage file, see here.
MDEwOkRpc2N1c3Npb24zNDg4Mjgx
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8611#discussioncomment-1129295
Training/Validation split in minimal example
I think it's unclear how the training data is split into a training and validation split in the minimal example. (https://williamfalcon.github.io/pytorch-lightning/LightningModule/RequiredTrainerInterface/#minimal-example) Does this example use all training ...
Is there some magic background process which compares the training and validation data loaders and does splitting? I skimmed through the code and couldn't find anything. No, I don't think this is happening. The dataset in the minimal example is the MNIST dataset, which only has two splits (train and test). In this exa...
MDEwOkRpc2N1c3Npb24yNzkyNTAz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/5815#discussioncomment-339795
How to set experiment name such that it can be some unique name instead of version_0, ... etc.
I'm currently running a lot of experiments and in order to track all of them in tensorboard I have to rename each experiment folder by hand (e.g. lightning_logs/version_0 -> lightning_logs/{unique_informative_exp_name}). However, it would be better if I could pass the experiment name as an argument to Trainer. I searched documentation...
You can pass Trainer a custom logger with the version specified. from pytorch_lightning.loggers import TensorBoardLogger logger = TensorBoardLogger("default_root_dir", version="your_version", name="my_model") trainer = Trainer(logger=logger) Here is the api of TensorBoardLogger
MDEwOkRpc2N1c3Npb24zNTQ0NzQ3
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9185#discussioncomment-1249610
Running Average of my accuracy, losses etc.
I want my tqdm logger to show me a history of my training on the terminal. Right now, when an epoch ends, all data for it is scrubbed from the command line and the new epoch data is shown. Also I want to see the running accuracy of my network and a running average of my loss on the tqdm bar. How s...
Use tensorboard! For running averages you have to implement the logic in training step
MDEwOkRpc2N1c3Npb24yNzkyNTM0
https://github.com/PyTorchLightning/pytorch-lightning/discussions/5819#discussioncomment-339807
How to collect batched predictions?
Hello :) Currently I use trainer.predict(model=..., dataloaders=...) which returns the results of predict_step(...) in a list where each element in the list corresponds to one batch input to the predict_step function which I already implemented. I am looking for an predict_epoch_end kind of function to collect to batc...
Issue to track #9380
MDEwOkRpc2N1c3Npb24zNTYyMjU2
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9379#discussioncomment-1298537
How save deepspeed stage 3 model with pickle or torch
Hi, I'm trying to save a model trained using deepspeed stage 2 using this code: trainer = pl.Trainer( gpus=4, plugins=DeepSpeedPlugin( stage=3, cpu_offload=True, partition_activations=True,), precision=16, accelerator="ddp", ) trainer.fit(model, train_datalo...
After some debugging with a user, I've come up with a final script to show how you can use the convert_zero_checkpoint_to_fp32_state_dict to generate a single file that can be loaded using pickle, or lightning. import os import torch from torch.utils.data import DataLoader, Dataset from pytorch_lightning import Light...
MDEwOkRpc2N1c3Npb24zNTIxMjUy
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8910#discussioncomment-1834003
GPU usage does not remain high for lightweight models when loaded CIFAR-10 as a custom dataset
I am experimenting with the following repository. Keiku/PyTorch-Lightning-CIFAR10: "Not too complicated" training code for CIFAR-10 by PyTorch Lightning I have implemented two methods, one is to load CIFAR-10 from torchvision and the other is to load CIFAR-10 as a custom dataset. Also, I have implemented two models: a ...
I got a reply from ptrblck. https://discuss.pytorch.org/t/gpu-usage-does-not-remain-high-for-lightweight-models-when-loaded-cifar-10-as-a-custom-dataset/125738
MDEwOkRpc2N1c3Npb24zNDQ1MDEz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8274#discussioncomment-964122
how to load dataset only once on the same machine?
My dataset is large, with a total CPU memory usage of 20 GB. I train on 2 nodes with 8 GPUs, using slurm. But I found that each process consumes 20 GB of memory, which is equivalent to 80 GB per node. That's not what I want; I want a node to consume only 20 GB in total. Is there a way to do that? class D...
Since your data is in one single binary file, it won't be possible to reduce the memory footprint. Each ddp process is independent from the others, there is no shared memory. You will have to save each dataset sample individually, so each process can access a subset of these samples through the dataloader and sampler.
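A stdlib-only sketch of the per-sample layout the answer suggests (file names and the pickle format are just for illustration): each process then only pays the memory cost of the samples it actually loads.

```python
import os
import pickle
import tempfile

# Write each sample to its own file instead of one big binary blob.
root = tempfile.mkdtemp()
for i in range(4):
    with open(os.path.join(root, f"sample_{i}.pkl"), "wb") as f:
        pickle.dump({"x": [i] * 3, "y": i % 2}, f)

class LazyFileDataset:  # same shape as a torch.utils.data.Dataset
    def __init__(self, root):
        self.paths = sorted(os.path.join(root, p) for p in os.listdir(root))
    def __len__(self):
        return len(self.paths)
    def __getitem__(self, idx):  # loads exactly one sample from disk
        with open(self.paths[idx], "rb") as f:
            return pickle.load(f)

ds = LazyFileDataset(root)
print(len(ds), ds[2]["y"])  # 4 0
```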
MDEwOkRpc2N1c3Npb24zNDI3OTg1
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8112#discussioncomment-914252
Update Adam learning rate after 10 epochs
Hello, how can I update the learning rate of the Adam optimizer after 10 epochs? My code is like this: self.lr_decay_epoch = [15,] if epoch in self.lr_decay_epoch: self.lr = self.lr * 0.1 self.optimizer = Adam(filter(lambda p: p.requires_grad, self.net.parameters()), lr=self.lr, weight_decay=self.wd)
you can use LambdaLR, where the lr_lambda function returns a multiplicative factor of the initial lr, something like: lambda epoch: 0.1 if epoch in self.lr_decay_epoch else 1.0
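For intuition, LambdaLR computes lr = initial_lr * lr_lambda(epoch), so the lambda should return a multiplicative factor. A plain-Python sketch of the resulting schedule (the decay epoch and the >= variant, which keeps the decay in place after epoch 15, are illustrative):

```python
lr_decay_epochs = [15]
initial_lr = 1e-3

def decay_factor(epoch):
    # 0.1 from the first decay epoch onward, 1.0 before it
    return 0.1 if any(epoch >= e for e in lr_decay_epochs) else 1.0

lrs = [initial_lr * decay_factor(epoch) for epoch in range(20)]
print(lrs[14], lrs[15])
```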
D_kwDOCqWgoM4ANx-L
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9853#discussioncomment-1442799
ModelCheckpoint creating unexpected subfolders
Hi! I have found a weird behavior when using ModelCheckpoint: if I have a metric that I want to include in my filename and it has a "/" in it, it will create nested directories. For example checkpoint_callback = ModelCheckpoint( monitor='val/acc', dirpath=checkpoints_dir, filename='checkpoint_{epoch...
Hi, can you also provide your model sample, in particular the metrics section? I guess the problem is with /, as it is interpreted as a normal folder path; as you can see, 'val/acc' is not replaced by a number either... 🐰
MDEwOkRpc2N1c3Npb24zNDQ3MDYz
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8300#discussioncomment-973830
CNN dimension error
Hi there! I am trying to build a very basic CNN for a binary classification task however I am getting an odd dimensionality issue. My CNN is: class convNet(nn.Module): def __init__(self): super().__init__() self.conv2d_1 = nn.Conv2d(193, 193, kernel_size=3) self.conv2d_2 = nn.Conv2d...
To me it seems like you have forgotten the batch dimension. 2D convolutions expect input to have shape [N, C, H, W] where C=193, H=229 and W=193 (is it correct that you have the same amount of channels as the width?). If you only want to feed in a single image you can do sample.unsqueeze(0) to add the extra batch dimen...
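A quick check of the suggested fix, using the shapes from the question (assuming C=193, H=229, W=193):

```python
import torch

conv = torch.nn.Conv2d(193, 193, kernel_size=3)
sample = torch.randn(193, 229, 193)  # a single image, no batch dim
batched = sample.unsqueeze(0)        # -> [1, 193, 229, 193]
out = conv(batched)                  # kernel 3, no padding: 229->227, 193->191
print(tuple(batched.shape), tuple(out.shape))
```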
MDEwOkRpc2N1c3Npb24zNDQwNDk0
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8238#discussioncomment-956752
Run Trainer.fit multiple times under DDP mode
Hi, I have a machine learning architecture project that requires modifying the network structure multiple times. I used PyTorch Lightning to implement it. The overall structure is as follows. In the model definition, I omit training_step and validation_step for clearer demonstration. class ToyModel(pl.LightningMod...
Can you try it with ddp_spawn? ddp launches subprocesses by re-executing your script, i.e. it will execute your complete script once per device.
D_kwDOCqWgoM4APFNC
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12401#discussioncomment-2413512
How to call torch.distributed.get_rank() in model building phase
I implemented pytorch lightning-based learning as follows. dm = build_datamodule(config) model = build_model(config) trainer = Trainer( ... accelerator="ddp", ... ) trainer.fit(model, dm) In this situation, in order to set different model parameters for each gpu process, distributed.get_rank() must be called at the s...
Hello, you are right that this currently isn't supported. I am working on adding this feature as part of this issue: #11922 Could you confirm that the issue and proposed solution meet your needs?
D_kwDOCqWgoM4AO1fM
https://github.com/PyTorchLightning/pytorch-lightning/discussions/12017#discussioncomment-2215108
DDP NCCL freezes in docker AWS Jupyter
Problem: Jupyter terminal freezes, and connection to AWS node closes. The problem is reproducible with any Lightning example. What have you tried? python pl_examples/basic_examples/image_classifier.py --gpus 4 --accelerator ddp What's your environment? Linux using docker image Lightning 1.0.4 pytorc...
The solution was to use export NCCL_SOCKET_IFNAME=lo
MDEwOkRpc2N1c3Npb244MjI5OA==
https://github.com/PyTorchLightning/pytorch-lightning/discussions/4518#discussioncomment-238373
Select GPU from cli
The CLI has a flag --gpus In a system with more than 1 GPU, is there a way to select the GPU you want to run on from CLI? I tried --gpus [1] to select cuda:1 but it doesn't work. Also the auto gpu selection didn't work for me. It tried to put the job on cuda:0, but cuda:0 didn't have enough memory to run it. in the en...
I'm assuming you are referring to the LightningCLI In that case, just do python yourscript.py --trainer.gpus=[1]
MDEwOkRpc2N1c3Npb24zMzU4MTIx
https://github.com/PyTorchLightning/pytorch-lightning/discussions/7461#discussioncomment-723023
RuntimeError: Trying to backward through the graph a second time
I'm migrating my repository to pytorch-lightning and I get the following error: RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time. The CNNLSTM model seems to b...
Hi @Keiku This error happens if you try to call backward on something twice in a row without calling optimizer.step. Are you able to share your LightningModule code? It looks like the code in your repo just uses vanilla pytorch. Thanks 😃
MDEwOkRpc2N1c3Npb24zNDc3MDE4
https://github.com/PyTorchLightning/pytorch-lightning/discussions/8549#discussioncomment-1050102
Exporting PyTorch Lightning model to ONNX format not working
I am using Jupyter Lab to run. It has a pre-installed tf2.3_py3.6 kernel and 2 GPUs. PyTorch Lightning Version (e.g., 1.3.0): '1.4.6' PyTorch Version (e.g., 1.8): '1.6.0+cu101' Python version: 3.6 OS (e.g., Linux): system='Linux' CUDA/cuDNN version: 11.2 How you installed PyTorch (conda, pip, so...
Hi A google search reveals some help on this issue here: pytorch/pytorch#31591 Citing the thread there As the error message indicates, the tracer detected that the output of your model didn't have any relationship to the input. If we look closer at your code, we see that loss=0 and labels=None. def forward(self, in...
D_kwDOCqWgoM4AN49x
https://github.com/PyTorchLightning/pytorch-lightning/discussions/10063#discussioncomment-1512926
What are one's options for manually defining the parallelization?
(Q1) Does PyTorch Lightning enable parallelization across multiple dimensions, or does it only allow data parallelism? FlexFlow implements parallelism across 4 different dimensions ("SOAP": the sample, operator, attribute and parameter dimensions). (Q2) Over which of these does PyTorch-Lightning do parallelizatio...
Dear @roman955b, 1) Currently, Lightning automatically implements distributed data parallelism. However, we are working on making manual parallelization available for users who want deeper control of the parallelization scheme. 2) Lightning supports only (S, P) with the DeepSpeed and FSDP integrations. 3) Yes, we are curren...
D_kwDOCqWgoM4ANzLv
https://github.com/PyTorchLightning/pytorch-lightning/discussions/9881#discussioncomment-1457902