| question | context | answer | id | url |
|---|---|---|---|---|
How to tell Lightning which data to load onto the GPU (for 3rd party compatibility) | Hi!
I'm currently working on a segmentation network using PyTorch Lightning and MONAI.
Context
In my LightningDataModule, I preprocess my images by applying transforms (e.g., to resample them) before feeding them to my DataLoaders. For the transforms, I use a 3rd party framework called MONAI. It stores the context in... | Hi,
Your LightningModule has a hook, def transfer_batch_to_device(self, batch: Any, device: torch.device, dataloader_idx: int) -> Any:, that you can override; it should be a perfect fit for that. Just make sure to only use the provided device. | MDEwOkRpc2N1c3Npb24zNDA1OTg5 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7919#discussioncomment-854277 |
Not able to Generate Predictions with Trainer.predict() | Hi, I'm new to PyTorch Lightning, used it for the first time and kind of liked it. However, I am facing one problem: I implemented a classification task for which I trained the model with a Huggingface pretrained model as base and a classification head on top. The model is training successfully and giving decent validat... | Since your model is an instance of your LightningModule, it cannot rely on model.forward to generate the predictions, because predict_step by default calls the LightningModule's own forward.
You need to either override predict_step:
def predict_step(...):
    return self.model(...)
or define a forward method in your LightningModule:
... | D_kwDOCqWgoM4AOL5N | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10897#discussioncomment-1736809 |
Hook for Fully Formed Checkpoints | I would like to create a hook that automatically uploads checkpoints to the cloud (e.g., AWS, Azure) when they're created. I tried using on_save_checkpoint roughly like this:
def on_save_checkpoint(self, trainer: pl.Trainer, pl_module: pl.LightningModule, checkpoint: Dict[str, Any]) -> dict:
checkpoint_bytes = io.B... | Hey @dcharatan!
I'd suggest using remote filesystems instead. You can also specify a remote path inside ModelCheckpoint,
or use the CheckpointIO plugin. | D_kwDOCqWgoM4AOsH4 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11704#discussioncomment-2099335 |
Custom training loop for LightningModule | Hello,
I was wondering if it is possible to control the training-loop behavior of a module (beyond overriding training_step()). I want to manually override the .grad value of each parameter.
For example, let's say I have this routine:
m_0 = MyModel()
loader_1 = getTrainLoader(1)
loader_2 = getTrainLoader(2)
load... | Currently there's no easy way for users to manage the dataloaders themselves, but you can perform the optimization (and manipulate the gradients) by setting automatic_optimization=False
see: https://pytorch-lightning.readthedocs.io/en/latest/common/optimizers.html#manual-optimization | MDEwOkRpc2N1c3Npb24zMjYwODA5 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6456#discussioncomment-469755 |
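As a concrete illustration of the manual-optimization pattern mentioned above, here is a minimal plain-PyTorch sketch of the grad-override step. Inside a LightningModule you would set self.automatic_optimization = False and do the equivalent work in training_step; the 0.5 scaling, the model, and the data below are all made up for illustration:

```python
import torch

# Stand-in for the body of training_step when automatic_optimization=False:
# compute the loss, backpropagate, rewrite each parameter's .grad, then step.
model = torch.nn.Linear(2, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(8, 2), torch.randn(8, 1)
loss = torch.nn.functional.mse_loss(model(x), y)

opt.zero_grad()
loss.backward()  # in Lightning this would be self.manual_backward(loss)
for p in model.parameters():
    p.grad = p.grad * 0.5  # placeholder: override .grad however you need
opt.step()
```

The key point is that nothing touches the gradients between your manual edit and opt.step(), which is exactly what automatic optimization does not guarantee.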
share steps among test and validation steps | I'm implementing the same functionality for validation_step and test_step.
Currently, I have implemented it by calling to a shared function (val_and_test_step)
def val_and_test_step(self, data_batch, batch_nb):
output = shared_functionality(data_batch, batch_nb)
return output
def validation_ste... | Dear @ItamarKanter,
I believe this is great and pretty Pythonic!
You could do this to make it slightly cleaner.
class Model(LightningModule):
    def common_step(self, batch, batch_idx, stage):
        logits = self(batch[0])
        loss = self.compute_loss(logits, batch[1])
        self.log(f"{stage}_loss", loss)
... | MDEwOkRpc2N1c3Npb24zNDIwNDgw | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8046#discussioncomment-896590 |
How to access validation step outputs of complete epoch in a `on_validation_epoch_end` hook for a custom callback ? | I want to implement a custom callback which calculates a custom metric and needs all of the outputs from the complete epoch. Is there any way to pass all the outputs to on_validation_epoch_end hook of the callback ?
Here's the pseudo-code of the setup
class FeedBackPrize(pl.LightningModule):
def __init__(
s... | Hey @Gladiator07!
You can override the on_validation_batch_end hook, cache the outputs in a variable on the callback, and use that.
class CustomCallback(Callback):
    def __init__(self):
        self.val_outs = []

    def on_validation_batch_end(self, trainer, pl_module, outputs, ...):
        self.val_outs.append... | D_kwDOCqWgoM4AOp8R | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11659#discussioncomment-2077410 |
using EMA with model checkpoints | I'm trying to incorporate the pytorch_ema library into the PL training loop. I found one topic relating to using pytorch_ema in lightning in this discussion thread, but how would this work if i want to save a model checkpoint based on the EMA weights? for example if i want to save the model weights using just pytorch, ... | you can replace the model state_dict inside the checkpoint
class LitModel(LightningModule):
    ...
    def on_save_checkpoint(self, checkpoint):
        with ema.average_parameters():
            checkpoint['state_dict'] = self.state_dict() | D_kwDOCqWgoM4AOYvd | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11276#discussioncomment-1892335 |
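For readers unfamiliar with what ema.average_parameters() swaps in: an exponential moving average keeps a shadow copy of each parameter, updated as shadow = decay * shadow + (1 - decay) * param after each step. A dependency-free sketch of that update rule (the decay value and numbers are arbitrary, for illustration only):

```python
def ema_update(shadow, params, decay=0.99):
    """One EMA step: move each shadow value a little toward the current parameter."""
    return [decay * s + (1.0 - decay) * p for s, p in zip(shadow, params)]

# With constant parameters, the shadow converges toward them over many updates.
shadow = [0.0, 0.0]
for _ in range(1000):
    shadow = ema_update(shadow, [1.0, -2.0])
print(shadow)  # approximately [1.0, -2.0]
```

Saving the EMA (shadow) weights into checkpoint['state_dict'], as the answer above shows, means reloaded models use these smoothed values instead of the raw final weights.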
How to handle pretrained models without training them | Hey gang!
I have written an Encoder model and a decoder model and I want to train them separately.
class Decoder(pl.LightningModule):
def __init__(self, encoder_model):#visualize_latent):
super().__init__()
self.encoder_model = encoder_model
However, when I give my Decoder an Encoder hyperparameter... | Ok, I found out from other forums that one should use .freeze(), in this case: self.encoder_model.freeze() | MDEwOkRpc2N1c3Npb24zNTA2MDQ1 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8778#discussioncomment-1140803 |
AttributeError: 'Trainer' object has no attribute 'run_evaluation' | I am getting the below error when running trainer.fit:
AttributeError: 'Trainer' object has no attribute 'run_evaluation'
Full traceback:
Traceback (most recent call last):
File "sdr_main.py", line 81, in <module>
main()
File "sdr_main.py", line 28, in main
main_train(model_class_pointer, hyperparams,parser... | Hey!
This was removed in the previous release.
You can try:
trainer.validating = True
trainer.reset_val_dataloader()
trainer.val_loop.run()
trainer.training = True | D_kwDOCqWgoM4AO4EP | https://github.com/PyTorchLightning/pytorch-lightning/discussions/12097#discussioncomment-2249714 |
Is it okay to feed optimizer to `configure_optimizers` | Hi all, is it okay to feed the optimizer that's been initialized outside this code to pl.LightningModule?
def Model(pl.LightningModule):
def __init__(optimizer):
self.optimizer = optimizer
def configure_optimizers(self) -> Any: ... | Yes, you can, but it's not a practice we'd recommend.
Just curious: why are you feeding it like that? | D_kwDOCqWgoM4AOuWg | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11783#discussioncomment-2125220 |
Accessing available values to monitor when saving checkpoints | I would like to save the top-10 checkpoints during training. From the documentation, setting the save_top_k, monitor and mode options in ModelCheckpoint jointly seems to do the job.
But I am not sure which parameters are available for this callback to monitor. Are they logged values saved during training_step() or... | Yes, that's correct. | D_kwDOCqWgoM4AOP1k | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11032#discussioncomment-1788844 |
Unable to load pretrained weight into custom model in Pytorch Lightning | I have created an issue on this. Moderators, please delete this discussion | Will be discussed in #11420. | D_kwDOCqWgoM4AOeB9 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11419#discussioncomment-1947543 |
How to disable the automatic reduce/mean while using dp? | Hello everyone,
I have upgraded pytorch-lightning to 1.2.6 recently, the behavior of dp seems different from 1.2.0. To be specific, the returned values of validation_step() are automatically reduced before sent to validation_epoch_end(). However, the metrics I use need the original predictions of each sample instead of... | I notice that the validation_step_end() and test_step_end() functions in dp.py script are:
def validation_step_end(output):
    return self.reduce(output)

def test_step_end(output):
    return self.reduce(output)
Thus, overriding these two methods as follows will disable the automatic reduction in evaluation and te... | MDEwOkRpc2N1c3Npb24zMzIwNDE5 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7009#discussioncomment-623585 |
New error in Trainer started appearing recently in a previously running code. | When I create a Trainer and run Trainer.fit() I am now getting the following error:
raise TypeError("cannot assign '{}' as child module '{}' "
TypeError: cannot assign 'int' as child module 'precision' (torch.nn.Module or None expected)
This is a new error and this code was just working earlier. Do yall know what coul... | Do you have an attribute precision defined in your lightning module? If so, this is an improper override of the lightning module which is leading to this error:
pytorch-lightning/pytorch_lightning/core/lightning.py, lines 102 to 103 at 49a4a36:
... | D_kwDOCqWgoM4AO89y | https://github.com/PyTorchLightning/pytorch-lightning/discussions/12250#discussioncomment-2318947 |
Multiple models, one dataloader? | I have a training regime that is disk-speed bound, because instances are loaded from disk.
I would like to train multiple models with one dataloader. That way, I can do model selection over many models, but reduce the number of disk reads. Is this possible? | Dear @turian,
Yes, it is possible.
You could do something like this.
class MultiModels(LightningModule):
    def __init__(self, models: List[nn.Module]):
        super().__init__()
        self.models = nn.ModuleList(models)  # ModuleList registers each submodule's parameters

    def compute_loss(self, model, batch):
        loss = ...
        return loss

    def training_step(self, batch, batch_idx):
... | MDEwOkRpc2N1c3Npb24zNDc5ODU4 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8565#discussioncomment-1086321 |
How to pass arrays to callbacks? | In a previous version of pytorch lightning I could return a dictionary in the method validation_epoch_end and then the content of the dictionary would automatically populate trainer.callback_metrics. I can then use this in my callbacks.
However, if I try this in 1.4.8, trainer.callback_metrics is an empty dictionary.
D... | In the end, what I do is place the dictionary in a new member (e.g. self.extra_data). Then, from the callback, I can access it as pl_module.extra_data. I guess it is not clean, but it works. | D_kwDOCqWgoM4ANuM0 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9736#discussioncomment-1442071 |
Saving one single inference example per validation stage | I would like to save one single inference example per validation stage. To this end I came up with:
class Model(pl.LightningModule):
...
def validation_epoch_end(self, validation_step_outputs):
# self.trainer.log_dir is not set during fast_dev_test
if self.trainer.log_dir is not None:
... | here:
x, y = (d.unsqueeze(0) for d in self.trainer.datamodule.valid_set.dataset_pair())
I think this is something that is not part of the dataloader, so PL won't move it to the device automatically.
you can do:
x = x.to(self.device)
y_hat = self.model(x)
... | D_kwDOCqWgoM4AN8UH | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10223#discussioncomment-1554605 |
Optimizers for nested modules | class MyNet(pl.LightningModule):
def __init__(self):
self.m1 = MyMod1()
self.m2 = MyMod2()
If I implement different configure_optimizers for different submodules MyMod(also pl.LightningModule), is it correct that parameters in each MyMod will be updated by their own optimizers returned by configure_... | When you have nested LightningModules, their configure_optimizers will never be called unless you explicitly call it in the top-level configure_optimizers.
That being said, if you call, merge and return the optimizers created there, these optimizers should only contain parameters from the respective submodule (if imple... | MDEwOkRpc2N1c3Npb24zNDkwMDQ4 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8637#discussioncomment-1090099 |
Restore the best model | What would be the most lightning way to restore the best model? Either directly after training (in the same script) or for later use (in another script)?
Thanks in advance ! | You can use the checkpont callback to save the only the best model as described here: https://williamfalcon.github.io/pytorch-lightning/Trainer/Checkpointing/ (note that that doc needs to be updated. Use save_top_k=1 instead of save_best_only)
You can then use the load_from_checkpoint method to restore your checkpoint:... | MDEwOkRpc2N1c3Npb24yNzkyMzkw | https://github.com/PyTorchLightning/pytorch-lightning/discussions/5812#discussioncomment-339782 |
Unfreezing layers during training? | Freezing layers at the beginning of training works, however unfreezing in on_epoch_start() during training causes the gradient to explode. Without the unfreezing part (or without freezing at all), the model trains fine with no gradient issues.
I'm using DDP + Apex O2 and the loss scaling will keep going down to 0 where... | you can unfreeze whenever. if gradients explode it's for another reason | MDEwOkRpc2N1c3Npb24yNzkyNDgy | https://github.com/PyTorchLightning/pytorch-lightning/discussions/5814#discussioncomment-339793 |
Getting Validation Accuracy with validation_epoch_end | Hello, I managed to implement training accuracy per epoch with training_epoch_end, but when I try to do the same for validation accuracy with validation_epoch_end I get a "too many indices for tensor of dimension 0" error when trainer.fit() runs. I am using pytorch-lightning==1.2.8. Thanks in advance.
Error is:
<ipython... | I am a fool, I forgot to add return {"loss": loss, "predictions": outputs, "labels": labels} in the validation step. | MDEwOkRpc2N1c3Npb24zNDQwNTU3 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8240#discussioncomment-950465 |
slow new epoch start with setting ddp, num_workers, gpus | ❓ Questions and Help
What is your question?
I am training MNIST with below code. 1 GPU training is ok.
But it shows slow start of new epoch when num_workers is a large number and the number of gpus > 2.
Even dataloading itself is slower than with 1gpu.
Code
import t... | I found that the slow DeprecationWarnings shown above are due to the torchvision library. I changed to a simple dataset and the slow start has disappeared so far. | MDEwOkRpc2N1c3Npb244MjI0Ng== | https://github.com/PyTorchLightning/pytorch-lightning/discussions/1884#discussioncomment-238164 |
Does .predict() also use the best weights? | On https://pytorch-lightning.readthedocs.io/en/latest/starter/converting.html, it says that ".test() loads the best checkpoint automatically". Is that also the case for .predict()? | Yes, by default it loads the best checkpoint if you don't provide the model. You can also set it explicitly if you want:
trainer.predict(ckpt_path='best') | D_kwDOCqWgoM4AOJpq | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10795#discussioncomment-1712263 |
How to get the perfect reproducibility | Hi, I'm currently trying to finetune a pretrained BERT model for intent classification using Huggingface's Transformers library and Pytorch Lightning.
The structure is simple where a linear classifier is simply put on the BERT encoder.
I want to get the same result at the same seed setting, but although the whole setti... | Can you try seeding again right before the trainer.test call?
Not saying you should need to, but it would be good to know whether that makes any difference.
I think your best bet is to try to create a reproducible snippet. You can get started with the https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/bug_report_mod... | MDEwOkRpc2N1c3Npb24zMzU1MTUx | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7423#discussioncomment-709164 |
Changing the computed gradients before backprop | I want to add noise to the gradients in pytorch lightning . Specifically, something similar to this paper: https://arxiv.org/pdf/1511.06807.pdf . Basically, I would compute the gradients and before the call to backward, i want to add noise.
What is the best way to achieve this in pytorch lightning? | Dear @sebastiangonsal,
From your paper, it seems you want to call backward, which computes the gradients, and then add some noise, right?
Therefore, you could override on_before_optimizer_step and add noise to each param.grad.
class TestModel(LightningModule):
def on_before_optimizer_step(self):
f... | MDEwOkRpc2N1c3Npb24zNTE0MTA0 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8866#discussioncomment-1205543 |
Should I configure FP16, optimizers, batch_size in DeepSpeed config of Pytorch-Lightning? | My deepspeed_zero2_config.json:
{
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": true
},
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,... | yes, you don't need to set them inside config since this is done by Lightning already here if you set them in trainer and lightning module: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/strategies/deepspeed.py | D_kwDOCqWgoM4APH-L | https://github.com/PyTorchLightning/pytorch-lightning/discussions/12465#discussioncomment-2443070 |
Global parameters yaml file | I am using CLI to train my model. Instead of specifying parameters directly, I provide a yaml file with variables defined.
Since I'm using several loggers, they have a common name parameter, so in order to start a new experiment I have to change this parameter in each logger. This raises a question: is there a way to cr... | @Serega6678 You can try a jsonnet-format config file, which supports global variables: https://jsonargparse.readthedocs.io/en/stable/#jsonargparse.core.ArgumentParser
To use the jsonnet format, you can initialize the CLI by passing {"parser_mode": "jsonnet"} to parser_kwargs. | MDEwOkRpc2N1c3Npb24zNDA3NzMw | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7940#discussioncomment-857633 |
Torch accuracy and sklearn accuracy are very different | This is the code
def test_step(self,batch,batch_idx):
image,label=batch
pred = self(image)
loss=self.criterion(pred.flatten(),label.float()) #calculate loss
acc=self.metrics(pred.flatten(),label)#calculate accuracy
pred=torch.sigmoid(pred)
return {'loss':loss,'acc':ac... | This is the solution
def test_step(self, batch, batch_idx):
    image, label = batch
    pred = self(image)
    return {'label': label, 'pred': pred}

def test_epoch_end(self, outputs):
    label = torch.cat([x["label"] for x in outputs])
    pred = torch.cat([x["pred"] for x in outputs])
    acc = self.metrics... | D_kwDOCqWgoM4APAR6 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/12311#discussioncomment-2492937 |
Bug in SLURMConnector? | nvm
cheers | Cheers! 🍻 | MDEwOkRpc2N1c3Npb24zMjUxMDg5 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6369#discussioncomment-525503 |
Logging accuracy with batch accumulation | I wanted to ask how pytorch handles accuracy (and maybe even loss) logging when we have something like pl.Trainer(accumulate_grad_batches=ACCUMULATIONS).
My training looks like this:
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self(x)
loss = F.cross_entropy(y_hat, y, weig... | @sachinruk Class based metrics have been revamped! Please checkout the documentation for the new interface.
While the metrics package does not directly integrate with the accumulate_grad_batches argument (yet), you should be able to do something like this now:
def training_step(self, batch, batch_idx):
    x, y = batch... | MDEwOkRpc2N1c3Npb24yNzkyMjEx | https://github.com/PyTorchLightning/pytorch-lightning/discussions/5805#discussioncomment-339757 |
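To make the interaction with accumulate_grad_batches concrete, here is a framework-free sketch of the update/compute split that class-based metrics use: counts are accumulated across micro-batches and only turned into a value when you ask for it. The class name and numbers are made up for illustration:

```python
class RunningAccuracy:
    """Minimal update/compute-style metric: accumulate counts, compute on demand."""
    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, preds, targets):
        self.correct += sum(int(p == t) for p, t in zip(preds, targets))
        self.total += len(targets)

    def compute(self):
        return self.correct / self.total

acc = RunningAccuracy()
# Two micro-batches accumulated before a single optimizer step:
acc.update([1, 0, 1], [1, 1, 1])  # 2 of 3 correct
acc.update([0, 0], [0, 1])        # 1 of 2 correct
print(acc.compute())  # 3 / 5 = 0.6
```

Because the accumulated counts span all micro-batches, the computed value is the true accuracy over the effective batch, regardless of how gradient accumulation splits it.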
How to not create lightning_logs when using a external logger like wandb ? | I would like my wandb logger to just place their data under wandb dir, and checkpointcallback to save ckpts under dir_path I specified.
And I don't want pl to create lightning_logs and files under it, but I can't set logger=False b/c I use a logger. Is there any suggestion ? | You can set the save_dir in WandbLogger, something like
logger = WandbLogger(save_dir="wandb", ...)
Trainer(logger=logger, ...)
This should work (haven't tested it).
Then your logs and checkpoints will save to two different locations. | MDEwOkRpc2N1c3Npb24zMjkyOTQz | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6685#discussioncomment-568960 |
How to apply a nn.Module (i.e. CNN) across an axis (i.e. Video input) in a parallelizable way | Hi, I’m trying to apply CNN to each image in a video. Currently, my implementation uses a for loop and torch.cat where I take each image and apply the CNN module in the loop. But clearly, this is sequential and I don’t see why it can’t be parallelized in theory since all images are independent from each other.
However,... | You can simply convert your (batch_size, seq_len, channel, height, width) tensor into an (batch_size*seq_len, channel, height, width) tensor, run your model and then reshape your output back:
batch_size, seq_len, channel, height, width = 5, 10, 3, 28, 28 # just random picked
input = torch.randn(batch_size, seq_len, cha... | MDEwOkRpc2N1c3Npb24zMjMzMTgw | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6135#discussioncomment-401729 |
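The flatten-apply-reshape trick above can be verified end to end. This sketch uses NumPy and a stand-in per-image function (a spatial mean) in place of a real CNN, but a torch tensor's .reshape behaves the same way:

```python
import numpy as np

batch_size, seq_len, channel, height, width = 2, 4, 3, 8, 8
video = np.random.randn(batch_size, seq_len, channel, height, width)

def per_image_model(imgs):
    """Stand-in for a CNN: maps (N, C, H, W) images to (N, C) features."""
    return imgs.mean(axis=(2, 3))

# Fold the time axis into the batch axis, run the model once, unfold again.
flat = video.reshape(batch_size * seq_len, channel, height, width)
out = per_image_model(flat).reshape(batch_size, seq_len, channel)

# Equivalent (but sequential) per-frame loop for comparison:
looped = np.stack([per_image_model(video[:, t]) for t in range(seq_len)], axis=1)
assert np.allclose(out, looped)
```

The assertion holds because reshape keeps the (batch, time) ordering intact, so row b*seq_len + t of the flattened tensor is exactly frame t of video b.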
How to flag certain modules as non-deterministic | Hey,
Question: How can I set a module/layer in my model-class to always be non-deterministic (irrespective of the deterministic flag in pl.Trainer())?
Context: I use pl to train a simple AutoEncoder that uses bilinear upscaling in the decoder part. For debugging, I use the deterministic flag of the pl.Trainer(). Howeve... | Hi @dsethz!
The error comes from PyTorch, not from Lightning, and I think it's not (and shouldn't be) feasible even in pure PyTorch: the flag is for reproducibility, and if you allow randomness in certain layers you can't reproduce the same result anymore.
https://pytorch.org/docs/stable/notes/randomness.html | D_kwDOCqWgoM4AO0KS | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11963#discussioncomment-2340961 |
Remove parameters from autograd backward hook | Hello,
I am trying to remove some layers from DistributedDataParallel to prevent them being synchronized between devices.
I spent last 6 hours googling, and I have found out, that there's a attribute _ddp_params_and_buffers_to_ignore which can be set to module that is passed to DistributedDataParallel constructor. I've... | Okay, It was my mistake, I deeply apologize for wasting your time there. The layer indeeds gets removed from the DistributedDataParallel (or rather not even getting there).
But I've found another error when trying to set the _ddp_params_and_buffers_to_ignore inside the LightningModule, so I've created issue here - #118... | D_kwDOCqWgoM4AOwH- | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11835#discussioncomment-2150353 |
Datamodule without Trainer (for inference) | In my usage, LightningDatamodule is currently encapsulating batch collation, moving to device, and batch transformations (via on_after_batch_transfer).
However, when I want to do inference on a bunch of inputs, I want the same steps to happen. What is the recommended way to achieve this? The problem is that Trainer dri... | Why would you not want to use the Trainer?
You can now use trainer.predict for inference (will be in beta after the 1.3 release) | MDEwOkRpc2N1c3Npb24zMjcwMTQ5 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6502#discussioncomment-637641 |
Access datamodule in custom callbacks | I want to create a custom Callback class where I can access certain attributes from my DataModule and log/save them before the start of the train step.
I am little confused on how to do this. Can anyone help me out with a quick snippet? Thanks! | Something like this should work :]
from pytorch_lightning.callbacks import Callback

class MyCallback(Callback):
    def __init__(self, ...):
        ...

    # hook for doing something with your datamodule before the training step
    def on_train_batch_start(self, trainer, *args, **kwargs):
        dm = trainer.datamodule... | MDEwOkRpc2N1c3Npb24zMjU1MDkx | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6412#discussioncomment-451794 |
Saving checkpoint, hparams & tfevents after training to separate folder | Thanks to all the contributors of PyTorch Lightning for a fantastic product!
I want to save a checkpoint, hparams & tfevents after training finishes. I have written this callback:
class AfterTrainCheckpoint(pl.Callback):
"""
Callback for saving the checkpoint weights, hparams and tf.events after training finish... | Hey @dispoth!
I'd say use on_fit_end instead, since ModelCheckpoint saves its last checkpoint in on_train_end, so an earlier hook won't guarantee that the ckpt exists when your callback runs.
You can copy the log files directly; they are available inside trainer.log_dir.
Yes, they will be available during both on_train_e... | D_kwDOCqWgoM4AOuUN | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11779#discussioncomment-2124174 |
How to “lightningify” the official PyTorch sentiment analysis tutorial? | Hi, I'm trying to refactor the official NLP (sentiment analysis) tutorial, using Lightning in order to take advantage of things like early stopping etc.
I'm taking my first steps, and the main hurdle is the creation of a Lightning module, in particular coding the training_step.
What I came up with so far is
class LitTextCl... | @davidefiocco Hi, I think you're trying to instantiate the criterion class with output and cls. You need to instantiate it in advance:
- self.criterion = criterion
+ self.criterion = criterion() | MDEwOkRpc2N1c3Npb24zMjQyMjUy | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6226#discussioncomment-412814 |
Compute Loss After Sharing Tensor Across GPUs | I’m currently attempting to make a Multi-GPU-supported CLIP training script, but am hitting a wall. I need to compute two matrices that are composed of whole batch statistics before I can compute loss. Namely, I need to compute the image and text embeddings of an entire batch. Only then can I compute the sub batch loss... | The LightningModule method all_gather(Tensor) solved it all! | MDEwOkRpc2N1c3Npb24zMzcxOTM0 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7602#discussioncomment-761147 |
Data augmentation and reload_dataloaders_every_epoch | Hi!
I'm training my neural network with Pytorch Lightning and MONAI (a PyTorch-based framework for deep learning in healthcare imaging). Because my training dataset is small, I need to perform data augmentation using random transforms.
Context
I use MONAI's CacheDataset (basically, a PyTorch Dataset with cache mechanis... | Did Lightning add a cache mechanism to load the data once?
No, the flag just means that we call LightningModule.train_dataloader() every epoch if enabled, thus creating a new DataLoader instance.
Must I use the reload_dataloader_every_epoch flag to do data augmentation or else my random transforms will only be applie... | MDEwOkRpc2N1c3Npb24zNDcwNjIz | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8504#discussioncomment-1035364 |
How to analysis the time cost of each part | What is your question?
Hi, I'm trying to implement my project with your framework. I'd like to measure the time each part costs to make full use of the GPUs, but puzzlingly, the time I measure myself is not the same as what tqdm reports. Could you give me some advice about what happened? From the progress bar, the time ... | tqdm time is a running average. You have to let it warm up for a bit before it converges to the correct time. | MDEwOkRpc2N1c3Npb244MjIwNw== | https://github.com/PyTorchLightning/pytorch-lightning/discussions/112#discussioncomment-238010 |
Where is EarlyStopping searching for metrics? | Where does EarlyStopping search for metrics?
Code
def validation_end(self, outputs):
    ...
    metrics = {
        'val_acc': val_acc,
        'val_loss': val_loss
    }
    ...
    output = OrderedDict({
        'val_acc': torch.tensor(metrics['val_acc']),
        'val_loss': torch.ten... | If I understand correctly it is a known issue. Please look at #490. #492 fixes this in master. | MDEwOkRpc2N1c3Npb24yNzkyNTU3 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/5822#discussioncomment-339817 |
How can I save and restore the trained model when I call fit() at pytorch_lightning every time? | Hi, everyone!
I want to load the model from a checkpoint when starting training, and save it to disk automatically when every epoch finishes.
Is there a nice way to do that correctly?
Should we modify the Trainer code, or just use a special hook? | For loading the model, are you looking to load just the weights? If so, take a look at LightningModule.load_from_checkpoint: https://pytorch-lightning.readthedocs.io/en/latest/common/weights_loading.html#checkpoint-loading
Otherwise, if you want to load the whole training state (for example, including the optimizer s... | D_kwDOCqWgoM4ANo9F | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9551#discussioncomment-1336681 |
trainer.check | Hi pytorch-lightning friends! :) I'd like to start a discussion about trainer.check api.
Basically, this API should check that all the user-defined classes (models, data, callbacks, ...) are programmatically sound. I propose using inspect to check for function correctness; here's my PR proposal at #3244 | Locking in favor of #6029 | MDEwOkRpc2N1c3Npb24xNjMyMDI5 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/5407#discussioncomment-510814 |
Manually averaging metrics when logging | I have a metric from torchmetrics as follows:
Accuracy(
num_classes=self.model.out_channels,
average='none',
ignore_index=self.ignore_index
)
Obviously I cannot log this; however, I don't want to set average to any aggregation. I want to log its mean in training_step but preserve the class-wise met... | From this comment I had the idea that:
if the .compute() method is called, the internal state is reset.
However, this behavior has changed: now calling compute() does not reset the state of the metrics. See PR #5409 | D_kwDOCqWgoM4APOIS | https://github.com/PyTorchLightning/pytorch-lightning/discussions/12636#discussioncomment-2516539 |
Control log_dict's on_epoch log name | Hey!
I tried to change to using the new way of logging through MetricCollection and self.log_dict instead of logging every metric through self.log on training step and test_epoch_end. However, each metric is then logged as [metric_name]_epoch_[epoch_number] which creates a new graph for every epoch instead of allowing ... | Hi @FluidSense,
Could you try updating the MetricCollection during the _step method and then logging it in the _epoch_end method? Something like:
def training_step(self, batch, batch_idx):
    logits = self(x)
    self.train_metrics.update(logits, y)

def train_epoch_end(self, outputs):
    self.log_dict(self.train_metrics.c... | MDEwOkRpc2N1c3Npb24zMzM4MDM3 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7209#discussioncomment-660597 |
Train Discriminator less than Generator | I want to train my discriminator ones per 10 iterations but couldn't figure out how to implement it with lightning. Do you have any advice on this? | Check out the optimization docs. There are a few examples that may help you. https://pytorch-lightning.readthedocs.io/en/latest/common/optimizers.html#step-optimizers-at-arbitrary-intervals | MDEwOkRpc2N1c3Npb24zMjczNjgz | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6526#discussioncomment-607335 |
embedding manual control location CPU vs GPU | I would like to create an embedding that does not fit in the GPU memory
but can fit in the CPU memory.
Select the subset for a batch, send it to the GPU at the start of mini-batch.
GPU_tensor = embedding(idx)
Then at the end of training update the CPU embedding from the GPU embedding.
I am using
pl.Trainer( gpus=[0,1],... | Duplicate of #6725 | MDEwOkRpc2N1c3Npb24zMjk3NjM1 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6726#discussioncomment-637645 |
TypeError: setup() got an unexpected keyword argument 'stage' | Hi,
When calling train.fit(model) I get the following TypeError. I got a couple of more similar errors but could find solutions to them (mainly due to the newer version of pl), but I could not find any fix for following error:
File "train.py", line 540, in <module>
main(args)
File "train.py", line 506, in main
... | Can you share your LightningModule code? Are you overriding the setup function? If so, are you overriding it with this signature?
pytorch-lightning/pytorch_lightning/core/hooks.py, line 395 (commit 03bb389):
def setup(self, stage: Opt... | MDEwOkRpc2N1c3Npb24zMzkyNDAw | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7800#discussioncomment-813966 |
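The mismatch can be reproduced without Lightning: the trainer calls `setup(stage=...)`, so an override that omits the parameter raises exactly this `TypeError`. A minimal illustration:

```python
class BadModule:
    def setup(self):  # missing the `stage` parameter
        return "ready"

class GoodModule:
    def setup(self, stage=None):  # matches the hook signature
        return f"ready for {stage}"

try:
    BadModule().setup(stage="fit")
    raised = False
except TypeError:  # setup() got an unexpected keyword argument 'stage'
    raised = True

result = GoodModule().setup(stage="fit")
```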
Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. | Hi, I started noticing the following warning message after setting up a new conda environment with Pytorch 1.8.1, which is an update from my previous environment that uses Pytorch 1.7.0.
Epoch 0: 0%|
[W reducer.cpp:1050] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unus... | Hi @athenawisdoms the docs here cover how you can disable find_unused_parameters and speed up your DDP training
https://pytorch-lightning.readthedocs.io/en/latest/benchmarking/performance.html#when-using-ddp-set-find-unused-parameters-false | MDEwOkRpc2N1c3Npb24zMzAwMjAw | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6761#discussioncomment-551638 |
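A configuration sketch of that flag (the import path is the one used in the 1.x releases this thread targets; the `gpus`/`accelerator` values are illustrative):

```python
import pytorch_lightning as pl
from pytorch_lightning.plugins import DDPPlugin

trainer = pl.Trainer(
    gpus=2,
    accelerator="ddp",
    plugins=[DDPPlugin(find_unused_parameters=False)],  # silences the warning, skips the extra graph traversal
)
```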
Accessing DataModule or DataLoaders within model hooks | Hey, as the title says, I want to access the DataModule or the DataLoader inside the on fit start hook. Is this possible and how can I do it? To be more specifc I want to access my model, when I have access to my DataModule, to get a batch of data, then use it to apply some pruning algorithm on my model. | self.datamodule or self.trainer.train_dataloader in the LightningModule | MDEwOkRpc2N1c3Npb24zNDI4MDcz | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8114#discussioncomment-914199 |
[RFC] Thoughts on `on_init_start` and `on_init_end` hooks | These hooks are called when trainer initialization begins and ends, before the model has been set, essentially allowing the user to modify the Trainer constructor. Should we be giving the user this much control over Trainer constructor? Are there scenarios where this is needed? Or can we deprecate these hooks?
... | @carmocca @tchaton @awaelchli do you know how these hooks are used? Have you seen any examples of these being used by the community? These hooks go way way back, but I can't think of when they'd be needed given the user "owns" the Trainer initialization. It's also unclear when on_init_start actually happens: does that ... | D_kwDOCqWgoM4AOHL0 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10677#discussioncomment-1734637 |
How to put all but some vars to GPU | By default, in Lightning everything that is returned by a dataset is collated by the data loader and shipped to the same device.
However, I am frequently in the situation where I have let's say x, y which are tensors and something like y_semantic which is in principle related to y but of higher data type, say a diction... | Dear @Haydnspass,
You have several ways to do this:
Create a custom Data / Batch Object and implement the .to function to move only what is required.
Simpler: Override LightningModule.transfer_batch_to_device hook and add your own logic to move only x, y to the right device.
Best,
T.C | MDEwOkRpc2N1c3Npb24zMzgyODA2 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7725#discussioncomment-786711 |
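A sketch of option 2 using plain tensors (in a LightningModule the hook also receives `dataloader_idx`; the tuple layout and the `y_semantic` dict are assumptions taken from the question):

```python
import torch

def transfer_batch_to_device(batch, device):
    # Move only the tensors; leave the metadata dict (y_semantic) untouched.
    x, y, y_semantic = batch
    return x.to(device), y.to(device), y_semantic

batch = (torch.zeros(2, 3), torch.ones(2), {"ids": [1, 2]})
x, y, meta = transfer_batch_to_device(batch, torch.device("cpu"))
```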
Getting error after completion of 1st epoch | I am training an image classifier on tpu and am getting errors after execution of the first epoch.
https://colab.research.google.com/drive/1Lgz0mF6UiLirsDltPQH5HtctmVvb7gHm?usp=sharing | I don't see an error in the provided link.
Is this still relevant? | MDEwOkRpc2N1c3Npb24zNDUxOTU2 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8339#discussioncomment-1040381 |
fast_dev_run does not execute pl.LightningModule.test_step() | I may be misunderstanding something about the trainer argument fast_dev_run. When I provide fast_dev_run=1 and I add a print statement in my LightningModule's test_step function, the print statement does not appear. In addition, I can see a progress bar for my training set and validation set, but no progress bar appear... | fit only runs training & validation, not testing.
trainer.test runs the test_step | D_kwDOCqWgoM4AO6Tv | https://github.com/PyTorchLightning/pytorch-lightning/discussions/12168#discussioncomment-2275238 |
About the Weight Initialization in PL | Hi,
I am trying to use BERT for a project. The pretrained BERT model is part of my model. I am wondering how PL will initialize the model weights. Will it overwrite the pretrained BERT weights?
Thanks. | lightning doesn’t do any magic like this under the hood. you control all the weights and what gets initiated | MDEwOkRpc2N1c3Npb24yNzkyNTEy | https://github.com/PyTorchLightning/pytorch-lightning/discussions/5816#discussioncomment-339800 |
Option for disable tf32 | Hi, is there a way in trainer to disable tf32 for ampere architecture? It's motivated by this discussion:https://discuss.pytorch.org/t/numerical-error-on-a100-gpus/148032/2
cc @justusschock @kaushikb11 @awaelchli @Borda @rohitgr7 | Hi @dnnspark! Simply setting the flags in your script doesn't work?
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False | D_kwDOCqWgoM4APM3c | https://github.com/PyTorchLightning/pytorch-lightning/discussions/12601#discussioncomment-2498126 |
Clarification on reload_dataloaders_every_epoch | I have a PyTorch Lightning DataModule instance that defines train_dataloader, val_dataloader, and test_dataloader.
Currently using a custom callback to reload the train_dataloader that will resample the data.
I saw that there is a Trainer flag called reload_dataloaders_every_epoch and soon to be reload_dataloaders_ever... | Only the train and validation dataloaders:
pytorch-lightning/pytorch_lightning/trainer/training_loop.py, lines 168-170 (commit e4f3a8d):
# reset train dataloader
if epoch != 0 and self.train... | MDEwOkRpc2N1c3Npb24zMjg2MjM2 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6635#discussioncomment-637654 |
CUDA OOM during validation of first epoch | hi all,
My model validation code (see below) appears to leak memory which leads to a rapid increase in GPU memory usage and, eventually, to an OOM error right before the validation loop is about to complete (about 90% done or so). CUDA memory usage hovers around 8-9GB during training, then increases rapidly to ca. 15+G... | Dear @mishooax,
You are returning the batch from the validation_step, which would be stored. As it is currently on the GPU, after X batches, you would get a OOM.
Unless you need the batch on epoch end, I would recommend to not return anything from the validation_step. | D_kwDOCqWgoM4AONtA | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10959#discussioncomment-1764585 |
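A sketch of the fix: return (or log) only a detached scalar instead of the batch, so nothing GPU-resident is stored until epoch end (the loss computation below is a placeholder):

```python
import torch

def validation_step(batch, batch_idx):
    loss = batch.float().mean()  # stand-in for the real validation loss
    # Do NOT return the batch: every returned object is kept until epoch end,
    # so returned GPU tensors accumulate across the whole validation loop.
    return {"val_loss": loss.detach()}

out = validation_step(torch.ones(4, 4), 0)
```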
Does Pytorch-Lightning have a multiprocessing (or Joblib) module? | ❓ Questions and Help
What is your question?
I have been googling around but can't seem to find if there is a multiprocessing module available in Pytorch-Lightning, just like how Pytorch has a torch.multiprocessing module.
Does anyone know if Pytorch-Lightning has this (or a similar Joblib) module? I am looking for a Py... | PL is just PyTorch under the hood, so you can use torch with joblib directly...
in case you want to train distributed on CPU only, you can use the ddp_cpu backend 🐰
When using multiple optimizers, TypeError: unsupported operand type(s) for /: 'NoneType' and 'int' | I want to build a Super-Resolution network with multiple optimizers.
The code is below:
def configure_optimizers(self):
d_optimizer = torch.optim.Adam([{'params': self.parameters()}], lr=self.lr, betas=(0.5, 0.9))
g_optimizer = torch.optim.Adam([{'params': self.parameters()}], lr=self.lr, betas=(0.5, 0.... | Hi @choieq, training_step needs to return one of:
Tensor - The loss tensor
dict - A dictionary. Can include any keys, but must include the key 'loss'
None - Training will skip to the next batch
https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html#training-step | MDEwOkRpc2N1c3Npb24zMjk2NjUz | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6708#discussioncomment-564722 |
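A sketch of the dict form for the two GAN losses from the question (the `d_loss`/`g_loss` names and values are placeholders):

```python
import torch

def training_step(batch, batch_idx):
    d_loss = torch.tensor(1.0)  # discriminator loss (placeholder)
    g_loss = torch.tensor(2.0)  # generator loss (placeholder)
    # A dict return MUST contain the key "loss"; extra keys are allowed
    # and can be used for logging.
    return {"loss": d_loss + g_loss, "d_loss": d_loss, "g_loss": g_loss}

out = training_step(None, 0)
```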
Saving and loading HF transformer model fine tuned with PL? | I am fine-tuning hugging face transformer models, essentially exactly as shown in the following example found in the pytorch lightning docs:
https://pytorch-lightning.readthedocs.io/en/latest/notebooks/lightning_examples/text-transformers.html
Where we instantiate the LightningModule doing something like this:
class GL... | Dear @brijow,
You should be using the second approach. An even better one would be to rely on ModelCheckpoint to save the checkpoints and provide Trainer(resume_from_checkpoint=...) for reloading all the states.
Best,
T.C | MDEwOkRpc2N1c3Npb24zNTE3NDMw | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8893#discussioncomment-1188532 |
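A configuration sketch of that recommendation (the checkpoint path and `model`/`dm` names are placeholders; `resume_from_checkpoint` is the Trainer flag from the releases discussed here, later replaced by `ckpt_path` in `fit`):

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_cb = ModelCheckpoint(monitor="val_loss", save_top_k=1)
trainer = pl.Trainer(
    callbacks=[checkpoint_cb],
    resume_from_checkpoint="checkpoints/last.ckpt",  # restores model, optimizer and trainer state
)
trainer.fit(model, datamodule=dm)  # model / dm: your LightningModule and DataModule
```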
AttributeError: Can't get attribute 'DataReader' on <module '__main__' (built-in)> | Here is DataReader
class DataReader(torch.utils.data.Dataset):
def __init__(self, df):
super(DataReader,self).__init__()
self.df = df
def __len__(self):
'Denotes the total number of samples'
return len(self.df)
def __getitem__(self, index):
'Generates one sample of data'
... | Do I need to write the whole code under `if __name__ == "__main__"`, or just
`trainer.fit(model)`
…
On Mon, Aug 16, 2021 at 2:00 PM thomas chaton ***@***.***> wrote:
Did you try to add
if __name__ == "__main__"
to your script ? | MDEwOkRpc2N1c3Npb24zNTE4OTIx | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8898#discussioncomment-1193114
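The usual pattern: keep `DataReader` at module level (so spawned DataLoader worker processes can import it by name) and put only the execution under the guard. A runnable skeleton (`main`'s body is a placeholder for building the dataset, model and trainer):

```python
class DataReader:  # defined at module level so worker processes can pickle it
    pass

def main():
    # build the dataset, model and trainer here, then call trainer.fit(model)
    return "fit finished"

if __name__ == "__main__":
    main()
```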
"Error: 'MyDataModule' object is not iterable" when calling trainer.predict(model, data) | Hi!
I've trained a model successfully. I now want to have a look at the model predictions. I've overridden the predict_dataloader method in my DataModule:
def predict_dataloader(self):
pred_loader = torch.utils.data.DataLoader(
self.val_ds, batch_size=1, num_workers=4)
return pred_loader
Th... | Sorry! We did not add support for trainer.predict(model, datamodule). We'll do it asap!
You need to do trainer.predict(model, datamodule=datamodule) | MDEwOkRpc2N1c3Npb24zMzU1MjUz | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7425#discussioncomment-709205 |
How to customize training loop? | I want to marry lightning and https://pytorch-geometric.readthedocs.io/en/latest/ or in particular https://pytorch-geometric-temporal.readthedocs.io/en/latest/
When following the basic examples on their website such as for the ChickenpoxDatasetLoader() a RecurrentGCN is constructed. For me being a total newbie for ligh... | just found https://github.com/benedekrozemberczki/pytorch_geometric_temporal/blob/master/examples/lightning_example.py - seems to be the answer to my question - almost, except it is not covering how to handle the iteration over the temporal snapshots. | MDEwOkRpc2N1c3Npb24zMzY1OTU1 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7549#discussioncomment-744017 |
ValueError('signal only works in main thread') | Has anyone else run into this error:
ValueError('signal only works in main thread')
I'm running a hyper parameter sweep using Weights and Biases's framework.
Running on a GPU on Google Colab which causes all launched runs to fail. Running it locally (Mac OS) prompts 'signal only works in main thread' to be printed to s... | @max0x7ba @borisdayma The issue has been with #10610 | D_kwDOCqWgoM4ANqDI | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9589#discussioncomment-1669385 |
forward() takes 1 positional argument but 2 were given | When I try to train my model, I get a forward() takes 1 positional argument but 2 were given error. This is my code; I want to know where the mistake is, thanks!!
I guess the error is in the UpSample module, but I don't know why...
class DownSample(nn.Module):
def __init__(self, in_planes: int, out_planes: int, kernel_size: i... | Hi, have a look at the full stack trace so you know which of the forward methods of these different nn.Modules is meant.
have you verified that
u(x)
works? | MDEwOkRpc2N1c3Npb24zNDI1OTU0 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8091#discussioncomment-918111 |
Logging RL results and tracking them with ModelCheckpoint(monitor=...) | I am using Pytorch Lightning in an RL setting and want to save a model when it hits a new max average reward. I am using the Tensorboard logger where I return my neural network loss in the training_step() using:
logs = {"policy_loss": pred_loss}
return {'loss':pred_loss, 'log':logs}
And then I am saving my RL environm... | I have multiple comments that I did not verify yet but they might help
If I'm not mistaken, self.log only works within a selection of hooks currently. I suggest you try to move the relevant code to training_epoch_end where self.log should work correctly.
set the monitor key in the ModelCheckpoint(monitor=) explicitly.... | MDEwOkRpc2N1c3Npb24zMTc5NTk3 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/5883#discussioncomment-354870 |
multiple on_train_epoch_start callbacks but only one on_train_epoch_end? | I thought the number of on_train_epoch_start and on_train_epoch_end should be equal to the number of epochs. But when I passed the following callback function:
class MyPrintingCallback(Callback):
def on_train_epoch_start(self, trainer, pl_module):
print('Train epoch start for epoch: ', pl_module.current_ep... | It turns out that the issue is because I was chaining many data loaders with itertools.chain() which was called only once by lightning. | D_kwDOCqWgoM4AN637 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10140#discussioncomment-1536062 |
import pytorch_lightning as pl does not work on colab | I can install pl in colab by
!pip install pytorch-lightning==1.2.2 --quiet
but I cannot import it by
import pytorch_lightning as pl
I am thankful if you help me with this issue. | Please upgrade to version 1.2.3 (released yesterday) where this issue was solved. | MDEwOkRpc2N1c3Npb24zMjU1NDQ5 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6425#discussioncomment-462173 |
checkpoint every module in a different ckpt file | Hi!
I am currently working on a project where I would like to checkpoint my model in separate pieces.
My model is composed of:
a backbone, which is itself composed of 3 modules
several heads, each one being a module
I would like to save one ckpt with the backbone and one ckpt per head. I understand that I s... | I'd suggest using checkpoint_io plugin for your use-case. | D_kwDOCqWgoM4AOKiQ | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10840#discussioncomment-1722497 |
Scheduler.step() only called on the end of validation | What I did:
def configure_optimizers(self):
optimizer = torch.optim.AdamW(
self.parameters(),
lr=self.hparams.learning_rate,
eps=1e-5,
)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
optimizer,
max_lr=self.hparams.learning_rate,
epochs=self.trainer.max_epo... | That is the default behaviour of learning rate schedulers, that they step at the end of the training epoch.
Can I ask you what you are trying to achieve?
If you want the learning rate scheduler to step after each batch, you can read more about what the output of configure_optimizers should look like here: https://pytor... | MDEwOkRpc2N1c3Npb24zMjU4OTQz | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6450#discussioncomment-462160 |
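To step after every batch instead, return the scheduler in dict form with `"interval": "step"`. A sketch (in a LightningModule you would drop the `model` argument and use `self.parameters()`; the hyperparameter values are illustrative):

```python
import torch

def configure_optimizers(model, max_epochs=3, steps_per_epoch=100):
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, eps=1e-5)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer, max_lr=1e-3, epochs=max_epochs, steps_per_epoch=steps_per_epoch
    )
    # "interval": "step" makes Lightning call scheduler.step() after every
    # optimizer step instead of once per epoch (the default).
    return {
        "optimizer": optimizer,
        "lr_scheduler": {"scheduler": scheduler, "interval": "step"},
    }

cfg = configure_optimizers(torch.nn.Linear(2, 2))
```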
How to do this test in a lightning way? | My model has the property that I can prepare the test data in multiple different ways, which results in a set of equally plausible predictions for each data point (one prediction for each way of preparing the test data). By combining these predictions, it is possible to slightly boost overall performance on the test se... | Maybe the prediction api can help you (currently beta, will be released in version 1.3).
You can have multiple predict dataloaders (your different test data). If you do
predictions = trainer.predict(model, predict_dataloaders=[data1, data2, data3, ...])
and it returns the predictions grouped by the dataloader index. Th... | MDEwOkRpc2N1c3Npb24zMzE0MDUx | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6938#discussioncomment-647200 |
Confusion in training_step_end() API | Hi! I am playing around with pytorch-lightning.
Problem
I tried to use 2 gpus and manually merge training loss described in Lightning in 2 steps.
But when I call training_step_end(), it just gives me only one gpu's loss, not all gpus loss.
Question
Do I have to reduce loss myself in training_step_end()?
My code
import... | It's mentioned in the doc that this configuration works only for DP or DDP2, but in your code, you are using DDP so there will only be 1 loss item since gradient sync happens within DDP so each device has its own loss and backward call and won't require manual reduction of loss across devices. | D_kwDOCqWgoM4ANrAK | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9617#discussioncomment-1882531 |
Run specific code only once (which generates randomized values) before starting DDP | Hi. I have a function which generates a set of random values (hyperparameters) which are then used to create my model. I want to run this function only once, then use it to create my model and then start ddp training on this model.
However, with the current setup, when I start ddp, the randomize function gets called ag... | Hey @Gateway2745,
You could do this
from pytorch_lightning.utilities.cli import LightningCLI
from unittest import mock
import optuna
config_path = ...
class MyModel(LightningModule):
def __init__(self, num_layers):
...
def objective(trial):
num_layers = trial.suggest_uniform('num_layers', 10, 100)
... | MDEwOkRpc2N1c3Npb24zNTQwNzY5 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9134#discussioncomment-1240962 |
how to use Apex DistributedDataParallel with Lightining? | I was wondering if there's a way to use apex.parallel.DistributedDataParallel instead of pytorch native DistributedDataParallel. (I am trying to reproduce a paper that used Apex DDP and apex mixed precision and i am getting lower results using pytorch native one) | Here is a quick draft of what you could try:
from pytorch_lightning.plugins.training_type import DDPPlugin
from apex.parallel import DistributedDataParallel
class ApexDDPPlugin(DDPPlugin):
def _setup_model(self, model: Module):
return DistributedDataParallel(module=model, device_ids=self.determine_ddp_dev... | D_kwDOCqWgoM4AOMiz | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10922#discussioncomment-1751106 |
Confusion about NeptuneLogger.save_dir implementation | In the docstring it says the save_dir is None, but then why does it return a path? Should we change either the docstring, or the implementation here?
pytorch-lightning/pytorch_lightning/loggers/neptune.py, lines 516-524 (commit 9d8faec):
... | Hi @daniellepintz
Prince Canuma here, a Data Scientist at Neptune.ai
I will let the engineering team know about this,
By default, Neptune will create a '.neptune' folder inside the current working directory. In the case of #6867 it changes the model checkpoint path to be '.neptune' folder in case the user doesn't defin... | D_kwDOCqWgoM4AOtvt | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11766#discussioncomment-2274385 |
Combine outputs in test epochs when using DDP | I'm training a model across two GPUs on patient data (id). In my test steps, I output dictionaries, which contain the id, as well as all the metrics. I store these (a list with a dict per id) at the end of the test epoch, so I can later on statistically evaluate model performances.
I'm experiencing a problem with the t... | all_gather is different from all_reduce. It doesn't do any math operation here.
sort of like:
all_gather -> collect outputs from all devices
all_reduce -> in general, collect outputs from all devices and reduce (apply a math op)
all_gather isn't working for you? | D_kwDOCqWgoM4AOSKC | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11086#discussioncomment-1823377 |
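The semantic difference can be simulated with stacked per-rank tensors (inside a LightningModule you would call `self.all_gather(outputs)` rather than these stand-in helpers):

```python
import torch

def all_gather_sim(per_rank):
    # all_gather: collect from all ranks, no math op -> extra leading world dim
    return torch.stack(per_rank)

def all_reduce_sim(per_rank):
    # all_reduce: collect AND reduce (here: sum) across ranks
    return torch.stack(per_rank).sum(dim=0)

rank_outputs = [torch.tensor([1.0, 2.0]), torch.tensor([3.0, 4.0])]
gathered = all_gather_sim(rank_outputs)   # shape: (world_size, ...)
reduced = all_reduce_sim(rank_outputs)    # shape: (...)
```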
Lightning Module Loaded From Checkpoint Generates Different Output Each Time | I'm trying to gain some confidence in a model that seems to be training fine.
As a simple sanity check I'm trying to make sure I can load then test a checkpoint with the same input, expecting to be able to produce the same output each and every time (I'm using the same input and checkpoint each time so I expect the out... | Turns out that I was doing something a little different in the actual code than:
my_module = MyLightningModule.load_from_checkpoint(ckpt_path)
When I do this exactly, things work as expected :) | D_kwDOCqWgoM4APFBW | https://github.com/PyTorchLightning/pytorch-lightning/discussions/12397#discussioncomment-2408892 |
__about__.py "version" field automatically updated (unwanted behavior) | When editing code in VSCode, the pytorch_lightning/__about__.py file keeps getting automatically updated
__version__ = "1.5.0dev" -> __version__ = "20210827"
like in 5f4f3c5#r697656347
Does anyone know how to stop this from happening? Thanks! | After a few days of my investigation, it turned out that it's not from any extensions but from our script in .github/:
pytorch-lightning/.github/prepare-nightly_version.py, line 13 (commit 83ce1bf):
print(f"prepare init '{_PATH_INFO}'... | MDEwOkRpc2N1c3Npb24zNTQ0NDA4 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9181#discussioncomment-1500203 |
How set number of epochs | How do I set the number of epochs to train?
What have you tried?
Looking for documentation.
Looking for examples. | Up-to-date link:
https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#max-epochs | MDEwOkRpc2N1c3Npb24yNzkyNTMw | https://github.com/PyTorchLightning/pytorch-lightning/discussions/5818#discussioncomment-459751 |
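For reference, a configuration sketch of the flag from that page (`model` stands for your LightningModule):

```python
import pytorch_lightning as pl

trainer = pl.Trainer(max_epochs=10)  # train for at most 10 epochs
trainer.fit(model)
```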
Proper way to log things when using DDP | Hi, I was wondering what is the proper way of logging metrics when using DDP. I noticed that if I want to print something inside validation_epoch_end it will be printed twice when using 2 GPUs. I was expecting validation_epoch_end to be called only on rank 0 and to receive the outputs from all GPUs, but I am not sure t... | Hi all,
Sorry we have not got back to you in time, let me try to answer some of your questions:
Is validation_epoch_end only called on rank 0?
No, it is called by all processes
What does the sync_dist flag do:
Here is the essential code:
pytorch-lightning/pytorch_lightning/core/step_result.py: ... | MDEwOkRpc2N1c3Npb24zMjcwMTE3 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6501#discussioncomment-553152
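A sketch of the `sync_dist` flag discussed above (a LightningModule method fragment; the metric name and loss are placeholders):

```python
def validation_step(self, batch, batch_idx):
    loss = self.compute_loss(batch)  # placeholder for your loss
    # sync_dist=True reduces the value across all DDP processes before logging,
    # so each rank logs the same synchronized number.
    self.log("val_loss", loss, sync_dist=True)
```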
When are buffers moved to gpu? | I have an issue with a weighted mse function that I instantiate in the setup, with a buffer as parameter. Something like this:
@torch.jit.script
def weighted_mse_func(weights, y, y_hat):
# weighted regression loss
reg_loss = torch.dot(weights,torch.mean(F.mse_loss(y_hat, y, reduction='none'), dim=0))
return... | It seems that pytorch doesn't move buffers parameters in-place (like it is done for parameters), this results in references to buffers being useless if they are moved from one device to another. This issue is discussed in pytorch/pytorch#43815. | D_kwDOCqWgoM4AO7Ye | https://github.com/PyTorchLightning/pytorch-lightning/discussions/12207#discussioncomment-2311707 |
How to have a silent lr_find() | Is there a way to make lr_find not print anything while searching? | Hey @grudloff,
I don't believe this is supported. You could either capture the logs or contribute the feature :)
Best,
T.C | D_kwDOCqWgoM4ANrmA | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9635#discussioncomment-1368974 |
Loss Module with inner Network | I have a loss module which is part of my lightning module with its own inner pretrained vgg network.
The problem comes when I am trying to use the checkpoint (that is saved automatically) to resume training or to test my model.
Then I get an Unexpected key(s) in state_dict error which points to the k... | Dear @vasl12,
You have multiple options:
pass strict=False
drop the key before saving the weights. Use on_save_checkpoint hook to access the checkpoint and drop the associated key.
re-create the missing module in your model
Best,
T.C | MDEwOkRpc2N1c3Npb24zNDYzNjQy | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8445#discussioncomment-1020834 |
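A sketch of option 2 (in a LightningModule this is a method taking `self, trainer, checkpoint` depending on version; the `"loss_fn.vgg."` prefix is an assumption — use whatever prefix your loss module's weights carry in the state dict):

```python
def on_save_checkpoint(checkpoint):
    # Drop the pretrained VGG weights so they are never written to the .ckpt;
    # recreate the VGG module when the model is instantiated instead.
    checkpoint["state_dict"] = {
        k: v for k, v in checkpoint["state_dict"].items()
        if not k.startswith("loss_fn.vgg.")
    }

ckpt = {"state_dict": {"net.weight": 1, "loss_fn.vgg.features.0.weight": 2}}
on_save_checkpoint(ckpt)
```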
Checkpoints not getting saved | I'm using the below callback
`checkpoint_callback = pl.callbacks.ModelCheckpoint(
dirpath=other_arguments.output_dir,
monitor="val_loss",
save_top_k=other_arguments.save_top_k,
save_last=other_arguments.save_last,
mode='min'
)
train_params = dict(
accumulate_grad_batches=other_arguments.gradient_accumulation_steps,... | Pass checkpoint_callback to the callbacks argument. checkpoint_callback is a boolean only flag to indicate whether checkpointing should be enabled or not. | MDEwOkRpc2N1c3Npb24zMzg1NzEw | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7745#discussioncomment-795356 |
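A sketch of the corrected Trainer construction (reusing the names from the question; in the 1.x releases discussed here `checkpoint_callback` is only an on/off boolean):

```python
trainer = pl.Trainer(
    callbacks=[checkpoint_callback],  # the ModelCheckpoint instance goes here
    **train_params,
)
```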
How to set Checkpoints to be used in the automatically generated `version_N` directories? | If the TensorBoard logger is set up as shown
logger = TensorBoardLogger(name="MyModel")
checkpoint_callback = ModelCheckpoint(
filename="{epoch}-{step}-{val_loss:.2f}",
monitor="val_loss",
save_top_k=5,
)
trainer = pl.Trainer(
default_root_dir=ROOT_DIR,
callbacks=[checkpoint_callback],
logger=[l... | Hi
For this you need to set the "default_root_dir" in the Trainer, and set the save_dir of the Logger to the same.
This works for me (latest PL version):
from argparse import ArgumentParser
import torch
from torch.nn import functional as F
import pytorch_lightning as pl
from pl_examples.basic_examples.mnist_datamodul... | MDEwOkRpc2N1c3Npb24zMzA1OTk3 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/6821#discussioncomment-568783 |
Question about the log system | Hello, I have some questions about the self.log function and batch_size in the Trainer.
If I have two GPUs and I want to train my model with batch_size 16 per GPU using DDP, what batch_size should I use in the DataModule and what batch_size should I pass to self.log, if I want to calculate my metrics correc... | hey @exiawsh
it should stay as batch_size for a single device only. With DDP if you set batch_size=7, then each device gets the batch of batch_size=7, and effective batch_size increases with the number of devices. Now if you want to log by accumulating metrics across devices, you need to set sync_dist=True. Check out t... | D_kwDOCqWgoM4AOqpi | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11670#discussioncomment-2080711 |
Why is my gpu-util low? | I use one node and 4 GPUs for training, with the DALI dataloader. I don't know why my GPU utilization is low and training is slow: about 1:30 per epoch, and I train for 200 epochs, which will take 5 hours. It's slower than the mmclassification project, which only takes 3.5 hours. Compared to the mmclassification project which ... | When you compare the two implementations, make sure to leave out as many changing variables as possible. For example, since you train with DDP, run it only on 2 GPUs so that you can be sure it's not bottlenecked by CPU. I don't know the Dali data loader very well, but I doubt that they can guarantee a throughput increa... | MDEwOkRpc2N1c3Npb24zMzI2NTUw | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7082#discussioncomment-626240
Should the total epoch size be less when using multi-gpu DDP? | If I have 100 training examples and 100 validation examples, and I run on a single gpu with a batch size of 10, the tqdm bar will show 20 epoch iterations. If I run on 2 gpus with ddp and the same batch size, the tqdm bar will still show 20 epoch iterations, but isnt the effective batch size now 20 instead of 10 becaus... | Hi @jipson7 ,
First of all: You're right, that's how it should be.
We tried to reproduce this, but for us this produced the following (correct) output. Do you have a minimal reproduction example?
Epoch 0: 100%|███████████████████████████████████████████████████████| 10/10 [00:00<00:00, 17.23it/s, loss=-43.6, v_num=272]... | MDEwOkRpc2N1c3Npb24zMzMzNzUx | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7175#discussioncomment-648416 |
Patience reset in EarlyStopping once loss has improved | Hey,
im wondering whether the patience parameter is or can be reset once we have started improving again. Reading the docs it sounds like the patience parameter is the absolute number of steps the loss is allowed to not decrease. Is it possible to have it such that the patience counter is reset to its original value wh... | This is exactly how patience is supposed to work. As you can see here, the counter resets to 0 upon improvement. | MDEwOkRpc2N1c3Npb24zMzk5MTEw | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7849#discussioncomment-832492 |
Error with predict() | Hi @awaelchli and thanks for your time, as you asked in pull requests, i am pinging you here
For other who see this, it's a discussion about Trainer.predict method where it is running BatchNorm Layers, code is below:
https://colab.research.google.com/drive/1jujP4F_prSmbRz-F_wGfWPTKGOmY5DPE?usp=sharing
What is the probl... | Predict takes a dataloader, not a tensor. It still "works" because the trainer just iterates through the batch dimension, but then you get an error later because the input lost the batch dimension, and batch norm doesn't work with batch size 1. | MDEwOkRpc2N1c3Npb24zMzI1MzIx | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7068#discussioncomment-623687 |
Loading checkpoint for LightningModule that defines a system | Systems
The style guide encourages to use systems, like this one
class LitModel(LightningModule):
def __init__(self, encoder: nn.Module = None, decoder: nn.Module = None):
super().__init__()
self.encoder = encoder
self.decoder = decoder
I have problems loading the checkpoint for such modules... | When you reload you need to specify the missing arguments for instantiation:
model = SystemModel.load_from_checkpoint(ckpt_path, encoder=..., decoder=...) | MDEwOkRpc2N1c3Npb24zMzk1MDQ3 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7818#discussioncomment-829954 |
Enabling dropout during trainer.predict | I want to enable dropout during .predict and tried implementing the following:
model.eval()
for m in model.modules():
    if m.__class__.__name__.startswith('Dropout'):
        m.train()
...
trainer.predict(
    model, dataloaders=data_loader, return_predictions=True
)
It seems like .predict is over... | Hey @35ajstern!
You can enable this inside predict_step itself. Check this out: https://github.com/PyTorchLightning/pytorch-lightning/blob/f35e2210e240b443fd4dafed8fe2e30ee7d579ea/docs/source/common/production_inference.rst#prediction-api
This is part of a PR and will be available in the docs once merged. | D_kwDOCqWgoM4AOsPR | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11710#discussioncomment-2102555
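A sketch of the predict_step approach, here factored into a helper that flips only Dropout layers back to train mode (the model and helper names are illustrative):

```python
import torch.nn as nn

def enable_dropout(module: nn.Module) -> None:
    """Switch only Dropout submodules to train mode, leaving the rest in eval."""
    for m in module.modules():
        if isinstance(m, nn.Dropout):
            m.train()

net = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
net.eval()
enable_dropout(net)
assert net[1].training and not net[0].training
```

In a LightningModule, calling `enable_dropout(self)` at the top of `predict_step` keeps predictions stochastic (e.g. for Monte Carlo dropout) without touching BatchNorm statistics.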
Model with best validation accuracy | Is there a way to save the model with the best validation accuracy when using early stopping? I believe right now, the model weights are the weights from the latest snapshot; but i am looking for a way to access the model with the best performance on validation. | by on validation you mean while calling trainer.validate or validation happening within trainer.fit call? | D_kwDOCqWgoM4AN6mz | https://github.com/PyTorchLightning/pytorch-lightning/discussions/10126#discussioncomment-1531522 |
How can one use an external optimizer with LightningCLI? | I would like to use Adafactor as my optimizer with LightningCLI. I've tried the method described in the documentation for custom optimizers but it didn't work. Can anybody tell me how they would train a model with this optimizer using LightningCLI? | Hi! I got it to work in the meantime. I added this to the main file where I call CLI:
import transformers
from pytorch_lightning.utilities.cli import OPTIMIZER_REGISTRY
@OPTIMIZER_REGISTRY
class Adafactor(transformers.Adafactor):
    def __init__(self, *args: Any, **kwargs: Any) -> None:
        super().__init__(*args... | D_kwDOCqWgoM4AOPZa | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11016#discussioncomment-1784783
Custom scheduler | I would like to provide my own learning rate scheduler. What I would love is do is doing this in the lightning style, e.g. implementing some hooks:
class MyScheduler(pl.LightningScheduler):
    ...
    def on_step(self...):
        ...
    def on_epoch(self...):
        ...
Is something like this possible? How do other... | Can you please elaborate on the usage more? For example, what do you want to do inside the on_step and on_epoch methods? | D_kwDOCqWgoM4ANwn_ | https://github.com/PyTorchLightning/pytorch-lightning/discussions/9817#discussioncomment-1428743
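There is no LightningScheduler base class; the usual route is a plain torch scheduler returned from configure_optimizers, where the "interval" key ("step" or "epoch") plays the role of the on_step/on_epoch hooks. A sketch with an illustrative decay schedule:

```python
import torch

class MyScheduler(torch.optim.lr_scheduler.LambdaLR):
    """Custom schedule: multiply the base lr by 0.95 every step."""
    def __init__(self, optimizer):
        super().__init__(optimizer, lr_lambda=lambda step: 0.95 ** step)

opt = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=0.1)
sched = MyScheduler(opt)

# inside a LightningModule:
# def configure_optimizers(self):
#     opt = torch.optim.SGD(self.parameters(), lr=0.1)
#     return {"optimizer": opt,
#             "lr_scheduler": {"scheduler": MyScheduler(opt), "interval": "step"}}
```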
on_train_epoch_end() runs in the middle of epochs | Hi
I'm trying to calculate some metrics and generate some images to save at the end of each epoch.
I put this code in the on_train_epoch_end() (I also tried using a custom callback) but the function seems to be called in the middle of the epochs, approximately 3-4 times per epoch.
Surely this isn't intended behaviour? ... | @inigoval thanks for reporting this. Could you provide a code snippet so that we can reproduce? | MDEwOkRpc2N1c3Npb24zMzUxNTEz | https://github.com/PyTorchLightning/pytorch-lightning/discussions/7375#discussioncomment-697922
What's the difference between `on_step` and `on_epoch` of `pl_module.log` | https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html?highlight=on_epoch%20#pytorch_lightning.core.lightning.LightningModule.log.params.on_epoch
I'm using horovod to train the model. I wonder if on_step and on_epoch average the metrics across all GPUs automatically. In other words, do we need... | Dear @marsggbo,
When using self.log(..., on_step=True), this will compute the metric per step, locally on each process, since synchronisation adds a performance hit.
When using self.log(..., on_step=True, sync_dist=True), this will compute the metric per step across GPUs.
When using self.log(..., on_epoch=True), this will compute the metrics... | MDEwOkRpc2N1c3Npb24zNTA5MDI4 | https://github.com/PyTorchLightning/pytorch-lightning/discussions/8806#discussioncomment-1149287
How to access the strategy of the trainer | Hi, I am trying to make my code invariant to the choice of strategies by being able to compute the global batch size which depends on the strategy. For example, for DDP it is N * batch_size with N being the number of processes.
The use case I can think of is using the global batch size to initialize the optimizer.
trai... | Just out of curiosity, what sort of scheduler/optimizer are you initializing using the global_batch_size? | D_kwDOCqWgoM4AOYR- | https://github.com/PyTorchLightning/pytorch-lightning/discussions/11272#discussioncomment-1880949
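One strategy-agnostic sketch: scale the per-device batch size by the trainer's world size, assuming a DDP-style strategy where each of N processes consumes a full batch per step:

```python
def global_batch_size(per_device_batch_size: int, world_size: int) -> int:
    """N processes each consume per_device_batch_size samples per step."""
    return per_device_batch_size * world_size

assert global_batch_size(32, 4) == 128   # e.g. DDP over 4 processes
# inside a LightningModule: global_batch_size(bs, self.trainer.world_size)
```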