title | labels | bodyText |
|---|---|---|
Support multiple loggers at once | [
"feature",
"help wanted"
] | 🚀 Feature
Users might want to use more than one logger at once |
Checkpointing Names | [
"bug",
"docs"
] | 📚 Documentation
The names I am getting from checkpointing seem different than this doc page:
https://pytorch-lightning.readthedocs.io/en/latest/checkpointing.html
Filepath seems to do something different. (directory?) |
global_step advanced between accumulations if gradient_accumulation > 1 | [
"bug"
] | 🐛 Bug
If gradient_accumulation is > 1 and a custom scheduler is used that updates the LR based on steps (instead of the default epochs), then global_step is incorrect, since it advances at every batch part (depending on the gradient_accumulation value) instead of only after all parts of the accumulated batch.
T... |
Demo Notebook Error: | [
"bug"
] | 🐛 Bug
trainer.fit(gan_model)
ValueError: Your LightningModule defines 2 optimizers but training_step is missing the "optimizer_idx" argument.
To Reproduce
Steps to reproduce the behavior:
Go to this link.
Run the cell
Scroll down .
See error
PyTorch Version (e.g., 1.0): 1.4.0
OS (e.g., Linux): Linux
How you insta... |
Tensorboard logging should use `num_grad_updates` not `batch_idx` | [
"bug"
] | When accumulate_grad_batches > 1, the x-axis in tensorboard should be number of gradient updates, not number of batches that have been processed. |
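The relationship between processed batches and gradient updates described in this issue can be sketched as follows. `num_grad_updates` is a hypothetical helper written for illustration, not a Lightning API; `accumulate_grad_batches` mirrors the Trainer argument of the same name.

```python
def num_grad_updates(batch_idx: int, accumulate_grad_batches: int) -> int:
    """Number of optimizer steps completed after processing batch `batch_idx`.

    With accumulation, an optimizer step happens only once every
    `accumulate_grad_batches` batches, so the update count lags batch_idx.
    """
    return (batch_idx + 1) // accumulate_grad_batches
```

For example, with `accumulate_grad_batches=2`, batches 0 through 3 correspond to 0, 1, 1, and 2 completed gradient updates, which is the x-axis the issue argues TensorBoard should use.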
advanced profiler description fails for python 3.6 | [
"bug"
] | 🐛 Bug
Python 3.6 doesn't have the pstats.SortKey.CUMULATIVE enum so the profiler description breaks.
To Reproduce
Steps to reproduce the behavior:
Use Python 3.6, pass in the AdvancedProfiler, get report at end of a training run.
profiler = AdvancedProfiler(line_count_restriction=10)
trainer = Trainer(profiler=profile... |
Cross validation feature | [
"feature",
"help wanted",
"good first issue",
"discussion"
] | 🚀 Feature
Cross-Validation is a crucial model validation technique for assessing how the model generalizes to new data.
Motivation
Research papers usually require cross-validation. From my point of view, this kind of feature would simplify the work of researchers.
Pitch
I want to pass a parameter to the Trainer object... |
Handle abstract loader that doesn't have a dataset member | [
"bug"
] | Feature / Bug (?)
I have an abstract loader that chains multiple pytorch loaders when training on video sequences. Each small loader contains one sequence, and the chained loader just uses itertools.chain to chain them together.
I cannot put all data in a single loader because it does not make sense to read images from t... |
Epoch end checkpoint restarts previous epoch | [
"bug"
] | 🐛 Bug
If restarting the training and reloading the model, the epoch that the checkpoint had just completed is restarted rather than beginning the next.
Expected behavior
When a checkpoint upon epoch end is saved, restarting it should resume its state and start the next epoch. |
Model checkpointing to just save the latest model | [
"feature",
"help wanted"
] | 🚀 Feature
There is currently no automated option of always only keeping the latest saved model - there is always a metric benchmark used that needs to be exceeded. This is fine as a default but there should be a 'keep-it-simple' option of always just saving whatever is the latest.
Motivation
The concept of top-k is so... |
[Pyright] Cannot access member 'X' for type 'None' | [
"feature",
"help wanted",
"good first issue"
] | Pyright raises 89 errors on master about Cannot access member 'X' for type 'None'. Here is an example in evaluation_loop.py:
# track outputs for collation
dl_outputs.append(output)
# batch done
if test:
self.test_progress_bar.update(1)... |
test fails with `_lazy_train_dataloader` | [
"bug",
"help wanted",
"good first issue"
] | 🐛 Bug
The tests are time-to-time randomly failing across all platforms (but mainly macOS and Win) with following message:
_____________________________ test_lbfgs_cpu_model _____________________________
self = LightningTestModel(
(c_d1): Linear(in_features=784, out_features=1000, bias=True)
(c_d1_bn): BatchNorm1d... |
About fine-tuning and explaining the result | [
"question"
] | ❓ Questions and Help
Hi,
Thank you for the contribution and framework. Would you mind showing me how to freeze all the layers except the classification layer during fine-tuning? And would you also help me explain the result of the fine-tuned model, such as which words of the "sentence" made it labelled as pos... |
Incorrect Signature of `training_end`, `validation_end`, `test_end` in `Experiment Reporting` | [
"docs"
] | 📚 Documentation
Documentation here defines the signature of the functions as follows:
training_end(self, outputs)
test_end(self, outputs)
validation_end(self, outputs)
While in the Experiment Reporting section of docs, the signature of the functions has been misreported as:
training_end(self, batch, batch_idx)
test_en... |
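The correct signatures reported in the issue can be illustrated with a minimal stub; this class is a hypothetical example written for this note, not Lightning source. Each `*_end` hook receives the collected step outputs, not `(batch, batch_idx)` as the Experiment Reporting docs misreport.

```python
class MyLightningModule:
    """Stub showing *_end hooks that aggregate per-step outputs."""

    def training_end(self, outputs):
        # `outputs` is the list of dicts returned by training_step
        return {"loss": sum(o["loss"] for o in outputs) / len(outputs)}

    def validation_end(self, outputs):
        return {"val_loss": sum(o["val_loss"] for o in outputs) / len(outputs)}

    def test_end(self, outputs):
        return {"test_loss": sum(o["test_loss"] for o in outputs) / len(outputs)}
```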
How do I test before any training? | [
"question"
] | ❓ Questions and Help
I am now migrating some of my previous works into lightning. I wish to see if it is able to reproduce my previous results or not. But the doc implies that all the testing has to be performed after training or after loading the previous lightning training state, which I do not have either.
So How ca... |
Example testing | [
"feature",
"help wanted",
"good first issue",
"won't fix",
"ci"
] | 🚀 Feature
Find a way to test examples in CI
Motivation
Due to some API modification, our examples may become invalid and we would not notice it until a bug report...
Pitch
The best way is running examples as a part of CI; on the other hand, it may be quite time consuming |
suggestion | [
"feature",
"help wanted",
"question"
] | not needed |
simplification over all kinds of "path" settings | [
"feature",
"good first issue",
"won't fix",
"docs"
] | I'm lost in all kinds of "path" settings; could they be simpler?
Across the source code and examples, I find that there are many types of "path" config, such as the path in ModelCheckpoint, logger, and default_save_path in trainer.
Could you please explain these path configs in more detail? For example, if we set de... |
Fix .test() on ddp | [
"bug",
"help wanted"
] | This might be broken on notebooks only.
#875 solves a few problems with .test()
However, ddp + .test might be broken on notebooks because of the "spawn" option. (likely #747). |
Unify usage of multiple callbacks | [
"feature",
"help wanted",
"discussion"
] | 🚀 Feature
Simplified API with callbacks, as e.g. Keras did: pass just a list of callbacks to be executed, and the Trainer will call them when needed instead of having them specified individually
pytorch-lightning/pytorch_lightning/trainer/trainer.py
Lines 65 to 66
in
b104052
... |
MisconfigurationException GPUs on updating lightning | [
"bug",
"help wanted"
] | Not sure what's going on here. I updated to the latest pytorch version:
Can anyone help?
I also included my environment packages.
Traceback (most recent call last):
File "train_gpu.py", line 232, in <module>
main_local(hparam_trial)
File "train_gpu.py", line 106, in main_local
trainer = Trainer(logger = ... |
[dp/ddp mode]Enable checking which process I'm in | [
"feature",
"help wanted"
] | 🚀 Feature
Motivation
The motivation is that in dp/ddp mode, one print statement in training_step ends up with multiple printed lines (4 lines because I'm using 4 GPUs)
Pitch
I hope that there's a self.rank to let the user check which process they're in.
So they may choose to print for only rank 0, or print the rank... |
Update torchvision to 0.5.0 | [
"feature",
"help wanted"
] | 🚀 Feature
Update torchvision to 0.5.0 (pip install downgrades to 0.4.2)
ERROR: torchvision 0.4.2 has requirement torch==1.3.1, but you'll have torch 1.4.0 which is incompatible.
Installing collected packages: tqdm, torchvision, oauthlib, requests-oauthlib, pyasn1, rsa, pyasn1-modules, cachetools, google-auth, google-... |
Improve wandb tests | [
"bug",
"help wanted",
"good first issue"
] | The tests of the wandb-logger are not as comprehensive as for the rest of the loggers. Currently, they just test if constructing a logger works.
Testing wandb in the same way as the other loggers works:
def test_wandb_logger(tmpdir):
"""Verify that basic functionality of wandb logger works."""
tutils.reset_seed... |
Relax hparams in model saving/loading | [
"question"
] | I've managed to train a model using pl.fit(model) and have the .ckpt file. Now, I'm trying to load the .ckpt file so that I can do inference on a single image:
model = CoolSystem()
to_infer = torch.load('checkpoints/try_ckpt_epoch_1_v0.ckpt')
model.load_from_checkpoint(to_infer) # ------------- error is thrown at this ... |
Test pass shouldn't require both test_step and test_end | [
"bug",
"help wanted",
"good first issue"
] | 🐛 Bug
trainer.test(...) requires implementation of both test_step and test_end, but the warning says you only need to implement either or both.
pytorch-lightning/pytorch_lightning/trainer/evaluation_loop.py
Line 291
in
56dddf9
... |
Log training metrics for each epoch | [
"question",
"priority: 0"
] | Currently, I am able to log training metrics to Tensorboard using:
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger
logger = TensorBoardLogger(save_dir=save_dir, name="my_model")
[...]
trainer = pl.Trainer(logger=logger)
This logs training metrics (loss, for instance) after eac... |
Unrecognized `val_loss` metric | [
"bug",
"help wanted"
] | RuntimeWarnings due to unrecognized val_loss metric
pytorch_lightning callbacks are unable to recognize val_loss from validation_step()
To Reproduce
Run CoolModel from
Steps to reproduce the behavior:
Go to Minimal-Example
Run upto trainer.fit()
Scroll down to the end of an epoch
See error -
`/opt/conda/lib/python3.6... |
How to properly load checkpoint for testing? | [
"bug",
"question"
] | I've trained a system as follows:
model = CoolSystem(hparams)
checkpoint_callback = pl.callbacks.ModelCheckpoint(
filepath=os.path.join(os.getcwd(), 'checkpoints'),
verbose=True,
monitor='val_acc',
mode='max',
prefix='try',
save_top_k=-1,
period=1... |
Add "epoch" options to basic templates | [
"feature",
"help wanted"
] | 🚀 Feature
Add "epochs" option to parser of 'basic_examples/lightning_module_template.py'
Motivation
Thanks to 'basic_examples/lightning_module_template.py', I could build my deep learning model. Some beginners like me might build their model from this basic template. However, there are no options to manipulate epoch... |
Automate choosing sampler | [
"feature",
"help wanted"
] | 🚀 Feature
Let's automate choosing the sampler.
Case 1 (DDP, training):
Default to DistributedSampler
sampler = DistributedSampler(dataset)
Case 2 (training):
sampler = RandomSampler(dataset)
Case 3 (val, test):
sampler = SequentialSampler(dataset)
Case 4 (tpu, train, val, test):
xm.DistributedSampler(dataset)
Mot... |
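The four cases above can be sketched as a small decision function. The stand-in classes keep the example dependency-free; in real code they would be `torch.utils.data.DistributedSampler`, `RandomSampler`, `SequentialSampler`, and torch_xla's distributed sampler, and the function itself is hypothetical, not Lightning's implementation.

```python
# Lightweight stand-ins for the real sampler classes.
class DistributedSampler: ...
class RandomSampler: ...
class SequentialSampler: ...
class XlaDistributedSampler: ...


def choose_sampler(training: bool, use_ddp: bool = False, use_tpu: bool = False):
    """Pick a sampler class following the four cases in the feature request."""
    if use_tpu:
        return XlaDistributedSampler      # case 4: TPU, any split
    if use_ddp and training:
        return DistributedSampler         # case 1: DDP training
    if training:
        return RandomSampler              # case 2: single-process training
    return SequentialSampler              # case 3: val/test
```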
Add prepare_data call for downloading datasets | [
"feature",
"help wanted"
] | 🚀 Feature
Data download and data loading should be separate because combining them causes issues in distributed environments.
Add
def prepare_data(self):
# download and prep the data
def train_dataloader(self):
# create dataloader |
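The proposed split can be illustrated with a framework-free sketch: `prepare_data()` performs the one-time download, while `train_dataloader()` only reads what is already on disk. The class and file names here are hypothetical, not Lightning's actual implementation.

```python
import os
import tempfile


class MyModule:
    """Hypothetical module illustrating the proposed prepare_data split."""

    def __init__(self, data_dir=None):
        # A fresh temp dir stands in for the usual dataset root.
        self.data_dir = data_dir or tempfile.mkdtemp()
        self.data_path = os.path.join(self.data_dir, "dataset.txt")

    def prepare_data(self):
        # One-time download/prep; called once (e.g. only on rank 0),
        # so concurrent processes never race on the download.
        if not os.path.exists(self.data_path):
            with open(self.data_path, "w") as f:
                f.write("1\n2\n3\n")

    def train_dataloader(self):
        # Safe in every process: read-only access to prepared data.
        with open(self.data_path) as f:
            return [int(line) for line in f]
```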
Remove the prefix 'version_' in tensorboard logging | [
"feature",
"help wanted"
] | 🚀 Feature
Motivation
In tensorboard there's already an explicit '/', and I think there's no need for this 'version_' prefix;
it makes the resulting names too long.
Pitch
I can bring a PR for this.
Alternatives
Additional context |
logger is NoneType hence doesn't have any experiment or other functionality in a lightning module | [
"bug",
"help wanted"
] | 🐛 Bug
When trying to use the logging abilities of lightning, I hit a wall: the default and tensorboard loggers both seem to stay uninitialized when calling trainer.fit(model), resulting in crashes every time I try to log something.
To Reproduce
Create a lightning module as such
class SimpleRegressor(pl.LightningModule)... |
Support both Namespace and dict for hyperparameter saving/loading. | [
"feature",
"help wanted",
"good first issue"
] | 🚀 Feature
Let the user pass a dict to LightningModule so that after model saving it can be restored using load_from_checkpoint or load_from_metrics.
Motivation
Currently, there is nothing that prevents the user from passing hyperparameters to LightningModule via a dictionary (or even something else). However, the mode... |
Improve callback system | [
"feature",
"help wanted"
] | Following the new callback system here are a few things we might want to do proposed by people who participated in #889 and #896.
Consolidate model hooks and callbacks system. Make the calls at the same moment: #889 (comment)
Fix on_train_begin() being called in the early stopping callbacks: #889 (comment)
Ass... |
Support IterableDatasets for validation and test, not just train set [blocked by #953] | [
"feature",
"help wanted"
] | 🚀 Feature
Currently Lightning supports IterableDatasets only in the training set (see code). This makes them second-class citizens compared to the map-style datasets, and supporting them seems like low-hanging fruit.
Motivation
This enables having larger test sets that may not fit into a machine's memory (they could be v... |
Automatically pick available GPU | [
"feature",
"help wanted",
"discussion"
] | Thanks for this great library!
🚀 Feature
I would like to change the behavior of this code:
trainer = pl.Trainer(
... snip ...,
gpus=1,
)
Currently, when setting gpus to an integer n, the first n GPUs are automatically used.
I would like to change the behavior such that when multiple GPUs are av... |
refactor len(datasets) call. | [
"feature",
"help wanted"
] | 🚀 Feature
Let's minimize len(dataset) calls and do it as late in the training as we can (ie: ideally right before any training loop). This way, we can open up the path to support iterable datasets more cleanly.
Motivation
Getting the length prematurely calls the datasets at the wrong time, often causing double loads.
This ... |
How to implement pre-training? | [
"question"
] | ❓ Questions and Help
What is your question?
What would be the best way to implement pre-training? With pre-training, I mean that I want to freeze some layers for a couple of epochs before training all of them. At the moment, I do it in two runs. In the first one, I freeze the layers, train the model, and save its weigh... |
Process runs on more GPUs than specified | [
"help wanted"
] | I have a single 8-GPU machine with a faulty GPU0.
I'm running imagenet_example.py on 7 GPUs on this machine by specifying gpus=[1,2,3,4,5,6,7] in the Trainer i.e. I do not want to use GPU0
However, when I run nvidia-smi, I see the Trainer's pid shows up on all 8 GPUs, just with lower memory on GPU0 (see output below). I a... |
Logging the current learning rate | [
"question"
] | I'd like to write the current learning rate to a Logger. I imagine this would require a call to scheduler.get_lr() but I'm not sure how I can access the scheduler object and where to place the get_lr call
TIA! |
Example of gradient accumulation documentation is not correct | [
"docs"
] | 📚 Documentation
Example of gradient accumulation documentation is not correct :
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import GradientAccumulationScheduler
# at epoch 5 start accumulating every 2 batches
accumulator = GradientAccumulationScheduler(scheduling: {5: 2})
Trainer(accumulate... |
Graceful keyboard interrupt doesn't work as expected | [
"bug",
"help wanted"
] | @jeremyjordan
Not sure the fix is what you were looking for? Make sure to also log the message only from proc_0. |
Passing dataloader to trainer.fit() doesn't work with tpu (and maybe ddp) | [
"bug",
"help wanted"
] | 🐛 Bug
Receive a
AttributeError: Can't pickle local object 'Trainer.__set_fit_dataloaders.<locals>.patch_train_dataloader'
error when passing the dataloader directly to trainer.fit(model, train_loader)
To Reproduce
Steps to reproduce the behavior:
Try to call trainer.fit(model, train_loader) in TPU mode.
(I suspect th... |
Checkpoint Callback not called when training GAN from documentation | [
"question"
] | I tried to add a model checkpoint callback to the GAN example from the documentation (https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/domain_templates/gan.py), but it is failing silently; no checkpoints are being written. I can't find anything in the checkpoint docs that indicate why it i... |
Evaluation on subsampled training set during training | [
"question"
] | ❓ Questions and Help
What is your question?
We can evaluate on the validation set to get validation accuracy etc. Is it possible to evaluate on a subsampled version of the training set as well, for example, to get the training accuracy? Is there an easy way to do it with PL?
Sorry for the simple question, but I couldn't find t... |
Support storing hparams as a dict | [
"feature",
"help wanted",
"good first issue"
] | 🚀 Feature
Right now, we assume model.hparams is an argparse.Namespace. We've had a number of requests to support hparams as a simple dict. Let's do it. |
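Supporting both forms could be sketched with a small normalizing helper using only the standard library; `to_namespace` is a hypothetical function written for this note, not an existing Lightning API.

```python
from argparse import Namespace


def to_namespace(hparams):
    """Accept a dict or argparse.Namespace; always return a Namespace."""
    if isinstance(hparams, dict):
        return Namespace(**hparams)
    if isinstance(hparams, Namespace):
        return hparams
    raise TypeError(f"hparams must be dict or Namespace, got {type(hparams)}")
```

For serialization the reverse direction is just `vars(ns)`, which returns the Namespace's attributes as a plain dict.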
Failing test: test_running_test_pretrained_model_ddp | [
"bug",
"help wanted",
"priority: 0"
] | I think this is another problem stemming from the fact that we don't have a way to pass data back from torch.multiprocessing.spawn. Needs more investigation.
def test_running_test_pretrained_model_ddp(tmpdir):
"""Verify `test()` on pretrained model."""
...
# run test set
new_trainer = Tr... |
Warnings and import errors when generating docs | [
"bug",
"help wanted"
] | 🐛 Bug
There are many warnings when generating the docs, many are import failures.
Because of them, the Trainer docs disappeared, see here: https://pytorch-lightning.readthedocs.io/en/latest/trainer.html
To Reproduce
Steps to reproduce the behavior:
cd docs
make clean
make html
WARNING: autodoc: failed to import modu... |
WandbLogger cannot be used with 'ddp' | [
"bug",
"help wanted",
"logger"
] | 🐛 Bug
wandb modifies init such that a child process calling init returns None if the master process has called init. This seems to cause a bug with ddp, and results in rank zero having experiment = None, which crashes the program.
To Reproduce
Can be reproduced with the basic MNIST gpu template, simply add a WandbLogg... |
Logger emits exception when there's `None` in hparams | [
"bug",
"help wanted"
] | To Reproduce
My hparams:
{
'n': [8000],
'k': [30],
'batch_size': 512,
'data_dir': '/Users/kyoungrok/Resilio Sync/Dataset/2019 TREC/passage_ranking/dataset',
'max_nb_epochs': 500,
'learning_rate': 0.0001,
'nodes': 1,
'distributed_backend': None,
'eval_test_set': False,
'check_val_every_n_epoch': 1,
'accumulat... |
TypeError: __init__() got an unexpected keyword argument 'precision' | [
"bug",
"help wanted"
] | 🐛 Bug
I followed the guide to "use 16bit precision" from the documentation
But when I do:
trainer = Trainer(amp_level='O1', precision=16, gpus=1)
trainer.fit(model)
I get the error message:
TypeError: __init__() got an unexpected keyword argument 'precision'
I am using lightning version: 0.6.0 |
Simplification: Merge load_from_metrics and load_from_checkpoint | [
"feature",
"help wanted",
"good first issue"
] | 🚀 Feature
The two ways of loading a LightningModule from checkpoint only differ in one argument, the tags_csv.
Motivation
The code is almost identical for both and the purpose is the same. If we merge these two into one function, it would simplify the API.
Pitch
Combine
load_from_metrics(cls, weights_path, tags_csv, ... |
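The proposed merge could look like a single classmethod where `tags_csv` becomes optional. This is a hedged outline under the issue's naming, not Lightning's real implementation; the checkpoint-loading body is elided.

```python
class LightningModuleSketch:
    """Hypothetical module sketching the merged loading API."""

    @classmethod
    def load_from_checkpoint(cls, weights_path, tags_csv=None):
        # Actual weight loading from `weights_path` omitted in this sketch.
        hparams = {}
        if tags_csv is not None:
            # When given, hyperparameters would be read from the csv here.
            hparams["tags_csv"] = tags_csv
        model = cls()
        model.hparams = hparams
        return model
```

Callers that previously used `load_from_metrics(weights_path, tags_csv)` would simply pass the extra argument; everyone else drops it.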
GPT2-large on Colab TPU seems to time out | [
"bug",
"help wanted"
] | 🐛 Bug
When training gpt2-large on a Colab TPU, it doesn't work
To Reproduce
See the colab notebook: https://colab.research.google.com/drive/1An6D3wh_H4dbmlEUHYOXZYxkH6S7VKu9
This is the relevant part of the stack trace:
INFO:root:training on 8 TPU cores
2020-03-02 00:43:14.794597: I tensorflow/compiler/xla/xla... |
Precision=16 with TPUs bug | [
"bug",
"help wanted"
] | 🐛 Bug
Setting precision=16 when training with a TPU throws an error
To Reproduce
see colab: https://colab.research.google.com/drive/1s-ZDIqzgKQ1Byf-Lw58RZ8LGgmdB6qjB
Relevant stack trace:
Exception in device=TPU:0: str expected, not int
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/... |