
Published April 8, 2020 | Version 0.7.2
Software | Open Access

PyTorchLightning/pytorch-lightning: NO API changes - many bug fixes, added flexibility, parity tests with PyTorch, and more

  1. Facebook AI Research
  2. CTU in Prague
  3. Target
  4. Peking University, @24OI
  5. University of Southampton
  6. @facebookresearch
  7. Indian Institute of Technology Mandi
  8. IvLabs, VNIT
  9. Tokyo Denki University
  10. RWTH Aachen University & University Hospital Düsseldorf
  11. McGill University
  12. Jaguar Land Rover
  13. Pontificia Universidad Católica
  14. Melbor AI
  15. @Apple

Description

[0.7.2] - 2020-04-07

Added
  • Added aggregation of loggers' metrics for the same training step (#1278)
  • Added a parity test between a vanilla MNIST model and a Lightning model (#1284)
  • Added a parity test between a vanilla RNN model and a Lightning model (#1351)
  • Added a Reinforcement Learning - Deep Q-network (DQN) Lightning example (#1232)
  • Added support for hierarchical dict (#1152)
  • Added TrainsLogger class (#1122)
  • Added type hints to pytorch_lightning.core (#946)
  • Added support for IterableDataset in validation and testing (#1104)
  • Added support for non-primitive types in hparams for TensorboardLogger (#1130)
  • Added a check that stops the training when loss or weights contain NaN or inf values. (#1097)
  • Added support for IterableDataset when val_check_interval=1.0 (default); this triggers validation at the end of each epoch (sketched after this list). (#1283)
  • Added summary method to Profilers. (#1259)
  • Added informative errors if a user-defined dataloader has zero length (#1280)
  • Added testing for Python 3.8 (#915)
  • Added a training_epoch_end method which is the mirror of validation_epoch_end (sketched after this list). (#1357)
  • Added model configuration checking (#1199)
  • Added support for optimizer frequencies through LightningModule.configure_optimizers() (#1269)
  • Added an option to run without an optimizer by returning None from configure_optimizers; both options are sketched after this list. (#1279)
  • Added a warning when the number of data loader workers is small. (#1378)
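
To make the IterableDataset behaviour above concrete, here is a minimal sketch; StreamDataset, its length, and the tensor shapes are illustrative assumptions, not part of the release:

```python
import torch
from torch.utils.data import DataLoader, IterableDataset

from pytorch_lightning import Trainer


class StreamDataset(IterableDataset):
    """Hypothetical stream-style dataset with no __len__."""

    def __iter__(self):
        for _ in range(1000):
            yield torch.randn(32), torch.randint(0, 10, (1,)).item()


train_loader = DataLoader(StreamDataset(), batch_size=16)

# With val_check_interval left at its 1.0 default, an IterableDataset
# triggers validation at the end of each epoch rather than at a
# fractional position within it.
trainer = Trainer(val_check_interval=1.0)
```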
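
The training_epoch_end hook mirrors validation_epoch_end: it receives the list of outputs collected from training_step over the epoch. A minimal sketch assuming the dict-returning style of the 0.7.x API; the model and the metric name are illustrative:

```python
import torch
from torch.nn import functional as F

import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.layer(x.view(x.size(0), -1))

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        return {'loss': loss}

    def training_epoch_end(self, outputs):
        # Receives the list of dicts returned by training_step across
        # the epoch, exactly as validation_epoch_end does for validation.
        avg_loss = torch.stack([out['loss'] for out in outputs]).mean()
        return {'log': {'train_epoch_loss': avg_loss}}

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.02)
```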
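
Optimizer frequencies and optimizer-free training both flow through configure_optimizers. A sketch of both return styles, assuming a GAN-style module; the stand-in submodules, learning rates, and the 5:1 schedule are illustrative:

```python
import torch

import pytorch_lightning as pl


class LitGAN(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # Stand-in submodules; a real GAN would define proper networks.
        self.generator = torch.nn.Linear(16, 32)
        self.discriminator = torch.nn.Linear(32, 1)

    def configure_optimizers(self):
        # Step the discriminator five times for every generator step.
        dis_opt = torch.optim.Adam(self.discriminator.parameters(), lr=4e-4)
        gen_opt = torch.optim.Adam(self.generator.parameters(), lr=1e-4)
        return (
            {'optimizer': dis_opt, 'frequency': 5},
            {'optimizer': gen_opt, 'frequency': 1},
        )


class LitManual(pl.LightningModule):
    def configure_optimizers(self):
        # Returning None runs training without an optimizer; this is
        # also the new default behaviour instead of Adam.
        return None
```
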
Changed
  • Changed (renamed and refactored) TensorRunningMean -> TensorRunningAccum: running accumulations were generalized. (#1278)
  • Changed the progress_bar_refresh_rate Trainer flag to disable the progress bar when set to 0 (sketched after this list). (#1108)
  • Enhanced load_from_checkpoint to also forward params to the model (sketched after this list) (#1307)
  • Updated references to self.forward() to instead use the __call__ interface. (#1211)
  • Changed default behaviour of configure_optimizers to use no optimizer rather than Adam. (#1279)
  • Allowed uploading models to W&B (#1339)
  • On DP and DDP2, unsqueeze is now automated (#1319)
  • No longer always creates a new DataLoader during reinstantiation; the original type is kept if it is a subclass of DataLoader (#1346)
  • No longer interferes with a default sampler (#1318)
  • Removed the default Adam optimizer (#1317)
  • Now warns about unimplemented required Lightning methods (#1317)
  • Made the evaluate method private: Trainer._evaluate(...). (#1260)
  • Simplified the PL examples structure (shallower and more readable) (#1247)
  • Changed min/max GPU memory to be on their own plots (#1358)
  • Removed .item() calls which cause sync issues (#1254)
  • Changed smoothing in TQDM to decrease variability of time remaining between training / eval (#1194)
  • Changed the default logger to a dedicated one (#1064)
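
The progress_bar_refresh_rate flag now doubles as the off switch for the progress bar, replacing the deprecated show_progress_bar argument (see Deprecated below). A short sketch:

```python
from pytorch_lightning import Trainer

# 0 disables the progress bar entirely (replaces show_progress_bar=False).
trainer = Trainer(progress_bar_refresh_rate=0)

# Any positive value redraws the bar every that-many batches.
trainer = Trainer(progress_bar_refresh_rate=20)
```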
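
load_from_checkpoint can now forward extra arguments to the model's constructor. A minimal sketch; LitModel, its backbone argument, and the checkpoint path are hypothetical:

```python
import torch

import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    # Hypothetical module whose constructor takes an extra object that
    # is not stored in the checkpoint (e.g. a pretrained backbone).
    def __init__(self, backbone=None):
        super().__init__()
        self.backbone = backbone or torch.nn.Linear(32, 2)


# Arguments given after the checkpoint path are forwarded to
# LitModel.__init__ when the model is reconstructed.
model = LitModel.load_from_checkpoint(
    'path/to/model.ckpt',  # hypothetical path
    backbone=torch.nn.Linear(32, 2),
)
```
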
Deprecated
  • Deprecated Trainer argument print_nan_grads (#1097)
  • Deprecated Trainer argument show_progress_bar (#1108)
Removed
  • Removed duplicated module pytorch_lightning.utilities.arg_parse for loading CLI arguments (#1167)
  • Removed wandb logger's finalize method (#1193)
  • Dropped the torchvision dependency in tests and added a custom MNIST dataset class instead (#986)
Fixed
  • Fixed model_checkpoint when saving all models (#1359)
  • Fixed the Trainer.add_argparse_args classmethod so that it now adds types to the arguments (#1147)
  • Fixed a bug related to type checking of ReduceLROnPlateau LR schedulers (#1114)
  • Fixed a bug to ensure Lightning checkpoints remain backward compatible (#1132)
  • Fixed a bug that created an extra dataloader with active reload_dataloaders_every_epoch (#1181)
  • Fixed all warnings and errors in the docs build process (#1191)
  • Fixed an issue where val_percent_check=0 would not disable validation (#1251)
  • Fixed average of incomplete TensorRunningMean (#1309)
  • Fixed WandbLogger.watch with wandb.init() (#1311)
  • Fixed an issue with early stopping that would prevent it from monitoring training metrics when validation is disabled / not implemented (#1235).
  • Fixed a bug that would cause trainer.test() to run on the validation set when overloading validation_epoch_end and test_end (#1353).
  • Fixed WandbLogger.watch - use of the watch method without importing wandb (sketched after this list) (#1311)
  • Fixed WandbLogger to be used with 'ddp' - allow reinits in sub-processes (#1149, #1360)
  • Made training_epoch_end behave like validation_epoch_end (#1357)
  • Fixed fast_dev_run running validation twice (#1365)
  • Fixed pickle error from quick patch __code__ (#1352)
  • Fixed memory leak on GPU0 (#1094, #1349)
  • Fixed checkpointing interval (#1272)
  • Fixed validation and training loops running on a partial dataset (#1192)
  • Fixed running on_validation_end only on main process in DDP (#1125)
  • Fixed load_spawn_weights to run only in proc rank 0 (#1385)
  • Fixed use_amp issue (#1145)
  • Fixed using deprecated use_amp attribute (#1145)
  • Fixed Tensorboard logger error: lightning_logs directory does not exist in multi-node DDP on nodes with rank != 0 (#1375).
  • Fixed Unimplemented backend XLA error on TPU (#1387)
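
Several of the WandbLogger fixes above touch the watch method. A short sketch of its use; the stand-in model and the project/run names are assumptions:

```python
import torch

from pytorch_lightning import Trainer
from pytorch_lightning.loggers import WandbLogger

model = torch.nn.Linear(32, 2)  # stand-in; normally a LightningModule

wandb_logger = WandbLogger(name='release-0.7.2', project='my-project')
# watch() now works without importing wandb or calling wandb.init()
# yourself; it hooks gradient logging onto the model.
wandb_logger.watch(model, log='gradients', log_freq=100)

trainer = Trainer(logger=wandb_logger)
```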

Files

PyTorchLightning/pytorch-lightning-0.7.2.zip (5.8 MB)
md5:bb2d4cdc7bc6374ac38df2c9db9a04cf
