# Release notes for optim + foreach in 2.8.0 (#10)

```diff
 The categories below are as follows:
 ### docs
 ### devs
 ### Untopiced
-- [ca] default on in CI, with fallback for tests in test/compiled_autograd_skips/ ([#155480](https://github.com/pytorch/pytorch/pull/155480))
 ### not user facing
 - pyfmt lint more torch/utils files ([#155812](https://github.com/pytorch/pytorch/pull/155812))
 ### security
```
**2.8.0/todo/result_optim.md → 2.8.0/done/result_optim.md**
```diff
 The categories below are as follows:
 ### deprecation
 ### new features
 ### improvements
+- Add TensorLR variant for fused Adagrad on CPU ([#153078](https://github.com/pytorch/pytorch/pull/153078))
+- Convert Tensor lr to 0-dim as needed for the optimizer to work normally ([#145674](https://github.com/pytorch/pytorch/pull/145674))
+- Add lr_lambda type check in MultiplicativeLR ([#151973](https://github.com/pytorch/pytorch/pull/151973))

 ### bug fixes
 - Fix `lr_scheduler` unexpectedly calling `step()` when the init argument `last_epoch` is larger than -1 ([#149312](https://github.com/pytorch/pytorch/pull/149312))
+- Fix `CosineAnnealingWarmRestarts` resetting `T_cur` ([#151289](https://github.com/pytorch/pytorch/pull/151289))

 ### performance
 ### docs
 - Add scripts to generate plots of LRSchedulers ([#149189](https://github.com/pytorch/pytorch/pull/149189))
 - Include other accelerators in the capturable docstring for optimizers ([#149770](https://github.com/pytorch/pytorch/pull/149770))
-- Document that dampening is skipped in SGD momentum first step ([#152833](https://github.com/pytorch/pytorch/pull/152833))
-- Fix doc cosineannealinglr 152081 ([#152936](https://github.com/pytorch/pytorch/pull/152936))
+- Update SGD documentation to match the implementation and document that dampening is skipped in SGD's first step ([#149884](https://github.com/pytorch/pytorch/pull/149884), [#152833](https://github.com/pytorch/pytorch/pull/152833))
+- Fix doc for CosineAnnealingLR ([#152936](https://github.com/pytorch/pytorch/pull/152936))
+- Fix incorrect citation of authors in documentation ([#145209](https://github.com/pytorch/pytorch/pull/145209))
+- Add `load_state_dict` doc hint about invocation order with lr_scheduler ([#149942](https://github.com/pytorch/pytorch/pull/149942))

 ### devs
 ### Untopiced
-- Convert Tensor lr to 0-dim as needed for the optimizer to normally work ([#145674](https://github.com/pytorch/pytorch/pull/145674))
-- Clean up duplicated code in lr_scheduler ([#150984](https://github.com/pytorch/pytorch/pull/150984))
-- Improve decorator typing for Optimizer subclasses ([#153374](https://github.com/pytorch/pytorch/pull/153374))
-- Optimize typing in `lr_scheduler.py` ([#151219](https://github.com/pytorch/pytorch/pull/151219))
-- Fix CosineAnnealingWarmRestarts reset T_cur ([#151289](https://github.com/pytorch/pytorch/pull/151289))
-- Add lr_lambda type check in MultiplicativeLR ([#151973](https://github.com/pytorch/pytorch/pull/151973))
-- Update SGD documentation to match implementation ([#149884](https://github.com/pytorch/pytorch/pull/149884))
-- Fix incorrect citation of authors in documentation ([#145209](https://github.com/pytorch/pytorch/pull/145209))
-- Fix the type hint of `step()` with default value ([#153367](https://github.com/pytorch/pytorch/pull/153367))
-- [BE]: Improve decorator typing for Optimizer subclasses ([#153374](https://github.com/pytorch/pytorch/pull/153374))
-- Add TensorLR variant for fused Adagrad on CPU ([#153078](https://github.com/pytorch/pytorch/pull/153078))
-- Add `load_state_dict` hint doc about invoke order work with lr_scheduler ([#149942](https://github.com/pytorch/pytorch/pull/149942))

 ### not user facing
+- Clean up duplicated code in lr_scheduler ([#150984](https://github.com/pytorch/pytorch/pull/150984))

 ### security
```
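Several of the optimizer entries above concern passing the learning rate as a `torch.Tensor` (#145674, #153078) and validating `lr_lambda` in `MultiplicativeLR` (#151973). A minimal sketch of that usage, assuming stock `torch.optim` APIs; the exact semantics are inferred from the PR titles, and the assumption that the non-fused CPU Adagrad path accepts a Tensor lr here is mine, not the PRs':

```python
import torch

model = torch.nn.Linear(4, 2)

# Tensor lr (assumption: accepted on this path): per the #145674 title, a
# 1-element Tensor lr is converted to 0-dim internally so the update math
# works as usual.
opt = torch.optim.Adagrad(model.parameters(), lr=torch.tensor([0.01]))

# MultiplicativeLR type-checks lr_lambda (#151973): it must be a callable
# (or a list of callables), here a constant per-epoch decay factor.
sched = torch.optim.lr_scheduler.MultiplicativeLR(opt, lr_lambda=lambda epoch: 0.95)

for _ in range(3):
    model(torch.randn(8, 4)).sum().backward()
    opt.step()
    opt.zero_grad()
    sched.step()  # per #149942's doc hint, step the scheduler after the optimizer
```

Passing a non-callable such as `lr_lambda=0.95` is the kind of misuse the added type check is meant to reject early.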
```diff
 The categories below are as follows:
 ### devs
 ### Untopiced
 ### not user facing
-- [BE]: Fix typing None override other optimizers ([#153386](https://github.com/pytorch/pytorch/pull/153386))
+- Fix typing None override other optimizers ([#153386](https://github.com/pytorch/pytorch/pull/153386))
 ### security
```
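The typing entries here and in result_optim.md (#153367, #153374, #153386) revolve around keeping the standard `Optimizer.step()` signature visible to type checkers across subclasses and decorators. A hedged sketch of that pattern, where `MySGD` is a hypothetical illustration rather than anything from the PRs:

```python
from typing import Callable, Optional

import torch

class MySGD(torch.optim.Optimizer):
    """Hypothetical minimal optimizer, used only to illustrate the signature."""

    def __init__(self, params, lr: float = 0.01):
        super().__init__(params, defaults={"lr": lr})

    @torch.no_grad()
    def step(self, closure: Optional[Callable[[], float]] = None) -> Optional[float]:
        # The fixes above are about this Optional[...] = None default surviving
        # decorators and subclass overrides in the eyes of a type checker.
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is not None:
                    p.add_(p.grad, alpha=-group["lr"])
        return loss
```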
**2.8.0/miscategorized.md**
```diff
 Welcome to the Pool of Miscategorized commits.
 Add any commits that were miscategorized for your domain below.
 Handle any commits that actually do belong to your domain and remove them from this list.

+## Compiled Autograd
+- [ca] default on in CI, with fallback for tests in test/compiled_autograd_skips/ ([#155480](https://github.com/pytorch/pytorch/pull/155480))
+
 ## not user facing
```
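For the Compiled Autograd entry, a rough sketch of what "default on" toggles, assuming it corresponds to the `torch._dynamo.config.compiled_autograd` flag (the flag name is an assumption; the PR itself only changes CI defaults and the skip list under test/compiled_autograd_skips/):

```python
import torch

# Assumption: this is the setting the [ca] entry refers to; #155480 enables
# the equivalent behavior by default in CI, with per-test fallbacks.
torch._dynamo.config.compiled_autograd = True

model = torch.nn.Linear(4, 2)

@torch.compile
def train_step(x):
    loss = model(x).sum()
    loss.backward()  # with compiled autograd on, the backward pass is traced too
    return loss

train_step(torch.randn(8, 4))
```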