Commit

Fix docstring for log_interval to differentiate between on-policy/off-policy logging frequency (DLR-RM#1855)

* Fix docstring for log_interval inside the learn method in the base class.

* Updated changelog.

* Update docstring

---------

Co-authored-by: Antonin RAFFIN <[email protected]>
rushitnshah and araffin authored Mar 4, 2024
1 parent 56f20e4 commit f375cc3
Showing 2 changed files with 6 additions and 2 deletions.
3 changes: 2 additions & 1 deletion docs/misc/changelog.rst
@@ -66,6 +66,7 @@ Documentation:
 - Added video link to "Practical Tips for Reliable Reinforcement Learning" video
 - Added ``render_mode="human"`` in the README example (@marekm4)
 - Fixed docstring signature for sum_independent_dims (@stagoverflow)
+- Updated docstring description for ``log_interval`` in the base class (@rushitnshah).
 
 Release 2.2.1 (2023-11-17)
 --------------------------
@@ -1566,4 +1567,4 @@ And all the contributors:
 @anand-bala @hughperkins @sidney-tio @AlexPasqua @dominicgkerr @Akhilez @Rocamonde @tobirohrer @ZikangXiong @ReHoss
 @DavyMorgan @luizapozzobon @Bonifatius94 @theSquaredError @harveybellini @DavyMorgan @FieteO @jonasreiher @npit @WeberSamuel @troiganto
 @lutogniew @lbergmann1 @lukashass @BertrandDecoster @pseudo-rnd-thoughts @stefanbschneider @kyle-he @PatrickHelm @corentinlger
-@marekm4 @stagoverflow
+@marekm4 @stagoverflow @rushitnshah
5 changes: 4 additions & 1 deletion stable_baselines3/common/base_class.py
@@ -523,7 +523,10 @@ def learn(
 :param total_timesteps: The total number of samples (env steps) to train on
 :param callback: callback(s) called at every step with state of the algorithm.
-:param log_interval: The number of episodes before logging.
+:param log_interval: for on-policy algos (e.g., PPO, A2C, ...) this is the number of
+    training iterations (i.e., log_interval * n_steps * n_envs timesteps) before logging;
+    for off-policy algos (e.g., TD3, SAC, ...) this is the number of episodes before
+    logging.
 :param tb_log_name: the name of the run for TensorBoard logging
 :param reset_num_timesteps: whether or not to reset the current timestep number (used in logging)
 :param progress_bar: Display a progress bar using tqdm and rich.
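The distinction the updated docstring draws can be made concrete with a short sketch (a hypothetical helper for illustration, not part of stable-baselines3): for on-policy algorithms such as PPO and A2C, one training iteration collects n_steps transitions from each of n_envs parallel environments, so a log dump occurs every log_interval * n_steps * n_envs environment steps, whereas off-policy algorithms such as TD3 and SAC simply count completed episodes.

```python
# Hypothetical helper illustrating the docstring's arithmetic for on-policy
# algorithms. Function and variable names are for illustration only and do
# not exist in stable-baselines3.

def on_policy_log_period(log_interval: int, n_steps: int, n_envs: int) -> int:
    """Environment steps between log dumps for an on-policy algo:
    log_interval training iterations, each collecting n_steps * n_envs steps."""
    return log_interval * n_steps * n_envs

# With PPO's default n_steps=2048, 4 parallel envs, and log_interval=1,
# a log dump happens every 2048 * 4 = 8192 environment steps.
print(on_policy_log_period(1, 2048, 4))  # 8192
```

For off-policy algorithms there is no such formula: with log_interval=4, logging happens every 4th completed episode, so the step period varies with episode length.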
