
Backend PyTorch: Add L1 and L1+L2 regularizers #1905

Open
wants to merge 3 commits into base: master
Conversation

@vl-dud (Contributor) commented Dec 3, 2024

Continuation of PR #1884.

@lululxvi (Owner) commented Dec 8, 2024

We are unifying the regularization for TensorFlow and Paddle, see #1894. Do you think we can also bring the PyTorch regularization under the same unified code?

@vl-dud (Contributor, Author) commented Dec 8, 2024

Unfortunately, this is a bit difficult in PyTorch; the implementation options all seem unnecessarily complicated to me.
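To illustrate the difficulty mentioned above: PyTorch's built-in optimizers only support L2-style regularization natively (via the `weight_decay` argument), so an L1 penalty has to be added to the loss by hand on every training step. A minimal sketch, with illustrative names that are not the PR's code:

```python
import torch

# Minimal sketch (not the PR's code): PyTorch optimizers expose only
# weight_decay (an L2-style penalty), so an L1 term must be added to
# the loss manually before backpropagation.
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_factor = 1e-4  # hypothetical regularization strength

x = torch.randn(8, 4)
y = torch.randn(8, 1)

loss = torch.nn.functional.mse_loss(model(x), y)
# L1 penalty: sum of absolute values of all trainable parameters.
l1_penalty = sum(p.abs().sum() for p in model.parameters())
total_loss = loss + l1_factor * l1_penalty

optimizer.zero_grad()
total_loss.backward()
optimizer.step()
```

Because the penalty lives inside the loss rather than inside the optimizer, it has to be threaded through every `train_step` variant, which is part of why a unified cross-backend implementation is awkward here.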

deepxde/model.py (outdated, resolved)
@vl-dud (Contributor, Author) commented Dec 19, 2024

I didn't notice earlier that you implemented the NysNewtonCG optimizer. Should I add L1 regularization in train_step_nncg?

@lululxvi (Owner) commented:

> I didn't notice earlier that you implemented the NysNewtonCG optimizer. Should I add L1 regularization in train_step_nncg?

Not this PR.

@lululxvi (Owner) commented:

Sorry, it has been a while since this PR was opened. Could you remind me of its purpose?

@vl-dud (Contributor, Author) commented Jan 20, 2025

> Sorry, it has been a while since this PR was opened. Could you remind me of its purpose?

This is a PyTorch implementation of L1 regularization.
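The PR title also mentions a combined L1+L2 penalty. A hedged sketch of how such a combined ("elastic net"-style) term can be computed in PyTorch; the function and factor names are assumptions for illustration, not the PR's actual API:

```python
import torch

# Hypothetical sketch of a combined L1+L2 penalty; l1_factor and
# l2_factor are assumed names, not the PR's actual parameters.
def l1_l2_penalty(params, l1_factor=1e-4, l2_factor=1e-4):
    params = list(params)
    l1 = sum(p.abs().sum() for p in params)     # sum of |w|
    l2 = sum(p.pow(2).sum() for p in params)    # sum of w^2
    return l1_factor * l1 + l2_factor * l2

model = torch.nn.Linear(3, 1)
penalty = l1_l2_penalty(model.parameters())
```

The penalty is a scalar tensor that can simply be added to the training loss before calling `backward()`.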

deepxde/model.py (outdated)

    )

    def train_step(inputs, targets, auxiliary_vars):
        def closure():
            losses = outputs_losses_train(inputs, targets, auxiliary_vars)[1]
            total_loss = torch.sum(losses)
            if l1_factor:
@lululxvi (Owner) commented:

The loss should be computed in outputs_losses, so it will be recorded and output.
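The reviewer's point can be sketched as follows (assumed function and variable names, not DeepXDE's actual internals): if the L1 term is computed inside the function that builds the per-term losses, it is recorded and reported as its own loss component, rather than being added silently inside `train_step`'s closure.

```python
import torch

# Hypothetical sketch (assumed names, not DeepXDE's actual internals):
# compute the L1 term inside the losses function so the penalty shows
# up as its own entry in the training log.
def outputs_losses(model, inputs, targets, l1_factor=1e-4):
    outputs = model(inputs)
    data_loss = torch.nn.functional.mse_loss(outputs, targets)
    l1_loss = l1_factor * sum(p.abs().sum() for p in model.parameters())
    # Stacking the terms keeps them separate for logging; train_step
    # can still minimize their sum.
    return outputs, torch.stack([data_loss, l1_loss])

model = torch.nn.Linear(4, 1)
inputs = torch.randn(8, 4)
targets = torch.randn(8, 1)
_, losses = outputs_losses(model, inputs, targets)
total_loss = torch.sum(losses)
```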

@vl-dud (Contributor, Author) commented:

Please check it now.
