SDXL seems to not train self_attn layers in Text Encoders #1952
Comments
I've been primarily using locon for full finetunes or unet, and lokr for attention, as they've yielded the best results without the need for regularization.
@AbstractEyes Hello, is what I'm observing really a problem? The TE still gets trained. Maybe I can fix it myself.
Thank you for reporting. I will check it soon.
Should fix kohya-ss#1952. I added an alternative name for CLIPAttention. I have no idea why this name changed. Now it should accept both names.
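For context, target selection here works by matching modules on their Python class name, so a renamed attention class in a newer transformers release silently stops matching. Below is a minimal sketch of that kind of matching with both names accepted; the class name `CLIPSdpaAttention` is an assumption about newer transformers versions, and `find_target_modules` is a hypothetical helper, not the actual sd-scripts function.

```python
# Sketch of class-name-based target selection (assumption: this mirrors the
# sd-scripts approach of listing target module class names).
# "CLIPSdpaAttention" is an assumed alternative class name in newer
# transformers releases; "find_target_modules" is a hypothetical helper.
from transformers import CLIPTextModel

TEXT_ENCODER_TARGET_REPLACE_MODULE = ["CLIPAttention", "CLIPSdpaAttention", "CLIPMLP"]

def find_target_modules(text_encoder, target_class_names):
    """Return (name, module) pairs whose class name is in target_class_names."""
    return [
        (name, module)
        for name, module in text_encoder.named_modules()
        if module.__class__.__name__ in target_class_names
    ]

if __name__ == "__main__":
    te = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
    targets = find_target_modules(te, TEXT_ENCODER_TARGET_REPLACE_MODULE)
    # If this prints only CLIPMLP, the attention class name did not match.
    print(sorted({m.__class__.__name__ for _, m in targets}))
```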
Hello, I noticed that the recent version only trains the MLP in the Text Encoders, whereas existing LoRAs, or LoRAs trained with the GUI version of kohya-ss (which uses an older version), seem to train all layers. Is it a mistake on my side? I couldn't find any option to control it.
This is what usually gets trained:
[screenshot: trained Text Encoder layers, including both self_attn and mlp]
This is what I see in my attempts to use the newest version of sdxl_train_network.py:
[screenshot: trained Text Encoder layers, mlp only]
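One quick way to check which Text Encoder sub-layers a finished LoRA actually contains is to list the keys in its safetensors file. This is a hedged sketch assuming the usual kohya-style key naming (`lora_te1_`/`lora_te2_` prefixes with `self_attn` or `mlp` in the module path); the file name is a placeholder.

```python
# Minimal check of which text encoder blocks a saved LoRA covers.
# Assumptions: kohya-style key names ("lora_te1_"/"lora_te2_" prefixes,
# "self_attn"/"mlp" in the path); "my_sdxl_lora.safetensors" is a placeholder.
from safetensors import safe_open

with safe_open("my_sdxl_lora.safetensors", framework="pt", device="cpu") as f:
    keys = list(f.keys())

te_keys = [k for k in keys if k.startswith(("lora_te1_", "lora_te2_"))]
attn_keys = [k for k in te_keys if "self_attn" in k]
mlp_keys = [k for k in te_keys if "mlp" in k]

print(f"text encoder keys: {len(te_keys)}")
print(f"  self_attn: {len(attn_keys)}")  # 0 here reproduces the reported issue
print(f"  mlp:       {len(mlp_keys)}")
```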