Training SD3 LoRA underfitting discussion #510
-
Have you tried ablating the following things?
Some of the above points come from this blog post:
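As a rough illustration, a sweep over two of the commonly ablated knobs (learning rate and LoRA rank, both picked here only as examples) might look like the sketch below; `train_lora` is a placeholder stub, not a real API from any of these trainers.
```python
import itertools

# Hypothetical ablation sweep over learning rate and LoRA rank.
# train_lora is a stub standing in for whatever trainer you actually use
# (SimpleTuner, OneTrainer, or a diffusers example script).
LEARNING_RATES = [1e-4, 5e-5, 1e-5]
RANKS = [4, 16, 64]

def train_lora(learning_rate: float, rank: int, output_dir: str) -> None:
    # Stub: replace with a call into your trainer of choice.
    print(f"would train with lr={learning_rate}, rank={rank} -> {output_dir}")

for lr, rank in itertools.product(LEARNING_RATES, RANKS):
    run_name = f"sd3-lora_lr{lr}_r{rank}"
    train_lora(learning_rate=lr, rank=rank, output_dir=f"runs/{run_name}")
```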
-
Out of those things, I think the caption is the most important, other than the LR. If you have a caption that already roughly describes what you want, or at least names the correct species/object "genre" (e.g. a coffee mug, or a human male in his 30s), training will do really well; for example, training on a celebrity name that already looks like a person works nicely. But as an experiment I tried actually training celebrities into SD3 using their real names, and they only ever looked like cartoonish versions of themselves. Something might be wrong with the model itself; if it's severely undertrained, no amount of LoRA will fix it.
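For what it's worth, here is a minimal sketch of the kind of caption layout I mean, written as a `metadata.jsonl`. The file names, the `sks` trigger token, and the `text` column name are all placeholders; the exact column names depend on the dataset loader your trainer expects.
```python
import json
from pathlib import Path

# Hypothetical captions that keep the correct object "genre" ("coffee mug")
# next to a rare trigger token, rather than relying on the token alone.
captions = {
    "mug_front.jpg": "a photo of a sks coffee mug on a wooden table",
    "mug_side.jpg": "a photo of a sks coffee mug, side view, studio lighting",
    "mug_top.jpg": "a photo of a sks coffee mug seen from directly above",
}

out_dir = Path("instance_images")
out_dir.mkdir(exist_ok=True)
with (out_dir / "metadata.jsonl").open("w") as f:
    for file_name, caption in captions.items():
        f.write(json.dumps({"file_name": file_name, "text": caption}) + "\n")
```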
-
100 percent agree. And the data aspect is quite important too. The instance images must capture different poses and orientations so the model can learn the geometric properties better. But of course, all of this only kicks in once the underlying model is trained well enough.
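As a small complement (not a substitute for genuinely varied source photos), light train-time augmentation can add a bit of orientation variety; this is a sketch using torchvision and the values here are just assumptions to tune.
```python
from torchvision import transforms

# Light augmentation to add a bit of orientation and lighting variety.
# This only perturbs what is already in the photos; it does not replace
# instance images that genuinely show different poses and angles, and
# horizontal flips are best skipped for subjects with asymmetric details.
instance_transforms = transforms.Compose([
    transforms.Resize(1024),
    transforms.CenterCrop(1024),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])
```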
-
So far, all of the training scripts (SimpleTuner, OneTrainer, and Diffusers) have shown underfitting problems on dozens of datasets. Text encoder training is supported by some of them, such as OneTrainer, but OneTrainer still has issues right now.
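In case it helps, here is a rough sketch of launching the diffusers SD3 DreamBooth LoRA example with text-encoder training switched on. The model id, paths, prompt, and numbers are placeholders, and the flag names follow the usual diffusers example-script conventions, so treat them as assumptions and check the script's `--help` for your version.
```python
import subprocess

# Sketch of launching the diffusers SD3 DreamBooth LoRA example script with
# text-encoder training enabled. Flag names may differ between versions;
# verify them with `python train_dreambooth_lora_sd3.py --help` first.
cmd = [
    "accelerate", "launch", "train_dreambooth_lora_sd3.py",
    "--pretrained_model_name_or_path", "stabilityai/stable-diffusion-3-medium-diffusers",
    "--instance_data_dir", "instance_images",
    "--instance_prompt", "a photo of a sks coffee mug",
    "--output_dir", "sd3-lora-out",
    "--resolution", "1024",
    "--train_batch_size", "1",
    "--gradient_accumulation_steps", "4",
    "--learning_rate", "1e-4",
    "--rank", "16",
    "--max_train_steps", "2000",
    "--train_text_encoder",  # assumption: this flag exists in your script version
]
subprocess.run(cmd, check=True)
```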
-
What is a good parameter configuration for character training (regardless of the dataset)? I tried different parameters and the results didn't look like the reference character.
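I don't think there is a configuration that works regardless of the dataset, but as a hedged starting point, something like the settings below is a common shape for a character LoRA; every number here is an assumption to sweep around, not a verified recipe.
```python
# A hypothetical starting point for an SD3 character LoRA, not a verified
# recipe: every value is an assumption to sweep around, and the best
# settings will depend on the dataset and on which trainer you use.
character_lora_config = {
    "rank": 32,                     # higher rank captures more identity detail
    "learning_rate": 1e-4,          # a common starting LR for LoRA adapters
    "lr_scheduler": "constant",
    "resolution": 1024,
    "train_batch_size": 1,
    "gradient_accumulation_steps": 4,
    "max_train_steps": 1500,        # roughly 100 steps per instance image
    "num_instance_images": 15,      # varied poses, angles, and lighting
    "train_text_encoder": False,    # try True if the likeness keeps drifting
}
```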