Recommended setup for Flux-dev full fine-tune #978
Hey everyone,

I'm setting up a full fine-tune of Flux-dev using SimpleTuner, and I'd like to know if there's a tried-and-tested config for this. I've found many conflicting claims about both settings and hardware choices, and given how resource-intensive Flux is, I'd rather not burn too many runs before settling on a recipe that works.

Would anyone care to share example setups that worked in their case? More generally, any tip learned from experience is welcome.
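To make the question concrete, here is a minimal sketch of what a full-rank run might look like in SimpleTuner's `config.json` format. The keys below follow the project's documented CLI options, but treat them as assumptions about your SimpleTuner version, and the values as illustrative placeholders rather than a tested recipe; verify everything against the current Flux quickstart.

```json
{
  "model_family": "flux",
  "model_type": "full",
  "pretrained_model_name_or_path": "black-forest-labs/FLUX.1-dev",
  "output_dir": "output/flux-full",
  "mixed_precision": "bf16",
  "gradient_checkpointing": true,
  "train_batch_size": 1,
  "learning_rate": 1e-5,
  "optimizer": "adamw_bf16",
  "max_train_steps": 10000,
  "validation_steps": 500
}
```

Note that a full-rank bf16 fine-tune of a ~12B-parameter model carries optimizer state on top of the weights, so in practice it calls for 80 GB-class GPUs or multi-GPU sharding (e.g. DeepSpeed), which is part of why the reply below suggests an adapter instead.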
-
I'd be curious to know why you're doing full-rank instead of an adapter, especially since LoKr scales to millions of images; there's really no need to go down that path. Flux-dev is a distilled model, and the difficulties in tuning it extend beyond the mere number of trainable parameters.
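If you go the LoKr route suggested above, SimpleTuner trains LyCORIS adapters from a separate JSON file referenced by the main config (via `"lora_type": "lycoris"` and a `"lycoris_config"` path, per the project's LyCORIS docs). The sketch below mirrors commonly cited Flux defaults; treat the exact values as a starting point to validate on your data, not a verified recipe.

```json
{
  "algo": "lokr",
  "multiplier": 1.0,
  "linear_dim": 10000,
  "linear_alpha": 1,
  "factor": 16
}
```

A very large `linear_dim` (such as 10000) tells LyCORIS to use the full matrix dimension, so the adapter's capacity is governed mainly by `factor`; in the usual range, a larger `factor` generally shrinks the trainable parameter count.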