
prompt_embeds_scale in FluxPriorReduxPipeline seems to have no effect. #11642

Open
@Meatfucker

Describe the bug

When using the FluxPriorReduxPipeline, the prompt_embeds_scale and pooled_prompt_embeds_scale arguments seem to have no effect on the final generation.
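One way to narrow this down is to check whether the scale arguments change the tensors returned by FluxPriorReduxPipeline at all, independent of the downstream generation. Below is a minimal sketch of such a check; it assumes the scales are meant to multiplicatively scale the returned embeddings, uses a placeholder local image path, and omits the text encoders since no text prompt is passed:

    import torch
    from diffusers import FluxPriorReduxPipeline
    from diffusers.utils import load_image

    pipe = FluxPriorReduxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-Redux-dev", torch_dtype=torch.bfloat16
    ).to("cuda")
    image = load_image("input.png")  # placeholder path

    # Run the prior twice with different scales and compare the raw outputs.
    embeds_a, pooled_a = pipe(image=image, prompt_embeds_scale=1.0,
                              pooled_prompt_embeds_scale=1.0, return_dict=False)
    embeds_b, pooled_b = pipe(image=image, prompt_embeds_scale=0.25,
                              pooled_prompt_embeds_scale=0.25, return_dict=False)

    print("prompt_embeds unchanged:", torch.allclose(embeds_a, embeds_b))
    print("pooled_prompt_embeds unchanged:", torch.allclose(pooled_a, pooled_b))

If both comparisons print False, the prior is applying the scales and the question becomes how the scaled embeddings influence the final generation; if they print True, the scaling is being dropped inside the prior itself.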

Reproduction

import gc

import torch
from diffusers import FluxPriorReduxPipeline
from transformers import CLIPTextModel, CLIPTokenizer, T5EncoderModel, T5TokenizerFast

dtype = torch.bfloat16  # dtype used throughout (assumed; not defined in the original snippet)


async def get_redux_embeds(image, prompt, strength):
    """Run the Redux prior and return (prompt_embeds, pooled_prompt_embeds)."""
    redux_repo = "black-forest-labs/FLUX.1-Redux-dev"
    text_encoder, tokenizer, text_encoder_2, tokenizer_2 = await get_text_encoders()
    redux_pipeline = FluxPriorReduxPipeline.from_pretrained(
        redux_repo,
        text_encoder=text_encoder,
        tokenizer=tokenizer,
        text_encoder_2=text_encoder_2,
        tokenizer_2=tokenizer_2,
        torch_dtype=dtype,
    ).to("cuda")
    # Both scale arguments are driven by the same strength value; varying it
    # produces no visible change in the final generation.
    redux_embeds, redux_pooled_embeds = redux_pipeline(
        image=image,
        prompt=prompt,
        prompt_2=prompt,
        prompt_embeds_scale=strength,
        pooled_prompt_embeds_scale=strength,
        return_dict=False,
    )
    # Free GPU memory before handing the embeddings back.
    redux_pipeline.to("cpu")
    del redux_pipeline, text_encoder, tokenizer, text_encoder_2, tokenizer_2
    torch.cuda.empty_cache()
    gc.collect()

    return redux_embeds, redux_pooled_embeds


async def get_text_encoders():
    """Load the CLIP and T5 text encoders/tokenizers used by the Redux prior."""
    model_name = "black-forest-labs/FLUX.1-dev"
    revision = "refs/pr/3"
    text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14", torch_dtype=dtype)
    tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    text_encoder_2 = T5EncoderModel.from_pretrained(model_name, subfolder="text_encoder_2",
                                                    torch_dtype=dtype, revision=revision)
    tokenizer_2 = T5TokenizerFast.from_pretrained(model_name, subfolder="tokenizer_2",
                                                  revision=revision)
    return text_encoder, tokenizer, text_encoder_2, tokenizer_2
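The reproduction above stops at the prior; the final generation step where the lack of effect is observed is not shown. For completeness, this is roughly how the returned embeddings would be consumed by the base pipeline (a sketch, not the original script: the base checkpoint, the placeholder image/prompt, and the guidance/step values are assumptions):

    import asyncio

    import torch
    from diffusers import FluxPipeline
    from diffusers.utils import load_image

    image = load_image("input.png")  # placeholder input image
    prompt = "a photo"               # placeholder prompt

    # Get the Redux embeddings via the helper above (strength value is illustrative).
    redux_embeds, redux_pooled_embeds = asyncio.run(get_redux_embeds(image, prompt, 0.5))

    # The base pipeline is loaded without text encoders, since the conditioning
    # comes entirely from the prior's embeddings.
    flux = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        text_encoder=None,
        text_encoder_2=None,
        torch_dtype=torch.bfloat16,
    ).to("cuda")

    result = flux(
        prompt_embeds=redux_embeds,
        pooled_prompt_embeds=redux_pooled_embeds,
        guidance_scale=2.5,
        num_inference_steps=28,
    ).images[0]
    result.save("redux_output.png")

With this setup, generations produced from embeddings obtained at different strength values look the same.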

Logs

System Info

  • 🤗 Diffusers version: 0.33.1
  • Platform: Linux-6.8.0-60-generic-x86_64-with-glibc2.39
  • Running on Google Colab?: No
  • Python version: 3.12.3
  • PyTorch version (GPU?): 2.7.0+cu126 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Huggingface_hub version: 0.31.2
  • Transformers version: 4.51.3
  • Accelerate version: 1.7.0
  • PEFT version: 0.15.2
  • Bitsandbytes version: 0.45.5
  • Safetensors version: 0.5.3
  • xFormers version: not installed
  • Accelerator: NVIDIA GeForce RTX 3090, 24576 MiB
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Who can help?

@yiyixuxu @DN6
