Palette: Image-to-Image Diffusion Models

Paper | Project

Brief

This project is an unofficial implementation of Palette: Image-to-Image Diffusion Models as a Python library. The engine is taken directly from Liangwei Jiang's repository [Palette-Image-to-Image-Diffusion-Models](https://github.com/Janspiry/Palette-Image-to-Image-Diffusion-Models). It runs on PyTorch, and the code is mainly inherited from the author's other repositories: Image-Super-Resolution-via-Iterative-Refinement and the template seed project distributed-pytorch-template.

Some details of the upstream implementation:

  • We adapt the U-Net architecture used in Guided-Diffusion, which gives a substantial boost to sample quality.
  • We apply the attention mechanism at low-resolution feature maps (16×16), as in vanilla DDPM.
  • Following Palette, we encode the noise level $\gamma$ rather than the timestep $t$ and embed it with an affine transformation (see the sketch after this list).
  • We fix the variance $\Sigma_\theta(x_t, t)$ to a constant during inference, as described in Palette.
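
The third point, embedding the continuous noise level $\gamma$ instead of the discrete timestep $t$ through an affine transformation, can be illustrated with a short PyTorch sketch. Everything here (the class name, dimensions, and the sinusoidal encoding with its scaling) is an assumption for illustration, not this library's actual API:

```python
import math

import torch
import torch.nn as nn


class GammaEmbedding(nn.Module):
    """Illustrative sketch: sinusoidally encode the continuous noise
    level gamma (instead of the integer timestep t) and pass it through
    an affine (linear) layer before feeding it to the U-Net blocks."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.dim = dim
        self.affine = nn.Linear(dim, dim)  # the affine transformation

    def forward(self, gamma: torch.Tensor) -> torch.Tensor:
        # gamma: shape (batch,), noise-level values in (0, 1]
        half = self.dim // 2
        freqs = torch.exp(
            -math.log(10000.0) * torch.arange(half, device=gamma.device) / (half - 1)
        )
        # Scale gamma so the sinusoids span a useful frequency range
        # (the factor 1000 is an arbitrary illustrative choice).
        args = 1000.0 * gamma[:, None] * freqs[None, :]
        encoding = torch.cat([torch.sin(args), torch.cos(args)], dim=-1)
        return self.affine(encoding)


# Example: one embedding vector per sample in a batch of 8.
emb = GammaEmbedding(dim=64)
print(emb(torch.rand(8)).shape)  # torch.Size([8, 64])
```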

See the sources for more theoretical and implementation details of the models.

Usage

Build the package with

python -m build

and install the produced wheel (in the dist directory).
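
For example, assuming a standard build output (the exact wheel filename will vary with the package version):

pip install dist/palette-*.whl

Run training with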

python -m palette train -c /path/to/config

Acknowledgements

All credit for the implementation of the engine goes to Liangwei Jiang. That work is in turn based on the theoretical literature cited in the upstream repository.
