Update README.md
ablattmann authored Dec 21, 2021
1 parent e66308c · commit 17e64e3
Showing 1 changed file, README.md, with 4 additions and 16 deletions.
We provide a first script for sampling from our unconditional models. Start it via

```
CUDA_VISIBLE_DEVICES=<GPU_ID> python scripts/sample_diffusion.py -r models/ldm/<model_spec>/model.ckpt -l <logdir> -n <\#samples> --batch_size <batch_size> -c <\#ddim steps> -e <\#eta>
```
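
For orientation, here is a hypothetical filled-in call; the `celeba256` model directory and all numeric values are illustrative placeholders rather than recommended settings, so substitute the checkpoint and options you actually use:
```
# illustrative only: 50 samples, batch size 10, 200 DDIM steps, eta 1.0
CUDA_VISIBLE_DEVICES=0 python scripts/sample_diffusion.py -r models/ldm/celeba256/model.ckpt -l logs/samples -n 50 --batch_size 10 -c 200 -e 1.0
```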

# Inpainting
![inpainting](assets/inpainting.png)

Download the pre-trained weights
```
wget XXX
```

and sample with
```
python scripts/inpaint.py --indir data/inpainting_examples/ --outdir outputs/inpainting_results
```
`indir` should contain images `*.png` and masks `<image_fname>_mask.png` like
the examples provided in `data/inpainting_examples`.
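
As a quick sanity check of that naming convention (a sketch only, not how `scripts/inpaint.py` itself resolves the files), you can list each mask next to the image it should belong to:
```
# pair every <image_fname>_mask.png with its <image_fname>.png; illustrative check, not part of the repo
for mask in data/inpainting_examples/*_mask.png; do
  img="${mask%_mask.png}.png"
  [ -f "$img" ] && echo "$img -> $mask" || echo "missing image for $mask"
done
```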


## Coming Soon...

* Code for training LDMs and the corresponding compression models.
* Inference scripts for conditional LDMs for various conditioning modalities.
* In the meantime, you can play with our colab notebook https://colab.research.google.com/drive/1xqzUi2iXQXDqXBHQGP9Mqt2YrYW6cx-J?usp=sharing
* We will also release some further pretrained models.


## Comments

- Our codebase for the diffusion models builds heavily on [OpenAI's codebase](https://github.com/openai/guided-diffusion)
