
Commit 0fa13b8

Merge pull request #3 from jikkuatwork/main
Improve README for readability
2 parents b05f6c7 + 92b8cb9 commit 0fa13b8


README.md

Lines changed: 12 additions & 10 deletions
@@ -1,4 +1,6 @@
-# Prompt-to-Prompt: *Latent Diffusion* and *Stable Diffusion* implementation
+# Prompt-to-Prompt
+
+> *Latent Diffusion* and *Stable Diffusion* Implementation
 
 ![teaser](docs/teaser.png)
 ### [Project Page](https://prompt-to-prompt.github.io)   [Paper](https://prompt-to-prompt.github.io/ptp_files/Prompt-to-Prompt_preprint.pdf)
@@ -13,16 +15,12 @@ The code was tested on a Tesla V100 16GB but should work on other cards with at
 
 ## Quickstart
 
-In order to get started, we recommend taking a look at our notebooks: **prompt-to-prompt_ldm** and **prompt-to-prompt_stable**.
-The notebooks contain end-to-end examples of usage of prompt-to-prompt on top of *Latent Diffusion* and *Stable Diffusion* respectively. Take a look at these notebooks to learn how to use the different types of prompt edits and understand the API.
-
-
-
+In order to get started, we recommend taking a look at our notebooks: [**prompt-to-prompt_ldm**][p2p-ldm] and [**prompt-to-prompt_stable**][p2p-stable]. The notebooks contain end-to-end examples of usage of prompt-to-prompt on top of *Latent Diffusion* and *Stable Diffusion* respectively. Take a look at these notebooks to learn how to use the different types of prompt edits and understand the API.
 
 ## Prompt Edits
 
 In our notebooks, we perform our main logic by implementing the abstract class `AttentionControl` object, of the following form:
-```
+``` python
 class AttentionControl(abc.ABC):
     @abc.abstractmethod
     def forward (self, attn, is_cross: bool, place_in_unet: str):
@@ -32,7 +30,8 @@ class AttentionControl(abc.ABC):
 The `forward` method is called in each attention layer of the diffusion model during the image generation, and we use it to modify the weights of the attention. Our method (See Section 3 of our [paper](https://arxiv.org/abs/2208.01626)) edits images with the procedure above, and each different prompt edit type modifies the weights of the attention in a different manner.
 
 The general flow of our code is as follows, with variations based on the attention control type:
-```
+
+``` python
 prompts = ["A painting of a squirrel eating a burger", ...]
 controller = AttentionControl(prompts, ...)
 run_and_display(prompts, controller, ...)
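
For illustration, here is a minimal sketch of what a concrete controller could look like, assuming the `AttentionControl` base class from the notebooks is in scope and that the hooked attention layers pass their attention weights through `forward`. The class names and the scaling behaviour below are hypothetical examples, not the edit classes shipped in the repository.

``` python
# Illustrative only: concrete controllers that plug into the flow above.
# `AttentionControl` is the abstract class from the notebooks; the classes
# below are hypothetical examples, not the repository's actual edit classes.

class EmptyControl(AttentionControl):
    """Pass-through controller: generation proceeds as if unedited."""
    def forward(self, attn, is_cross: bool, place_in_unet: str):
        return attn


class DampenCrossAttention(AttentionControl):
    """Hypothetical controller that uniformly scales down cross-attention weights."""
    def __init__(self, scale: float = 0.5):
        super().__init__()
        self.scale = scale

    def forward(self, attn, is_cross: bool, place_in_unet: str):
        # `attn` holds the attention weights of the current layer; `is_cross`
        # distinguishes cross-attention (text-to-image) from self-attention,
        # and `place_in_unet` indicates where the hooked layer sits in the U-Net.
        if is_cross:
            attn = attn * self.scale
        return attn
```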
@@ -48,8 +47,8 @@ In this case, the user adds new tokens to the prompt, e.g., editing the prompt `
 In this case, the user changes the weight of certain tokens in the prompt, e.g., for the prompt `"A photo of a poppy field at night"`, strengthen or weaken the extent to which the word `night` affects the resulting image. For this we define the class `AttentionReweight`.
 
 
-## Attention Control Options
-* `cross_replace_steps`: specifies the fraction of steps to edit the cross attention maps. Can also be set to a dictionary `[str:float]` which specifies fractions for different words in the prompt.
+## Attention Control Options
+* `cross_replace_steps`: specifies the fraction of steps to edit the cross attention maps. Can also be set to a dictionary `[str:float]` which specifies fractions for different words in the prompt.
 * `self_replace_steps`: specifies the fraction of steps to replace the self attention maps.
 * `local_blend` (optional): `LocalBlend` object which is used to make local edits. `LocalBlend` is initialized with the words from each prompt that correspond with the region in the image we want to edit.
 * `equalizer`: used for attention Re-weighting only. A vector of coefficients to multiply each cross-attention weight
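
To see how these options fit together, the sketch below wires them into a hypothetical re-weighting run. `AttentionReweight`, `LocalBlend`, and `run_and_display` are the names used in this README, but the constructor signature, the number of diffusion steps, the token index, and the hand-built equalizer vector are all assumptions for illustration; the notebooks are the authoritative reference.

``` python
import torch

# Same prompt twice: re-weighting changes how strongly a word acts, not the words themselves.
prompts = ["A photo of a poppy field at night",
           "A photo of a poppy field at night"]

# Hypothetical equalizer: one coefficient per text token (77 assumed for the CLIP
# tokenizer), boosting the token for "night". The notebooks build this with a helper;
# it is spelled out here only to show what the `equalizer` option represents.
equalizer = torch.ones(1, 77)
night_token_index = 8              # assumed position of "night" in the tokenized prompt
equalizer[:, night_token_index] = 4.0

# Assumed usage: restrict the edit to the region associated with "night".
local_blend = LocalBlend(prompts, ("night", "night"))

controller = AttentionReweight(
    prompts,
    50,                            # assumed number of diffusion steps
    cross_replace_steps=0.8,       # edit cross-attention maps for the first 80% of steps
    self_replace_steps=0.4,        # replace self-attention maps for the first 40% of steps
    equalizer=equalizer,
    local_blend=local_blend,
)

run_and_display(prompts, controller)
```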
@@ -68,3 +67,6 @@ In this case, the user changes the weight of certain tokens in the prompt, e.g.,
 ## Disclaimer
 
 This is not an officially supported Google product.
+
+[p2p-ldm]: "./prompt-to-prompt_ldm.ipynb"
+[p2p-stable]: "./prompt-to-prompt_stable.ipynb"

0 commit comments
