Substance Diffusion

An AI plug-in for generating images and depth maps inside Adobe Substance Designer.

This example makes use of two different prompts: 'realistic graffiti on a concrete wall, close up, urban style, lettering, colorful' and 'old school street graffiti with a stylized face, realistic, lots of details, front view, concrete wall'. As you can see, it turned out pretty well and adding some extra features to an albedo channel isn’t much of a hassle. While it may not fully align with the Substance Designer philosophy of being completely procedural, it’s definitely more procedural than just googling free-to-use graffiti images and blending them with albedo afterwards. Plus, now that we've got Adobe Firefly in Photoshop, I'm pretty sure it's only a matter of time before Substance Designer gets something similar.

image
image

Substance Diffusion is built on top of Stable Diffusion (hence the name 🙂), Automatic 1111, and ControlNet, so if you’ve ever worked with these, you should feel right at home. Of course, the features of Automatic 1111 and ControlNet are overkill for texture generation, so the UI has been simplified for more intuitive use. Let’s go over the features from top to bottom:

1. Model (Dreamshaper in my case) — This simply allows you to use any supported AI model from Hugging Face or similar sources.

2. Prompts — kind of self-explanatory: write what you’d like to see and what you wouldn’t.

3. Seed — using -1 means the image will be generated with a random seed. If you like the result but want to slightly adjust it, use the ‘fix seed’ button.

4. Generate — creates an image from your prompts. Depth Map — extracts a depth map from the generated image. ControlNet gives you a choice between several depth models; the plugin uses Depth Midas since it’s the fastest.

5. Samples — the more you set, the more detail you’ll get. However, this isn’t a golden rule, and if you’ve never tried AI generation before, I suggest checking this article. CFG Scale — indicates how much of an impact your prompts have on the result.
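Under the hood, all of these UI fields map onto a single request to the Automatic 1111 backend. The sketch below is an assumption about how the plugin might wire them up, not its actual source: the endpoint and field names follow the public `sdapi/v1/txt2img` API, while `build_txt2img_payload` and `generate` are hypothetical helpers.

```python
import base64
import json
import urllib.request

A1111_URL = "http://127.0.0.1:7860"  # default Automatic 1111 address (assumed)


def build_txt2img_payload(prompt, negative_prompt="", seed=-1,
                          steps=20, cfg_scale=7.0):
    """Map the plugin's UI fields onto an sdapi/v1/txt2img request body.

    seed=-1 asks the backend for a random seed, matching the UI behaviour.
    """
    return {
        "prompt": prompt,
        "negative_prompt": negative_prompt,
        "seed": seed,
        "steps": steps,
        "cfg_scale": cfg_scale,
        "width": 512,
        "height": 512,
    }


def generate(payload):
    """POST the payload and decode the first returned image (base64 PNG)."""
    req = urllib.request.Request(
        A1111_URL + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return base64.b64decode(result["images"][0])
```

Pressing ‘fix seed’ would then simply copy the seed from the last response back into the payload, so subsequent generations only vary with the other settings.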

The UI side is implemented in Python and PySide (which Substance Designer’s API uses), so the plugin window supports all the features that native windows do, such as docking/undocking and pinning/unpinning. To minimize the number of buttons, a drag-and-drop feature has been added: once you’re happy with the result, you can simply drag the image into your graph network.
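The drag-and-drop handoff can be sketched with the standard PySide pattern: write the generated PNG to a temporary file, then start a Qt drag carrying that file’s URL, which Substance Designer accepts like any dropped bitmap. The helper names below are hypothetical; only `export_to_temp` is plain stdlib, while `start_drag` assumes the PySide2 build that ships with Designer.

```python
import os
import tempfile


def export_to_temp(png_bytes):
    """Write generated image bytes to a temporary .png and return its path."""
    fd, path = tempfile.mkstemp(suffix=".png")
    with os.fdopen(fd, "wb") as f:
        f.write(png_bytes)
    return path


def start_drag(widget, png_path):
    """Begin a drag from `widget` carrying the file URL (hypothetical helper).

    Substance Designer treats the dropped file like any imported bitmap.
    """
    from PySide2 import QtCore, QtGui  # Designer bundles PySide2

    mime = QtCore.QMimeData()
    mime.setUrls([QtCore.QUrl.fromLocalFile(png_path)])
    drag = QtGui.QDrag(widget)
    drag.setMimeData(mime)
    drag.exec_(QtCore.Qt.CopyAction)  # blocks until the drop completes
```

A widget’s `mousePressEvent` would call `start_drag(self, export_to_temp(image_bytes))`, which is what lets the plugin skip an explicit “export” button.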

image
image