We trained on the following five datasets using the DDPM sampler at an image size of 64×64, with conditional generation enabled, the GELU activation function, a linear learning-rate function, and a learning rate of 3e-4. The datasets are CIFAR-10, NEU-DET, NRSD-MN, WOOD, and Animate Face, each trained for 300 epochs.
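The "linear learning-rate function" above can be sketched as a simple linear decay from the base rate of 3e-4. This is an illustrative guess at the schedule's shape, not the repo's exact implementation; `min_lr` and the decay horizon are assumptions.

```python
def linear_lr(step, total_steps, base_lr=3e-4, min_lr=1e-6):
    """Linearly decay the learning rate from base_lr to min_lr over training.

    base_lr=3e-4 matches the setting described above; min_lr and the
    per-step granularity are illustrative assumptions.
    """
    frac = step / max(total_steps, 1)
    return base_lr + (min_lr - base_lr) * frac

# Example: at the start of training the rate is 3e-4; at the end, 1e-6.
start_lr = linear_lr(0, 300)
end_lr = linear_lr(300, 300)
```

In practice this would be wired into the optimizer via a scheduler (e.g. PyTorch's `LambdaLR`) rather than called by hand.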
The results are shown below:
Generating 160×160 (or any size) images from the 64×64 model (industrial surface defect generation only)
[Not recommended] The 64×64 U-Net model can also generate 160×160 NEU-DET images directly via the generate.py file (single output; each image occupies about 21GB of GPU memory). Note this issue! For images whose defect textures have no distinctive features, generating at a large size directly may work without problems, as in the NRSD or NEU datasets. However, if the image contains a background with specific distinctive features, you may need super-resolution or resizing to increase the size, for example with CIFAR-10, CelebA-HQ, etc. If you really need large images and have enough GPU memory, train directly on large-pixel images. Detailed images are as follows:
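When resizing (rather than retraining or direct large-size generation) is acceptable, the upscaling step mentioned above can be sketched in pure NumPy with nearest-neighbour sampling. This is a minimal stand-in; in practice bilinear interpolation or a dedicated super-resolution model would give better quality.

```python
import numpy as np

def upscale_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of an (H, W, C) image array.

    A minimal illustration of the resizing option discussed above,
    not the repo's actual post-processing code.
    """
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h  # source row for each output row
    cols = np.arange(out_w) * w // out_w  # source column for each output column
    return img[rows][:, cols]

# Upscale a generated 64x64 sample to 160x160.
small = np.zeros((64, 64, 3), dtype=np.uint8)
large = upscale_nearest(small, 160, 160)  # shape (160, 160, 3)
```

Unlike direct 160×160 sampling, this keeps GPU memory at the 64×64 level, at the cost of not adding any real high-frequency detail.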
For more results generated by different samplers, see the examples in Test Generation.