Preface

Since the introduction of Stable Diffusion 1.5 by StabilityAI, the ML community has eagerly embraced the open-source model. In August, we introduced the 'Segmind Distilled Stable Diffusion' series with the compact SD-Small and SD-Tiny models, open-sourcing both the weights and the distillation training code. The models were inspired by the research presented in the paper "On Architectural Compression of Text-to-Image Diffusion Models" and had 35% and 55% fewer parameters than the base model, respectively, while maintaining comparable image fidelity.

With the introduction of SDXL 1.0 in July, we saw the community moving to the new architecture due to its superior image quality and better prompt coherence. In our effort to make generative AI models faster and more affordable, we began working on a distilled version of SDXL 1.0 and succeeded in distilling it to half its size. Read on to learn more about our SSD-1B model.
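For readers unfamiliar with this style of distillation, the sketch below illustrates the kind of objective the paper above describes: the smaller student U-Net is trained with the usual denoising (task) loss plus an output-level term that pulls its noise prediction toward the teacher's (the paper also adds feature-level matching, omitted here for brevity). The `Linear` modules are hypothetical stand-ins for the real teacher and student U-Nets, so treat this as an illustration rather than the exact training code we released.

```python
import torch
import torch.nn.functional as F

# Hypothetical stand-ins for the full teacher U-Net and the smaller student U-Net;
# in real training these would be diffusers UNet2DConditionModel instances.
teacher_unet = torch.nn.Linear(16, 16)
student_unet = torch.nn.Linear(16, 16)

def distillation_loss(noisy_latents: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
    """Denoising task loss plus output-level knowledge distillation."""
    with torch.no_grad():
        teacher_pred = teacher_unet(noisy_latents)   # teacher's noise prediction (frozen)
    student_pred = student_unet(noisy_latents)       # student's noise prediction

    task_loss = F.mse_loss(student_pred, noise)        # match the true added noise
    kd_loss = F.mse_loss(student_pred, teacher_pred)   # match the teacher's prediction
    return task_loss + kd_loss

# Toy usage with random tensors standing in for noisy latents and the added noise.
latents = torch.randn(4, 16)
noise = torch.randn(4, 16)
distillation_loss(latents, noise).backward()
```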

Blog post: https://blog.segmind.com/introducing-segmind-ssd-1b/

Model: https://huggingface.co/segmind/SSD-1B

Demo: https://huggingface.co/spaces/segmind/Segmind-Stable-Diffusion
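As a quick start, the SSD-1B checkpoint works as a drop-in SDXL pipeline in diffusers. The snippet below is a minimal sketch assuming a recent diffusers install and a CUDA GPU; the prompts are just example inputs.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the distilled SSD-1B weights with the standard SDXL pipeline.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "segmind/SSD-1B",
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
)
pipe.to("cuda")

prompt = "an astronaut riding a green horse"    # example prompt
negative_prompt = "ugly, blurry, poor quality"  # example negative prompt

image = pipe(prompt=prompt, negative_prompt=negative_prompt).images[0]
image.save("ssd_1b_sample.png")
```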
