New Lemmy Post: rupeshs/FastSD CPU Release v1.0.0 Beta 26 (https://lemmy.dbzer0.com/post/15610293)
llama.cpp quantizes the heck out of language models, which lets consumer CPUs run them. My laptop can run most 7B or 13B LLMs with 4-bit quantization (and they're pushing quantization even further, down to 2 or even 1.5 bits!)
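For anyone who wants to try it, here's a minimal sketch using the llama-cpp-python bindings; the model filename and thread count are just placeholders for whatever Q4_K_M GGUF and CPU you happen to have:

```python
# Minimal sketch: run a 4-bit quantized 7B GGUF on CPU via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct.Q4_K_M.gguf",  # placeholder local file
    n_ctx=2048,    # context window
    n_threads=8,   # match your CPU core count
)

out = llm(
    "Explain what 4-bit quantization does to a language model.",
    max_tokens=128,
)
print(out["choices"][0]["text"])
```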
The same will happen with Stable Diffusion. Most SD models are still at fp16 precision and will soon be going lower. I expect we'll all be running SDXL or larger models on our laptop CPUs without breaking a sweat once they're at the 4-bit level.
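This isn't what FastSD CPU does internally (as far as I know it goes through LCM/OpenVINO), but as a rough idea of CPU-only SD with diffusers, an LCM checkpoint gets you down to a handful of steps; the model id and settings here are my own picks, not anything from the release:

```python
# Rough sketch: CPU-only Stable Diffusion with diffusers using an LCM checkpoint,
# so only ~4 denoising steps are needed. Model id and step count are assumptions.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",   # a latent-consistency SD 1.5 checkpoint
    torch_dtype=torch.float32,        # CPUs generally want fp32, not fp16
)
pipe = pipe.to("cpu")

image = pipe(
    "a cozy cabin in the woods, golden hour",
    num_inference_steps=4,            # LCM only needs a few steps
    guidance_scale=8.0,
).images[0]
image.save("cabin.png")
```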
What I dislike about lower quantization is the quality degradation. In my limited experience, 7B models come across as dumb (I've only tested Q4_K_M GGUF quants) and need to be given proper context before a constructive conversation can move forward (whether chat or instruct).
If this issue can be circumvented at lower quantization, I'm all in.
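For what it's worth, "giving it proper context" up front can just be a system message; here's a rough sketch with llama-cpp-python's chat API, where the model file, chat format, and the messages themselves are placeholders:

```python
# Sketch: front-load context via a system message using llama-cpp-python's chat API.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct.Q4_K_M.gguf",  # placeholder local file
    n_ctx=4096,
    chat_format="mistral-instruct",
)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system",
         "content": "You are helping debug a Rust borrow-checker error. "
                    "Assume the user already knows the basics."},
        {"role": "user",
         "content": "Why does this closure capture move the whole struct?"},
    ],
    max_tokens=256,
)
print(reply["choices"][0]["message"]["content"])
```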
In the context of SD, going below fp16 would only make things faster at the cost of quality, and I personally like to go in depth with my prompts. For simpler prompts, sure, Lightning and Turbo are already good in that regard.
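For reference, the Turbo route with diffusers looks roughly like this (single step, no CFG); the model id is the public sdxl-turbo checkpoint, everything else is my guess at reasonable CPU settings:

```python
# Sketch: SDXL-Turbo produces a usable image in one step with guidance disabled.
# Still slow on CPU in fp32, but it runs given enough RAM.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo",
    torch_dtype=torch.float32,   # fp32 for CPU
)
pipe = pipe.to("cpu")

image = pipe(
    "a red bicycle leaning against a brick wall",
    num_inference_steps=1,       # Turbo is distilled for 1-4 steps
    guidance_scale=0.0,          # Turbo is trained without classifier-free guidance
).images[0]
image.save("bicycle.png")
```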