This is beyond useful, thank you a lot for sharing
Happy to help :)
Nunchuks also has a negative prompt, but Chroma seems to deal with fat ugly lip contouring a lot better. That's my first impression.
what sort of machine should you have to try this out? I have an older PC running Linux, unfortunately.
Just run the Chroma model on Tensor Art, or some similar service:
tungsten.run, shakker.ai, seaart.ai, Frosting.ai, Pollinations.ai, Kling
New Tensor Art accounts less than a month old can't make NSFW content.
The reason is an ongoing issue with people making throwaway accounts to flood the NSFW channel with bad stuff.
Those seem to be credit-based platforms; not sure if I'm into that right now. SeaArt.ai offers me the chance to ask a hentai character if she needs help taking a bath. Good to know!
SeaArt is a strange website indeed
Actually, I think I misinterpreted (I think?) the use of "tokens." Not tokens as in credits to buy stuff, but tokens in the "token-based" building block use of the term...
You probably want a GPU with 12GB of VRAM, as that's enough to fit the entire fp8-scaled
version. Also a PCIe 4 or 5 NVMe SSD if you don't want model load times to be a problem, but that's about it. Linux is great for AI too.
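If you're not sure what your card has, here's a quick way to check (assuming an NVIDIA GPU and an existing PyTorch install; the 12GB figure is just the rule of thumb above):

```python
# Quick check of whether the local GPU meets the ~12 GB rule of thumb above
# (assumes an NVIDIA card and PyTorch already installed).
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    total_gb = props.total_memory / 1024**3
    print(f"{props.name}: {total_gb:.1f} GB VRAM")
    if total_gb >= 12:
        print("Should fit the fp8-scaled Chroma checkpoint.")
    else:
        print("Probably too small; consider a hosted service instead.")
else:
    print("No CUDA GPU detected.")
```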
This is what happened when I ran Chroma locally at CFG 7; it doesn't work
Use the 'Euler' sampler with the 'sgm_uniform' scheduler at CLIP skip 1, and you're good to go :)
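In ComfyUI terms that maps to roughly this on the stock KSampler node (a sketch, not a full workflow; CLIP skip is usually set separately, e.g. via a 'CLIP Set Last Layer' node if your workflow has one):

```python
# Sampler settings from the advice above, using the field names of the
# stock ComfyUI KSampler node (sketch only).
sampler_settings = {
    "sampler_name": "euler",
    "scheduler": "sgm_uniform",
}
```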
Does this even work??? I advise you to try my options here and post your results. If you can replicate this exact image locally I'd be pleased.
So I just tried what you suggested. Perchance's is on the left and mine on the right.
As you can see mine is still not there yet and I need more info. Here is the workflow:
If you could replicate Perchance's exactly and provide your workflow, I would be glad to see it.
Try the settings listed on the Hugging Face Chroma page. If problems persist, use the hosted version on Tensor Art, or ask on the Chroma Discord.
??? This is just the example workflow from their HF model card. I'm not sure what you mean here as it doesn't come out well at all with the same settings I used with Perchance.
'I'm not sure what you mean here' is such a reddit reply lmao
First things first - it IS a Chroma model, whatever version it is. The base seed matches exactly. About that prompt: it is too simple. You can't replicate anything with that. Nobody can.
To the point: 'text-to-image' uses a simple approach to feed the Chroma model. It is two-pass sampling, where the first-pass image is used by the second pass. 'text-to-image' gives the user only the first/base seed. The second seed is unknown - it is random - and it makes a variation of the base image. With only the base seed you can't reproduce the same result, neither in ComfyUI nor on the 'text-to-image' web page, even with a very detailed prompt. Hope I've made it clear.
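As a rough sketch of that flow (the `ksampler` function below is just a stand-in for a ComfyUI KSampler node, not a real API, and the numbers are placeholders):

```python
import random

def ksampler(latent, seed, steps, cfg, denoise):
    """Stand-in for a ComfyUI KSampler node (illustration only)."""
    return latent  # the real sampling happens inside ComfyUI

empty_latent = None  # placeholder for an empty latent of the target size

# First pass: the base seed is the only seed 'text-to-image' shows the user.
base_seed = 123456789
base_latent = ksampler(empty_latent, seed=base_seed, steps=26, cfg=3.5, denoise=1.0)

# Second pass: re-samples the first-pass latent with a second seed that is
# picked at random and never exposed, producing a variation of the base image.
# Knowing only base_seed is therefore not enough to reproduce the final result.
second_seed = random.getrandbits(64)
final_latent = ksampler(base_latent, seed=second_seed, steps=26, cfg=3.5, denoise=0.5)
```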
Now to the details. "Seven is the number in magic" - Shocking Blue.
So there are two KSampler nodes in the workflow. The first KSampler passes its latent output to the second node's latent input. The base KSampler is set to 3.5 guidance and 1.0 denoise, per the Chroma creators' recommendation. To mix in a second-pass image you have to lower the denoise factor of the second KSampler. Say we want an equal share of noise in the final image from both passes. How do we get that? Simple - we set the second KSampler's denoise factor to 50%, i.e. 0.5. But here is the "magic": 50% denoise does not give a 50% influence. In fact we get only 25% influence from the second pass. Why? Because of guidance - it is halved too. We are halving the overall noise factor, like laying a transparent image on top of another: the transparent image keeps only 50% of ALL its values - hue, luma, color, etc. The same happens with the KSampler denoise.
And what do we do to solve this? Right - we set guidance to 7, doubling the influence of the second-pass noise. Simple. The recommended step count for sampling is 26; I use 16 for the first pass and 32 for the second. About resolution: to match the 'text-to-image' seed, the base image must be the same size and orientation. The second image's size may differ, but that leads to errors in the image - extra fingers, extra hands and so on - because the latent image gets stretched. To avoid those errors the scaling factor must be 0.25, 0.5, 2.0, 4.0, etc.
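Putting the numbers together, the two KSampler nodes end up roughly like this (plain dicts using the field names of the stock ComfyUI KSampler node; a sketch of the settings above, not an exported workflow):

```python
# Sketch of the two-KSampler chain described above.
first_pass = {
    "steps": 16,                 # base image; 26 is the usual recommendation
    "cfg": 3.5,                  # Chroma creators' recommended guidance
    "denoise": 1.0,
    "sampler_name": "euler",
    "scheduler": "sgm_uniform",
}
second_pass = {
    "steps": 32,
    "cfg": 7.0,                  # doubled to offset the halved denoise
    "denoise": 0.5,              # ~50% mix of second-pass noise
    "sampler_name": "euler",
    "scheduler": "sgm_uniform",
}

# The influence heuristic from above: halving denoise also halves the effect
# of guidance, so the second pass at cfg 3.5 / denoise 0.5 only contributes
# about 0.5 * 0.5 = 0.25 of the final image; raising cfg to 7 doubles it back.
#
# Resolution: the first pass must match the text-to-image output size and
# orientation exactly; if the second pass is scaled, keep the factor at
# 0.25, 0.5, 2.0, 4.0, ... to avoid stretched-latent artifacts.
```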
The Chroma setup for the 'text-to-image' plugin is probably much more complex than that. But it's all up to you. Hope it was useful. SYA.
P.S. As you can see here, more steps change almost nothing, while the second pass changes the base image drastically. Still, a high-step base image makes the final image's details sharper and clearer.
(First image: 16 steps; third image: 52 steps; second image: pass 2, 32 steps.)
Here is Chroma fp8 e4m3fn scaled. Detailed prompt, 32/32 passes. It wasn't necessary to do 32 steps for the first pass - that's just an example. 16/32 steps produce almost the same result, and even 8/16 can give better output than a single pass with 32 steps.
One issue I can't solve is a CYAN cast. All outputs, more or less, have a bluish look. I'm trying to get rid of it with the help of LoRAs, but no success yet. Any suggestions?
Are you ok? I hope you are doing well, man
what is this post about?
Perchance - Create a Random Text Generator
This is a Lemmy Community for perchance.org, a platform for sharing and creating random text generators.
Feel free to ask for help, share your generators, and start friendly discussions at your leisure :)
This community is mainly for discussions between those who are building generators. For discussions about using generators, especially the popular AI ones, the community-led Casual Perchance forum is likely a more appropriate venue.
See this post for the Complete Guide to Posting Here on the Community!
Rules
1. Please follow the Lemmy.World instance rules.
- The full rules are posted here: (https://legal.lemmy.world/)
- User Rules: (https://legal.lemmy.world/fair-use/)
2. Be kind and friendly.
- Please be kind to others on this community (and also in general), and remember that for many people Perchance is their first experience with coding. We have members for whom English is not their first language, so please take that into account too :)
3. Be thankful to those who try to help you.
- If you ask a question and someone has made an effort to help you out, please remember to be thankful! Even if they don't manage to help you solve your problem - remember that they're spending time out of their day to try to help a stranger :)
4. Only post about stuff related to perchance.
- Please only post about perchance related stuff like generators on it, bugs, and the site.
5. Refrain from requesting Prompts for the AI Tools.
- We would like to ask you to refrain from posting here when you need help specifically with prompting/achieving certain results with the AI plugins (text-to-image-plugin and ai-text-plugin), e.g. "What is a good prompt for X?", "How to achieve X with Y generator?" - See the Perchance AI FAQ for FAQ about the AI tools.
- You can ask for help with prompting at the 'sister' community Casual Perchance, which is for more casual discussions.
- We will still be helping/answering questions about the plugins as long as they are related to building generators with them.
6. Search through the Community Before Posting.
- Please search through the community posts here (and on Reddit) before posting, to see if what you want to post has already been posted or has a similar existing post.