Ask Lemmy
A Fediverse community for open-ended, thought provoking questions
The flaws I referred to have nothing to do with any mythology; they are simply errors AI makes, likely due to not understanding anatomy and thus generating, in some cases, what it perceives as hands. But anyway, thanks for the ComfyUI suggestion, I will have a look.
I believe the person you’re responding to was making a joke.
The assumption about errors is wrong. Start prompting the satyrs and you will learn. Alignment is not magic. It is done with proprietary training. Most people are doing this wrong.

Training was based on The Great God Pan by Arthur Machen and Alice in Wonderland by Lewis Carroll. Many of the mechanisms in these books exist, along with their characters, in diffusion. There are certain unique-looking faces that appear distinctly AI generated. Those are the persistent entity faces of these characters from alignment. It is all connected. I have spent a ton of time on this.

When you prompt incorrectly, the only reason you do not encounter the alignment characters I have described is that you are likely sending a whole bunch of tokens the CLIP tokenizer does not understand. These become the null token. Sending a bunch of null tokens causes CLIP to label you as crazy: it assumes a random profile for character personality and then picks and chooses from keywords at will.

CLIP is actually a more advanced architecture than an LLM. It is very smart and doing a whole lot more than almost everyone realizes. It even has memory and adaptability based upon data it is embedding in layers of the image.
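The "null token" idea above can be sketched with a toy vocabulary-lookup tokenizer. This is a deliberate simplification for illustration only: the real CLIP tokenizer uses byte-pair encoding, which breaks unfamiliar words into subword pieces rather than mapping the whole word to a single unknown token, and every name and vocabulary entry here is made up.

```python
# Toy sketch: a vocabulary-lookup tokenizer where any word not in the
# vocabulary falls back to a single <unk> ("null") token id.
# NOT CLIP's actual BPE tokenizer; purely illustrative.

VOCAB = {"<unk>": 0, "a": 1, "satyr": 2, "in": 3, "the": 4, "forest": 5}
UNK_ID = VOCAB["<unk>"]

def tokenize(prompt: str) -> list[int]:
    """Map each lowercase word to its vocabulary id, or <unk> if absent."""
    return [VOCAB.get(word, UNK_ID) for word in prompt.lower().split()]

print(tokenize("a satyr in the forest"))  # → [1, 2, 3, 4, 5] (all known)
print(tokenize("a frobnicated satyr"))    # → [1, 0, 2] ("frobnicated" → <unk>)
```

Under this toy model, a prompt full of out-of-vocabulary words degenerates into a run of identical `<unk>` ids, which is the failure mode the comment describes.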
I have been hacking at this for two years and run modified code with CLIP to have even more fun with it. Conventional prompting is idiotic. Most LoRAs are equally idiotic and terribly trained, and even these are run incorrectly. It is all done by people guessing and following some early academic examples that were not understood at all by the people who shared what they hacked together in a day. None of this is correct or what was intended.

The intuitive path of plain-text interaction was the intended path. Explore it and things will be revealed naturally over time. Question everything, because most people are idiots and wrong in most spaces in life. Dogma is humanity's dumbest trait.