[-] Koto@lemmy.world 1 points 3 days ago

Most browsers have a grammar check option. You need to look into your browser settings and enable it.

[-] Koto@lemmy.world 3 points 3 days ago* (last edited 3 days ago)

It's not the model, I think. I tried it in my ComfyUI with Chroma v48, and it barely ever gives me extra limbs. It could be the resolution: the model was trained on 1024x1024 images, and the resolutions offered on Perchance don't match that. Whatever Perchance does to compensate, though (the workflow, LoRAs, or some sampler/scheduler combination), the image quality is great. I've tried it all with the latest Chroma and couldn't replicate it. It's amazing here on Perchance.

6
submitted 1 month ago by Koto@lemmy.world to c/perchance@lemmy.world

https://perchance.org/mytestgen#edit In this generator, I have a suit list that consists of 5 areas: mask, cape, boots, etc. The other two lists are "camera" and "actor". The camera can zoom in on a certain body part, and I need the output to include only that section of clothing, leaving everything else out of frame. That part I've solved. But there's a third dimension to it: the actor. Some characters can't wear certain clothes because of their characteristics; a mermaid, for example, can't wear bottom clothes. Please help me add that third dimension to the output line so it filters properly.
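
A minimal sketch of one way the actor restriction could be encoded, leaving the camera part aside (the actor names, the banned-areas mapping and the variable names are all hypothetical; the real lists in the generator would differ):

output
  [a = actor.selectOne, banned = {"mermaid": ["boots", "bottom"]}, worn = suit.selectAll.filter(s => !(banned["" + a] || []).includes("" + s)), "" + a + " wearing " + worn.join(", ")]

The idea is that the square-bracket block looks up the selected actor in a small JS object of forbidden areas and filters those areas out of the suit selection before joining the rest into the output text.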

1
submitted 1 month ago by Koto@lemmy.world to c/perchance@lemmy.world

Testing here: https://perchance.org/z7l103ul1q#edit I have quite a big list of variables that cause errors if they're undefined or null. So I wanted to run a FOR loop through all of them at the very beginning, but Perchance functions always ask me to return something. How do I do it?
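
A minimal sketch of one way this might work, assuming the variables live on window and the names below are placeholders: the function loops over the names, fills in any that are missing, and returns an empty string so nothing gets printed.

initVars() => ["pose", "outfit", "mood"].forEach(v => { if (window[v] == null) window[v] = "" }) || ""

Placing [initVars()] once at the top of the HTML panel would then run the loop and render nothing, since the function's return value is an empty string.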

[-] Koto@lemmy.world 3 points 1 month ago

Could you provide a link to your generator, at least? For select elements, the id should always be different; if you want to use the same list twice, you just change the id. But actually seeing what you're talking about would be a start.
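
For example (a sketch with made-up ids and options, just to show the same options used twice under different ids):

<select id="poseLeft"><option>standing</option><option>sitting</option></select>
<select id="poseRight"><option>standing</option><option>sitting</option></select>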

2
Text-to-image logic (lemmy.world)
submitted 1 month ago by Koto@lemmy.world to c/perchance@lemmy.world

Recently, the text-to-image plugin has changed a lot. With all due respect to Perchance, I'd like to share my humble opinion about it. Take a horse in a medieval setting, for example: 2 pictures out of 5 have blurred/distorted faces. It's not a famous face or nudity, but it's still getting heavily censored for some reason. I understand the need for censorship and such, but could we focus more on the end user not seeing what they don't want to see in the gallery section, rather than messing with the t2i logic? It's fake parts in, fake parts out; at the end of the day no real person is involved. But it's messing with normal image generation, which kind of defeats the whole purpose of the plugin and hinders the imagination flow.

[-] Koto@lemmy.world 3 points 1 month ago
<input type="text" id="shownHere">
<select id="userChoosesHere" onchange="document.getElementById('shownHere').value=this.value;">
  <!-- your option elements go here -->
</select>

Something like this?

2
submitted 1 month ago by Koto@lemmy.world to c/perchance@lemmy.world

https://perchance.org/tz4t341n8c#edit In this test generator, in the center, I have a select form which is just a pose list put into options. I have a lot of them in my main generator, and I want to make them a little more user-friendly and interactive, also bypassing the language barrier, so I want to add pictures to the options instead of (or in addition to) the words. That's what I did in the poseRadio list: I substituted the radio labels' backgrounds with pictures using CSS styles. I'm planning to have a second list of picture URLs for each list, or I could create an array of URLs in the HTML part. Ideally, it should look like a dropdown with a picture and then the list item name next to it, and when selected, only the item name should be stored in the variable, without the picture. I'm not sure how to do that yet. I need your help combining select and input radio in some way, thanks!
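
A rough sketch of the radio approach, with the image shown inside the label via an <img> tag instead of a CSS background (the pose names, image URLs and the chosenPose variable are all placeholders): the label shows a picture plus the name, but the radio's value, which is what gets stored, is only the name.

<div id="posePicker">
  <label><input type="radio" name="pose" value="standing" onchange="window.chosenPose = this.value"><img src="standing.png" width="64"> standing</label>
  <label><input type="radio" name="pose" value="sitting" onchange="window.chosenPose = this.value"><img src="sitting.png" width="64"> sitting</label>
</div>
<!-- window.chosenPose ends up holding just the name, e.g. "sitting", with no image URL attached -->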

[-] Koto@lemmy.world 3 points 1 month ago

That's what CEOs of major AI companies like OpenAI say on YouTube and X. All but one of them are optimistic about continuous-learning AIs by mid-2026, and the remaining one says "within 5 years".

[-] Koto@lemmy.world 2 points 1 month ago

The model is trained by the devs. Continuous learning from user interaction is a thing for 2026, I think, maybe 2027.

[-] Koto@lemmy.world 2 points 1 month ago

You could try my generator: https://perchance.org/imageconstructor. Choose as many details for a character from the menus as you want, type extra details into the prompt if needed, then go to Tools -> Save. You can always return to that character via Actor -> Main character -> Saved. Then you can use the Pose menu to choose camera angles, poses and facial expressions for that character, and the Scenes menu for place, light, weather and background/vibe. There's little consistency in the faces with the new AI model, but if you provide enough details and choose to generate many images at once (in the Style menu), one of them will look like the one you had in mind. You can also narrow the variety of faces and styles it generates by choosing a famous artist in the Styles menu and a sampler in Tools. Small disclaimer: we only recently got the new model and the generator is in early beta, so I'm open to any suggestions.

[-] Koto@lemmy.world 3 points 1 month ago

I don't have inside information on that, but I'll share my experience. I think all of the artists are recognized (there's plenty of information about them on the web), but the new model lacks consistency, whether because it's still training or because diversity is an intrinsic feature of the model. If you ask Flux for Eiichiro Oda, for example, and generate 3 images at once, one of them will be reminiscent of Oda's style and the other two will be something you could see anywhere else.

[-] Koto@lemmy.world 3 points 1 month ago

I asked this question before and got a response that we can't access elements inside the iframe. My workaround was to use "relative" and "absolute" positioning to place buttons on top of the generated images.
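
Roughly, the workaround looks like this (the wrapper, the button and the [output] placement are just placeholders for whatever your generator uses):

<div style="position: relative; display: inline-block;">
  [output]
  <button style="position: absolute; top: 8px; right: 8px;">♥</button>
</div>
<!-- the button sits on top of the image because it's absolutely positioned inside the relatively positioned wrapper -->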

2
submitted 2 months ago by Koto@lemmy.world to c/perchance@lemmy.world

Hi. I'm using a line like this [fileName="{0-9}{a-z}{A-Z}".selectMany(12).join("")] to create a set of numbers and letters at the beginning of the prompt. In the previous model I used to set its priority to 0, so the AI ignored it, but I noticed that the new model ignores ::, and these characters have started to appear on objects inside the images. Could I assign a filename in some other way?

4
submitted 2 months ago by Koto@lemmy.world to c/perchance@lemmy.world

Hi, I'm looking for a way for the AI to understand a hex color in the format #fff000, for example when I get it from an HTML color input. If you directly ask the Flux plugin for a #fff000 shirt, it uses a random color instead, so ideally I need to translate the hex code into words, if that's possible at all. Or perhaps there's another way.
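
One possible way to do the hex-to-words translation, as a sketch to drop into a <script> tag (or adapt into a Perchance function): pick the closest name from a palette by RGB distance. The palette here is deliberately tiny and the names are placeholders; a real one would need many more entries.

function hexToWords(hex) {
  // small sample palette; extend with as many named colors as you need
  const palette = { "bright yellow": [255, 240, 0], "red": [255, 0, 0], "blue": [0, 0, 255], "white": [255, 255, 255], "black": [0, 0, 0] };
  const n = parseInt(hex.replace("#", ""), 16);
  const r = (n >> 16) & 255, g = (n >> 8) & 255, b = n & 255;
  let best = "", bestDist = Infinity;
  for (const [name, [pr, pg, pb]] of Object.entries(palette)) {
    const d = (r - pr) ** 2 + (g - pg) ** 2 + (b - pb) ** 2; // squared distance in RGB space
    if (d < bestDist) { bestDist = d; best = name; }
  }
  return best;
}
// hexToWords("#fff000") -> "bright yellow"

The returned word (or phrase) could then be inserted into the prompt instead of the raw hex code.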

[-] Koto@lemmy.world 3 points 2 months ago

try adding "unfocused eyes" to the negative prompt

[-] Koto@lemmy.world 4 points 2 months ago

Try "camera" or "the first person" or "the viewer", works for me. Also, check if you have contradicting prompts, like "looking to the side/away" and then next line it's "looking at the camera".

3
submitted 2 months ago by Koto@lemmy.world to c/perchance@lemmy.world

I have a list of clothing items and a variable [gender] that contains a value, for example "♂️" for male. Some items on the clothing list also contain the emoji ♀️ or ♂️, something like "Batman suit♂️". I'm trying to exclude all items with ♀️ if gender == ♂️, and vice versa. I've tried clothingTop.selectAll.filter(newlist => newlist.tags.includes([gender])) and also match(/♀️/i), but I'm not sure how to proceed.
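
A sketch of one way the filter might look, assuming gender holds the allowed emoji and the emoji sits directly in each item's text (the list and variable names are from the post, the rest is hypothetical):

[clothingTop.selectAll.filter(item => !("" + item).includes(gender == "♂️" ? "♀️" : "♂️")).join(", ")]

This keeps every item except the ones containing the opposite gender's emoji, so untagged items stay available to both genders.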

3
submitted 2 months ago by Koto@lemmy.world to c/perchance@lemmy.world

Hi, since the new model came out I've struggled to create a middle-of-the-day atmosphere for my photos. "sunny", "very sunny", "the sun is in the zenith", "middle-of-the-day sunny lighting", "{bright|soft} natural day lighting" and many more all produce early morning or late afternoon close to sunset. I also tried using Joy Caption to describe my older photos, but its description of the lighting and atmosphere is quite vague and doesn't help solve the problem. Any help, please?

5
submitted 2 months ago by Koto@lemmy.world to c/perchance@lemmy.world

Hello. I'm using something like this ["{red, blue, green, orange, lime}".selectMany(3).joinItems(", ")]. It works well enough, but when I try to implement uniqueSelect or consumableList it fails. These functions seem to only work on lists, but is there any way to select items from a string like that without repetition? Thanks!
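
One possible workaround (a sketch that skips the list plugins entirely and just shuffles a plain JS array inside the square brackets; the color names are from the post, the rest is hypothetical):

[["red", "blue", "green", "orange", "lime"].sort(() => Math.random() - 0.5).slice(0, 3).join(", ")]

Since slice takes the first 3 elements of the shuffled array, repeats are impossible. The sort trick is a quick-and-dirty shuffle; a proper Fisher-Yates shuffle would be more uniform if that matters.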

[-] Koto@lemmy.world 4 points 2 months ago

I'm in favor of the new AI model because it generally tries to follow your description, but I agree that this insanity with lip contouring is ruining it. I've tried negative and positive prompts with hundreds of variations of "lip contouring", "lip surgery", "natural lips", "thin lips" etc., and the same for brows. It's ugly in real life and it's ugly when the AI tries to copy it, but unfortunately I couldn't get rid of it completely. The only hope is that the model is still learning and we'll get more variety with time. If someone has found a solution, please make a post about it.

1
submitted 3 months ago by Koto@lemmy.world to c/perchance@lemmy.world

Using the text-to-image plugin, I generate a bunch of [output]s in a container. How do I access the tag that the output object creates, for example if I want to get its title to read the nseed from it using JavaScript? Or is there an easier way to do that? I'm trying to put a button over the image to make perfecting that seed easier. Also, how do I access the button with id "heartBtn" that [output] creates?

