It is possible (and quite standard) to add additional training data to a model after deployment. It takes a long time and a considerable amount of GPU power. It is also the only way we are likely to get good celebrity support again, because the vast majority of models now ship without celebrity data sets in their training.
First, you will need a powerful PC and GPU. AI is resource hungry and takes a lot of processing power to produce good results. This is especially true if you want it to "remember" things.
You immediately lost your argument when you stated that you are using the exact same prompts. The notes about the new model clearly state that you will need to adjust your prompts for it. There have also been numerous examples, both here and in the Discord server, of people getting extremely consistent results simply by prompting correctly. The old model isn't coming back; get over it. Adapt and learn to use the new model, or at least put as much effort into learning it as you have put into complaining.
Refer to these notes if you want some more info.
What you are referring to is known as "training". Training is the process of showing the AI model thousands upon thousands of images. These images come from curated datasets. Each image is accompanied by some text that describes in clear terms exactly what is in each image. The AI model analyzes each image, compares the image to the text, then stores the data for later use. The more images and text, the more the model will understand. The more the model understands, the better it will be able to use the data it has.
Currently, the model is in training. As training continues you will see the image generator steadily improve until it understands what the old model did.
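To make the image–caption pairing above concrete, here's a purely illustrative toy loop (the file names, captions, and "model" object are all made up for this sketch — a real trainer adjusts millions of neural-network weights rather than just counting pairs):

```javascript
// Conceptual sketch only: each dataset entry pairs an image with text
// describing, in clear terms, exactly what is in that image.
const dataset = [
  { image: "photo_0001.jpg", caption: "a red apple on a wooden table" },
  { image: "photo_0002.jpg", caption: "a tabby cat sleeping on a windowsill" },
];

// Toy stand-in for a training step: a real step compares the model's
// understanding of the image against the caption and nudges its weights.
function trainStep(model, { image, caption }) {
  model.seen += 1;
  model.pairs.push([image, caption]);
  return model;
}

const model = { seen: 0, pairs: [] };
for (const pair of dataset) trainStep(model, pair);
```

The more image–text pairs the loop feeds through, the more material the model has to draw on later — which is why training on a large dataset takes so long.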
I'm willing to bet the training data for those tags simply hasn't completed and/or hasn't started yet.
Quote from the notes you can find by clicking the little link that appears when you are generating an image:
This new image model is still training. It's currently worse than the previous one at some things. It'll steadily improve in quality and gain new knowledge over the next few months.
MONTHS
Training takes a long time and a lot of GPU power on the hardware this model runs on (part of the reason for the speed issues). It is getting better as more training completes, but please be patient.
The site? Image gen is just a plugin on the site. It's a site for creating generators, which are small programs and pages for creating various randomized content. Image gen is a plugin that can be imported into a generator.
I don't understand how changing one plugin has ruined the whole site.
What's wrong? Did something with generator creation break? I'll admit, I haven't been here very long, but it all seems pretty cool to me.
From what I can gather, the dev doesn't like interacting very much. This is something that they do in their spare time as a full-time dev. They created the perchance platform first and foremost as a way for people to learn code and web development. The AI was always a side feature.
There has also been a lot of grumbling in the perchance community (here, in the comments of various generators, on Reddit, and on the Discord server) about the quality of the previous image generator (plus plenty of feedback via the feedback tool, I'm sure), so it isn't really a surprise that they finally made the switch to a new model after stating in more than one post here in this forum that they planned to.
In addition, I didn't see anyone (before the change in model) praising the old one... only complaints about the quality, resolution, coherence, etc. Now that a new model is in place, suddenly many people have lots of good things to say about the old one.
Bottom line, this is a completely free, uncensored, private, and unlimited service that the dev was kind enough to gift to us. I'm certainly not going to complain when I don't even know what the final outcome is going to be yet.
You know, a lot of the trouble could have been avoided if people would simply read the notes about the change and have a little patience. It says right in there that the model is still training, so it literally doesn't have all the data needed to make images similar to the old one, yet. It's starting at base settings and parameters, so expect a lot of strange variance and "sameness".
For those wondering why we can't have the old one as an option still, the answer is simple: memory (RAM). These image models don't spin up instantly, so they need to remain loaded in memory in order to field requests. To have the old one running at the same time would mean effectively doubling the resources required when the dev is already running on a shoestring budget supported entirely by ad revenue. It's either one running or the other. In order to train the new model, it must be running, which means the old one can't run at the same time.
I would assume it will be as soon as they are able. Last I heard, they were having some issues getting costs down and proper optimizations in place for the text-to-image model. They also said that it could be ready as soon as April, assuming everything went smoothly with the text-to-image upgrade.
Essentially, sit tight. It will be done when it's done.
Weren't you the one that made the post that this is in response to?
If I remember correctly, your generator is inside an iframe, and pretty heavily isolated. Most likely, the mouse gesture is being caught by the main outer document and not passed down to the inner iframe. It should be possible to tie into the window object and manually forward the events to the inner frame, but I'm not yet knowledgeable enough in JavaScript to give a definitive answer.
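A rough sketch of what that forwarding could look like, assuming the outer page can reach the iframe element (the selector and field names here are placeholders, not the generator's actual structure):

```javascript
// Pick out only the serializable fields of a MouseEvent-like object,
// since a live DOM event can't be posted across the frame boundary.
function toGesturePayload(ev) {
  return { type: ev.type, x: ev.clientX, y: ev.clientY, buttons: ev.buttons };
}

// In the outer document: listen on window and relay into the iframe.
// "frame" would be something like document.querySelector("iframe").
function relayMouseEvents(frame) {
  for (const type of ["mousedown", "mousemove", "mouseup"]) {
    window.addEventListener(type, (ev) => {
      frame.contentWindow.postMessage(toGesturePayload(ev), "*");
    });
  }
}

// Inside the iframe, a listener would reconstruct the gesture, e.g.:
// window.addEventListener("message", (ev) => handleGesture(ev.data));
```

Note the "*" target origin is the lazy option; if the iframe's origin is known, passing it instead is safer.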