I've been wanting to try it out considering it has unified memory. What model are you using and what are you running it with? I would be thinking something like a small Qwen on llama.cpp.
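For reference, here's a minimal sketch of what "a small Qwen on llama.cpp" could look like through the llama-cpp-python bindings. The model filename, quantization, and settings are assumptions for illustration, not anything confirmed in this thread:

```python
# Minimal sketch: loading a small quantized Qwen GGUF with llama-cpp-python.
# The filename and parameter values below are placeholder assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3-4b-q4_k_m.gguf",  # hypothetical quantized model file
    n_ctx=4096,        # context window; unified memory makes larger values more viable
    n_gpu_layers=-1,   # offload all layers to the GPU where supported
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```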
I have an OLED, so, slightly better specs in a few fairly minor ways than the LCD model.
I am using Bazzite, managing the LLMs inside of Alpaca, which is a flatpak, so it works easily with Bazzite's 'containerize everything' approach.
And uh yep, I'm running Qwen3, the... I think it's the 8B param variant.
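Since Alpaca is a frontend over an Ollama instance, talking to that model programmatically might look like the sketch below. This assumes the instance is reachable on Ollama's default port (11434), which may or may not hold given the flatpak's sandboxing, and assumes the standard `qwen3:8b` model tag:

```python
# Rough sketch: querying Qwen3 8B through the Ollama HTTP API that Alpaca
# builds on. Port and model tag are assumptions about this specific setup.
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "qwen3:8b",  # assumed 8B-parameter Qwen3 tag
        "messages": [{"role": "user", "content": "What hardware are you running on?"}],
        "stream": False,      # return one JSON response instead of a stream
    },
)
print(resp.json()["message"]["content"])
```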
I actually told it its HW and SW environment, told it to generate a context prompt so it just always knows that, then asked it to optimize its own settings... and it did come up with settings that make it run either a bit better in general, or in alternate sorts of modes... I just made a 'Focused' variant and a 'Contemplative' variant, the first one for mechanistic, step 1 2 3 type thinking, the second one for larger conceptualization questions.
Though I think I need to tweak the contemplative variant to be a biiiit less imaginative; it tends to hallucinate and contradict itself a bit too much.
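A hedged sketch of that two-variant idea: the same base model with different system prompts and sampling settings. The prompts and temperature values here are illustrative guesses, not the poster's actual settings; a lower temperature on the contemplative variant is one plausible way to rein in the hallucination mentioned above:

```python
# Sketch: two named variants of the same model, differing only in system
# prompt and sampling options. All values below are illustrative.
import requests

VARIANTS = {
    "Focused": {
        "system": "You are a precise assistant. Answer in numbered steps.",
        "options": {"temperature": 0.3},  # low temperature for mechanistic answers
    },
    "Contemplative": {
        "system": "You are a reflective assistant for big-picture questions.",
        "options": {"temperature": 0.7},  # kept moderate to curb hallucination
    },
}

def ask(variant: str, question: str) -> str:
    cfg = VARIANTS[variant]
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "qwen3:8b",
            "messages": [
                {"role": "system", "content": cfg["system"]},
                {"role": "user", "content": question},
            ],
            "options": cfg["options"],
            "stream": False,
        },
    )
    return resp.json()["message"]["content"]

print(ask("Focused", "Outline the steps to export a Godot project."))
```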
I've also been able to, like, tell it to read an updated website with more modern syntax for GDScript, tell it to make itself a context prompt about it, and then it roughly just 'knows' that... I think the training data is 1 to 2 years out of date now, so occasional little patchwork fixes like that seem to work?
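One way that 'patch in newer docs' trick could be wired up by hand is below: fetch a page covering current GDScript syntax and fold it into the system prompt. The URL is an assumption (the official Godot docs seem a natural choice), and a real version would want HTML-to-text cleanup rather than the crude truncation shown here:

```python
# Sketch: injecting an up-to-date reference page into the system prompt so
# the model can answer against syntax newer than its training data.
# The URL is an assumed example; raw HTML is truncated crudely to fit context.
import requests

DOCS_URL = "https://docs.godotengine.org/en/stable/tutorials/scripting/gdscript/gdscript_basics.html"

page = requests.get(DOCS_URL, timeout=30).text[:8000]  # crude cut to fit the context window

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "qwen3:8b",
        "messages": [
            {"role": "system",
             "content": "Reference for current GDScript syntax:\n" + page},
            {"role": "user",
             "content": "How do I declare a typed variable in current GDScript?"},
        ],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])
```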