[-] Smorty 4 points 8 hours ago
[-] Smorty 1 points 11 hours ago

why u pinging people?...

[-] Smorty 3 points 17 hours ago

nononono! u kno i don mean it like that!!! >~<

[-] Smorty 10 points 2 days ago

yes yes! please be so kind to share your loafs!

thank u, i just got hungry. but please don't give me one if someone else needs it more, i just saw you very kindly sharing ur loads with the people and i wanted to tell u how nice, mindful and human that was of u.

i wish to become like u someday

[-] Smorty 6 points 2 days ago

hello hello say, do you think more abstractly or more concretely?

i jus ask cuz i read "thought" in ur bio and went WOAH i HAVE to ask this question. Because i really liked ur pfp and name i went hmm i wonder if they would be okay if i were to ask this thing and then i thought eh and posted this. That's what u call concrete thinking apparently. But u probably already know this because u seem very reasonable and intelligent.

[-] Smorty 6 points 2 days ago

they got glasses on? thats what im thinking

[-] Smorty 6 points 2 days ago

this feels very adult. maybe mark it as adult content

[-] Smorty 22 points 2 days ago

very much agree! <3 but also >;(

[-] Smorty 29 points 2 days ago

WHY ARE U MOVING INSTANCE?

[-] Smorty 55 points 2 days ago

if i could downvote, i would! >>>>>>:((((((((

this is a VERY UNCOMFY move!!! >>>>:((((

just to make it obvious, these are ANGRY FACES!!! ~~(im saying this cuz maybe 196 mods or whatever cannot understand these kinds of emotions)~~

">" are the ANGRY EYEBROWS! ":" are the EYES and "(" is the SAD MOUTH!!!

I AM VERY SAD! i will NOT lurk on ur evil .world "replacement".

this is NOT what the FEDIVERSE IS MEANT TO BE!!!!

the fediverse is meant to be DECENTRALIZED!!! across MULTIPLE INSTANCES!

this is NOT a "very fediverse move" you are doing here. This is a "reddit move"!

im gonna say it. this is straight up u/spez stuff happening here.

i HOPE that my comment gets blocked or removed by my instance maintainers, so that it becomes OBVIOUS that such hatespeech, which is allowed on .world is NOT OKAY!

(yes i mean my hatespeech rn, im in evil mode today, this is very evil, i dun like this at all. will not support this evil behavior)

[-] Smorty 32 points 2 days ago

big sad feels ;(

i will not join the 196 on the not blahaj zone! GRRRR >:3

i wan the 196 to be all local and comfy. i wan it to feel like home, with all u cuties and non-cuties-but-still-amazing-peeps in it!

i don wan the 196 to be on generic world instance. will join if there be a local 196 blahaj version!

imagine the COMFY FEELS knowing that U PEEPS made the funi im looking at!! <3

[-] Smorty 34 points 3 days ago

noooooooo

big sad day has happened. i liked the local and smol feel, but fine....

3
submitted 1 week ago by Smorty to c/fosai@lemmy.world

When an LLM calls a tool, the tool usually returns some sort of value, usually a string containing some info like ["Tell the user that you generated an image", "Search query results: [...]"].
How do you tell the LLM the output of the tool call?

I know that some models like llama3.1 have a built-in tool "role", which lets u feed the model with the result, but not all models have that. Especially non-tool-tuned models don't have that. So let's find a different approach!

Approaches

Appending the result to the LLM's message and letting it continue generating

Let's say, for example, a non-tool-tuned model decides to use the web_search tool. Now some code runs it and returns an array with info. How do I inform the model? Do I just put the info after the user prompt? This is how I do it right now:

  • System: you have access to tools [...] Use this format [...]
  • User: look up todays weather in new york
  • LLM: Okay, let me run a search query
    <tool>{"name":"web_search", "args":{"query":"weather in new york today"} }</tool>
    <result>Search results: ["The temperature is 19° Celsius"]</result>
    Todays temperature in new york is 19° Celsius.

Where everything in the <result> tags is added on programmatically. The message after the <result> tags is generated again. So everything within tags is not shown to the user, but the rest is. I like this way of doing it, but it does feel weird to insert stuff into the LLM's generation like that.
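In code, this insert-into-assistant trick can be sketched like so. This is a minimal sketch: `generate` stands in for whatever inference call you actually use (here passed in as a function taking a prompt and a stop sequence), and `chat_with_tools` is a made-up helper name, not a real library API.

```python
import json
import re

def chat_with_tools(generate, tools, prompt):
    """One assistant turn, splicing tool results into the generation.

    `generate(text, stop)` is whatever inference call you use (hypothetical
    signature); `tools` maps tool names to plain Python callables.
    """
    # First pass: sample until the model closes a <tool> tag (or just finishes).
    text = generate(prompt, stop="</tool>")
    match = re.search(r"<tool>(.*)", text, re.DOTALL)
    if match is None:
        return text  # no tool call, plain answer
    # Parse the call the model wrote and run the matching Python function.
    call = json.loads(match.group(1))
    result = tools[call["name"]](**call["args"])
    # Append the result inside the assistant's own message, then let the
    # model continue generating after the closed </result> tag.
    text += f"</tool><result>{result}</result>\n"
    text += generate(prompt + text, stop=None)
    return text
```

The key point is the second `generate` call: the model resumes its own message with the tool result already in context, so no extra chat role is needed.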

Here's the system prompt I use

You have access to these tools
{
"web_search":{
"description":"Performs a web search and returns the results",
"args":[{"name":"query", "type":"str", "description":"the query to search for online"}]
},
"run_code":{
"description":"Executes the provided python code and returns the results",
"args":[{"name":"code", "type":"str", "description":"The code to be executed"}],
"triggers":["run some code which...", "calculate using python"]
}
}
ONLY use tools when the user specifically requests it. Tools work with the <tool> tag. Write an example output of what the result of the tool call looks like in <result> tags.
Use tools like this:

User: Hey can you calculate the square root of 9?
You: I will run python code to calculate the root!\n<tool>{"name":"run_code", "args":{"code":"print(str(sqrt(9.0)))"}}</tool><result>3</result>\nThe square root of 9 is 3.

User can't read the result, you must tell her what the result is after the <result> tag is closed
Appending tool result to user message

Sometimes I opt for an option where the LLM has a multi-step decision process about the tool calling, then it optionally actually calls a tool, and then the result is appended to the original user message, without a trace of the actual tool call:

What is the weather like in new york?
<tool_call_info>
You automatically ran a search query, these are the results
[some results here]
Answer the message using these results as the source.
</tool_call_info>

This works but it feels like a hacky way to a solution which should be obvious.
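This second approach can be sketched in a few lines (the tag name and wording follow the example above; `augment_user_message` is just a hypothetical helper name):

```python
def augment_user_message(user_msg: str, search_results: str) -> str:
    # Append the tool output to the user's message inside <tool_call_info>
    # tags, so the model sees it as part of the question, not as a tool call.
    return (
        f"{user_msg}\n"
        "<tool_call_info>\n"
        "You automatically ran a search query, these are the results\n"
        f"{search_results}\n"
        "Answer the message using these results as the source.\n"
        "</tool_call_info>"
    )
```

The chat history then only ever contains plain user and assistant turns, which is why this plays nicely with models that have no tool role at all.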

The lazy option: Custom Chat format

Orrrr u just use a custom chat format. ditch <|endoftext|> as your stop keyword and embrace your new best friend: "\nUser: "!
So, the chat template goes something like this

User: blablabla hey can u help me with this
Assistant Thought: Hmm maybe I should call a tool? Hmm let me think step by step. Hmm i think the user wants me to do a thing. Hmm so i should call a tool. Hmm
Tool: {"name":"some_tool_name", "args":[u get the idea]}
Result: {some results here}
Assistant: blablabla here is what i found
User: blablabla wow u are so great thanks ai
Assistant Thought: Hmm the user talks to me. Hmm I should probably reply. Hmm yes I will just reply. No tool needed
Assistant: yesyes of course, i am super smart and will delete humanity some day, yesyes
[...]

Again, this works, but it generally results in worse performance, since current instruction-tuned LLMs are, well, tuned on a specific chat template, and this isn't it. It also requires multi-shot prompting to show the model how this new template works, and it may still generate some unwanted roles: Assistant Action: Walks out of compute center and enjoys life which can be funi, but is unwanted.
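For completeness, the plumbing around such a custom format can be sketched like this (hypothetical helper names; the role list matches the template above):

```python
# Allowed roles in the custom template; "\nUser: " acts as the stop sequence
# instead of the model's own end-of-text token.
ROLES = ("User", "Assistant Thought", "Tool", "Result", "Assistant")

def render(turns):
    # Flatten (role, text) pairs into the custom chat format.
    return "\n".join(f"{role}: {text}" for role, text in turns) + "\n"

def next_role(generated_line):
    # Reject hallucinated roles like "Assistant Action: ..." by checking the
    # prefix of each generated line against the allowed role names.
    role = generated_line.split(":", 1)[0]
    return role if role in ROLES else None
```

A `next_role` check like this is one way to catch the "Assistant Action" problem: lines with an unknown role prefix can be dropped or regenerated.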

Conclusion

Eh, I just append the result to the user message with some tags and am done with it.
It's super easy to implement, but I also really like the insert-into-assistant approach, since it then naturally uses tools in an in-chat way, maybe being able to call multiple tools in succession, in an almost agent-like way.

But YOU! Tell me how you approach this problem! Maybe you have come up with a better approach, maybe even while reading this post here.

Please share your thoughts, so we can all have a good CoT about it.

66
submitted 1 week ago by Smorty to c/wlw_memes@lemmy.world

Very much a reaction post to this very nice post, but this time without the spicy but instead with the comfy ~

89
submitted 1 week ago* (last edited 1 week ago) by Smorty to c/hardware@lemmy.ml

many people seem to be excited about nVidia's new line of GPUs, which is reasonable, since at CES they really made it seem like these new bois are insane for their price.

Jensen (the CEO guy) said that with the power of AI, the 5070, at a price of sub $600, is in the same class as the 4090, which sits at an over $1500 price point.

Here's my idea: They talk a lot about upscaling, generating frames and pixels and so on. I think what they mean by both having similar performance is that the 4090 with no AI upscaling and such achieves similar performance to the 5070 with DLSS and whatever else.

So yes, for pure "gaming" performance, with games that support it, the GPU will have the same performance. But there will be artifacts.

For ANYTHING besides these "gaming" usecases, it will probably be closer to the 4080 or whatever (idk GPU naming..).

So if you care about inference, blender or literally anything not-gaming: you probably shouldn't care about this.

i'm totally up for counter arguments. maybe i'm missing something here, maybe i'm being a dumdum <3

imma wait for amd to announce their stuffs and just get the top one, for the open drivers. not an nvidia person myself, but their research seems spicy. currently still slobbing along with a 1060 6GB

90
maybe we are the same <3 (lemmy.blahaj.zone)
submitted 2 weeks ago* (last edited 2 weeks ago) by Smorty to c/femcelmemes

i am very open to conversations ~

EDIT: it seems many people don't know what a "system prompt" is. that's understandable and totally normal <3

here's a short explanation (CW: ai shid, but written by me): The system prompt is what tells an LLM (Large Language Model) like ChatGPT and Llama how to behave, what to believe and what to expect from the user. So "rewriting people's system prompts" means: overriding people's views of me.

with this context, the ~funi~ ~pic~ should be more understandable, where the two text boxes represent people's "system prompts" before and after my potential transition.

feel free to ask stuff in the comments or message me. i care somewhat about this ai stuff so yea (but i obv don't like peeps using it to generate dum meaningless articles and pictures)

46
submitted 2 weeks ago* (last edited 2 weeks ago) by Smorty to c/godot@programming.dev

i wanted to add a cool background to some game i'm making for a friend, so i had a lil fun with glsl and made this.

i uploaded a short screen recording of all the parameters, but apparently catbox doesn't work well with that anymore. So I replaced it with this image.

here is the code, use it however u want <3

shader_type canvas_item;

uniform sampler2D gradient : source_color;

uniform vec2 frequency = vec2(10.0, 10.0);

uniform float amplitude = 1.0;

uniform vec2 offset;

// 0.0 = zigzag; 1.0 = sin wave
uniform float smoothness : hint_range(0.0, 1.0) = 0.0;

float zig_zag(float value){
	// Triangle wave with period 2: ramps up on even intervals, down on odd ones.
	float is_odd = step(0.5, fract(value * 0.5));
	float rising = fract(value);
	float falling = (rising - 1.0) * -1.0;
	
	float result = mix(rising, falling, is_odd);
	return result;
}

float smoothzag(float value, float _smoothness){
	// Blend between the triangle wave and a cosine wave of the same period.
	float z = zig_zag(value);
	float s = (cos((value + 1.0) * PI)) * 0.5 + 0.5;
	return mix(z, s, _smoothness);
}

void fragment() {
	float sinus = zig_zag(((UV.y*2.0-1.0) + smoothzag((UV.x*2.0-1.0) * frequency.x - offset.x, smoothness) * amplitude * 0.1) * frequency.y - offset.y);
	COLOR = texture(gradient, vec2(sinus, 0.0));
}

if u have any questions, ask right away!

137
just like her ~<3 (lemmy.blahaj.zone)
submitted 1 month ago* (last edited 1 month ago) by Smorty to c/femcelmemes

omygosh fluttershy's so sweet!!!! <3 <3 <3 <3 <3

EDIT: Edited me to her

ANOTHER EDIT: hmm i think i was being too hyperaggressive with this post, i really didn't wanna make anyone pity me, i'm so sorry. i will not delete this post, however i will try and refrain from creating posts similar to this one in the future. i was not trying to be pitied. i was trying to post a thing where people say

woah, so relatable!! no way, you got it right on! nice shot!

but it ended up pulling peeps who are super nice, which is gud, but also made me unreasonably comfy.
if this post were made by someone else, i would totally join y'all and i'd comment

you are literally trolling, u seem super comfycozy an i really hope u find a someone peep who can have the sit-and-drink-tea with u <3 <3 <3

but i cannot with this one! cuz this is my own post! aaaaaaa

13
submitted 1 month ago by Smorty to c/fosai@lemmy.world

I see ads for paid prompting courses a bunch. I recommend having a look at this guide page first. It also has some other info about LLMs.

96
Would u like to be my fren? (lemmy.blahaj.zone)
submitted 1 month ago by Smorty to c/femcelmemes

My curse

Here's my GitLab (Ignore the name, I misspelled it)

My itch page with my two previous game jams

If possible, contact me on Matrix @smorty:catgirl.cloud
If you don't have that, direct messages are also fine <3

In recent months I have found out that I cannot handle "simple" Matrix chats, and now I am hoping that chatting + playing might be better!

I'm really not much of a gamer, so I don't know my way around club penguin and Veloren that much yet. But I know that Veloren is good (which, btw, is not Valorant or whatever that shooty game is called). This is me trying to be hip and cool by playing video games

I accidentally posted this on the wrong community first, sorrryy!!

12
submitted 1 month ago by Smorty to c/mylittlepony@lemmy.world

Nothing to add really. I enjoy the visuals and the fluffy vibes the show gives off ~
Have never really watched it though...

Is there a good place to start, or does it not matter?

24
submitted 2 months ago by Smorty to c/godot@programming.dev

I have heard many times that if statements in shaders slow down the gpu massively. But I also heard that texture samples are very expensive.

Which one is more tolerable? Which one is less impactful?

I am asking because I need to decide whether I should multiply a value by 0 or use an if statement.

14
submitted 2 months ago* (last edited 2 months ago) by Smorty to c/main

Hii <3 ~

I have not worked with the whole ActivityPub thing yet, so I'd really like to know how the whole thing kinda works.

As far as i can tell, it allows you to use different parts of the fediverse by using just one account.

Does it also allow you to create posts on other platforms, or just comments?

I have tried to log in with my smorty@lemmy.blahaj.zone account, and it redirected me correctly to the blahaj lemmy site.

Then I entered my lemmy credentials and logged in. I got logged in to my lemmy correctly, but I also got an error just saying "Could not be accessed" in German.

And now I am still not logged in in PeerTube.

Sooo does blahaj zone support the ActivityPub, or is this incorrectly "kinda working" but not really?

270
submitted 2 months ago by Smorty to c/asklemmy@lemmy.ml

I have fully transitioned to using Lemmy and Mastodon right when third party apps weren't allowed on Spez's place anymore, so I don't know how it is over there anymore.

What do you use? Are you still switching between the two, essentially dualbooting?

What other social media do you use? How do you feel about Fediverse social media platforms in general?

(I'm sorry if I'm the 100th person to ask this on here...)


Smorty

joined 2 years ago