[-] vivendi@programming.dev 1 points 5 months ago* (last edited 5 months ago)

No, there is not. Borrow checking and RAII existed in C++ too, and there is no formal, axiomatic proof of their safety in the general sense, only to a very clearly defined degree.

In fact, someone found memory bugs in Rust, again, because it is NOT soundly memory safe.

Dart is soundly null-safe, meaning it can never, mathematically, compile null-unsafe code unless you explicitly say you're OK with it. Kotlin is merely null-safe, meaning it can still run into bullshit null conditions at runtime.

The same goes for Rust: don't let it lull you into a sense of security that doesn't exist.
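To make the Kotlin half concrete, a minimal sketch (plain Kotlin on the JVM; the property name is made up). Values crossing the Java interop boundary get "platform types" that the compiler simply trusts, so null slips through without any explicit opt-out; Dart's sound checker would refuse to compile the equivalent unless you wrote an explicit `!`:

```kotlin
fun main() {
    // System.getProperty is a Java API, so Kotlin sees its result as the
    // platform type String! and accepts this assignment with no warning,
    // even though Java is free to return null.
    val value: String = System.getProperty("some.unset.property")
    // Compiles cleanly; throws a NullPointerException at runtime when the
    // property is unset, despite the type claiming "never null".
    println(value.length)
}
```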

[-] vivendi@programming.dev 1 points 5 months ago

He is on Lemmy?

[-] vivendi@programming.dev 1 points 5 months ago

LMFAO, on my turf, if we need to constantly check some values, we use either a hook or a wrapped Stream.

Weak diss
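The "wrapped Stream" idea, roughly: expose the value as a stream and react to changes instead of polling it. A minimal Kotlin sketch using kotlinx.coroutines' StateFlow as the wrapper (names made up; same pattern as a Dart Stream):

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

fun main() = runBlocking {
    // The "wrapped" value: observers get the current value plus every update.
    val temperature = MutableStateFlow(20)

    // Instead of polling temperature in a loop, react to changes as they come.
    val watcher = launch {
        temperature.collect { println("temperature changed: $it") }
    }

    for (next in listOf(21, 25, 19)) {
        temperature.value = next
        delay(10) // give the collector a chance to observe each update
    }
    watcher.cancel()
}
```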

[-] vivendi@programming.dev 1 points 6 months ago

Where was this picture taken? Somehow this looks incredibly like Iran.

[-] vivendi@programming.dev 1 points 7 months ago

How tf did Canon brick them?

My shit is so ancient it has no idea what an "Internet" is. I'd like to see Canon touch this mfer lol

[-] vivendi@programming.dev 1 points 7 months ago

This is because autoregressive LLMs work on high-level "tokens" rather than characters or bytes. There are LLM experiments that can access byte-level information to correctly answer such questions.
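A toy sketch of what "working on tokens" means (plain Kotlin; the token split is made up, a real BPE vocabulary learns its own merges). The model only ever sees token IDs, so questions below the token level, like counting letters, need byte or character access it normally doesn't have:

```kotlin
fun main() {
    // Hypothetical BPE-style split of "strawberry"; real merges differ.
    val tokens = listOf("str", "aw", "berry")

    // Stand-in vocabulary IDs: this ID sequence is all the model actually sees.
    val ids = tokens.map { it.hashCode() }
    println("model input: $ids")

    // "How many r's in strawberry?" requires sub-token information:
    val rCount = tokens.joinToString("").count { it == 'r' }
    println("r appears $rCount times")
}
```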

Also, they don't want to support you, omegalul. Do you really think call centers are hired to give a fuck about you? This is intentional.

[-] vivendi@programming.dev 1 points 7 months ago

This particular graph exists because a lot of people freaked out over "AI draining the oceans"; that's why the original paper made it (I'll look for it when I have time, I have an exam tomorrow. Fucking higher ed, man).

[-] vivendi@programming.dev 1 points 7 months ago

You need to run the model yourself and heavily tune the inference, which is why you haven't heard of it: most people think using shitGPT is all there is to LLMs. And how many people even have the hardware to do that anyway?

I run my own local models with my own inference setup, which really helps. There are online communities you can join (won't link bcz Reddit) where you can learn how to do it too; no need to take my word for it.
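If you're curious what "heavily tune the inference" looks like, here's a minimal sketch in plain Kotlin of two of the usual sampling knobs, temperature and nucleus (top-p) sampling. The logits are made up; a real runtime such as llama.cpp exposes knobs like these as parameters per generation step:

```kotlin
import kotlin.math.exp
import kotlin.random.Random

// Temperature sharpens or flattens the token distribution; top-p restricts
// sampling to the smallest set of tokens whose probability mass reaches p.
fun sampleToken(logits: DoubleArray, temperature: Double, topP: Double, rng: Random): Int {
    // Softmax with temperature scaling (max subtracted for numerical stability)
    val scaled = logits.map { it / temperature }
    val maxLogit = scaled.maxOrNull()!!
    val exps = scaled.map { exp(it - maxLogit) }
    val total = exps.sum()
    val probs = exps.map { it / total }

    // Keep the most probable tokens until their combined mass reaches top-p
    val byProb = probs.indices.sortedByDescending { probs[it] }
    val nucleus = mutableListOf<Int>()
    var mass = 0.0
    for (i in byProb) {
        nucleus += i
        mass += probs[i]
        if (mass >= topP) break
    }

    // Draw one token from the renormalized nucleus
    var r = rng.nextDouble() * mass
    for (i in nucleus) {
        r -= probs[i]
        if (r <= 0.0) return i
    }
    return nucleus.last()
}

fun main() {
    val logits = doubleArrayOf(2.0, 1.0, 0.5, -1.0) // fake scores for 4 tokens
    val rng = Random(42)
    // Lower temperature and lower top-p both make the output more deterministic
    repeat(5) { println(sampleToken(logits, temperature = 0.7, topP = 0.9, rng = rng)) }
}
```

Real runtimes stack more of these (repetition penalties, min-p, and so on), which is the kind of tuning I mean.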

