[-] Perspectivist@feddit.uk 1 points 51 seconds ago

FUD has nothing to do with what this is about.

[-] Perspectivist@feddit.uk 1 points 6 minutes ago

If more money wouldn't change the way you spend your time, then you're already rich.

[-] Perspectivist@feddit.uk 9 points 4 hours ago

And nothing of value was lost.

Sure, if privacy is worth nothing to you, but I wouldn't speak for the rest of the UK and the EU.

[-] Perspectivist@feddit.uk 5 points 4 hours ago

My feed right now.

[-] Perspectivist@feddit.uk 1 points 5 hours ago

No disagreement there. While Trump himself may or may not be guilty of any wrongdoing in this particular case, he sure acts like someone who is. And if he’s not protecting himself, then he’s protecting other powerful people around him who may have dirt on him, which they can use as leverage to stop him from throwing them under the bus without taking himself down in the process.

But that’s a bit beside the point. My original argument was about refraining from accusing him of being a child rapist on insufficient evidence, no matter how much it might serve someone’s political agenda or how satisfying it might feel to finally see him face consequences. If there’s undeniable proof that he is guilty of what he’s being accused of here, then by all means he should be prosecuted. But I’m advocating for due process. These are extremely serious accusations that should not be spread as facts when there’s no way to know - no matter who we’re talking about.

[-] Perspectivist@feddit.uk 2 points 7 hours ago

It’s actually the opposite of a very specific definition - it’s an extremely broad one. “AI” is the parent category that contains all the different subcategories, from the chess opponent on an old Atari console all the way up to a hypothetical Artificial Superintelligence, even though those systems couldn’t be more different from one another.

[-] Perspectivist@feddit.uk 1 points 7 hours ago

It’s a system designed to generate natural-sounding language, not to provide factual information. Complaining that it sometimes gets facts wrong is like saying a calculator is “stupid” because it can’t write text. How could it? That was never what it was built for. You’re expecting general intelligence from a narrowly intelligent system. That’s not a failure on the LLM’s part - it’s a failure of your expectations.

[-] Perspectivist@feddit.uk 3 points 8 hours ago

I don’t think you even know what you’re talking about.

You can define intelligence however you like, but if you come into a discussion using your own private definitions, all you get is people talking past each other and thinking they’re disagreeing when they’re not. Terms like this have a technical meaning for a reason. Sure, you can simplify things in a one-on-one conversation with someone who doesn’t know the jargon - but dragging those made-up definitions into an online discussion just muddies the water.

The correct term here is “AI,” and it doesn’t somehow skip over the word “artificial.” What exactly do you think AI stands for? The fact that normies don’t understand what AI actually means and assume it implies general intelligence doesn’t suddenly make LLMs “not AI” - it just means normies don’t know what they’re talking about either.

And for the record, the term is Artificial General Intelligence (AGI), not GAI.

[-] Perspectivist@feddit.uk 1 points 10 hours ago* (last edited 10 hours ago)

Claims like this just create more confusion and lead to people saying things like “LLMs aren’t AI.”

LLMs are intelligent - just not in the way people think.

Their intelligence lies in their ability to generate natural-sounding language, and at that they’re extremely good. Expecting them to consistently output factual information isn’t a failure of the LLM - it’s a failure of the user’s expectations. LLMs are so good at generating text, and so often happen to be correct, that people start expecting general intelligence from them. But that’s never what they were designed to do.

[-] Perspectivist@feddit.uk 3 points 12 hours ago

I have next to zero urge to “keep up with the news.” I’m under no obligation to know what’s going on in the world at all times. If something is important, I’ll hear about it from somewhere anyway - and if I don’t hear about it, it probably wasn’t that important to begin with.

I’d argue the “optimal” amount of news is whatever’s left after you actively take steps to avoid most of it. Unfiltered news consumption in today’s environment is almost certainly way, way too much.

[-] Perspectivist@feddit.uk 41 points 12 hours ago

Large language models aren’t designed to be knowledge machines - they’re designed to generate natural-sounding language, nothing more. The fact that they ever get things right is just a byproduct of their training data containing a lot of correct information. These systems aren’t generally intelligent, and people need to stop treating them as if they are. Complaining that an LLM gives out wrong information isn’t a failure of the model itself - it’s a mismatch of expectations.

[-] Perspectivist@feddit.uk 14 points 1 day ago

No one should expect anything they write online to stay private. It's the number one rule of the internet. Don't say anything you wouldn't be willing to defend having said in front of a court.

282
submitted 1 week ago* (last edited 1 week ago) by Perspectivist@feddit.uk to c/youshouldknow@lemmy.world

I see a huge amount of confusion around terminology in discussions about Artificial Intelligence, so here’s my quick attempt to clear some of it up.

Artificial Intelligence is the broadest possible category. It includes everything from the chess opponent on the Atari to hypothetical superintelligent systems piloting spaceships in sci-fi. Both are forms of artificial intelligence - but drastically different.

That chess engine is an example of narrow AI: it may even be superhuman at chess, but it can’t do anything else. In contrast, the sci-fi systems like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, or GERTY are imagined as generally intelligent - that is, capable of performing a wide range of cognitive tasks across domains. This is called Artificial General Intelligence (AGI).

One common misconception I keep running into is the claim that Large Language Models (LLMs) like ChatGPT are “not AI” or “not intelligent.” That’s simply false. The issue here is mostly about mismatched expectations. LLMs are not generally intelligent - but they are a form of narrow AI. They’re trained to do one thing very well: generate natural-sounding text based on patterns in language. And they do that with remarkable fluency.

What they’re not designed to do is give factual answers. That it often seems like they do is a side effect - a reflection of how much factual information was present in their training data. But fundamentally, they’re not knowledge databases - they’re statistical pattern machines trained to continue a given prompt with plausible text.
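The "statistical pattern machine" point can be made concrete with a toy sketch. This is not how real LLMs work internally (they use neural networks over tokens, not word-count tables), but it's the same idea scaled down to a bigram model: learn which word tends to follow which, then continue a prompt with statistically plausible - not necessarily true - text. The corpus and function names here are made up for illustration.

```python
import random
from collections import defaultdict

# Tiny training corpus - the model will only ever echo its patterns.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count bigram transitions: word -> list of words observed after it.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def continue_prompt(prompt, n_words=5, seed=0):
    """Extend the prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        options = transitions.get(words[-1])
        if not options:
            break  # no continuation seen in training data
        words.append(rng.choice(options))
    return " ".join(words)

print(continue_prompt("the cat"))
```

Whatever it prints will read like the corpus, but nothing guarantees it's a true statement - fluency and factuality are separate properties, which is exactly the mismatch of expectations described above.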

98

I was delivering an order for a customer and saw some guy messing with the bikes on a bike rack using a screwdriver. Then another guy showed up, so the first one stopped, slipped the screwdriver into his pocket, and started smoking a cigarette like nothing was going on. I was debating whether to report it or not - but then I noticed his jacket said "Russia" in big letters on the back, and that settled it for me.

That was only the second time in my life I’ve called the emergency number.

