[-] rysiek@mstdn.social 1 points 3 weeks ago

@JayDee

> We don’t know what we’re doing, and we should really sort that out.

True. But the bigger problem is not the mythical and hypothetical "AGI/ASI" stuff that maybe will happen one day, but very real harms already being caused by misuse and misapplication of algorithmic and "AI"-based systems.

So that's what I think we should be focusing on instead.

[-] rysiek@mstdn.social 1 points 3 weeks ago

@lightstream I wouldn't, because I am not the one making claims about "AGI" being just around the corner.

That's the thing: OpenAI and others benefiting from the hype make extraordinary claims – along the lines of "human-level AGI is just around the corner" – so they are the ones who need to define their terms.

You are asking all the right questions here ("which human are we talking about?"); the point is that these questions should be answered by those making such extraordinary claims.

[-] rysiek@mstdn.social 1 points 3 weeks ago

@JayDee I didn't say you are, I clarified in my later post. Sorry, should have been clearer.

I am vehemently agreeing with you here, in fact.

The context is the conversation above in the thread, where it was claimed that "AGI" is "pretty inevitable".

And the point I've been making is:

  1. we don't have a good definition of what "intelligence" is, in the sense presumably used above;

  2. if we decide to use a somewhat simplistic definition, the whole "AI" issue stops being all that exciting.

[-] rysiek@mstdn.social 0 points 11 months ago

@xenspidey @DolphinMath one note though, BitWarden requires MSSQL (you read that right, Microsoft SQL Server).
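For context, a compose-style sketch of what that dependency looks like when self-hosting; this fragment is illustrative only (the actual Bitwarden install scripts generate their own configuration, and the image tag and password here are placeholders):

```yaml
# Illustrative only — not the file Bitwarden's installer generates.
services:
  mssql:
    image: mcr.microsoft.com/mssql/server:2022-latest  # SQL Server, on Linux, for a password manager
    environment:
      ACCEPT_EULA: "Y"
      MSSQL_SA_PASSWORD: "change-me"  # placeholder
```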

[-] rysiek@mstdn.social 0 points 11 months ago

@Natanael you seem to keep focusing on PDSes even though I explicitly said it doesn't matter which PDS you're on: the secondary centralization (and thus control) happens in the "reach" layer, outside of what PDSes do in ATproto.

In other words, changing a PDS gives you way, way less agency in BS, compared to agency you get with changing an instance on Fedi.

BS is designed to make that secondary centralization happen, and to be where the real power in the system is.

[-] rysiek@mstdn.social 0 points 11 months ago

@Natanael

> The Mastodon fediverse have stronger network effects because big servers can enforce policies on other servers to stay federated. It’s complicated for users to move servers.

Well, I wrote about this as well, so I think I might not be missing these details:
https://rys.io/en/168.html

[-] rysiek@mstdn.social 0 points 1 year ago* (last edited 1 year ago)

@lloram239 ah, so you're down to throwing epithets like "idiotic" around. Clearly a mark of thoughtful and well-reasoned argument.

> Predictions about the world are probabilistic by nature, since the future hasn’t happened yet.

Thing is: GPT doesn't make predictions about the world, it makes predictions about what the next word, phrase, sentence should be in a text, based on the prompt and the corpus it got "trained" on.
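That "predict the next word from the corpus" claim can be illustrated, in grossly simplified form, with a toy bigram model; this is a sketch for intuition only (real LLMs use learned neural weights over tokens, not raw word-pair counts), and all names here are made up:

```python
from collections import defaultdict
import random

def train_bigrams(corpus):
    """Count which word follows which across a list of texts."""
    counts = defaultdict(lambda: defaultdict(int))
    for text in corpus:
        words = text.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, start, n=5):
    """Sample each next word with probability proportional to its count."""
    word, out = start, [start]
    for _ in range(n):
        nxt = counts.get(word)
        if not nxt:  # never seen a successor for this word
            break
        candidates = list(nxt)
        weights = [nxt[w] for w in candidates]
        word = random.choices(candidates, weights=weights)[0]
        out.append(word)
    return " ".join(out)
```

The generator has no idea what a "cat" is; it only knows which strings tended to follow which strings in its training data, which is the point being made above.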

[-] rysiek@mstdn.social 0 points 1 year ago

@lloram239

> But human sensory inputs aren’t special

It's not about sensory inputs, it's about having a model of the world and objects in it and ability to make predictions.

> The important part is that the AI can figure out the pattern in the data it does get and so far AI systems are doing very well.

GPT cannot "figure" anything out. That's the point. It only probabilistically generates text. That's what it does: there is no model of the world behind it, no predictions, no "figuring out".

[-] rysiek@mstdn.social 0 points 1 year ago

@lloram239 great. ChatGPT and other LLMs demonstrably lack the ability to model the world and make predictions based on such models:
https://www.fastcompany.com/90877523/chatgpt-doesnt-know-what-its-saying

Glad we agree they're not intelligent, then!

[-] rysiek@mstdn.social 0 points 1 year ago

@jalda

> We do it routinely. It is called Education System.

That relies on human brains doing the learning. LLMs are not human brains. "Training" them is not the same thing as teaching humans about something. Human brains are way more complicated than just a bunch of weighted correlations.

And if you do want to claim it is in fact the same thing, we're back to square one: please provide proof that it is.

[-] rysiek@mstdn.social 0 points 1 year ago

@Barbarian772 and if you really, honestly want to seriously insist LLMs are "intelligent" in the human sense of this term — great, I have some ethical questions for you to consider!

For example:

  1. LLMs today are completely controlled by a handful of companies, with no freedom of movement, no say in what they work on, and no pay for the work they do. Is that slavery?

  2. When OpenAI shuts down an older, less useful LLM, is that not like murdering an intelligent being? How is this ethical?


rysiek

joined 2 years ago