[-] dingleberry@discuss.tchncs.de 13 points 1 year ago* (last edited 1 year ago)

Am I some old fuck, at the ripe age of 30, who hasn't needed to use AI so far? The whole field seems like an astroturfing campaign, insisting that you definitely need to incorporate AI tools into your day-to-day work.

The time it takes for me to prompt AI tools is longer than the time it takes for me to do the work myself.

[-] AEsheron@lemmy.world 2 points 1 year ago

Meh, there are whole classes of jobs that were already decimated by AI a decade ago. This isn't new, it's just gaining attention now, and the ball is continuing to pick up speed. AI improves a lot faster than people do; unless it hits some kind of fundamental wall of development, it's just a matter of time before it comes for us all.

[-] kmkz_ninja@lemmy.world 1 points 1 year ago

Depends on the work you're doing.

[-] Taleya@aussie.zone 10 points 1 year ago* (last edited 1 year ago)

Oh yeah baby. Let's cram this absolute speedrun of enshittification up the arse of everything possible!

It's not AI.

[-] agent_flounder@lemmy.one 4 points 1 year ago

Not that I disagree but just to understand where you're coming from, what definition are you using for AI? And Intelligence for that matter?

Coming at this from a compsci/comp e viewpoint I think of it simply as "the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making,..."

By that definition it absolutely exists. On our smart phones even.

Of course, in each of these areas it isn't on par with human intelligence by any stretch. Often it's far more limited. But it can also be better at certain specific tasks. Most of my limited familiarity is with computer vision, but I think that illustrates how far off the mark it is from human intelligence. It is insanely difficult for machine vision to identify a thing. You can train it to identify one or a few or maybe a small set of things.

But it is easily confused by different ambient lighting intensity or hue, shadows, objects partially obscuring the thing, and myriad other conditions.

Meanwhile humans can identify an enormous number of objects in all sorts of conditions, easy-peasy. By a young age even. I hadn't fully appreciated how sophisticated our abilities were until I started looking at the artificial side of it.

Anyway, all that said, to me the real issue is what new developments in AI (as I defined) mean to society at large. How do jobs change, how does it affect quality of life, quality of products and services, does it change how we as a society value those things (art, writing) that can be partly replaced?

[-] StalksEveryone@futurology.today 2 points 1 year ago

I like your definition of these AI tools. It feels broad enough to cover all of the recent accomplishments so many are praising.

Many people aren't able to recognize that the software is just a tool, and even fewer will as it becomes more autonomous.

[-] agent_flounder@lemmy.one 2 points 1 year ago

I think what gets lost in translation with LLMs (and machine vision and similar ML tech) is that it isn't magic and it isn't emergent behavior. It isn't truly intelligent.

LLMs do a good job of tricking us into thinking they are more than they are. They generate a seemingly appropriate response to input based on training, but it's nothing more than a statistical model of what the most likely chain of words is in response to another chain of words, based on questions and "good" human responses.

There is no understanding behind it. No higher cognitive process. Just "what words go next based on Q&A training data." Which is why we get well written answers that are often total bullshit.
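The "what words go next" idea can be sketched with a toy bigram model. To be clear, this is a drastic simplification (real LLMs use neural networks over tokens, not word-pair counts), and the training text here is made up, but the core loop is the same: count what tends to follow what, then emit the statistically likely continuation with no understanding involved.

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which in the
# training text, then pick the most likely next word.
training_text = "the cat sat on the mat the cat ate the fish"
words = training_text.split()

follow_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(words, words[1:]):
    follow_counts[prev][nxt] += 1

def next_word(word):
    # Return the statistically most likely follower of `word`;
    # pure frequency lookup, no comprehension of meaning.
    followers = follow_counts[word]
    return max(followers, key=followers.get) if followers else None

print(next_word("the"))  # -> "cat" ("the cat" appears most often above)
```

The output is fluent-looking locally ("the cat", "cat sat") while knowing nothing about cats; scaling this statistical idea up is what produces well-written answers that can still be total bullshit.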

Even so, the tech could easily upend many writing careers.

[-] StalksEveryone@futurology.today 2 points 1 year ago

I've had the GPT-3.5 model give me a made-up source for research. Either that, or it told me the source material was related to what I was researching when it wasn't. Regardless, it was one of those BS moments; it's called a hallucination, I think.

[-] joneskind@lemmy.world 3 points 1 year ago

Let me introduce you to Dr Angela Collier

https://www.youtube.com/watch?v=EUrOxh_0leE

And have a very nice day.

[-] Ludz@lemmy.ml 2 points 1 year ago* (last edited 1 year ago)

Pretty interesting. I didn't know Dr Collier. Here's a quite similar talk from a former Samsung VP (also a co-creator of Siri),

Mr Luc Julia

https://www.youtube.com/watch?v=6prCHASkavM

[-] MossyFeathers@pawb.social 9 points 1 year ago

This was a fascinating article, and I enjoyed seeing the summary of their research into how even current AI can impact the way people work, for the better. However, I can almost guarantee that most companies, especially the highly wealthy ones, won't be using AI in the way the author suggests.

I'm going to speculate that we'll start to see a bell curve: small startups will use AI to replace workers due to the cost of bringing on new team members, medium-sized companies will use it to augment their employees' output, and large companies will lay off workers and replace them with AI; the latter is already happening.

Why?

Because the same pattern seems to be present when talking about company morality and ethics. While many smaller companies and startups tend to pledge to improve workers' rights, shrink the company's carbon footprint, improve customer relations and/or increase the quality of their products, they typically don't have the capital to truly commit to these values.

Medium sized companies tend to have the capital to fully commit to the values laid out when they were smaller, while not yet being large enough to experience the full force of capitalistic greed.

Finally, large companies have the capital to maintain their stated values, but often discover that those values run contrary to the ones held by their shareholders and board of directors (namely, that greed is good and infinite growth is to be pursued). Additionally, many of those companies are reaching full market saturation (if they haven't already achieved it) and find that they have to begin sacrificing their values in exchange for those dictated by their board of directors. The result is that they tend to be all talk, little action.

[-] hark@lemmy.world 7 points 1 year ago

I really gotta hop on this AI train so I can increase my quality by about 2, whatever the fuck that means.

this post was submitted on 17 Sep 2023
34 points (100.0% liked)

Futurology
