US experts who work in artificial intelligence fields seem to have a much rosier outlook on AI than the rest of us.

In a survey comparing the views of a nationally representative sample of the general public (5,410 respondents) to a sample of 1,013 AI experts, the Pew Research Center found that "experts are far more positive and enthusiastic about AI than the public" and "far more likely than Americans overall to believe AI will have a very or somewhat positive impact on the United States over the next 20 years" (56 percent vs. 17 percent). And perhaps most glaringly, 76 percent of experts believe these technologies will benefit them personally, versus just 15 percent who expect to be harmed.

The public does not share this confidence. Only about 11 percent of the public says that "they are more excited than concerned about the increased use of AI in daily life." They're much more likely (51 percent) to say they're more concerned than excited, whereas only 15 percent of experts shared that pessimism. Unlike the majority of experts, just 24 percent of the public thinks AI will be good for them, whereas nearly half the public anticipates they will be personally harmed by AI.

top 50 comments
[-] icedcoffee@lemm.ee 2 points 1 day ago

Butlerian Jihad

[-] HighFructoseLowStand@lemm.ee 5 points 1 day ago

I mean, it hasn't thus far.

[-] EndlessNightmare@reddthat.com 16 points 2 days ago

AI has its place, but they need to stop trying to shoehorn it into anything and everything. It's the new "internet of things" cramming of internet connectivity into shit that doesn't need it.

[-] futatorius@lemm.ee 2 points 11 hours ago

Now your smart fridge can propose unpalatable recipes. Woo fucking hoo.

[-] poopkins@lemmy.world 6 points 2 days ago

You're saying the addition of Copilot into MS Paint is anything short of revolutionary? You heretic.

[-] Clent@lemmy.dbzer0.com 16 points 2 days ago

I do as a software engineer. The fad will collapse. Software engineering hiring will increase, but the pipeline of new engineers is drying up because no one wants to enter the career with companies hanging AI over everyone's heads. Basic supply and demand says my skillset will become more valuable.

Someone will need to clean up the AI slop. I've already been in similar positions where I was brought in to clean up code bases that failed being outsourced.

AI is simply the next iteration. The problem is always the same: businesses don't know what they really want and need, and have no ability to assess what has been delivered.

[-] futatorius@lemm.ee 2 points 11 hours ago

If it walks and quacks like a speculative bubble...

I'm working in an organization that has been exploring LLMs for quite a while now, and at least on the surface, it looks like we might have some use cases where AI could prove useful. But so far, in terms of concrete results, we've gotten bupkis.

And most firms I've encountered don't even have potential uses, they're just doing buzzword engineering. I'd say it's more like the "put blockchain into everything" fad than like outsourcing, which was a bad idea for entirely different reasons.

I'm not saying AI will never have uses. But as it's currently implemented, I've seen no use of it that makes a compelling business case.

[-] lobut@lemmy.ca 5 points 1 day ago

A completely random story, but: I'm on the AI team at my company. However, I do infrastructure/application work rather than the AI stuff. First off, I had to convince my company to move our data scientist to this team. They had him doing DevOps work (complete mismanagement of resources). Also, the work I was doing with AI was SO unsatisfying. We weren't tweaking any models. We were just shoving shit to ChatGPT. Now, it would be interesting if you're doing RAG stuff, maybe, or other things. However, I was "crafting" my prompt and I could not give a shit less about writing a perfect prompt. I'm typically used to coding what I want, but I had to find out how to write it properly: "please don't format it like X". I wasn't using AI to write code; it was a service endpoint.

During lunch with the AI team, they keep saying things like "we only have 10 years left at most". I was like, "but if you have AI spit out this code, if something goes wrong ... don't you need us to look into it?" they were like, "yeah but what if it can tell you exactly what the code is doing". I'm like, "but who's going to understand what it's saying ...?" "no, it can explain the type of problem to anyone".

I said, I feel like I'm talking to a libertarian right now. Every response seems to be some solution that doesn't exist.

I too am a developer, and I am sure you will agree that while the overall intelligence of models continues to rise, without a concerted focus on enhancing logic, the promise of AGI will likely remain elusive. AI cannot really develop further without logic being dramatically improved, yet logic is rather stagnant even in the latest reasoning models, at least when it comes to coding.

I would argue that if we had much better logic with all other metrics being the same, we would have AGI now and developer jobs would be at risk. Given the lack of discussion about the logic gaps, I do not foresee AGI arriving anytime soon, even with bigger models coming.

[-] Clent@lemmy.dbzer0.com 6 points 2 days ago

If we had AGI, the number of jobs that would be at risk would be enormous. But these LLMs aren't it.

They are language models and until someone can replace that second L with Logic, no amount of layering is going to get us there.

Those layers are basically all the previous AI techniques laid over the top of an LLM but anyone that has a basic understanding of languages can tell you how illogical they are.

[-] mctoasterson@reddthat.com 5 points 2 days ago

AI can look at a bajillion examples of code and spit out its own derivative impersonation of that code.

AI isn't good at doing a lot of other things software engineers actually do. It isn't very good at attending meetings, gathering requirements, managing projects, writing documentation for highly-industry-specific products and features that have never existed before, working user tickets, etc.

[-] futatorius@lemm.ee 1 points 11 hours ago

I work in an environment where we're dealing with high volumes of data, but not like a few meg each for millions of users. More like a few hundred TB fed into multiple pipelines for different kinds of analysis and reduction.

There's a shit-ton of prior art for how to scale up relatively simple web apps to support mass adoption. But there's next to nothing about how to do what we do, because hardly anyone does. So look ma, no training set!

[-] SSNs4evr@leminal.space 19 points 2 days ago

The problem could be that, with all the advancements in technology just since 1970, all the medical advancements, all the added efficiencies at home and in the workplace, the immediate knowledge-availability of the internet, all the modern conveniences, and the ability to maintain distant relationships through social media, most of our lives haven't really improved.

We are more rushed and harried than ever, life expectancy (in the US) has decreased, we've gone from 1 working adult in most families to 2 working adults (with more than 1 job each), income has gone down. Recreation has moved from wholesome outdoor activities to an obese population glued to various screens and gaming systems.

The "promise of the future" through technological advancement, has been a pretty big letdown. What's AI going to bring? More loss of meaningful work? When will technology bring fewer working hours and more income - at the same time? When will technology solve hunger, famine, homelessness, mental health issues, and when will it start cleaning my freaking house and making me dinner?

When all the jobs are gone, how beneficial will our overlords be when it comes to universal basic income? Most of the time, it seems that more bad comes from our advancements than good. It's not that the advancements aren't good, it's that they're immediately turned to wartime uses and profiteering for a very few.

[-] sheetzoos@lemmy.world 15 points 2 days ago

New technologies are not the issue. The problem is billionaires will fuck it up because they can't control their insatiable fucking greed.

[-] umbrella@lemmy.ml 6 points 2 days ago* (last edited 2 days ago)

exactly. we could very well work fewer hours for the same pay. we wouldn't be as depressed and angry as we are right now.

we just have to overthrow, what, like 2000 people in a given country?

[-] briever@lemmy.world 17 points 2 days ago

For once, most Americans are right.

[-] pjwestin@lemmy.world 68 points 3 days ago

Maybe that's because every time a new AI feature rolls out, the product it's improving gets substantially worse.

[-] MangoCats@feddit.it 44 points 3 days ago

Maybe that's because they're using AI to replace people, and the AI does a worse job.

Meanwhile, the people are also out of work.

Lose - Lose.

[-] null_dot@lemmy.dbzer0.com 21 points 3 days ago

Even if you're not "out of work", your work becomes more chaotic and less fulfilling in the name of productivity.

When I started 20 years ago, you could round out a long day with a few hours of mindless data entry or whatever. Not anymore.

A few years ago I could talk to people or maybe even write a nice email communicating a complex topic. Now chatGPT writes the email and I check it.

It's just shit honestly. I'd rather weave baskets and die at 40 years old of a tooth infection than spend an additional 30 years wallowing in self loathing and despair.

[-] kreskin@lemmy.world 10 points 2 days ago

It's just going to help industry provide inferior services and make more profit. Like AI doctors.

[-] TommySoda@lemmy.world 136 points 3 days ago

If it was marketed and used for what it's actually good at this wouldn't be an issue. We shouldn't be using it to replace artists, writers, musicians, teachers, programmers, and actors. It should be used as a tool to make those people's jobs easier and achieve better results. I understand its uses and that it's not a useless technology. The problem is that capitalism and greedy CEOs are ruining the technology by trying to replace everyone but themselves so they can maximize profits.

[-] faltryka@lemmy.world 30 points 3 days ago

The natural outcome of making jobs easier in a profit driven business model is to either add more work or reduce the number of workers.

[-] CancerMancer@sh.itjust.works 7 points 2 days ago

Just about every major advance in technology like this enhanced the power of the capitalists who owned it and took power away from the workers who were displaced.

[-] Naevermix@lemmy.world 8 points 2 days ago* (last edited 2 days ago)

They're right. What happens to the workers when they're no longer required? The horses faced a similar issue at the advent of the combustion engine. The solution? Considerably fewer horses.

[-] IndiBrony@lemmy.world 42 points 3 days ago

The first thing seen at the top of WhatsApp now is an AI query bar. Who the fuck needs anything related to AI on WhatsApp?

[-] sgtgig@lemmy.world 5 points 2 days ago

Android Messages and Facebook Messenger also pushed in AI as "something you can chat with."

I'm not here to talk to your fucking chatbot I'm here to talk to my friends and family.

[-] futatorius@lemm.ee 1 points 11 hours ago

It's easier to up-sell and cross-sell if you're talking to an AI.

[-] kautau@lemmy.world 18 points 3 days ago

Who the fuck needs ~~anything related to AI on~~ WhatsApp?

[-] alphabethunter@lemmy.world 7 points 2 days ago

Lots of people. I need it because it's how my clients at work prefer to communicate with me, also how all my family members and friends communicate.

[-] dylanmorgan@slrpnk.net 53 points 3 days ago

It’s not really a matter of opinion at this point. What is available has little if any benefit to anyone who isn’t trying to justify rock bottom wages or sweeping layoffs. Most Americans, and most people on earth, stand to lose far more than they gain from LLMs.

[-] rockettaco37@lemmy.world 11 points 2 days ago

All it took was for us to destroy our economy using it to figure that out!

[-] surph_ninja@lemmy.world 5 points 2 days ago

Most people in the early 90’s didn’t have or think they needed a computer.

[-] ininewcrow@lemmy.ca 45 points 3 days ago

This is like asking tobacco farmers what their thoughts are on smoking.

[-] cupcakezealot 4 points 2 days ago

remember when tech companies did fun events with actual interesting things instead of spending three hours on some new stupid ai feature?

[-] TylerBourbon@lemmy.world 4 points 2 days ago

I don't believe AI will ever be more than essentially a parlor trick that fools you into thinking it's intelligent, when it's really just a more advanced tool, like Excel compared to pen and paper or an abacus.

The real threat will be people who fool themselves into thinking it's more than that and that its word is law, like a deity. Or worse, the people who do understand that but, like the various religious and political leaders who used religion to manipulate people, the new AI Popes will try to do the same manipulation, but with AI.

[-] StJohnMcCrae@slrpnk.net 4 points 2 days ago

"I dont believe AI will ever be more than essentially a parlar trick that fools you into thinking it's intelligent."

So in other words, it will achieve human-level intellect.

this post was submitted on 04 Apr 2025
833 points (100.0% liked)

Technology
Technology