[-] dojan@lemmy.world 35 points 2 days ago* (last edited 2 days ago)

"I wrote an email to Google to say, 'you have access to my computer, is that right?'", he added.

lmao right, because the support person they reached, if indeed they even spoke to a person at all, would know and divulge the sources they train on. They may think that all their research is private, but they're making use of these tech giants' services. These tech giants have blatantly shown that they're OK with piracy and copyright infringement to further their goals, so why would spying on research institutions be any different?

If you want to give it a run for its money, give it a novel problem that isn't solved, and see what it comes up with.

[-] TrenchcoatFullofBats@belfry.rip 2 points 2 days ago

"I wrote an email to ~~Google~~ Gryzzl to say, 'you have access to my computer, is that right?'", he added.

Later that day

[-] homesweethomeMrL@lemmy.world 20 points 2 days ago

Now if you'd all just empty your wallets into the AI bonfire. Thaaaat's right.

[-] FlowVoid@lemmy.world 21 points 2 days ago* (last edited 2 days ago)

Uh no, the AI didn't crack any problem.

The AI produced the same hypothesis that a scientist produced, one that the scientist considered his own original awesome idea.

But the truth is that science is less about producing awesome ideas and more about proving them. And AI did nothing in this regard, except to remind scientists that their original awesome ideas are often not so original.

There's even a term scientists use when another scientist not only has the same idea but actually does the work of proving it first: getting "scooped". It's a very common occurrence. It didn't happen here.

[-] SnotFlickerman 13 points 2 days ago* (last edited 2 days ago)

Google doesn't need access to all his unpublished research if he's ever mentioned anything about it online or in an email that went to a gmail address.

Further, University of Cambridge runs on Microsoft Exchange and University of Glasgow uses Office365.

Not to put too fine a point on it, but they don't need access to your computer, and this feels a little bit overhyped.

Also, the fact that it came to the same conclusion means about as much as if it had come to the wrong conclusion, does it not? Since there is no actual "thinking" in these devices? How do we know the "right" conclusion wasn't merely a hallucination?

[-] Flisty@mstdn.social 2 points 2 days ago

@SnotFlickerman @cm0002 unless he's done the research himself he won't know whether the results are viable - as he says, they've got to test the "new" one. So at best it gives you a bit of a head start on new avenues; at worst it sends you down a rabbit hole that completely wastes your time.

[-] ryedaft@sh.itjust.works 9 points 2 days ago

It's so easy to ask a question in such a way that the statistically most likely answer is the one at the front of your mind.

[-] MNByChoice@midwest.social 4 points 2 days ago

Great! We have a tested solution and scaled up the drug to treat the issue. And in 2 days! Great!

Oh, that is not what we have?

[-] jpreston2005@lemmy.world 2 points 2 days ago

When AI decides to destroy the human virus, it now knows exactly how to create a bug capable of it. Probably more likely than pumping out a bunch of humanoid robots with guns, just create a bug, spread it around, and mess with our ability to communicate in time to stop the spread. BAM. Easy-peasy, humans are now down to a manageable 1 billion or so individuals.

[-] ristoril_zip@lemmy.zip 1 points 2 days ago

if this is machine learning and neural networks, I can believe it's a good thing, maybe even meaningful for the potential of so called artificial intelligence.

if this is an LLM that's alleged to have popped this "virus tail" theory out of... what exactly...? I'm not buying it.
