top 47 comments
[-] clucose@lemmy.ml 96 points 1 month ago

"It is possible for AI to hallucinate elements that don't work, at least for now. This requires some level of human oversight."

So, the same as LLMs and they got lucky.

[-] ATDA@lemmy.world 6 points 1 month ago

It's like putting a million monkeys in a writers' room, but supercharged on meth and consuming insane resources.

[-] john89@lemmy.ca 3 points 1 month ago

That monkey analogy is so far removed from reality, I think less of anyone who perpetuates it.

A room full of monkeys banging on keyboards will always generate gibberish, because they're fucking monkeys.

[-] surewhynotlem@lemmy.world 5 points 1 month ago

It would work if it were apes though.

Source: it did. Shakespeare existed.

[-] kibiz0r@midwest.social 53 points 1 month ago

Tim Harford mentioned this in his 2016 book “Messy”.

They just wanna call it AI and make it sound like some mysterious intelligence we can’t comprehend.

[-] frezik@midwest.social 8 points 1 month ago* (last edited 1 month ago)

It sorta is.

A key way that human intelligence works is to break a problem down into smaller components that can be solved individually. This is in part due to the limited computational ability of the human brain; there's not enough there to tackle the complete problem.

However, there's no particular reason AI would need to be limited that way, and it often isn't. Expert Go players see this in AIs for that game: the AI tends to make all sorts of moves early on that don't seem to follow the usual logic, because it has laid out the complete game in its "head" and is heading directly for the goal. At this point, Go is basically impossible for humans to win against the best AIs.

This is a different kind of intelligence than we're used to, but there's no reason to discount it as invalid.

See the paper "Understanding Human Intelligence through Human Limitations".

[-] RedWeasel@lemmy.world 44 points 1 month ago

This isn't exactly new. I heard a few years ago about a situation where the AI's chip design included wires that shouldn't do anything, as they didn't go anywhere, but if they were removed the chip stopped working correctly.

[-] drosophila 53 points 1 month ago

That was a different technique, using simulated evolution in an FPGA.

An algorithm would create a series of random circuit designs, program the FPGA with them, then evaluate how well each one accomplished a task. It would then take the best design, create a series of random variations on it, and select the best one. Rinse and repeat until the circuit is really good at performing the task.
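
In code terms, the loop looks something like this. A minimal sketch, assuming a made-up bitstream length and a toy fitness function standing in for the real step of programming a physical FPGA and measuring its behaviour:

```python
import random

BITSTREAM_LEN = 1800   # hypothetical configuration-bitstream length
POP_SIZE = 50
MUTATION_RATE = 0.01
GENERATIONS = 200

def random_design():
    # A candidate circuit is just a random configuration bitstream.
    return [random.randint(0, 1) for _ in range(BITSTREAM_LEN)]

# Stand-in target so the sketch runs end to end; the real experiments
# programmed the FPGA and measured how well the chip performed the task.
TARGET = random_design()

def evaluate(design):
    return sum(b == t for b, t in zip(design, TARGET))

def mutate(design):
    # Create a variation by flipping each bit with small probability.
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in design]

population = [random_design() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Keep the best design and breed random variations on it.
    best = max(population, key=evaluate)
    population = [best] + [mutate(best) for _ in range(POP_SIZE - 1)]
```

Note that nothing in the loop knows or cares how the circuit works; it only rewards measured behaviour, which is why evolved designs can end up depending on structures that look like dead ends.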

[-] RedWeasel@lemmy.world 9 points 1 month ago

I think that's the one I was thinking of. Kind of a predecessor of modern machine learning.

[-] CommanderCloon@lemmy.ml 13 points 1 month ago

It is a form of machine learning

[-] barsoap@lemm.ee 4 points 1 month ago

Which is just stochastic optimisation.

Which, yes, is exactly what evolution does, big picture. Small picture, the genome evolves a bit more intelligently: not pure random generation followed by filtering, but an algorithm that employs randomness to generate variations and then the usual survival filter, because doing it that way is, well, fitter. It's also what you can see under a microscope.

[-] CandleTiger@programming.dev 26 points 1 month ago

I don’t know about AI involvement but this story in general is very very old.

http://www.catb.org/jargon/html/magic-story.html

[-] massive_bereavement@fedia.io 11 points 1 month ago

I thought of this as well. In fact, as a bit of fun, I added a switch with the same labels to a rack at our lab in a similar way. This one does nothing, though; then again, people did push the "turbo" button on old PC boxes even though those buttons often weren't connected.

[-] Gormadt 10 points 1 month ago

My turbo button was connected to an LED but that was it

[-] RedWeasel@lemmy.world 4 points 1 month ago* (last edited 1 month ago)

I remember that as well.

Edit; moved comment to correct reply.

[-] db2@lemmy.world 10 points 1 month ago

Sounds like RF reflection used like a data capacitor or something.

[-] GreyEyedGhost@lemmy.ca 10 points 1 month ago

The particular example was getting clock-like behavior without a clock. It had an incomplete circuit that used RF reflection or something very similar to simulate a clock. Of course, removing this dead-end circuit broke the design.

[-] piecat@lemmy.world 3 points 1 month ago

Yeah, that probably sounds so unintuitive and weird to anyone who has never worked with RF.

[-] rezifon@lemmy.world 8 points 1 month ago* (last edited 1 month ago)
[-] buffalobuffalo 4 points 1 month ago

It may interest you to know that the switch still exists. https://github.com/PDP-10/its/issues/1232

[-] fl42v@lemmy.ml 8 points 1 month ago

Yeah, I've probably stumbled upon that one a while back too. Was it also the one where the initial designs would refuse to work outside of room temperature 'til the AI was asked to take temperature into account?

[-] FourPacketsOfPeanuts@lemmy.world 4 points 1 month ago

I remember this too; it was years and years ago (I almost want to say 2010-2015). Can't find anything searching for it.

[-] GreyEyedGhost@lemmy.ca 3 points 1 month ago

You helped me narrow it down. I expect Adrian Thompson's research from the 90s, referenced in this Wikipedia article, is what you're thinking of.

[-] FourPacketsOfPeanuts@lemmy.world 2 points 1 month ago

Yes! Exactly this, thank you.

"For example, one group of gates has no logical connection to the rest of the circuit, yet is crucial to its function"

(I should have gone with my gut, I knew it was ages ago. 30ish years, by the sound of it!)

[-] ShepherdPie@midwest.social 2 points 1 month ago

Perhaps you're an AI who only hallucinated a circuit design.

[-] FourPacketsOfPeanuts@lemmy.world 2 points 1 month ago

:)

It's been found: Adrian Thompson's research from almost 30 years ago.

https://en.m.wikipedia.org/wiki/Evolvable_hardware

[-] Lettuceeatlettuce@lemmy.ml 37 points 1 month ago

"We are coming up with structures that are complex and look randomly shaped, and when connected with circuits, they create previously unachievable performance. Humans cannot really understand them, but they can work better."

Great, so we will eventually have black-box chips running black-box algorithms for corporations, where every aspect of the tech is proprietary and hidden from view, with zero significant oversight by actual people...

The true cyber-dystopia.

[-] meliante@lemm.ee 8 points 1 month ago

Well, that's kind of like the human brain, isn't it? You don't really know how it does its thing, but it does it.

[-] Lettuceeatlettuce@lemmy.ml 13 points 1 month ago* (last edited 1 month ago)

Nope, we actually have entire fields of study focused on the brain and cognition, with thousands of experts and decades of research and experimentation, that together explain a great deal about how our brains work and why we behave the way we do.

Plus, your brain is not created and owned entirely by trillion dollar megacorps with the primary incentive to use it to increase profitability.

[-] meliante@lemm.ee 6 points 1 month ago

We also know how "AI" works and how it creates its outputs, in the same way we know the brain.

Don't try to equate having fields of study and experts with definitive knowledge of something; that's fallacious.

[-] Lettuceeatlettuce@lemmy.ml 3 points 1 month ago

And yet, this AI expert stated that we don't know why the AI designed the chip in specific ways. There's a difference between understanding the rough mechanism for something, and understanding why something happened.

Imagine hiring an engineer to design something, they hand you a finished design; they cannot explain what it is, how they actually designed it, how it works, or why they made the specific choices they did.

I never made the false equivalency you claimed I did, and you also never addressed my second criticism, which is telling.

[-] meliante@lemm.ee 1 points 1 month ago

Well, if an alien entity gave us some new technology that we didn't have the science to build, or that represented an epistemological break, and didn't explain it, it would still exist and it would still be the product of something we don't understand.

I don't get your point. Are you trying to say that if we don't know how it works, then the entity that created it is magical or something?

An engineer would have been restricted by our current knowledge and processes. An "AI" doesn't have that kind of hindrance.

Your point is that it was a fluke; my point is that it was the product of a new way of "thinking" and resolving problems, which I compared to how the human brain solves problems. We know which parts of the brain are activated, and how they communicate and transfer data, but we have no way to explain how it all produces thoughts, dreams, and whatever other processes our brains use to create new things. Or we would have recreated it already.

What I'm saying is that we might have created something new that does what it does in ways we can't explain, except, very crudely, as the product of probabilities, and that's OK. We don't have to know how it does what it does; it will still do it.

[-] Doorbook@lemmy.world 5 points 1 month ago

This has been going on in chess for a while as well. Computers can detect patterns that humans cannot, because they have a better memory and a bigger knowledge base.

[-] KeenFlame@feddit.nu 4 points 1 month ago

Man, so you have personally vetted all the code your devices execute? It's already true.

[-] Lettuceeatlettuce@lemmy.ml 1 points 1 month ago

The point is that it actually can be vetted.

[-] KeenFlame@feddit.nu 2 points 1 month ago

But... it already can't. That's not possible for you. Is that really something you chose to downvote and ignore instead of responding to?

[-] Lettuceeatlettuce@lemmy.ml 1 points 1 month ago

You must be a bot, you don't understand the semantics. Ironic, and blocked.

[-] Flaqueman@sh.itjust.works 17 points 1 month ago

See? I want this kind of AI. Not a word-dreaming algorithm that spews misinformation.

[-] FourPacketsOfPeanuts@lemmy.world 19 points 1 month ago

Read the article, it's still 'dreaming' and spewing garbage, it's just that in some iterations it's gotten lucky. "Human oversight needed" they say. The AI has no idea what it's doing.

[-] Flaqueman@sh.itjust.works 17 points 1 month ago

Yeah I got that. But I still prefer "AI doing science under a scientist's supervision" over "average Joe can now make a deepfake and publish it for millions to see and believe"

[-] BrianTheeBiscuiteer@lemmy.world 3 points 1 month ago* (last edited 1 month ago)

I wonder how well it could work to use AI in developing an algorithm to generate chip designs. My annoyance with all of this stuff is how much people say, "Look! AI invented something new! It only took a few hours and 100x the resources!"

AI is mainly the capitalist dream of a drinking bird toy keeping a nuclear reactor online and paying a layman slave wages to make sure the bird does its job (obligatory "Simpsons did it").

[-] FourPacketsOfPeanuts@lemmy.world 1 points 1 month ago

Maybe, but remember generative AI isn't doing any kind of deductive or methodical reasoning. It's literally "mash up the publicly available info and give a crowd-sourced version of what to add next". This works for art because that kind of random harmony appeals to us aesthetically, and art is an area where people seek fewer constraints. But when you're engineering, it's the opposite. Maybe it's useful for getting engineers out of a rut and imagining new possibilities, but that's it. Generative AI has no idea whether what it's smushed together is garbage or randomly insightful.
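
As a toy illustration of "what to add next": a crude bigram model standing in for a real LLM, with an obviously made-up corpus:

```python
import random
from collections import defaultdict

# A tiny "crowd-sourced" corpus; a real model trains on much of the internet.
corpus = ("the chip works because the clock drives the chip "
          "and the clock works because the chip drives the clock").split()

# Record what tends to follow each word.
next_words = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev].append(nxt)

# Generate by repeatedly sampling a plausible next word. Nothing here checks
# whether the output is true or useful, only what tends to follow what.
word = "the"
output = [word]
for _ in range(12):
    word = random.choice(next_words.get(word, corpus))
    output.append(word)
print(" ".join(output))
```

Scale that idea up enormously and the output gets fluent, but the selection criterion is still plausibility, not correctness.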

[-] riskable@programming.dev 7 points 1 month ago

You want AI that makes chips that run AI faster and better?

You've fallen into its trap!

[-] brlemworld@lemmy.world 5 points 1 month ago

I want AI that takes a foreign-language movie, augments the actors' faces and mouths so it looks like they're speaking my language, and also changes their voices (not a voice-over) to be in my language.

[-] fl42v@lemmy.ml 4 points 1 month ago

Idk, kinda the same, but instead of misinformation we get ICs that release a cloud of smoke in the shape of a cat when presented with a specific pattern of inputs (or smth equally batshit crazy).

[-] KeenFlame@feddit.nu 2 points 1 month ago

They're all of the same breed, and it's an ongoing field of study. The megacorps have soiled the use of them, but they're still extremely strong support tools for some things, like detecting cancer on X-rays and such.
