[-] kbal@fedia.io 44 points 5 months ago* (last edited 5 months ago)

If models are trained on data that it would be a security breach for them to reveal to their users, then the real breach occurred at training.

[-] dgerard@awful.systems 26 points 5 months ago

now you know that and i know that,

[-] Cube6392@beehaw.org 17 points 5 months ago

The big LLMs everyone's talking about and using are just advanced forms of theft

[-] sailor_sega_saturn@awful.systems 32 points 5 months ago* (last edited 5 months ago)

Sloppy LLM programming? Never!

In completely unrelated news I've been staring at this spinner icon for the past five minutes after asking an LLM to output nothing at all:

[-] self@awful.systems 22 points 5 months ago

same energy as “your request could not be processed due to the following error: Success”

[-] earthquake@lemm.ee 18 points 5 months ago

What are the chances that the front end was not programmed to handle the LLM returning an empty string?

[-] sailor_sega_saturn@awful.systems 15 points 5 months ago

Quite likely yeah. There's no way they don't have a timeout on the backend.
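
The failure mode being joked about here, a spinner that never resolves because nobody handled an empty completion or a hung request, takes only a few lines to guard against. A minimal sketch (the function name `render_reply` and the placeholder strings are invented for illustration, not taken from any real frontend):

```python
import concurrent.futures

def render_reply(generate, timeout_s=30.0):
    """Guard a chat UI against two failure modes: a model call that
    never returns, and a completion that is empty or whitespace-only."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(generate)
    try:
        reply = future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        # Backend timeout: stop the spinner instead of waiting forever.
        return "(the model took too long to respond)"
    finally:
        pool.shutdown(wait=False)
    # An empty completion would otherwise leave the spinner up forever;
    # map it to an explicit placeholder instead.
    if not reply or not reply.strip():
        return "(the model returned no text)"
    return reply
```

The point is simply that both the timeout and the empty-string case have to be handled explicitly; neither comes for free from the HTTP layer.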

[-] dgerard@awful.systems 10 points 5 months ago

boooo Gemini now replies "I'm just a language model, so I can't help you with that."

[-] froztbyte@awful.systems 9 points 5 months ago

"what would a reply with no text look like?" or similar?

[-] dgerard@awful.systems 8 points 5 months ago

what would a reply with no text look like?

nah it just described what an empty reply might look like in a messaging app

they seem to have done quite well at making Gemini do mundane responses

[-] froztbyte@awful.systems 8 points 5 months ago

that's a hilarious response (from it). I perfectly understand how it got there, and that makes it even more laughable

[-] casmael@lemm.ee 24 points 5 months ago

LLM vendors are incredibly bad ~~at responding to security issues~~

[-] Tar_alcaran@sh.itjust.works 10 points 5 months ago

They're surprisingly skilled at getting money from idiots.

[-] skillissuer@discuss.tchncs.de 7 points 5 months ago

their previous experience in crypto is shining through

[-] corbin@awful.systems 20 points 5 months ago

My NSFW reply, including my own experience, is here. However, for this crowd, what I would point out is that this was always part of the mathematics, just like confabulation, and the only surprise should be that the prompt doesn't need to saturate the context in order to approach an invariant distribution. I only have two nickels so far, for this Markov property and for confabulation from PAC learning, but it's ~~completely expected~~ weird that it's happened twice.

[-] motor_spirit@lemmy.world 9 points 5 months ago

Lol that's like expecting gold rushers to be squared away with OSHA, I hope nobody's surprised here

[-] 0laura@lemmy.world 4 points 5 months ago

Not really a security issue I'd say. The AI speaking gibberish when you try to make it speak gibberish isn't really that big of an issue.

[-] froztbyte@awful.systems 14 points 5 months ago

sure hope you're not in charge of security anywhere

[-] blakestacey@awful.systems 25 points 5 months ago

Correction: I sure hope they're in charge of security at some place I don't like.

[-] froztbyte@awful.systems 11 points 5 months ago* (last edited 5 months ago)

.......I'll allow it

[-] 0laura@lemmy.world 2 points 5 months ago

How is it inherently a security issue when an LLM speaks gibberish? Genuine question.

[-] froztbyte@awful.systems 9 points 5 months ago* (last edited 5 months ago)

it "speaking gibberish" is not the problem. the answer to your question is literally in the third paragraph in the article.

if you do not comprehend what it references or implies, then (quite seriously) if you are in any way involved in any security shit get the fuck out. alternatively read up some history about, well, literally any actual technical detail of even lightly technical systems hacking. and that's about as much free advice as I'm gonna give you.

[-] V0ldek@awful.systems 8 points 5 months ago

User input doing unexpected stuff to the backend = Bad™

[-] kbal@fedia.io 2 points 5 months ago* (last edited 5 months ago)

It's a reasonable question, and the answer is perhaps beyond my ken even though I've had substantial experience with both building machine learning models (mostly in pre-LLM times) and keeping computer systems secure. That a chatbot might tell someone “how to make a bomb” is probably not a great example of the dangers they pose. Bomb making instructions are more or less available to everyone who can find chemistry textbooks. The greater dangers that the LLM owners are trying to guard against might instead be more like having one advising someone that they should make a bomb. That sort of thing could be hazardous to the financial security of the vendor as well as the health of its users.

Finding an input that will make the machine produce gibberish is not directly equivalent to the kind of misbehaviour that often indicates exploitable bugs in software that "crashes" in more conventional ways. But it may be loosely analogous to it, in that it's an observation of unintended behaviour which might reveal flaws that would otherwise remain hidden, giving attackers something to work with.

[-] froztbyte@awful.systems 9 points 5 months ago* (last edited 5 months ago)

so there's 3 immediately-suggestive paths that come to mind from this

the first is that gibberish output itself already means you've hit a boundary in the design of its execution space (or you're fucking around in the very edges of the training data, where its precision gets low), and that could mean you are beyond what the programmers thought of/handled. whether or not you can get reliable further behaviours in that mode/space will be extremely contingent on a lot of factors (model type, execution type, runtime, ...), but given how extremely rapidly and harshly oai (and friends) reacted to simple behavioural breaks I get the impression that they're more concerned with such cases than they might be letting on

the second fairly obvious vector is where everyone is trying to shove LLMs into everything without good safety boundaries. oh that handy chatbot on your doctor/airline/insurance/.... site that's pitched as "it can use your identification details and look up $x"[0], that means that system has access to places where to look up private data. so if you could break a boundary via whatever method, who's to say it can't go further. it's not like telling the prompt "do $x and only $x" will work, as many examples have shown

third path, and sort-of the one that ties the bow on the second a bit, is that most of these dipshits probably don't have proper isolation controls, just because it's hard and effortful. building actual multitenancy with strong inter-tenant separation is a lot of work. that's something that's just not done in bayfucker world unless it is specifically needed. so the more these things get shoved into various products and this segmentation work is not done thoroughly, the more likely that sort of shit becomes

[0] - couple years back (pre-llm) I worked on exactly this problem with a client. it's fantastically annoying to design, not half because humans are such wonderfully unpredictable input sources
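
The second path above is the load-bearing one: once a chatbot can call tools that touch private data, "do $x and only $x" in the prompt is not an access control. The check has to live in the tool handler, where the model can't talk its way past it. A hypothetical sketch (`Session`, `lookup_order`, and the data shapes are all invented for illustration, not any vendor's API):

```python
class Session:
    """Authenticated context established outside the model,
    e.g. from the user's login, never from the prompt."""
    def __init__(self, customer_id):
        self.customer_id = customer_id

def lookup_order(session, order_id, orders_db):
    """Tool handler the LLM is allowed to invoke. The authorization
    check lives here, outside the model: even a fully jailbroken
    prompt cannot make this function return another tenant's order."""
    order = orders_db.get(order_id)
    if order is None or order["customer_id"] != session.customer_id:
        # Deny based on the session, not on what the prompt claims.
        raise PermissionError("order not visible to this customer")
    return order
```

This is the "proper isolation controls" point in the third path restated as code: the tenant boundary is enforced in ordinary software, and the model only ever sees what that boundary already permits.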

[-] kbal@fedia.io 8 points 5 months ago

Yeah, no doubt they will push to give the things built atop the shaky foundation of LLMs as much responsibility and access to credentials as they think they can get away with. Making the models trustworthy for such purposes has been the goal since DeepMind set off in that direction with such optimism. There are a lot of people eager to get there, and a lot of other people eager to give us the impression right now that they will get there soon. That in itself is one more reason they react with some alarm when the products are easily provoked into producing garbage.

I'm sure it will go wrong in many interesting ways. Seems to me there are risks they haven't begun to think about. There's a lot of focus on preventing the models producing output that's obviously morally offensive, very little thought given to the idea that output entirely within the bounds of what is thought acceptable might end up accidentally calibrated to reinforce and perpetuate the existing prejudices and misconceptions the machines have learned from us.

[-] barsquid@lemmy.world 6 points 5 months ago

Why would they bother with safety boundaries for AI? Companies leak millions of records of PII all the time and there are zero real consequences. Of course we will start seeing access level bypass exploits leaking customer data.

[-] Tar_alcaran@sh.itjust.works 4 points 5 months ago

couple years back (pre-llm) I worked on exactly this problem with a client. it's fantastically annoying to design, not half because humans are such wonderfully unpredictable input sources

Oh don't worry, humans are amazingly unpredictable interfaces too, which is why social engineering works so well.

this post was submitted on 12 Jul 2024
120 points (100.0% liked)

TechTakes
