1279
submitted 5 months ago by ElCanut@jlai.lu to c/programmerhumor@lemmy.ml
top 50 comments
sorted by: hot top controversial new old
[-] lemmy_get_my_coat@lemmy.world 203 points 5 months ago

A new ripoff of an old classic

[-] Prunebutt@slrpnk.net 183 points 5 months ago

Is it a ripoff if they credit the original?

[-] OpenStars@startrek.website 24 points 5 months ago

Are you implying that the credit is here? If so, where? I am not seeing it.

[-] Prunebutt@slrpnk.net 121 points 5 months ago
[-] lemmy_get_my_coat@lemmy.world 43 points 5 months ago

I honestly didn't notice that - it was a bit small and pixelated, good catch

[-] OpenStars@startrek.website 25 points 5 months ago
load more comments (2 replies)
load more comments (2 replies)
[-] cor@slrpnk.net 16 points 5 months ago

if they make it almost exactly the same and “credit” it in the smallest font possible and didn’t get permission from the original author… i would say that’s definitely a ripoff

[-] elvith@discuss.tchncs.de 88 points 5 months ago

didn't get permission from the original author

Tell me you don't know xkcd without saying you don't know xkcd. These comics are licensed as CC-BY-NC 2.5, which means you are allowed to remix and use them, without explicitly asking for permission, as long as you attribute the original/author (which is given here) and as long as you do it non-commercially (which is given for this post IMHO).

load more comments (2 replies)
[-] DannyMac@lemm.ee 19 points 5 months ago
[-] db0@lemmy.dbzer0.com 32 points 5 months ago

It's not a parody. It's a homage

load more comments (5 replies)
load more comments (3 replies)
[-] CoggyMcFee@lemmy.world 12 points 4 months ago* (last edited 4 months ago)

In a version that doesn’t even fully make sense. With databases there is a well-defined way to sanitize your inputs so arbitrary commands can’t be run like in the xkcd comic. But with AI it’s not even clear how to avoid all of these kinds of problems, so the chiding at the end doesn’t really make sense. If anything the person should be saying “I hope you learned not to use AI for this”.

load more comments (1 replies)
[-] db0@lemmy.dbzer0.com 196 points 5 months ago* (last edited 5 months ago)

My gawds, some people need to learn what's a homage and also stop being upset on behalf of others. This comic is fine, stop bellyaching. This is what terminal permission culture does to a motherfucker.

[-] TexasDrunk@lemmy.world 71 points 5 months ago

The only person who should care about anything other than the quality is Randall. However since he licensed it CC BY-NC 2.5 how he feels about it doesn't really matter either.

[-] jsomae@lemmy.ml 28 points 5 months ago

I think people should be concerned about things on others' behalfs. We all need to stick together.

This situation is a send-up though. Totally not a concern.

[-] TexasDrunk@lemmy.world 11 points 5 months ago

Oh definitely! I just meant in this particular case.

[-] absentbird@lemm.ee 18 points 5 months ago

We can probably infer by the licensing that he's cool with it.

[-] bappity@lemmy.world 128 points 5 months ago

if someone is actually using ai to grade papers I'm gonna LITERALLY drink water

[-] lowleveldata@programming.dev 42 points 5 months ago

I'm gonna literally drink water if they DON'T

[-] fsxylo@sh.itjust.works 28 points 5 months ago

I'm drinking water as we speak and none of you can stop me!

[-] Tomato666@lemmy.sdf.org 17 points 5 months ago

As a large languag model I do not drink water

load more comments (5 replies)
load more comments (3 replies)
[-] SingularEye 86 points 5 months ago
[-] BleatingZombie@lemmy.world 40 points 5 months ago
[-] tiredofsametab@kbin.run 23 points 4 months ago

With xkcd attributed at the bottom of the image <3

Here's the XKCD: https://xkcd.com/327/

[-] ech@lemm.ee 84 points 5 months ago* (last edited 5 months ago)

More like "And I hope you learned not to trust the wellbeing and education of the children entrusted to you to a program that's not capable of doing either."

[-] 14th_cylon@lemm.ee 27 points 5 months ago

Well that would require too much work invested into stealing of https://xkcd.com/327/

[-] Theharpyeagle@lemmy.world 26 points 5 months ago* (last edited 5 months ago)

It could be credibly called an homage if it had a new punchline, but methinks the creator didn't know what "sanitize" meant in this context.

[-] AndrasKrigare@beehaw.org 23 points 5 months ago

Stealing is a strong word considering it gives credit in the bottom right

load more comments (4 replies)
[-] RustyNova@lemmy.world 60 points 5 months ago
[-] bobbytables@discuss.tchncs.de 39 points 5 months ago

It was in fact the mum who was good with computers. Bobby himself was never that interested in exploits.

[-] FarceOfWill@infosec.pub 32 points 5 months ago

He probably found it very hard to make any accounts on computers

[-] Evil_Shrubbery@lemm.ee 51 points 5 months ago

Always satanise your inputs.

load more comments (3 replies)
[-] MehBlah@lemmy.world 47 points 5 months ago

Its a MEH update on little bobby tables. Who is in his twenties now.

[-] derpgon@programming.dev 22 points 5 months ago

It's his younger brother Williams, tho.

[-] raldone01@lemmy.world 42 points 5 months ago* (last edited 5 months ago)

Reminds me of: https://www.wired.com/story/null-license-plate-landed-one-hacker-ticket-hell/

A guy thought it would be funny to change his license plate to NULL.

[-] seang96@spgrn.com 17 points 5 months ago* (last edited 5 months ago)

So to combat our horrible privacy culture we should name everything null...

hi my name is null, null.

load more comments (4 replies)
load more comments (1 replies)
[-] eestileib@sh.itjust.works 29 points 4 months ago

LLM system input is unsanitizable, according to NVidia:

The control-data plane confusion inherent in current LLMs means that prompt injection attacks are common, cannot be effectively mitigated, and enable malicious users to take control of the LLM and force it to produce arbitrary malicious outputs with a very high likelihood of success.

https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/

load more comments (1 replies)
[-] nucleative@lemmy.world 28 points 5 months ago

One of the best things ever about LLMs is how you can give them absolute bullshit textual garbage and they can parse it with a huge level of accuracy.

Some random chunks of html tables, output a csv and convert those values from imperial to metric.

Fragments of a python script and ask it to finish the function and create a readme to explain the purpose of the function. And while it's at it recreate the missing functions.

Copy paste of a multilingual website with tons of formatting and spelling errors. Ask it to fix it. Boom done.

Of course, the problem here is that developers can no longer clean their inputs as well and are encouraged to send that crappy input straight along to the LLM for processing.

There's definitely going to be a whole new wave of injection style attacks where people figure out how to reverse engineer AI company magic.

load more comments (1 replies)
[-] redcalcium@lemmy.institute 25 points 5 months ago

How do you sanitize ai prompts? With more prompts?

[-] CanadaPlus@lemmy.sdf.org 44 points 5 months ago* (last edited 5 months ago)

Easy, you just have a human worker strip out anything that could be problematic, and try not to bring it up around your investors.

[-] xmunk@sh.itjust.works 40 points 5 months ago

It's really easy, just throw an error if you detect a program will cause a halt. I don't know why these engineers refuse to just patch it.

load more comments (1 replies)
load more comments (4 replies)
[-] Aeri@lemmy.world 19 points 4 months ago

I am extremely horrified by the prospect of GenAI grading.

load more comments (3 replies)
[-] Dirk@lemmy.ml 16 points 5 months ago

Artificial Idiocy

load more comments
view more: next ›
this post was submitted on 07 Jun 2024
1279 points (100.0% liked)

Programmer Humor

32364 readers
190 users here now

Post funny things about programming here! (Or just rant about your favourite programming language.)

Rules:

founded 5 years ago
MODERATORS