[-] RustyNova@lemmy.world 49 points 2 days ago

Meanwhile, I had to debug a script that zipped a zip recursively, with the new data appended. The server had barely enough storage left, as the zip took almost 200GB (the actual data is only 3GB). I looked at the logs; last successful run: 2019.

[-] r00ty@kbin.life 15 points 2 days ago

Yes, I've had the same happen. Something that should be simple failing for stupid reasons.

[-] RustyNova@lemmy.world 12 points 2 days ago

Well, it's not that simple... because whoever wrote it made it way too complicated (and the production version had been tweaked without updating the dev version too).

A clean rewrite with some guard clauses helped remove the hadouken ifs, and actually zipping the file outside of the zipped directory helped a lot.
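Roughly this shape, if you're curious (a Python sketch with made-up paths, not the actual script, which was messier):

```python
import shutil
import tempfile
from pathlib import Path

# All names and paths here are made up for illustration.
SOURCE_DIR = Path("/srv/export")        # the directory whose contents get zipped
STAGING_DIR = Path(tempfile.mkdtemp())  # staging area OUTSIDE the source tree

def build_archive() -> Path:
    # Guard clauses instead of hadouken ifs: bail out early and stay flat.
    if not SOURCE_DIR.is_dir():
        raise FileNotFoundError(f"source directory missing: {SOURCE_DIR}")
    if not any(SOURCE_DIR.iterdir()):
        raise ValueError(f"nothing to archive in {SOURCE_DIR}")

    # Because the archive lands in STAGING_DIR, it can never end up inside
    # its own input, so the next run can't zip the previous zip.
    return Path(shutil.make_archive(str(STAGING_DIR / "export"), "zip", SOURCE_DIR))
```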

[-] r00ty@kbin.life 6 points 2 days ago

I mean, I have to say I've hastened my own demise (in program terms) by over-engineering something that should be simple. Sometimes adding protective guardrails actually causes errors when something changes.

Am I understanding that last part correctly?

[...] and actually zipping the file outside of the zipped directory helped a lot

Did they just automatically create a backup zip-bomb in their script‽

[-] RustyNova@lemmy.world 8 points 2 days ago

I oversimplified it, but the actual process was to zip files to send to an FTP server.

The cron job created the zip in the same directory as the files being zipped, then sent the zip, then deleted it.

Looks fine, right? But what if the FTP server is slow and the upload takes more time than the hourly cron dispatch? You now have a second instance of the script zipping the whole folder, previous zip included, which slows the upload down even more, and so on...
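The textbook guard against that pile-up is a non-blocking lock, so a second dispatch just exits (a sketch of the general pattern, not what we actually shipped; the lock path is made up):

```python
import fcntl
import sys

LOCK_PATH = "/var/lock/export-upload.lock"  # made-up path

def main() -> None:
    with open(LOCK_PATH, "w") as lock:
        try:
            # Non-blocking exclusive lock: fails immediately if the
            # previous hourly run is still busy uploading.
            fcntl.flock(lock, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            print("previous run still uploading, skipping", file=sys.stderr)
            return
        # ... zip and upload here; the lock is released when the file closes.

if __name__ == "__main__":
    main()
```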

I believe it may have been started by an FTP upload erroring out and forcing an early return without any cleanup, and it progressively got worse.

... I suppose that's what happened. The logs were actually broken: they didn't include the message part of the error object, only the memory address of it.
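For anyone who hasn't hit this: the bug pattern is logging the error object itself instead of its message field. A Python illustration (hypothetical class name, not the real code):

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("export")

class UploadError:  # hypothetical stand-in for the real error object
    def __init__(self, message: str):
        self.message = message

err = UploadError("FTP upload timed out")

# The broken pattern: log the object itself. With no __str__/__repr__ defined,
# this prints something like "<__main__.UploadError object at 0x7f3a...>".
log.error("upload failed: %s", err)

# What should have been logged:
log.error("upload failed: %s", err.message)
```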

[-] adavis@lemmy.world 2 points 2 days ago
[-] RustyNova@lemmy.world 12 points 2 days ago

Oh, no need. The client didn't notice anything in 6 years, and the only reason we had to check is that they wanted us to see if we could add this feature... that already existed.

[-] elvith@feddit.org 7 points 2 days ago

My favorite part is when you do some extensive analysis from time to time (e.g. to prepare an upgrade to a new major version) and, as a side effect, stumble upon some workflows/pipelines/scripts that have been constantly failing (and alerting the process owner) every five minutes for... at least a few months already.

Then you go and ask the process owner and they're just like, "yeah, we were annoyed by the constant error notification mails, so we made a filter that auto-deletes them"...

[-] greybeard@feddit.online 1 point 2 days ago

I feel like half my job is trying to stop false positives and other noise from hitting important places. Because false positives kill any chance true positives will be noticed/reacted to/processed.
