[-] melfie@lemy.lol 1 points 5 days ago* (last edited 5 days ago)

I originally thought one of the drives in my RAID1 array was failing, but copying data was yielding btrfs corruption errors on both drives that could not be fixed with a scrub, and I was getting btrfs corruption errors on the root volume as well. It would have been quite an odd coincidence for my main SSD and two hard disks to all go bad at once, and I happened upon an article explaining that bad RAM can also corrupt data. SMART tests all came back with a clean bill of health. So, I installed and booted into Memtest86+ and it immediately started showing errors on the single 16GiB stick I was using. I happened to have a spare stick of a different brand, and that one passed the memory test with flying colors. After swapping it in, all the corruption errors went away and everything has been working perfectly ever since.

I will also say that legacy file systems like ext4, which have no data checksums, won't even complain about corrupt data. I originally had ext4 on my main drive and at one point thought my OS install had gone bad, so I reinstalled with btrfs on top of LUKS and then saw corruption errors on the main drive too, which is when it occurred to me that three different drives could not all have had a hardware failure and something else must be going on. I was also using ext4 and mdadm for my RAID1 before migrating it to btrfs a while back. As far back as a year ago I noticed that certain installers and the like that used to work no longer did; it happened infrequently and didn't register as a potential hardware problem at the time, but I think the RAM was progressively going bad for quite a while. btrfs with regular scrubs would've made it abundantly clear much sooner that files were getting corrupted and that something was wrong.
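
For anyone who hasn't run one, a scrub is just a couple of commands. A minimal sketch, assuming the array is mounted at /mnt/raid1 (the mount point is a placeholder, not my actual setup):

```sh
# Kick off a scrub: btrfs re-reads every block and verifies it against its checksum
sudo btrfs scrub start /mnt/raid1

# Check progress and how many checksum errors have been found (and repaired from the other mirror)
sudo btrfs scrub status /mnt/raid1

# Per-device error counters are also worth a look after a scrub
sudo btrfs device stats /mnt/raid1
```

Wiring the start command into a monthly systemd timer or cron job is the usual way to make sure it actually happens.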

So, I'm quite convinced at this point that RAID is not a backup, even with btrfs's ability to self-heal, and that simply copying data elsewhere is not a backup either, because in both cases something like bad RAM can destroy data during the copying process, whereas older snapshots in the cloud will survive such a hardware failure. Older backed-up data that wasn't copied with faulty RAM may be fine as well, but you're taking a chance that a recent backup run will overwrite good data with bad data.

I was previously using Rclone for most backups while testing Restic with daily, weekly, and monthly snapshots for a small subset of important data over the last few months. After finding some data that was only recoverable from a previous Restic snapshot, I've since switched to using Restic exclusively for anything important enough for cloud backups. My main concern was the space requirements of keeping historical snapshots, and I'm still tweaking retention policies and taking separate snapshots of different directories with different retention policies according to the risk tolerance for each directory I'm backing up. For some things, I think even local btrfs snapshots would suffice, with the understanding that they reduce recovery time but aren't really a backup. However, any irreplaceable data really needs monthly Restic snapshots in the cloud. And without something like btrfs scrubs to alert you that you have a problem, even snapshots from months ago may contain unnoticed corruption.
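
To give a concrete idea of the retention side, here's a minimal sketch of the kind of Restic invocations involved; the repository URL, path, and keep counts are placeholders rather than my actual policy:

```sh
# Assumes RESTIC_PASSWORD and the cloud credentials are set in the environment

# Take a new snapshot of a directory into the cloud repository
restic -r s3:s3.amazonaws.com/my-backup-bucket backup /srv/important

# Apply a retention policy: keep 7 daily, 4 weekly, and 12 monthly snapshots, prune the rest
restic -r s3:s3.amazonaws.com/my-backup-bucket forget \
  --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune

# Periodically verify the repository is actually intact and restorable
restic -r s3:s3.amazonaws.com/my-backup-bucket check
```

Running forget with different --keep-* counts per directory (or per repository) is how the separate retention policies per risk tolerance end up working.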

[-] melfie@lemy.lol 3 points 5 days ago

I don't understand the downvotes. This is the type of lesson people learn by losing data, and there's no sense in learning it the hard way yourself.

[-] melfie@lemy.lol 1 points 5 days ago* (last edited 5 days ago)

TS transpiles to JS, and when that JS is executed in Deno, Node.js, a Blink-based browser like Chrome, etc., it gets just-in-time compiled to native machine code instead of being interpreted. Hope that helps.
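
As a rough command-line sketch of that pipeline (the file name is just a placeholder):

```sh
# Transpile TypeScript to plain JavaScript
npx tsc app.ts            # emits app.js

# Run the JavaScript on a V8-based runtime; hot code paths get JIT-compiled to machine code
node app.js

# Deno does the transpilation step for you and runs the result on V8
deno run app.ts
```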

[-] melfie@lemy.lol 1 points 6 days ago

The JavaScript code is just-in-time compiled to native machine code and heavily optimized, as opposed to being interpreted.

[-] melfie@lemy.lol 9 points 6 days ago

Having a synced copy elsewhere is not an adequate backup and snapshots are pretty important. I recently had RAM go bad and my most recent backups had corrupt data, but having previous snapshots saved the day.

[-] melfie@lemy.lol 4 points 6 days ago

I had to deal with large JavaScript codebases targeting IE8 back in the day and probably would’ve slapped anyone back then who suggested using JavaScript for everything. I have to say, though, that faster runtimes like v8 and TypeScript have done wonders, and TypeScript nowadays is actually one of my favorite languages.

[-] melfie@lemy.lol 14 points 1 week ago* (last edited 1 week ago)

This article sums up a Stanford study of AI and developer productivity. TL;DR: the net productivity boost is a modest 15-20%, and it drops to somewhere between negative and 10% in complex, brownfield codebases. This tracks with my own experience as a dev.

https://www.linkedin.com/pulse/does-ai-actually-boost-developer-productivity-striking-%C3%A7elebi-tcp8f

[-] melfie@lemy.lol 12 points 2 weeks ago* (last edited 2 weeks ago)

The worst part with the Meta Quest is that you apparently have to sign up as a dev and give them a credit card in order to sideload (i.e., install stuff on the device you purchased). So you can shell out hundreds for one of their devices, and the device and all your data are belong to Meta. I assume it's the same deal with these glasses. Zuck off, Zuck. 🖕

[-] melfie@lemy.lol 14 points 3 weeks ago

Geeks are enthusiasts who collect and engage with specific topics, often focusing on trends and memorabilia, while nerds are more academically inclined, concentrating on mastering knowledge and skills in their areas of interest. Both terms can overlap, but they emphasize different aspects of passion and expertise.

https://laist.com/shows/take-two/whats-the-difference-between-a-geek-and-a-nerd

[-] melfie@lemy.lol 23 points 3 weeks ago

“Flying car” is a bullshit term. They are aircraft and must be treated as such.

[-] melfie@lemy.lol 15 points 3 weeks ago

If the state of open source phones is anything to judge by, we will have open source cars at some point, except the foot brake won't work yet, so you'll have to use the hand brake for now. Cars and phones both take a lot of resources to develop, and maybe you'll be able to "de-Stellantis" your car at some point instead of going fully open source, but judging by the recent steps Google has taken to make de-Googling harder, I'm not sure how long that would last either.

[-] melfie@lemy.lol 18 points 3 weeks ago* (last edited 3 weeks ago)

I enjoyed Sabine's analysis in another video, where she argued that continuing to build ever-larger models with more compute is about as effective as continuing to build larger and larger particle accelerators. Come on, bro, this million-km Gigantic Hadron Collider will finally get us to the TOE. Just one more trillion, bro.

