[-] SaltySalamander@fedia.io 31 points 16 hours ago

As soon as I can get a 20TB SSD for ~$325-350, let me know. Until then, lol.

[-] MangoPenguin 17 points 19 hours ago

Wouldn't an HDD-based system be like 1/10th the price? I don't know if HDDs are going away any time soon.

[-] jj4211@lemmy.world 10 points 17 hours ago* (last edited 13 hours ago)

The disk cost is about a 3-fold difference now, rather than an order of magnitude.

The disks don't make up as much of the cost of these solutions as you'd think, so a disk-based solution with similar capacity might be more like 40% cheaper rather than 90% cheaper.

The market for pure-capacity storage is well served by spinning platters, for now. But there's little reason to iterate on your storage subsystem design: the same design you had in 2018 can keep up with modern platters. Compare that to SSDs, where the form factor has evolved and the interface gets revised with every PCIe generation.
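To make the arithmetic concrete, here's a minimal back-of-envelope sketch; the cost fractions are assumptions chosen only to illustrate the point, not figures from any vendor:

```python
# Back-of-envelope: how a ~3x media price gap shrinks at the solution level.
# Both constants are hypothetical, chosen only to illustrate the point.

FLASH_MEDIA_FRACTION = 0.6   # assumed: flash media is ~60% of an all-flash system's cost
FLASH_PREMIUM = 3.0          # flash costs ~3x per TB vs spinning disk

flash_solution = 1.0                              # normalize the all-flash system cost
hdd_solution = (1.0 - FLASH_MEDIA_FRACTION) \
    + FLASH_MEDIA_FRACTION / FLASH_PREMIUM        # same chassis/CPU/network, cheaper media

savings = 1.0 - hdd_solution / flash_solution
print(f"HDD-based solution is ~{savings:.0%} cheaper")   # ~40% with these assumptions
```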

[-] Nomecks@lemmy.ca 5 points 17 hours ago

Spinning platter capacity can't keep up with SSDs. HDDs are just starting to break the 30TB mark while SSDs are shipping at 50+. The cost delta per TB is closing fast. You can also have always-on compression and dedupe in most cases with flash, so you get better utilization.

[-] fluffykittycat@slrpnk.net 2 points 4 hours ago

The cost per terabyte is why hard disk drives are still around. Once SSDs are only maybe 10% more expensive, HDDs will be obsolete.

[-] SaltySalamander@fedia.io 1 points 16 hours ago

> You can also have always-on compression and dedupe in most cases with flash

As you can with spinning disks. Nothing about flash makes this a special feature.

[-] Nomecks@lemmy.ca 5 points 15 hours ago* (last edited 15 hours ago)

The difference is you can use inline compression and dedupe in a high-performance environment. HDDs suck at random IO.
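For illustration, a toy sketch of why inline dedupe leans on fast random IO: every incoming block triggers an index lookup and potentially a small read at an effectively random offset. All names and the block size here are illustrative, not how any real array does it:

```python
import hashlib

BLOCK_SIZE = 4096   # illustrative fixed block size
index = {}          # block hash -> physical address; lookups land at random offsets
storage = []        # stand-in for the physical block store

def write_block(data: bytes) -> int:
    """Store a block only if its content hash is new (inline dedupe)."""
    digest = hashlib.sha256(data).digest()
    if digest in index:
        # Duplicate: just reference the existing block. On real hardware,
        # verifying/serving it means small random reads - cheap on flash,
        # expensive on spinning disks.
        return index[digest]
    storage.append(data)
    index[digest] = len(storage) - 1
    return index[digest]

# Two identical blocks consume the space of one:
a = write_block(b"\x00" * BLOCK_SIZE)
b = write_block(b"\x00" * BLOCK_SIZE)
assert a == b and len(storage) == 1
```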

[-] enumerator4829@sh.itjust.works 1 points 14 hours ago

See, for example, the storage systems from Vast or Pure. You can increase the window size for compression and dedupe far smaller blocks. Fast random IO also allows you to do that "online" in the background. In the case of Vast, you also have multiple readers on the same SSD doing that compression and dedupe.

So the feature isn’t that special. What you can do with it in practice changes drastically.

For servers, physical space is also a huge concern. 2.5" HDDs cap out at like 6TB I think, while you can easily find an 8TB 2.5" SSD anywhere. We have 16TB drives in one of our servers at work and they weren't even that expensive (relatively).

[-] Aux@feddit.uk 2 points 14 hours ago

You can put multiple 8TB M.2 SSDs into a 2.5" slot.

[-] Natanael@infosec.pub 2 points 18 hours ago

It's losing its cost advantage as time goes on. Long-term storage is still on tape (and that's actively developed too!), flash is getting cheaper, and spinning disks have inherent bandwidth and latency limits. It's probably not going away entirely, but its main use cases are being squeezed from both ends.

[-] hapablap@lemmy.sdf.org 16 points 20 hours ago

My sample size of one (myself) has had a single drive fail in decades, and it was a solid-state drive. Thankfully it failed in a strangely intermittent way and I was able to recover the data. Still, it surprised me, as one would assume solid state would be more reliable. The spinning rust has proven to be very reliable. But regardless, I'm sure SSDs will be/are better in every way.

[-] DSTGU@sopuli.xyz 10 points 17 hours ago* (last edited 17 hours ago)

I believe you see the main issue with your experience: the sample size. With a small enough sample you can experience almost anything. Wisdom is knowing what you can and what you can't extrapolate to the entire population.

I have one HDD that survived 20+ years, and an AliExpress SSD that died in 6 months. Therefore all SSDs are garbage!!!

That's also the only SSD I've ever had fail on me, and I've had them since 2011. In that same time I've had probably 4 HDDs fail on me. Even then, I know to use data from companies like Backblaze that have infinitely more drives than I do.

[-] AnUnusualRelic@lemmy.world 5 points 19 hours ago

I'm about to build a home server with a lot of storage (relatively, around 6 or 8 times 12 TB as a ballpark), and so far I haven't even considered anything other than spinning drives.

[-] nucleative@lemmy.world 4 points 18 hours ago

Because spinning disks are a bit cheaper than SSDs?

[-] AnUnusualRelic@lemmy.world 4 points 17 hours ago* (last edited 17 hours ago)

Especially for large sizes. Also speed isn't really much of an issue on a domestic network.

[-] pr0sp3kt@lemmy.dbzer0.com 5 points 19 hours ago* (last edited 19 hours ago)

I've had terrible experiences with HDDs all my life: slow af, sector loss, corruption, OS corruption... I am traumatized. I got an 8TB NVMe for less than $500... Since then I haven't had a single problem (well, except that after an electric failure, BTRFS CoW tends to act weird and sometimes doesn't boot; you need manual intervention).

[-] Theoriginalthon@lemmy.world 2 points 12 hours ago

I agree that single HDDs are terrible, but once you RAID multiple of them together it gets much better. And now with ZFS, better still.

[-] AngryCommieKender@lemmy.world 2 points 17 hours ago* (last edited 17 hours ago)

Sounds like you may not be making enough sacrifices to the Omnissiah

/s

[-] Korhaka@sopuli.xyz 13 points 1 day ago

Probably at some point, as prices per TB continue to come down. I don't know anyone buying a laptop with an HDD these days. Can't imagine being likely to buy one for a desktop ever again either. I've still got a couple of old ones active (one is 11 years old), but I do plan to replace them with SSDs at some point.

[-] twice_hatch@midwest.social 3 points 18 hours ago

"in enterprises" oh lol

[-] dual_sport_dork@lemmy.world 51 points 1 day ago* (last edited 20 hours ago)

No shit. All they have to do is finally grow the balls to build SSDs in the same form factor as the 3.5" drives everyone in enterprise is already using, and stuff those to the gills with flash chips.

"But that will cannibalize our artificially price inflated/capacity restricted M.2 sales if consumers get their hands on them!!!"

Yep, it sure will. I'll take ten, please.

Something like that could easily fill the oodles of existing bays that are currently filled with mechanical drives, both in the home user/small scale enthusiast side and existing rackmount stuff. But that'd be too easy.

[-] jj4211@lemmy.world 15 points 20 hours ago

Hate to break it to you, but the 3.5" form factor would absolutely not be cheaper than an equivalent bunch of E1.S or M.2 drives. The price is not inflated due to the form factor; it's driven primarily by the cost of the NAND chips, and you'd just need more of them to take advantage of the bigger area. To take advantage of the thickness of the form factor, it would need to be a multi-board solution. Also, there'd be a thermal problem, since a 3.5" bay is not designed for the thermal load of that much SSD.

Add to that that 3.5" bays currently top out at maybe 24Gb SAS connectors, which means such a hypothetical product would be severely crippled by the interconnect: throughput-wise, over 30-fold slower in theory than an equivalent volume of E1.S drives. Which is bad enough, but SAS also has a single, relatively shallow queue, while an NVMe target has thousands of deep queues befitting NAND's random-access behavior. So the product would have to be redesigned to handle that sort of workload, and if you do that, you might as well do EDSFF. No one would buy something more expensive than the equivalent capacity in E1.S drives that performs only as well as the SAS connector allows.

EDSFF defines four general form factors: E1.S, which is roughly M.2-sized; E1.L, which is over a foot long and offers the absolute most data per unit volume; and E3.S and E3.L, which aim to be more 2.5"-like. As far as I've seen, the market only really wants E1.S despite the bigger form factors, so I think the market has shown that 3.5" wouldn't have takers.
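For a sense of scale, a rough sketch of that interconnect math. The per-drive link width and the six-drives-per-3.5"-volume figure are assumptions for illustration, not from any spec:

```python
# Rough interconnect math behind the "over 30-fold" claim.
# Drive count per 3.5" volume and PCIe generation are assumptions.

SAS4_GBPS = 24                  # one 24Gb SAS link for the whole 3.5" device
PCIE5_LANE_GBPS = 32            # PCIe Gen5: 32 GT/s per lane (~32 Gb/s raw)
LANES_PER_E1S = 4               # typical E1.S link width (assumed)
E1S_PER_35IN_VOLUME = 6         # assumed: ~6 E1.S fit in one 3.5" bay's volume

e1s_total = PCIE5_LANE_GBPS * LANES_PER_E1S * E1S_PER_35IN_VOLUME  # 768 Gb/s
print(f"~{e1s_total / SAS4_GBPS:.0f}x the raw link bandwidth")     # ~32x
```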

[-] Hozerkiller@lemmy.ca 1 points 18 hours ago

I hope you're not putting M.2 drives in a server if you plan on reading the data from them at some point. Those are for consumers, and there's an entirely different form factor for enterprise storage using NVMe drives.

[-] SaltySalamander@fedia.io 1 points 16 hours ago

Tell me, what would be the issue with reading data from an M.2 drive in a server?

[-] Hozerkiller@lemmy.ca 1 points 15 hours ago

M.2 drives like to get hot and die. They work great until they don't.

[-] SaltySalamander@fedia.io 1 points 15 hours ago

Sounds to me like you need to work on the cooling in your server case.

[-] Hozerkiller@lemmy.ca 1 points 15 hours ago

TBH I have an old SSD for the host and rust for all my data. I don't have M.2 or U.2 in my server, but I've heard enough horror stories to just use U.2 if the time comes.

[-] jj4211@lemmy.world 1 points 17 hours ago* (last edited 13 hours ago)

Enterprise systems do have M.2, though admittedly it's only really used for pretty disposable boot volumes.

And while they aren't used as data volumes so much, that's not due to unreliability; it's due to hot-swap and power levels.

[-] Sixtyforce@sh.itjust.works 33 points 1 day ago

I'll shed no tears, even as a NAS owner, once we get equivalent-capacity SSDs without breaking the bank :P

[-] Appoxo@lemmy.dbzer0.com 3 points 1 day ago

Considering the high prices for high-density SSD chips...
Why are there no 3.5" SSDs with low-density chips?

[-] jj4211@lemmy.world 6 points 21 hours ago

Not enough of a market

The industry's answer is: if you want that much storage, get like 6 EDSFF or M.2 drives.

3.5" is a useful format for platters, but not particularly needed to hold NAND chips. Meanwhile, instead of having to gate all those chips behind a single connector, you can have 6 connectors to drive performance. Again, that's less important for a platter-based strategy, which is unlikely to saturate even a single 12Gb link in most realistic access patterns, but SSDs can keep up with 128Gb under utterly random IO.

Tiny drives mean more flexibility. That storage product can go into NAS boxes, servers, desktops, the thinnest laptops, and embedded applications, maybe with tweaked packaging and cooling. A product designed to host that many SSD boards behind a single connector would not be trivial to modify for any other use case, would bottleneck performance on a single interface, and would be pretty much guaranteed to cost more to manufacture than selling the components as 6 drives.

[-] NeuronautML@lemmy.ml 3 points 23 hours ago* (last edited 22 hours ago)

I doubt it. SSDs are subject to quantum tunneling. This means if you don't power up an SSD once every 2-5 years, your data is gone. HDDs have no such qualms. So long as they still spin, there's your data, and when they no longer do, the data is still there on the platters.

So there's a use case that SSDs will never replace: cold data storage. I use HDDs for my cold offsite backups.

[-] floquant@lemmy.dbzer0.com 15 points 21 hours ago

Sorry dude, but bit rot is a very real thing on HDDs. They're magnetic media, which degrades over time. If you leave a disk cold for 2-5 years, there's a very good chance you'll get some bad sectors. SSDs aren't immune from bit rot, but that's not through quantum tunneling - not any more than your CPU is affected by it at least.

[-] NeuronautML@lemmy.ml 1 points 13 hours ago* (last edited 12 hours ago)

I didn't mean to come across as saying that HDDs don't suffer bit rot. However, there are specific long-term storage HDDs that are built to be powered up sporadically and to resist external magnetic influences on the tracks. In a proper storage environment they will last over 5 years without being powered up and still retain all information. I know because I've used them in this exact scenario for over two decades. Conversely, there are no such long-term storage SSDs.

SSDs store information as trapped charges, which certainly leak through quantum tunneling as well as generalized charge leakage. As the insulation loses effectiveness, the potential barrier allows what is normally a manageable effect, much like in the CPU as you said, to fall outside the scope of error-correction techniques. This is a physical limitation that cannot be overcome.

[-] MonkderVierte@lemmy.ml 5 points 21 hours ago

You're wrong. HDDs need to be powered up about as frequently as SSDs, because the magnetization gets weaker.

[-] NeuronautML@lemmy.ml 2 points 13 hours ago* (last edited 13 hours ago)

Here's a copy-paste from Super User that will hopefully show you that what you said is incorrect, in a way I find expresses my thoughts exactly:

> Magnetic Field Breakdown
>
> Most sources state that permanent magnets lose their magnetic field strength at a rate of 1% per year. Assuming this is valid, after ~69 years, we can assume that half of the sectors in a hard drive would be corrupted (since they all lost half of their strength by this time). Obviously, this is quite a long time, but this risk is easily mitigated - simply re-write the data to the drive. How frequently you need to do this depends on the following two issues (I also go over this in my conclusion).

https://superuser.com/questions/284427/how-much-time-until-an-unused-hard-drive-loses-its-data
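The ~69-year figure in the quote is just a half-life calculation: at 1% loss per year the field after n years is 0.99^n, and solving 0.99^n = 0.5 gives n ≈ 69. A quick sanity check (the 1%/year rate is the quote's assumption, not a measured constant):

```python
import math

# Quoted assumption: permanent magnets lose ~1% field strength per year.
ANNUAL_LOSS = 0.01

# Years until the field drops to half: solve (1 - ANNUAL_LOSS)**n == 0.5
half_life_years = math.log(0.5) / math.log(1 - ANNUAL_LOSS)
print(f"~{half_life_years:.0f} years")   # ~69 years
```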

[-] floquant@lemmy.dbzer0.com 2 points 17 hours ago

Note that for HDDs, it doesn't matter if they're powered or not. The platter is not "energized" or refreshed during operation like an SSD is. Your best bet is to have some kind of parity to identify and repair those bad bits.

[-] n2burns@lemmy.ca 4 points 20 hours ago

Nothing in this article is talking about cold storage. And if we are talking about cold storage, as others have pointed out, HDDs are also not a great solution. LTO (magnetic tape) is the industry standard for a good reason!

[-] NeuronautML@lemmy.ml 2 points 13 hours ago* (last edited 13 hours ago)

Tape storage is the gold standard, but it's just not realistically applicable to small-scale operations or personal data storage. Proper long-term storage HDDs do exist and are perfectly adequate for the job, as I said above, and I can attest to this from personal experience.

[-] thejml@lemm.ee 23 points 1 day ago

Meanwhile, Western Digital is moving away from SSD production and back to HDDs for massive storage for AI, data lakes, and such: https://www.techspot.com/news/107039-western-digital-exits-ssd-market-shifts-focus-hard.html

[-] solrize@lemmy.world 18 points 1 day ago* (last edited 23 hours ago)

HDDs were a fad, I'm waiting for the return of tape drives. 500TB on a $20 cartridge, and I can live with the 2-minute seek time.

[-] AnUnusualRelic@lemmy.world 2 points 19 hours ago

It's not a real hard disk unless you can get it to walk across the server room anyway.

[-] earphone843@sh.itjust.works 17 points 1 day ago

Tape drives are still definitely a thing.

[-] Appoxo@lemmy.dbzer0.com 5 points 1 day ago

If you exclude the upfront price of the drive and the specialized software needed to read/write it, it's very affordable in €/TB.

[-] MangoPenguin 1 points 19 hours ago

Tapes are still sold in pretty high densities, don't have to wait!

[-] NOT_RICK@lemmy.world 19 points 1 day ago

Spinning rust is a funny way of describing HDDs, but I immediately get it
