321
submitted 1 year ago by supakaity to c/main

So it's been a few days, where are we now?

I also thought that, given the technical inclination of a lot of our users, you might be interested in the what, how and why of our decisions here, so I've included a bit of the more techy side of things in this update.

Bandwidth

So one of the big issues we had was heavy bandwidth usage caused by a massive amount of downloaded content (not in terms of storage space, but many people downloading the same content over and over).

In terms of bandwidth, we were seeing the top 10 single images alone result in 600GB+ of downloads in a 24 hour period.

This has been resolved by setting up a frontline caching server at pictrs.blahaj.zone. It sits on a small, unmetered 400Mbps connection and runs a tiny Caddy cache that reverse proxies to the actual lemmy server, caching the images locally in a file store on its 10TB drive. The nginx in front of lemmy 301-redirects internet-facing static image requests to the new caching server.
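For the technically curious, the shape of it is roughly this (a simplified, illustrative sketch rather than our exact configs; the /pictrs/image/ path and upstream names here are just examples):

    # nginx in front of lemmy: bounce public image requests to the cache host
    location /pictrs/image/ {
        return 301 https://pictrs.blahaj.zone$request_uri;
    }

    # Caddyfile on pictrs.blahaj.zone: reverse proxy back to the lemmy server,
    # with a caching layer (not shown here) keeping copies on the 10TB drive
    pictrs.blahaj.zone {
        reverse_proxy https://lemmy.blahaj.zone
    }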

This one step alone is saving over $1,500/month.

Alternate hosting

The second step is to get away from RDS and our current fixed instance hosting to a stand-alone, self-healing infrastructure. That's what I've been doing over the last few days: setting up the new servers and configuring the new cluster.

We could be doing this cheaper with a lower cost hosting provider and a less resilient configuration, but I'm pretty risk averse, and I'm comfortable that this will be a safe configuration.

I wouldn't normally recommend this setup to anyone hosting a small or single-user instance, and it's a bit overkill for us at this stage, but in this case I have decided to spin up a full production-grade kubernetes cluster with a stacked etcd inside a dedicated HA control plane.

We have rented two bigger dedicated servers (64GB, 8 CPU, 2TB RAID 1, 1Gbps bandwidth) to run our 2 databases (main/standby), redis, etc on. The control plane is running on 3 smaller instances (2GB, 2 CPU each).
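For anyone wondering what that looks like in practice, the kubeadm flow for a stacked-etcd HA control plane is roughly the following (illustrative only; the endpoint name is made up and the tokens are placeholders, not our real values):

    # first control plane node (stacked etcd is kubeadm's default topology)
    kubeadm init --control-plane-endpoint "cp.blahaj.zone:6443" --upload-certs

    # the other two small instances join as additional control plane nodes
    kubeadm join cp.blahaj.zone:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash> \
        --control-plane --certificate-key <key>

    # the two big dedicated servers join as workers for the databases, redis, etc.
    kubeadm join cp.blahaj.zone:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>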

All up this new infrastructure will cost around $9.20/day ($275/m).

Current infrastructure

The current AWS infrastructure is still running at full spec and (minus the excess bandwidth charges) is still costing around $50/day ($1500/m).

Migration

Apart from setting up kubernetes, nothing has been migrated yet. This will be next.

The first step will be to get the databases off the AWS infrastructure, which will be the biggest bang for buck, as the RDS is costing around $34/day ($1,000/m).
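(In practice that move is essentially a dump-and-restore plus repointing lemmy's database config, something like the sketch below. Hostnames are placeholders rather than our actual endpoints, and the real cutover will involve briefly stopping lemmy so nothing gets written mid-copy.)

    # dump the lemmy database out of RDS in custom format
    pg_dump -h lemmy-db.example.rds.amazonaws.com -U lemmy -Fc -f lemmy.dump lemmy

    # restore it into the new postgres running on the dedicated servers
    pg_restore -h new-db-host -U lemmy -d lemmy --no-owner lemmy.dump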

The second step will be the next biggest machine which is our Hajkey instance at Blåhaj zone, currently costing around $8/day ($240/m).

Then the pictrs installation, and lemmy itself.

And finally everything else will come off and we'll shut the AWS account down.

top 34 comments
[-] jo@blahaj.zone 37 points 1 year ago

@supakaity@lemmy.blahaj.zone I'm pleased that this post has been up for 30 minutes and no one has jumped in with "Acktually..." and proceeded to tell you how you're doing it wrong. jk /jk

Thankfully I have no clue about this stuff, but appreciate the detailed update nonetheless. #Hajkey #blahajzone

[-] ayilathebailey@blahaj.zone 9 points 1 year ago

@jo @supakaity@lemmy.blahaj.zone That's because the ones who are complaining are wisely doing so in their own spaces, as they ought to. They're also revealing just how much they know about things, in the usual armchair quarterback / bleacher flyhalf fashion.

[-] jo@blahaj.zone 4 points 1 year ago

@ayilathebailey Good points. I'm in a bit of a mood and really need to tone down my acerbic replies, lest I be accused of the same. @supakaity@lemmy.blahaj.zone

[-] ayilathebailey@blahaj.zone 3 points 1 year ago

@jo @supakaity@lemmy.blahaj.zone And here I am adding a little more pepper in my pitches.

[-] MsPenguinette 5 points 1 year ago

I think it's because what they are saying is pretty solid. It's the exact solution I'd recommend

[-] lapis 34 points 1 year ago

Absolutely wild to me that moving off AWS + setting up the caching server will bring overall costs down by around a factor of ten. So glad y’all are capable of the advanced technical junk, and super thankful that you’re willing and able to host the various blahaj.zone instances!

[-] moonsnotreal 30 points 1 year ago

It's amazing how much setting up a caching server saves

[-] princessnorah 29 points 1 year ago* (last edited 1 year ago)

Thank you for all your communication about how the server is being run. I always feel in good hands here on Blahaj :)

[-] audiomodder 27 points 1 year ago

How can we donate to keep the infrastructure up and running?

[-] nowitsabby 31 points 1 year ago
[-] ezri 26 points 1 year ago

Awesome! Keep up the good work

[-] masukomi 20 points 1 year ago

a) holy 💩 i had no idea this was so expensive b) please include the ko-fi link for us to help support in future updates.

(link found in other comments)

[-] supakaity 13 points 1 year ago

It's not supposed to be, that's the issue. :)

[-] masukomi 7 points 1 year ago

well yeah, but even once the costs are reduced 10x or whatever, there will still be costs, and it'd still be good to support its continued existence.

[-] jdp23@blahaj.zone 19 points 1 year ago

@supakaity@lemmy.blahaj.zone thanks for the detailed update!

[-] iso 17 points 1 year ago

I'm glad you moved away from AWS. I wouldn't even consider going for VM hosting and would've gone dedicated from the get-go (or even self-hosting on a colo / using a good fiber connection at home, but I guess I live in a super privileged country when it comes to ISPs).

Isn't k8s a bit overkill tho? Front-loaded caching seems to make sense, but a single 10gbit dedi could probably resolve the issue easier and simpler, couldn't it?

[-] iso 13 points 1 year ago* (last edited 1 year ago)

Just to add some more background on this: I used to work closely with the network team on the website team of the biggest contender in its market (can't disclose which one without people figuring out the company, since the market is a bit niche).

We had 20,000 users a day with a lot of images served.

The whole infrastructure consisted of 2 firewall servers and the main DB (pSQL) on 2 self-hosted servers (think colo; it was sitting in a very remote location with 2 big diesel generators that would've run the whole datacenter for a week, iirc), with 14 Hetzner backend mirrors that ran the whole PHP code, served images and the Angular frontend + some weird custom JavaScript. Scaling was done by simply throwing more Hetzners at it.

Given how performance-efficient Lemmy is compared to 20-year-old deprecated PHP code held together with duct tape, I feel like much less could make it work.

[-] tallgirlvanessa 3 points 1 year ago

I'm out of my depth but based on the cost savings, seems like a good situation? Kubernetes does scare me though. On the other hand it might be sensible to do this kind of overcorrection just in case the traffic takes another big spike. On the other other hand what you're describing seems pretty dang effective.

[-] iso 6 points 1 year ago

yeah, you pretty much described the use case for k8s. It allows for rapid horizontal scaling, since you can easily throw another machine into the cluster if you need it. It mostly makes sense if you actually have multiple machines sitting idle to begin with, so this technology is mostly used in combination with managed quick rent servers (think AWS).

Beyond that, k8s is kinda fancy for cluster management, but if you don't have a cluster you kinda don't need it to begin with. Using simple kernel VMs (think Proxmox) or just Docker works better there. You could still go for k8s since it's pretty much docker with cluster functionalities, just in case you want to expand eventually (sidenote, docker allows for cluster functionalities too, but they put a price on it, while k8s is open source iirc).

At the company I worked for, k8s was considered but ultimately not implemented, since it was deemed a bit overkill. We already had everything set up with a bunch of bash scripts anyway, so it didn't matter too greatly to begin with.

[-] MsPenguinette 2 points 1 year ago

I think it's smart to start with k8s. Better than having to switch over to it later. Since lemmy is growing and will continue to grow.

Learning k8s is the more difficult part. If you know k8s well, it's much easier to deploy than an EC2 deployment, especially if you need an ASG and ELBs.

[-] cupcakezealot 15 points 1 year ago

I have no idea what any of this means but I'm glad you were able to figure it out and make it cheaper (hopefully it's not the emojis causing it, because I love my blobcat in a box emoji :))

[-] Lanthanae 12 points 1 year ago

It's cool to see behind the curtain on this stuff, thanks for the update!

[-] LuckingFurker 10 points 1 year ago

I have no idea what a lot of this means but I'm glad to know that our admins are so cool and knowledgeable about it ❤️

[-] sit_up_straight 6 points 1 year ago

appreciate the dedication and transparency! I'm a developer myself but I'm still learning the basics when it comes to clusters and scaling

[-] NoStressyJessie 4 points 1 year ago* (last edited 1 year ago)

I tried to update my profile picture for the first time since the migration and now I don't have a profile picture at all, anyone else noticed issues? Image upload attempted from the webui settings page at lemmy.blahaj.zone.

Throws the error "{"data":{"msg":"Couln't upload file, Couldn't save file, No space left on device (os error 28)","files":null},"state":"success"}" in a toast notification on bottom left of page.

[-] ArieTheFloof 2 points 1 year ago

Yeah, same here. Assuming it's just a migration hiccup.

[-] ada 4 points 1 year ago

@NoStressyJessie@lemmy.blahaj.zone Just log files filling up a partition. It should be good to go again now

[-] NoStressyJessie 2 points 1 year ago* (last edited 1 year ago)

I'm trying. Maybe I messed up when I converted the file, but it shows as a broken image, and when I go to the web address where the image should be hosted it says

{"msg":"Error in MagickWand, ImproperImageHeader `/data/pict-rs/files/jhLII3k5jz.png' @ error/png.c/ReadPNGImage/4286"}

Edit: Same kind of error for jpg

{"msg":"Error in MagickWand, InsufficientImageDataInFile `/data/pict-rs/files/gnUPYJkCuT.jpg' @ error/jpeg.c/ReadJPEGImage_/1112"}

the first image I exported from GIMP, the 2nd picture was converted online. Seems unlikely I botched 2 separate conversion attempts using separate utilities

[-] bdonvr@thelemmy.club 3 points 1 year ago

Once pict-rs updates to allow directly serving images from object storage, wouldn't it be beneficial to migrate it to an object storage that allows unlimited egress, like Cloudflare R2?

[-] ada 20 points 1 year ago

Cloudflare is a non starter

[-] tallgirlvanessa 3 points 1 year ago

Can you say why? I might wanna move some of my stuff if they're being shitty

I mean, Cloudbleed sucked, and their constant refrain of "we're not HOSTING bigoted websites, just caching all their stuff and handing it to whoever asks for it", is that it?

[-] ada 18 points 1 year ago

Their explicit and active protection of the rights of bigots over the wellbeing of the people those bigots are targeting

[-] princess 5 points 1 year ago

gods i love this place 💖

[-] CoachDom 1 points 1 year ago

Thanks for the update!

I signed up for a monthly donation on Ko-Fi and advise everybody who can afford it to do so as well!
