If you look here: https://lemmy.world/comment/65982
At least specs- and capacity-wise, it doesn't suggest the instance is hitting a wall.
The more I dug into things, the more I think the limitation comes from an age-old issue: if your service is expected to connect to a lot of flaky destinations, you're not going to have a good time. The big instance's backend is trying to send federation event messages, but a bunch of smaller federated destinations have shuttered (their users weren't getting all the messages, so they just sign up on the big instances to see everything), which means the big instance's outgoing connections have to wait for a timeout and/or discover that the recipient is no longer available. That results in a backed-up queue of messages to send out.
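To make that concrete, here's a rough sketch (entirely made-up hosts and numbers, not Lemmy's actual code) of what a naive sequential fan-out looks like when some destinations are dead: every shuttered instance costs a full connect timeout before the next delivery can even start.

```python
# Hypothetical illustration (not Lemmy's actual code; all hosts and numbers
# are made up): a naive loop that delivers one federation event to every
# subscribed instance in sequence. Each dead instance costs a full connect
# timeout, so a handful of shuttered servers backs up the whole outbound queue.
CONNECT_TIMEOUT_S = 10.0   # how long an HTTP client might wait before giving up
HEALTHY_RTT_S = 0.2        # assumed round trip to a healthy instance

def fan_out(event, destinations, is_alive):
    """Return how long sending one event to all destinations would take."""
    elapsed = 0.0
    for host in destinations:
        elapsed += HEALTHY_RTT_S if is_alive(host) else CONNECT_TIMEOUT_S
    return elapsed

destinations = [f"instance-{i}.example" for i in range(1_000)]
dead = set(destinations[::20])                      # 5% have shut down
total = fan_out("new_comment", destinations, lambda h: h not in dead)
print(f"one event takes ~{total:.0f}s to reach everyone")   # ~690 s
```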
When I posted a reply to myself on lemmy.world, it took 17 seconds to reach my instance (hosted in a data centre with sub-200ms ping to lemmy.world itself, so this is not a network latency issue), which exceeds the 10-second limit defined by Lemmy. Increasing that limit at the application/protocol level won't help, because as more small instances come up, they too will want to subscribe to the big hubs, which will just further exacerbate the lag.
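Some back-of-envelope math (assumed numbers, not measurements from lemmy.world) on why bumping the limit doesn't buy much: with a sequential sender, the total fan-out time for one event grows with the number of subscribed instances, and a bigger timeout only makes each dead peer more expensive.

```python
# Back-of-envelope sketch with assumed numbers: lag per event scales with the
# number of subscribed instances, so raising the per-delivery limit doesn't
# remove the backlog as the fediverse grows.
def fanout_lag(n_instances, dead_fraction, per_send_s, timeout_s):
    alive = n_instances * (1 - dead_fraction)
    dead = n_instances * dead_fraction
    return alive * per_send_s + dead * timeout_s

for n in (1_000, 10_000, 50_000):
    lag = fanout_lag(n, dead_fraction=0.05, per_send_s=0.2, timeout_s=10)
    print(f"{n:>6} subscribed instances -> ~{lag / 60:.0f} min per event")
```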
I think the current implementation is fairly naive: it can scale a bit, but it will likely be insufficient as the fediverse itself grows, not just as an individual instance's user count grows. That is, the bottleneck won't be so much "this can support an instance with up to 100K users" but rather "now that there are 100K users, there are also 50K servers trying to federate with us". To work around that, you're going to need a lot more than Postgres horizontal scaling... you'd need message buses and workers that can ensure jobs (i.e. outward federation deliveries) are sent out reliably.
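For what it's worth, here's the kind of worker model I mean, as a minimal sketch (my own hypothetical design in Python, not anything Lemmy actually implements): one outbound queue and worker per destination, so a dead peer only stalls its own deliveries. A real deployment would back this with a proper message bus plus retries and backoff.

```python
# Minimal sketch of a per-destination worker model (hypothetical design, not
# Lemmy's implementation): each destination gets its own queue and worker, so
# a dead or slow peer only delays its own deliveries instead of blocking
# federation to everyone else.
import asyncio

async def destination_worker(queue: asyncio.Queue, alive: bool) -> None:
    while True:
        _event = await queue.get()
        try:
            # Stand-ins for a real HTTP POST: fast for healthy peers,
            # a long wait (eventually a timeout) for dead ones.
            await asyncio.sleep(0.01 if alive else 1.0)
        finally:
            queue.task_done()

async def main() -> None:
    hosts = [f"instance-{i}.example" for i in range(100)]
    dead = set(hosts[::20])                        # 5% have shut down
    queues = {h: asyncio.Queue() for h in hosts}
    workers = [asyncio.create_task(destination_worker(q, h not in dead))
               for h, q in queues.items()]

    # Fan one event out by enqueueing it once per destination; healthy peers
    # get it almost immediately no matter how many peers are down.
    for q in queues.values():
        q.put_nowait("new_comment")
    await asyncio.gather(*(queues[h].join() for h in hosts if h not in dead))

    for w in workers:
        w.cancel()

asyncio.run(main())
```

The point is the isolation: per-destination queues turn "one dead peer stalls everything" into "one dead peer stalls only itself".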