Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues in the community? Report them using the report flag.
Questions? DM the mods!
Tangential question: What kind of server apps require that kind of processing power? I run a server on an Intel N200 laptop with multiple apps and services and it rarely uses more than 12% CPU and 15 watts. I'm wondering if I'm going to eventually run into something that needs a more powerful platform.
That N200 is likely on par or faster than dual Opteron 6272 CPUs, since they are so old.
A single Opteron 6272 is somewhat faster than the N200, but the Opteron's TDP is 115 watts while the N200's is only 6 watts. OP's server with two processors is more than 2x as fast as my single-processor laptop, but can draw nearly 40x the electricity. For a home server it's major overkill.
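To show where the "nearly 40x" figure comes from, here's the arithmetic, using TDP as a rough proxy for actual power draw (real consumption varies with load, so treat this as a ballpark):

```python
# Rough power comparison using TDP as a proxy for actual draw.
opteron_tdp_w = 115  # TDP of a single Opteron 6272, in watts
n200_tdp_w = 6       # TDP of an Intel N200, in watts

dual_opteron_w = 2 * opteron_tdp_w       # dual-socket server: 230 W
ratio = dual_opteron_w / n200_tdp_w      # 230 / 6
print(f"~{ratio:.0f}x the power")        # ~38x, i.e. "nearly 40x"
```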
Newer CPUs can also just be better optimized and have more (and faster) cache, so they might run a given workload faster even if the two look the same on paper.
Nothin' I'm running, that's for sure!
It's not really that there are services that require that much processing power for a single request; it's that hardware like that is designed to handle normal requests from hundreds or thousands of users at once.
I suppose that supporting 0.5TB of RAM means it could deal with quite a big LLM, but any sort of halfway-modern GPU would absolutely run circles around it in terms of tokens per second, on any model that fits in its VRAM.
Sounds like my laptop will be plenty fast for some time to come.
This platform doesn't use much power to begin with, but I do run TLP with a battery profile despite the fact that it's always plugged in. My intent is to lower the power consumption a bit further and extend battery run time if the AC fails. There's no noticeable impact on application performance. If you're running Linux, it may work on your hardware too.
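For anyone wanting to try the same thing, a minimal sketch of the relevant TLP settings might look like this (the option names are real TLP config keys, but the exact values are an assumption about this setup):

```shell
# /etc/tlp.conf -- excerpt; forces the battery (power-saving) profile
# even while the laptop stays plugged into AC.
TLP_DEFAULT_MODE=BAT        # default operation mode: BAT = battery profile
TLP_PERSISTENT_DEFAULT=1    # ignore the actual power source and always
                            # apply TLP_DEFAULT_MODE
```

After editing the config, `sudo tlp start` applies the new settings without a reboot.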