I run Debian + Docker, and use Portainer to manage the Docker stacks.
For years I've run an Ubuntu LTS base with Docker, but I've just recently switched to a Debian base. Moved to Debian for my workstation as well.
I deploy bare-metal with a mix of Ansible and Docker Compose.
For personal Linux servers, I tend to run Debian or Ubuntu, with a pretty simple "base" setup that I just run through manually in my head.
- Set up my personal account.
- Upload my SSH keys.
- Configure the hostname (usually named after something from Star Trek 🖖).
- Configure the /etc/hosts file.
- Make sure it is fully patched.
- Set up ZeroTier.
- Set up Telegraf to ship some metrics.
- Reboot.
I don't automate any of this because I don't see a whole lot of point in doing it.
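For reference, those manual steps map roughly to commands like these (the account, hostname, and network ID are placeholders):

```sh
# personal account + SSH key
adduser picard
# then, from the workstation: ssh-copy-id picard@<server>

# hostname and hosts file
hostnamectl set-hostname enterprise
$EDITOR /etc/hosts

# patch everything
apt update && apt full-upgrade -y

# ZeroTier and Telegraf
curl -s https://install.zerotier.com | bash
zerotier-cli join <network-id>
apt install telegraf   # assumes the InfluxData repo is already configured

reboot
```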
I'd like to use rootless podman, but since I include zerotier in my containers, they need access to the tunnel device and net_admin, so rootless isn't an option right now.
Podman-compose works for me. I'd like to learn how to use Ansible and Kubernetes, but right now, it's just my Lemmy VPS and my Raspberry Pi 4, so I don't have much need for automation at the moment. Maybe some day.
You can add NET_ADMIN for the user running Podman; I have added it to the ambient capability set before, which acts like an inherited override for everything the user runs.
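For example, one way to experiment with that is capsh, which can raise CAP_NET_ADMIN into the ambient set for a single command; this is just a sketch (the user name and command are placeholders), and pam_cap is an option if you want it applied at login instead.

```sh
# Run podman-compose with cap_net_admin in the ambient set.
# capsh starts as root so it can hand the capability down to the target user.
sudo capsh \
  --caps="cap_net_admin+eip cap_setpcap,cap_setuid,cap_setgid+ep" \
  --keep=1 --user=podmanuser --addamb=cap_net_admin \
  -- -c "podman-compose up -d"
```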
Cloud VPS with Debian. Then fix/update whatever weird or outdated image my VPS provider gave me (over SSH). Then set up SSH certs instead of passwords. I use tmux a lot. Sometimes I have local scripts with scp to move some files around.
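A minimal sketch of the key-based SSH setup, assuming plain SSH keys rather than full SSH certificates (host and user names are placeholders):

```sh
# from the workstation: install your public key on the VPS
ssh-copy-id admin@vps.example.com

# on the VPS: turn off password logins in /etc/ssh/sshd_config
#   PasswordAuthentication no
#   PermitRootLogin prohibit-password
sudo systemctl restart ssh
```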
Usually I'm just hosting Mosquitto, maybe an Apache2 web server with WordPress or Flask. The latter two are only for development and get moved to other servers when done.
I don't usually use containers.
I'm better at hardware development than all this newfangled web stuff, so mostly just give me a command line without abstractions and I'm happy.
I have a bunch of different stuff: a dedicated server with Debian, plus 4 Raspberry Pis and 1 micro computer that acts as LB/router/DHCP/DNS for the Pis.
In general I would say that my logic is as follows:
- Every OS change is done through Ansible. This is sometimes a pain: you just want to `apt install X` and instead you might need to create a new playbook for it, but in the long term it has paid off multiple times. I do have a default playbook that does basic config (user, SSH key provisioning, some default packages) and hardening (SSH config, iptables).
- I then try to keep the OS logic to a minimum and do everything else as code. On my older dedicated server I mostly run docker-compose with systemd + templated docker-compose files dropped by Ansible (see the sketch below). The Pis instead run Kubernetes with Flux, and all my applications are either managed directly via Flux or have Helm in between. This means I can destroy a cluster, create another one, point it at my Flux repository, and I am pretty much back where I started.
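As a rough illustration of the templated-compose-dropped-by-Ansible part, the tasks could look something like this (the service name, paths, and handler are made up for the example):

```yaml
- name: Drop the templated docker-compose file for myapp
  ansible.builtin.template:
    src: myapp/docker-compose.yml.j2
    dest: /opt/myapp/docker-compose.yml
    mode: "0644"
  notify: restart myapp   # handler restarts the wrapping systemd unit

- name: Enable the systemd unit that wraps docker-compose for myapp
  ansible.builtin.systemd:
    name: myapp.service
    enabled: true
    state: started
```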
Sounds cool. Ansible could never convince me, though, because writing playbooks is so annoying.
Oh, I am with you on that. I got used to it in my previous job, where everything was done with Ansible, but I still find myself copy-pasting and tweaking things most of the time. I actually much prefer a declarative, Terraform-like approach.
Overall though there is a lot of community material, and once the playbooks are written it's quite good!
For me it's Ubuntu Server as the OS base, SWAG as the reverse proxy, and docker-compose for the services. So mostly SSH and YOLO, but with containers. I'd guess having something like Portainer running would probably be useful, but for me the terminal was enough.
As folder structure, I just have a `services` directory with subfolders for each app/service.
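For example, something like this (the service names are just examples), with each subfolder holding its own docker-compose.yml plus whatever config the container mounts:

```
services/
├── swag/
│   └── docker-compose.yml
├── nextcloud/
│   └── docker-compose.yml
└── jellyfin/
    └── docker-compose.yml
```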
Proxmox and shell scripts. I have everything automated from base install to updates.
All the VMs are Debian, installed with a custom preseed file. Each VM has a config script that completely sets up users, iptables, software, mounts, etc. SSL certs are renewed on one machine with acme.sh and then pushed out as necessary.
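A sketch of what that cert flow could look like with acme.sh (the domain, paths, and host names are placeholders, and the DNS validation plugin depends on your provider):

```sh
# issue/renew on the one machine that holds the DNS API credentials
acme.sh --issue --dns dns_cf -d example.com

# install to a stable path and reload the local web server on renewal
acme.sh --install-cert -d example.com \
  --key-file       /etc/ssl/private/example.com.key \
  --fullchain-file /etc/ssl/certs/example.com.pem \
  --reloadcmd      "systemctl reload apache2"

# push the renewed cert out to the other VMs
for host in web1 web2 mail1; do
  scp /etc/ssl/certs/example.com.pem /etc/ssl/private/example.com.key "$host":/etc/ssl/
done
```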
One of these days I’ll get into docker but half the fun is making it all work. I need some time to properly set it up and learn how to configure it securely.
NixOS instances running Nomad/Vault/Consul. Each service behind Traefik with LE certs. Containers can mount NFS shares from a separate NAS which optionally gets backed up to cloud blob storage.
I use SSH and some CLI commands for deployment, but only because that's faster than CI/CD. I'm only running `nomad run …` for the most part.
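For anyone curious, a minimal job in a setup like that might look roughly like this; it assumes Traefik's Consul Catalog provider and a cert resolver named letsencrypt, and all the names are illustrative:

```hcl
job "whoami" {
  datacenters = ["dc1"]

  group "whoami" {
    network {
      port "http" { to = 80 }
    }

    service {
      name = "whoami"
      port = "http"
      tags = [
        "traefik.enable=true",
        "traefik.http.routers.whoami.rule=Host(`whoami.home.example`)",
        "traefik.http.routers.whoami.tls.certresolver=letsencrypt",
      ]
    }

    task "whoami" {
      driver = "docker"
      config {
        image = "traefik/whoami"
        ports = ["http"]
      }
    }
  }
}
```

After that it really is just `nomad job run whoami.nomad.hcl`.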
The goal was to be resilient to single node failures and align with a stack I might use for production ops work. It’s also nice to be able to remove/add nodes fairly easily without worrying about breaking any home automation or hosting.
A series of VPSes running AlmaLinux. I have a relatively big Ansible playbook that sets up everything after the server comes online. The idea is that I can at any time wipe the server, install a fresh OS, put back all the persistent data (Docker volumes and the /srv partition with all the heavy data), and run the playbook.
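So the rebuild flow is roughly this (the inventory name, playbook, and backup paths are placeholders):

```sh
# put the persistent data back (the heavy stuff on /srv plus the Docker volumes)
rsync -aHAX backups/srv/ new-vps:/srv/

# then re-apply the whole configuration
ansible-playbook -i inventory/production.yml site.yml --limit new-vps
```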
Docker Compose for services; last time I checked Podman, podman-compose didn't work properly, and learning a new orchestration tool would take an unjustifiable amount of time.
I try to avoid shell scripts as much as possible because they are hard to write in a way that handles all possible scenarios, they are difficult to debug, and they can make a mess when not done properly. Premade scripts are usually the big offenders here, and they are a nice way to leave you without a single clue about how the stuff they set up works.
I don't have a selfhosting addiction.
Proxmox + mostly Debian + currently documenting my builds for future automation.
Lots of snapshots and clones/backups, in case I want to roll back or in case I want a head start in the future.
For example, I have a couple LAMP stack VMs backed up. If I need another LAMP VM, I clone (restore-as-unique) the backup in Proxmox, twiddle a few settings to make it actually unique, and go.
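That restore-as-unique step is roughly the following (the VM IDs, archive name, and new hostname are placeholders):

```sh
# restore the LAMP backup as a brand-new VM with a fresh MAC address
qmrestore /var/lib/vz/dump/vzdump-qemu-100.vma.zst 121 --unique true

# inside the new VM, make it actually unique
hostnamectl set-hostname lamp-dev2
rm /etc/machine-id && systemd-machine-id-setup
# ...then give it its own IP address / DHCP reservation
```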
I don't do Docker or anything like it currently, and eventually I'm sure I'll learn, but having a crapload of VMs (true VM or LXC) suits me just fine for now. I will likely learn how to do my deployments with Ansible before learning Docker et al.
Only ssh, nvim, htop, and screen. The rest is whatever is required. I like to keep things minimal until I really need the server to do anything specific.
I resort to Docker only if I need the application temporarily or the application setup is awkward/annoying.
I try to have most of the common parts set up with Ansible, and over time I keep adding more and more. This is especially useful for things you may not do, or need, often, where it's not as fresh in your mind how you set it up last time.
Any configuration management system would work; I find Ansible very approachable and fast to get productive with.
I've recently switched my entire self hosted infrastructure to NixOS, but only after a few years of evaluation, because it's quite a paradigm shift but well worth it imho.
Before that I used to stick to a solid base of Debian with some Docker containers. There are still a few of those remaining that I have yet to migrate to my NixOS infra (namely Mosquitto, Gotify, Node-RED, and Portainer for managing them).
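For example, migrating one of those (say Mosquitto) ends up as a few lines of NixOS configuration; this is only a minimal sketch, and real listener/auth options will differ:

```nix
{ config, pkgs, ... }:
{
  services.mosquitto = {
    enable = true;
    listeners = [{ port = 1883; }];
  };
}
```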
Kubernetes.
I deploy all of my container/Kubernetes definitions from Github:
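One way that can look, assuming a kustomize-style layout (the repo URL and paths here are placeholders):

```sh
git clone https://github.com/<user>/k8s-definitions.git
kubectl apply -k k8s-definitions/clusters/home
```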