submitted 1 week ago* (last edited 1 week ago) by kiol@lemmy.world to c/selfhosted@lemmy.world

Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

[-] jet@hackertalks.com 1 points 3 days ago

KISS

The more complicated the machine the more chances for failure.

Remote management plus bare metal just works, it's very simple, and you get the maximum out of the hardware.

Depending on your use case, that could be very important.

[-] erock@lemmy.ml 2 points 5 days ago

Here’s my homelab journey: https://bower.sh/homelab

Basically, containers and GPUs are annoying to deal with, and GPU passthrough to a VM is even more annoying. Most modern hobbyist GPUs also do not support being split across guests. At the end of the day, it's a bunch of tinkering, which is valuable if that's your goal. I learned what I wanted; now I'm back to Arch running everything with systemd and quadlet.
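For anyone curious what the systemd + quadlet route looks like, a minimal sketch (assumes a recent Podman; the image, port and file name are just placeholders):

```
# rootless quadlet: drop a .container unit where Podman's systemd generator finds it
mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/whoami.container <<'EOF'
[Unit]
Description=Example container managed by systemd via quadlet

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
EOF

systemctl --user daemon-reload          # quadlet generates whoami.service
systemctl --user start whoami.service
```

From there it's ordinary systemd: journalctl --user -u whoami.service, enable/disable, the usual.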

[-] OnfireNFS@lemmy.world 2 points 6 days ago

This reminds me of a question I saw a couple of years ago. It was basically: why would you stick with bare metal over running Proxmox with a single VM?

It kinda stuck with me and since then I've reimaged some of my bare metal servers with exactly that. It just makes backup and restore/snapshots so much easier. It's also really convenient to have a web interface to manage the computer

Probably doesn't work for everyone but it works for me

[-] atzanteol@sh.itjust.works 110 points 1 week ago

Containers run on "bare metal" in exactly the same way other processes on your system do. You can even see them in your process list FFS. They're just running in different cgroups that limit access to resources.

Yes, I'll die on this hill.
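Easy to check for yourself, too. A rough sketch, assuming Docker and a running container named "web":

```
pid=$(docker inspect --format '{{.State.Pid}}' web)   # host PID of the container's main process
ps -fp "$pid"                                          # right there in the host's process list
cat "/proc/$pid/cgroup"                                # ...just parked in its own cgroup
```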

[-] sylver_dragon@lemmy.world 33 points 1 week ago

But, but, docker, kubernetes, hyper-scale convergence and other buzzwords from the 2010's! These fancy words can't just mean resource and namespace isolation!

In all seriousness, the isolation provided by containers is significant enough that administering containers is different from running everything in the same OS. That's different in a good way, though; I don't miss the bad old days of everything on a single server in the same space. Anyone else remember the joys of Windows Small Business Server? Let's run Active Directory, Exchange and MSSQL on the same box. No way that will lead to prob... oh shit, the RAM is on fire.

> kubernetes

Kubernetes isn't just resource isolation, it encourages splitting services across hardware in a cluster. So you'll get more latency than VMs, but you get to scale the hardware much more easily.

Those terms do mean something, but they're a lot simpler than execs claim they are.

load more comments (1 replies)
load more comments (1 replies)
[-] Semi_Hemi_Demigod@lemmy.world 6 points 1 week ago

Learning this fact is what got me to finally dockerize my setup

load more comments (2 replies)
[-] sepi@piefed.social 59 points 1 week ago

"What is stopping you from" <- this is a loaded question.

We've been hosting stuff since long before docker existed. Docker isn't necessary. It is helpful sometimes, and even useful in some cases, but it is not a requirement.

I had no problems with dependencies, config, etc because I am familiar with just running stuff on servers across multiple OSs. I am used to the workflow. I am also used to docker and k8s, mind you - I've even worked at a company that made k8s controllers + operators, etc. I believe in the right tool for the right job, where "right" varies on a case-by-case basis.

tl;dr docker is not an absolute necessity and your phrasing makes it seem like it's the only way of self-hosting you are comfy with. People are and have been comfy with a ton of other things for a long time.

[-] kiol@lemmy.world 20 points 1 week ago

Question is totally on purpose, so that you'll fill in what it means to you. The intention is to get responses from people who are not using containers, that is all. Thank you for responding!

[-] sepi@piefed.social 1 points 2 days ago* (last edited 2 days ago)

What is stopping you from running HP-UX for all your workloads? The question is totally on purpose, so that you'll fill in what it means to you.

load more comments (1 replies)
[-] nucleative@lemmy.world 24 points 1 week ago

I've been self-hosting since the '90s. I used to have an NT 3.51 server in my house. I had a dial-in BBS that worked because of an extensive collection of .bat files that would echo AT commands to my COM ports to reset the modems between calls. I remember when we had to compile the Slackware kernel from source to get peripherals to work.

But in this last year I took the time to seriously learn docker/podman, and now I'm never going back to running stuff directly on the host OS.

I love it because I can deploy instantly... Oftentimes in a single command line. Docker compose allows for quickly nuking and rebuilding, oftentimes saving your entire config to one or two files.

And if you need to slap in a traefik, or a postgres, or some other service into your group of containers, now it can be done in seconds completely abstracted from any kind of local dependencies. Even more useful, if you need to move them from one VPS to another, or upgrade/downgrade core hardware, it's now a process that takes minutes. Absolutely beautiful.
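To give a sense of how little those "one or two files" really are, a stripped-down sketch (the app image and credentials are placeholders):

```
cat > docker-compose.yml <<'EOF'
services:
  app:
    image: ghcr.io/example/app:latest        # placeholder image
    depends_on: [db]
    environment:
      DATABASE_URL: postgres://app:changeme@db:5432/app
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: app
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
EOF

docker compose up -d    # everything comes up
docker compose down     # everything goes away; moving hosts is copying this file and re-running
```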

[-] roofuskit@lemmy.world 7 points 1 week ago

Hey, you made my post for me, though I've been using docker for a few years now. Never looking back.

[-] fubarx@lemmy.world 23 points 1 week ago

Have done it both ways. Will never go back to bare metal. Dependency hell forced multiple clean reinstalls, right down to the bootloader.

The only constant is change.

[-] laserjet@lemmy.dbzer0.com 18 points 1 week ago

Every time I have tried it, it just introduces a layer of complexity I can't tolerate. I have struggled to learn everything required to run a simple Debian server. I don't care what anyone says, docker is not simpler or easier. Maybe it is when everything runs perfectly, but things never do, so you have to consider the eventual difficulty of troubleshooting. And that would be made all the more cumbersome if I don't yet understand the fundamentals of a Linux system.

However I do keep a list of packages I want to use that are docker-only. So if one day I feel up to it I'll be ready to go.

[-] kiol@lemmy.world 8 points 1 week ago

Did you try compose files as opposed to docker run?
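For anyone unfamiliar with the distinction, roughly this (image and port are arbitrary examples):

```
# imperative: works, but the flags live only in your shell history
docker run -d --name whoami -p 8080:80 --restart unless-stopped traefik/whoami:latest

# declarative: the same thing as a file you can version, edit and re-apply
cat > docker-compose.yml <<'EOF'
services:
  whoami:
    image: traefik/whoami:latest
    ports:
      - "8080:80"
    restart: unless-stopped
EOF
docker compose up -d
```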

[-] laserjet@lemmy.dbzer0.com 5 points 1 week ago

I don't know. Both? Probably? I tried a couple of things here and there. It was plain that bringing in docker would add a layer of obfuscation to my system that I am not equipped to deal with. So I rinsed it from my mind.

If you think it's likely that I followed some "how to get started with docker" tutorial that had completely wrong information in it, that just demonstrates the point I am making.

[-] enumerator4829@sh.itjust.works 18 points 1 week ago

My NAS will stay on bare metal forever. Any complication there is something I really don't want. Passthrough of drives/PCIe devices works fine for most things, but I won't use it for ZFS.

As for services, I really hate using Docker images with a burning passion. I’m not trusting anyone else to make sure the container images are secure - I want the security updates directly from my distribution’s repositories, and I want them fully automated, and I want that inside any containers. Having Nixos build and launch containers with systemd-nspawn solves some of it. The actual docker daemon isn’t getting anywhere near my systems, but I do have one or two OCI images running. Will probably migrate to small VMs per-service once I get new hardware up and running.

Additionally, I never found a source of container images I feel like I can trust long term. When I grab a package from Debian or RHEL, I know that package will keep working without any major changes to functionality or config until I upgrade to the next major. A container? How long will it get updates? How frequently? Will the config format or environment variables or mount points change? Will a threat actor assume control of the image? (Oh look, all the distros actually enforce GPG signatures in their repos!)

So, what keeps me on bare metal? Keeping my ZFS pools safe. And then just keeping away from the OCI ecosystem in general, the grass is far greener inside the normal package repositories.
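For reference, the same idea is reachable with plain Debian tooling rather than the NixOS route above; a rough sketch (machine name and package choice are examples):

```
# build a container rootfs straight from the distro's repos...
sudo debootstrap stable /var/lib/machines/websrv http://deb.debian.org/debian
# ...so security updates inside it come from Debian, fully automated
sudo systemd-nspawn -D /var/lib/machines/websrv apt-get install -y unattended-upgrades

# boot it as a lightweight machine, or hand it to systemd
sudo systemd-nspawn -b -D /var/lib/machines/websrv
sudo machinectl start websrv && machinectl list
```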

[-] zod000@lemmy.dbzer0.com 18 points 1 week ago* (last edited 1 week ago)

Why would I want to add overhead and complexity to my system when I don't need to? I can totally see legitimate use cases for docker, and for work purposes I use VMs constantly. I just don't see a benefit to doing so at home.

[-] mesamunefire@piefed.social 17 points 1 week ago* (last edited 1 week ago)

All my services run on bare metal because it's easy. And the backups work. It heavily simplifies the work, and I don't have to worry about things like a virtual router, or using more CPU just to keep the container... contained and running. Plus a VERY tiny system can run:

  1. Peertube
  2. GoToSocial + client
  3. RSS
  4. search engine
  5. A number of custom sites
  6. backups
  7. Matrix server/client
  8. and a whole lot more

Without a single docker container. It's using around 10-20% of the RAM, and doing a dd once in a while keeps everything as is. It's been 4 years-ish and it has been working great. I used to over-complicate everything with docker + docker compose, but I would have to keep up with the underlying changes ALL THE TIME. It sucked, and it's not something I care about on my weekends.
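For the curious, the "dd once in a while" part really is as simple as it sounds; a rough sketch (device and paths are examples, and it's safest from a live USB or with the disk otherwise idle):

```
# whole-disk image, compressed on the fly
out=/mnt/backup/server-$(date +%F).img.gz
sudo dd if=/dev/sda bs=4M status=progress | gzip > "$out"

# restore is the same pipe in reverse
gunzip -c "$out" | sudo dd of=/dev/sda bs=4M status=progress
```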

I use docker, kubernetes, etc. etc. all at work. And it's great when you have the resources + coworkers to keep things up to date. But I just want to relax when I get home. And it's not the end of the world if any of them go down.

[-] missfrizzle@discuss.tchncs.de 15 points 1 week ago

pff, you call using an operating system bare metal? I run my apps as unikernels on a grid of Elbrus chips I bought off a dockworker in Kamchatka.

and even that's overkill. I prefer synthesizing my web apps into VHDL and running them directly on FPGAs.

until my ASIC shuttle arrives from Taipei, naturally, then I bond them directly onto Ethernet sockets.

/uj not really but that'd be sick as hell.

[-] 30p87@feddit.org 14 points 1 week ago

That I've yet to see a containerization engine that actually makes things easier, especially once a service fails or needs any amount of customization. I have two main services in docker, piped and webodm, both because I don't have the time (read: am too lazy) to write a PKGBUILD. Yet docker steals more time than maintaining a PKGBUILD would, with random crashes (undebuggable, as the docker command just hangs when I try to start one specific container) and containers that don't start properly after being updated/restarted by watchtower.

Debugging any problem with piped is a chore, as logging in docker is the most random thing imaginable. With systemd, it's in journalctl, or in /var/log if explicitly specified or obviously useful (e.g. in multi-host nginx setups). With docker, it could be a logfile on the host, on the guest, or stdout. Or nothing, because why log at all when everything "just works"? (Yes, that's a problem created by container maintainers, but one you can't escape when using docker. Or rather, in the time you have, you could more easily do a proper(!) bare-metal install.)

Also, if you want to use unix sockets to manage permissions more tightly and avoid roleplaying a DHCP and DNS server for ports (by remembering which ports are used by which of the 25 or so services), you'll either need to customize the container, or just use/write a PKGBUILD or similar for bare metal stuff.
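For comparison, the systemd side of that really is just one-liners (unit name is an example):

```
journalctl -u nginx.service -b -f    # every log line for one service, this boot, live
sudo ss -tlnp                        # which process owns which listening port
```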

Also, I need to host a python2.7 django 2.x or so webapp (yes, I'm rewriting it), which I do in a Debian 13 VM with Debian 9 and Debian 9 LTS repos, as it most closely resembles the original environment, and is the largest security risk in my setups, while being a public website. So into qemu it goes.

And, as I mentioned, either stuff is officially packaged by Arch, is in the AUR or I put it into the AUR.

[-] deadcade@lemmy.deadca.de 11 points 1 week ago

Personally I have seen the opposite from many services. Take Jitsi Meet for example. Without containers, it's like 4 different services, with logs and configurations all over the system. It is a pain to get running, as none of the services work without everything else being up. In containers, Jitsi Meet is managed in one place, and one place only. (When using docker compose,) all logs are available with docker compose logs, and all config is contained in one directory.

It is more a case-by-case thing whether an application is easier to set up and maintain with or without docker.
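As a concrete example of the "one place" part, assuming the upstream docker-jitsi-meet compose file (the service names below come from it):

```
cd jitsi-docker                          # wherever the compose file lives
docker compose ps                        # web, prosody, jicofo, jvb in one view
docker compose logs -f --tail=50 jvb     # follow a single service's logs
```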

Do you host on more than one machine? Containerization / virtualization begins to shine most brightly when you need to scale / migrate across multiple servers. If you're only running one server, I definitely see how bare metal is more straightforward.

[-] splendoruranium@infosec.pub 14 points 1 week ago

> Curious to know what the experiences are for those who are sticking to bare metal. Would like to better understand what keeps such admins from migrating to containers, Docker, Podman, Virtual Machines, etc. What keeps you on bare metal in 2025?

If it ain't broke, don't fix it 🤷

[-] ZiemekZ@lemmy.world 13 points 1 week ago

I consider them unnecessary layers of abstraction. Why do I need to fiddle with Docker Compose to install Immich, Vaultwarden etc.? Wouldn't it be simpler if I could just run sudo apt install immich vaultwarden, just like I can do sudo apt install qbittorrent-nox today? I don't think there's anything that prohibits them from running on the same bare metal, actually I think they'd both run as well as in Docker (if not better because of lack of overhead)!

[-] neidu3@sh.itjust.works 10 points 1 week ago

I started hosting stuff before containers were common, so I got used to doing it the old-fashioned way and making sure everything played nice with each other.

Beyond that, it's mostly that I'm not very used to containers.

[-] sem 9 points 1 week ago

For me, the learning curve of containers does not match the value of the benefits they're supposed to provide.

[-] billwashere@lemmy.world 11 points 1 week ago

I really thought the same thing. But it truly is super easy. At least plain containers like docker; not kubernetes, that shit is hard to wrap your head around.

Plus if you screw up one service and mess everything up, you don’t have to rebuild your whole machine.

[-] dogs0n@sh.itjust.works 6 points 1 week ago

100% agree, my server has pretty much nothing except docker installed on it and every service I run is always in containers.

Setting up a new service is mostly 0% risk and apps can't bog down my main file system with random log files, configs, etc that feel impossible to completely remove.

I also know that if for any reason my server were to explode, all I would have to do is pull my compose files from the cloud and docker compose up everything and I am exactly where I left off at my last backup point.

[-] billwashere@lemmy.world 9 points 1 week ago* (last edited 6 days ago)

Ok, I'm arguing for containers/VMs, and granted, I do this for a living… I'm a systems architect, so I build VMs and containers pretty much all the time at work… but having just one sorta beefy box at home that I can run lots of different things on is the way to go. Plus I like to tinker with things, so when I screw something up, I can get back to a known state so much more easily.

Just having all these things sandboxed makes it SO much easier.

[-] SpookyMulder@twun.io 9 points 1 week ago

No, you're not looking to understand. You're looking to persuade.

[-] kutsyk_alexander@lemmy.world 8 points 1 week ago* (last edited 1 week ago)

I use a Raspberry Pi 4 with a 16GB SD card. I simply don't have enough memory and CPU power for 15 separate database containers, one for every service I want to use.

[-] savvywolf@pawb.social 8 points 1 week ago

I've always done things bare metal since starting the selfhosting stuff before containers were common. I've recently switched to NixOS on my server, which also solves the dependency hell issue that containers are supposed to solve.

[-] sylver_dragon@lemmy.world 8 points 1 week ago

I started self hosting in the days well before containers (early 2000's). Having been though that hell, I'm very happy to have containers.
I like to tinker with new things and with bare metal installs this has a way of adding cruft to servers and slowly causing the system to get into an unstable state. That's my own fault, but I'm a simple person who likes simple solutions. There are also the classic issues with dependency hell and just flat out incompatible software. While these issues have gotten much better over the years, isolating applications avoids this problem completely. It also makes OS and hardware upgrades less likely to break stuff.

These days, I run everything in containers. My wife and I play games like Valheim together, and I have a Dockerfile template I use to build self-hosted servers in a container. The Dockerfile usually just requires a few tweaks for the AppId, exposed ports and mount points for save data. That, paired with a docker-compose.yaml (also built off a template), means I usually have a container up and running in fairly short order. The update process could probably be better (I currently just rebuild the image), but it gets the job done.
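As a rough illustration of that kind of template (base image, AppId and start command are assumptions; 896660 is Valheim's dedicated server, used here as the example):

```
cat > Dockerfile <<'EOF'
FROM steamcmd/steamcmd:latest                 # assumed base image with steamcmd preinstalled
ARG APPID=896660                              # tweak per game
RUN steamcmd +force_install_dir /server +login anonymous +app_update ${APPID} validate +quit
WORKDIR /server
EXPOSE 2456/udp 2457/udp                      # game-specific ports go here
VOLUME /data                                  # where save data mounts varies per game
CMD ["bash", "./start_server.sh"]             # each game's own start script or binary
EOF

docker build --build-arg APPID=896660 -t valheim-server .
```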

[-] HiTekRedNek@lemmy.world 7 points 1 week ago

In my own experience, certain things should always be on their own dedicated machines.

My primary router/firewall is on bare metal for this very reason.

I do not want to worry about my home network being completely unusable by the rest of my family because I decided to tweak something on the server.

I could quite easily run OpnSense in a VM, and I do that, too. I run proxmox, and have OpnSense installed and configured to at least provide connectivity for most devices. (Long story short: I have several subnets in my home network, but my VM OpnSense setup does not, as I only had one extra interface on that equipment, so only devices on the primary network would work)

And tbh, that only exists because I did have a router die, and installed OpnSense into my proxmox server temporarily while awaiting new-to-me equipment.

I didn't see a point in removing it. So it's there, just not automatically started.

[-] kiol@lemmy.world 6 points 1 week ago

Are you concerned about your self-hosted bare metal machine being a single point of failure? Or, are you concerned it will be difficult to reproduce?

[-] Evotech@lemmy.world 6 points 1 week ago

It's just another system to maintain, another link in the chain that can fail.

I run all my services on my personal gaming pc.

[-] Strider@lemmy.world 6 points 1 week ago

Erm. I'd just say there's no benefit in adding layers just for the sake of it.

It's just different needs. Say I have a machine that I run a dedicated database on; I'd install it just like that because, as I said, there's no advantage in making it more complicated.

[-] melfie@lemy.lol 5 points 1 week ago* (last edited 1 week ago)

I use k3s and enjoy benefits like the following over bare metal:

  • Configuration as code where my whole setup is version controlled in git
  • Containers and avoiding dependency hell
  • Built-in reverse proxy with the Traefik ingress controller. Combined with DNS in my OpenWRT router, all of my self hosted apps can be accessed via appname.lan (e.g., jellyfin.lan, forgejo.lan)
  • Declarative network policies with Calico, mainly to make sure nothing phones home
  • Managing secrets securely in git with Bitnami Sealed Secrets
  • Liveness probes that automatically “turn it off and on again” when something goes wrong (see the sketch below)

These are just some of the benefits just for one server. Add more and the benefits increase.
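As one concrete example of that last bullet, a liveness probe is only a few lines on a Deployment. A sketch with placeholder names, assuming the app exposes a health endpoint:

```
cat > jellyfin-deploy.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
spec:
  replicas: 1
  selector:
    matchLabels: { app: jellyfin }
  template:
    metadata:
      labels: { app: jellyfin }
    spec:
      containers:
        - name: jellyfin
          image: jellyfin/jellyfin:latest
          ports:
            - containerPort: 8096
          livenessProbe:                 # "turn it off and on again", automatically
            httpGet:
              path: /health
              port: 8096
            initialDelaySeconds: 30
            periodSeconds: 10
EOF
kubectl apply -f jellyfin-deploy.yaml
```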

Edit:

Sorry, I realize this post is asking why go bare metal, not why k3s and containers are great. 😬
