
Hello everyone,

I finally managed to get my hands on a Beelink EQ 14 to upgrade from the RPi running DietPi that I have been using for many years to host my services.

I have always been interested in using Proxmox, and today is the day. The only problem is I'm not sure where to start. For example, do you spin up a VM for every service you intend to run? Do you set it up as ext4, Btrfs, or ZFS? Do you attach external HDDs/SSDs to expand your storage (beyond the two M.2 slots in the Beelink, in this example)?

I've only started reading up on Proxmox today, so I am by no means knowledgeable on the topic.

I hope to hear how you set up yours and how you use it to host all your services (Nextcloud, Vaultwarden, cgit, Pi-hole, Unbound, etc.) and your "Dos and Don'ts".

Thank you 😊

[-] drkt@scribe.disroot.org 17 points 3 weeks ago

I recommend you use containers instead of VMs when possible, as VMs have a huge overhead by comparison, but yes: each service gets its own container, unless two services need to share data. My music container, for example, hosts Gonic, slskd, and Samba.

[-] MangoPenguin 9 points 3 weeks ago* (last edited 3 weeks ago)

There is barely any overhead with a Linux VM; a Debian minimal install only uses about 30 MB of RAM! As an end user I find performance to be very similar with either setup.

[-] modeh@piefed.social 6 points 3 weeks ago
[-] drkt@scribe.disroot.org 12 points 3 weeks ago

Correct.

Side note: people will tell you not to put Docker in an LXC, but fuck 'em. I don't want to pollute my hypervisor with Docker's bullshit, and the performance impact is negligible.

[-] Hominine@lemmy.world 6 points 3 weeks ago

There are dozens of us!

[-] felbane@lemmy.world 4 points 3 weeks ago* (last edited 3 weeks ago)

I wouldn't recommend running Docker/Podman in an LXC, but that's just because it seems to run better in a full VM in my experience.

No sense running it in the hypervisor, agreed.

LXC is great for everything else.

[-] zingo@sh.itjust.works 3 points 3 weeks ago* (last edited 3 weeks ago)

> as VMs have a huge overhead by comparison.

Not at all. The benefits outweigh the slightly increased RAM usage by a huge margin.

I have UrBackup running in a DietPi VM, set to 256 MB of RAM. That includes the OS and the UrBackup service, and it works perfectly fine.

I have an Alpine VM that runs 32 Docker containers using about 3.5 GB of RAM. I wouldn't call that bloat by any means.

[-] drkt@scribe.disroot.org 3 points 3 weeks ago* (last edited 3 weeks ago)

A fresh Debian container uses 22 MiB of RAM. A fresh Debian VM uses 200+ MiB of RAM.
A VM has to translate every single hardware interaction; a container doesn't.

I don't want to fuck flies about the definition of 'huge' with you, but that's kind of a huge difference.

[-] zingo@sh.itjust.works 1 points 3 weeks ago* (last edited 3 weeks ago)

Translate? You know a CPU sits idle most of the time, right?

What kind of potato are you running? And how many hundreds of services do you run on it anyway, to be complaining about 200 MB? You'd be better off running Docker on bare metal if you're that worried.

Do you know how much RAM Windows 11 uses at idle?

WTF

[-] possiblylinux127@lemmy.zip 1 points 3 weeks ago

I wouldn't do that, as it complicates things unnecessarily. I would just run a container runtime inside an LXC or a VM.

I use one VM per service. WAN facing services, of which I only have a couple, are on a separate DMZ subnet and are firewalled off from the LAN.

It's probably a little overkill for a self-hosted setup, but I have enough server resources, experience, and paranoia to support it.

[-] anamethatisnt@sopuli.xyz 7 points 3 weeks ago

I prefer running true VMs too, but it is resource intensive.
Playing with LXCs and Docker could allow one to run more services on a little Beelink.

[-] jubilationtcornpone@sh.itjust.works 4 points 3 weeks ago* (last edited 3 weeks ago)

Yeah, with something that size you're pretty much limited to containers.

Edit: Which is totally fine, OP. Self hosting is an opportunity to learn and your setup can be easily changed as your needs change over time.

[-] lucas@startrek.website 2 points 3 weeks ago

Am I looking at the wrong device? The Beelink EQ15 looks like it has an N150 and 16 GB of RAM? That's plenty for quite a few VMs. I run an N100 mini PC with only 8 GB of RAM and about half a dozen VMs and a similar number of LXC containers. As long as you're careful about only provisioning what each VM actually needs, it can be plenty.

In this situation it's not necessarily that it's the "right" or "wrong" device. The better question is, "does it meet your needs?" There are pros and cons to running each service in its own VM. One of the cons is the overhead consumed by the VM OS. Sometimes that's a necessary sacrifice.

Some of the advantages of running a system like Proxmox are that it's easily scalable and you're not locked into specific hardware. If your current Beelink doesn't prove to be enough, you can just add another one to the cluster or add a different host and Proxmox doesn't care what it is.

TLDR: it's adequate until it's not. When it's not, it's an easy fix.

[-] lucas@startrek.website 1 points 3 weeks ago

Absolutely. I actually have an upgrade already planned, but it's not because I can't run VMs; it's more that I want to run more hungry services than will fit in those resources, whatever virtualisation layers are being used. The fact that it's an easy fix to move a VM/LXC to a new host is absolutely it, though.

[-] modeh@piefed.social 1 points 3 weeks ago

I have a couple of publicly accessible services (Vaultwarden, git, and SearXNG). Do you place them on a separate subnet via Proxmox or through the router?

My networking knowledge is just enough to properly set up OpenWrt with inbound and outbound VPN tunnels along with policy-based routing, and that's where it ends.

[-] anamethatisnt@sopuli.xyz 2 points 3 weeks ago

Unless you want to expose services to others, my recommendation is always to hide your services behind a VPN connection.

[-] modeh@piefed.social 3 points 3 weeks ago

I travel internationally, and some of the countries I've been to have been blocking my WireGuard tunnel back home, preventing me from accessing my vault. I tried setting it up with Shadowsocks, broke my entire setup, and ended up resetting it.

Any suggestions that aren't Tailscale?

[-] anamethatisnt@sopuli.xyz 2 points 3 weeks ago

I find that setting up an OpenVPN server with self-signed certificates plus username and password login works well. You can even run it on tcp/443 instead of tcp/1194 if you want to make it less likely to be blocked.
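For reference, the relevant server-side settings might look something like this (a sketch; the paths are hypothetical, and the certificates would come from something like easy-rsa):

```
# /etc/openvpn/server/server.conf (excerpt; paths are examples)
port 443
proto tcp
dev tun
# self-signed PKI, e.g. generated with easy-rsa
ca   /etc/openvpn/pki/ca.crt
cert /etc/openvpn/pki/server.crt
key  /etc/openvpn/pki/server.key
dh   /etc/openvpn/pki/dh.pem
# require username/password on top of certificates
# (plugin path varies by distro; this is the Debian location)
plugin /usr/lib/openvpn/plugins/openvpn-plugin-auth-pam.so login
server 10.8.0.0 255.255.255.0
persist-key
persist-tun
```

Note that running on tcp/443 conflicts with any web service on the same IP, so it's best done on a dedicated address or with port forwarding from the router.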

[-] abeorch@friendica.ginestes.es 1 points 3 weeks ago

@modeh We should talk - I am using Proxmox and #openwrt. I am setting up a DMZ for public services with external ports exposed (but failing).

[-] abeorch@friendica.ginestes.es 7 points 3 weeks ago

@modeh I'd love to meet others who are just starting out with Proxmox and do some casual video calls/chats in European timezones to learn together / try stuff out.

[-] Lyra_Lycan 7 points 3 weeks ago

For inspiration, here's my list of services:

Name            Type  ID   Primary Use
heart           Node       Proxmox
guard           CT    202  AdGuard Home
management      CT    203  Nginx Proxy Manager
smarthome       VM    804  Home Assistant
HEIMDALLR       CT    205  Samba/Nextcloud
authentication  VM    806  Bitwarden
mail            VM    807  Mailcow
notes           CT    208  CouchDB
messaging       CT    209  Prosody
media           CT    211  Emby
music           CT    212  Navidrome
books           CT    213  AudioBookShelf
security        CT    214  AgentDVR
realms          CT    216  Minecraft Server
blog            CT    217  Ghost
ourtube         CT    218  ytdl-sub YouTube Archive
cloud           CT    219  NextCloud
remote          CT    221  Rustdesk Server

The overhead for everything is modest. CPU is an i3-6100 and RAM runs at 2133 MHz.

A quick note about my setup: some things threw a permissions hissy fit when in separate containers, so the media container actually has Emby, Sonarr, Radarr, Prowlarr, and two instances of qBittorrent. A few of my containers do have supplementary programs.

[-] modeh@piefed.social 2 points 3 weeks ago

Thank you, that’s actually quite informative. Gives me a good idea of what could go where in terms of my setup.

So far I recreated my RPi DietPi setup in a VM, but for some reason the Pi-hole + Unbound combo is now fucking with my internet connectivity. It's so weird; I assigned it a static lease for the old RPi's IP address in OpenWrt and left all the rules there intact, and you would think it would be a "drop-in replacement", but it isn't. Not sure if Proxmox has some weird firewall situation going on. Definitely need to fuck around with it more to understand it better.

[-] lemming741@lemmy.world 3 points 3 weeks ago

To piggyback on the permissions hissy fit-

My Arr stack, OpenMediaVault, and Transmission stack have different usernames mapped to the same UID, and it is a pain in the ass. I "fixed it" by making a NAS group that catches them all, but by "fixed it" I really mean "got it working".

So be aware of which UID will own a file, and maybe change it to a UID in the 1100+ range to make NFS easier in the future.

[-] Lyra_Lycan 1 points 2 weeks ago* (last edited 2 weeks ago)

Yes! This.

I have one machine for network-shared storage, and thus a user for login and read/write access. The same storage is used by other machines to save their files, so each autonomous user for CCTV and qBittorrent needed the same UID as the Samba login so that each program had read/write permissions.

And those containers had to be privileged, IIRC, in order for each root (UID 0) to access the shared storage properly. I may be wrong though.
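An alternative to privileged containers is to keep the LXC unprivileged and map one UID straight through to the host, which Proxmox supports via `lxc.idmap` entries (a sketch; the container ID 214 and UID/GID 1100 are hypothetical examples):

```
# /etc/pve/lxc/214.conf -- UID mapping for an unprivileged container
# (ID 214 and UID 1100 are placeholders; adjust to your setup)
# Shift container UIDs/GIDs 0-1099 into the usual 100000+ range...
lxc.idmap: u 0 100000 1100
lxc.idmap: g 0 100000 1100
# ...map container 1100 directly to host 1100 (the shared-storage owner)...
lxc.idmap: u 1100 1100 1
lxc.idmap: g 1100 1100 1
# ...and shift the remaining IDs as usual.
lxc.idmap: u 1101 101101 64435
lxc.idmap: g 1101 101101 64435
```

The host's `/etc/subuid` and `/etc/subgid` also need a `root:1100:1` entry for the passthrough to be permitted.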

[-] Lyra_Lycan 1 points 2 weeks ago* (last edited 2 weeks ago)

Self-hosting be like ^^

I think I had issues similar to that. Perhaps the Pi-hole is running a conflicting DHCP server? I have my own set of weird issues: bad connectivity, so I need a WiFi range extender, but it's not a true extender and has its own IP address, acting as a router sometimes and not forwarding DNS queries to the main router. That, plus a lack of NAT loopback, a lack of changeable DNS settings, and AdGuard Home apparently taking precedence on that side of the house, and I have a cocktail of connection-issue BS lol. The main router can do DNS perfectly fine, but if I'm connected to the extender I have to add DNS rewrites to AGH, which works for most services.

The journey is largely about overcoming obstacles, haha, and the reward for doing so. Hope yours goes well!

[-] catrass@lemmy.zip 6 points 3 weeks ago

As with most things homelab-related, there is no real "right" or "wrong" way, because it's about learning and playing around with cool new stuff! If you want to learn about different file systems, architectures, and software, do some reading, spin up a test VM (or LXC, my preference), and go nuts!

That being said, my architecture is built up of general purpose LXCs (one for my Arr stack, one for my game servers, one for my web stuff, etc). Each LXC runs the related services in docker, which all connect to a central Portainer instance for management.

Some things are exceptions though, such as Open Media Vault and HomeAssistant, which seem to work better as standalone VMs.

The services I run are usually things that are useful to me and that I want to keep off public clouds. Vaultwarden for passwords and passkeys, Donetick for my todo list, etc. If I have a gap in my digital toolkit, I always look for something I can host myself to fill that gap. There's also a lot of stuff I just want to learn about, such as the Grafana stack for observability at the moment.

[-] modeh@piefed.social 1 points 3 weeks ago

Thank you.

I guess I have more reading to do on Portainer and LXC. Using an RPi with DietPi, I never needed to learn any of this. Now is as good a time as ever.

But generally speaking, how is a Linux container different from (or worse than) a VM?

[-] anamethatisnt@sopuli.xyz 6 points 3 weeks ago

A VM is properly isolated and has its own OS and kernel. This improves security at the cost of overhead.
If you are starved for hardware resources, then running LXCs instead of VMs could give you more bang for the buck.

[-] Lyra_Lycan 5 points 3 weeks ago* (last edited 3 weeks ago)

An LXC is isolated, system-wise, by default (unprivileged) and has very low resource requirements.

  • Storage also expands as needed, i.e. you can say a container may have 40GB, but it will only use as much as it actually needs, and nothing bad happens if your allocated storage exceeds your physical storage... until total usage approaches 100%. So there's some flexibility. With a VM, the allocated storage is fixed.
  • A Debian 12 container image usually takes up ~1.5GB.
  • LXCs are perfectly good for most use cases. VMs, for me, only come in when necessary: when the desired program has bigger needs like root privileges, in which case a VM is much safer than giving an LXC access to the Proxmox system, or when the program is a full OS, as with Home Assistant.

Separating each service ensures that if something breaks, there are zero collateral casualties.
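Spinning up one such unprivileged LXC per service is essentially a one-liner on the Proxmox host (a sketch; the ID, template filename, and `local-lvm` storage are placeholders for whatever your node actually has):

```shell
# create an unprivileged Debian 12 container with modest resources
pct create 205 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname media --memory 1024 --cores 2 \
  --rootfs local-lvm:40 --unprivileged 1 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp

pct start 205
```

The `--rootfs local-lvm:40` grants up to 40GB, illustrating the thin-allocation point above.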

[-] anamethatisnt@sopuli.xyz 6 points 3 weeks ago

I would start with one VM running Portainer, and once that is up and running I would recommend learning how to back up and restore the VM. If you have enough disks, I would look into ZFS RAID 1 for redundancy.
https://pve.proxmox.com/wiki/ZFS_on_Linux
Learning the redundancy and backup systems before you have too many services active allows you to screw up and redo.
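The backup/restore cycle can also be exercised from the host shell (a sketch; VM ID 100, the `local` storage, and the archive path are placeholders):

```shell
# snapshot-mode backup of VM 100, zstd-compressed, to the 'local' storage
vzdump 100 --storage local --mode snapshot --compress zstd

# restore the archive to a fresh VM ID to practice disaster recovery
qmrestore /var/lib/vz/dump/vzdump-qemu-100-*.vma.zst 101
```

Restoring to a new ID like this lets you verify the backup actually works without touching the original VM.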

[-] SidewaysHighways@lemmy.world 4 points 3 weeks ago

portainer is cool. dockge is 😎

[-] anamethatisnt@sopuli.xyz 2 points 3 weeks ago

I remember trying both back when my server was new, but I was missing something in Dockge; can't remember what right now.

[-] modeh@piefed.social 2 points 3 weeks ago

The Beelink comes with two M.2 slots, so I have two internal drives for now. Is it acceptable to attach external HDDs and set them up in a RAID configuration with the internal ones? I do plan on the Beelink being a NAS too (limited budget; can't afford a separate dedicated NAS at the moment).

[-] anamethatisnt@sopuli.xyz 4 points 3 weeks ago

I wouldn't use RAID over USB.
If you've only got 2x M.2 slots, then I would probably prioritize disk space over RAID 1 and make sure you've got a backup up and running. There are M.2-to-SATA adapters, but your Beelink doesn't have a suitable PSU for that.

[-] BOFH666@lemmy.world 6 points 3 weeks ago

Replace cgit with Forgejo. I really like Jason's software, but Forgejo is a huge step up.

[-] modeh@piefed.social 5 points 3 weeks ago

The only reason I'm considering cgit is that I want a simple interface to show repos and commit history; I'm not interested in doing pull requests, opening issues, etc.

I feel Forgejo would be a "killing an ant with a sledgehammer" kind of situation for my needs.

Nonetheless, thank you for your suggestion.

[-] hobbsc@lemmy.sdf.org 5 points 3 weeks ago

i have very few services and tend to lean toward virtual machines instead of containers out of habit. i have proxmox running on an old mini-pc that needs to be replaced at some point: 16GB of RAM, 4 CPU cores (it's an i3 at 2GHz), and a 100GB SSD.

VMs and services are as follows:

  • ubuntu vm
    • runs my omada controller in docker
    • used to run all of my containers in docker but i migrated them to podman
  • fedora vm
    • runs several containers via podman
      • alexandrite, where i'm composing this now!
      • uptime kuma
      • redlib for browsing reddit
      • kanboard for organizing my contracting work
  • dietpi in a vm to run pi-hole (migrated here when my pi zero-w cooked itself)
    • this also handles internal dns for each server so i don't have to type out IP addresses
  • home assistant HAOS vm

home assistant backs itself up to my craptastic nas and the rest of the stuff doesn't really have any backups. i wouldn't be upset if they died, except for my kanboard instance. i can rebuild that from scratch if needed.

i'll be investing in a new mini-pc and some more disks soon, though.

[-] MangoPenguin@piefed.social 4 points 3 weeks ago

I have a single container for Docker that runs 95% of my services, and a few other containers and VMs for things that aren't Docker, or are Windows/macOS.

ext4 is the simple, easy option; I tend to pick it on systems with lower amounts of RAM, since ZFS does need some RAM for itself.

I do have an external USB HDD for backups to be stored on.

[-] Zwuzelmaus@feddit.org 4 points 3 weeks ago* (last edited 3 weeks ago)

You have that new machine to play with. So do it.

Install it and play around. If you do nothing that should "last forever" in these first days, you can tear it down and do it again in different ways.

I recently played the same way with the Proxmox unattended install feature, and it was a lot of fun. One text file and a bootable image on a stick.

[-] modeh@piefed.social 2 points 3 weeks ago

Oh yeah, absolutely will do. I was simply hoping to hear how self-hosters who've been using it for a while set theirs up, to get a rough picture of where I want to be once I'm done screwing around with it.

[-] nis@feddit.dk 2 points 3 weeks ago* (last edited 3 weeks ago)

I've been doing it for a couple of years. I don't think I'll ever be done screwing around with it.

Embrace the flux :)

[-] abeorch@friendica.ginestes.es 2 points 3 weeks ago

@modeh Certainly no expert, but would starting with setting up some cloud-init image templates be somewhere in there?

[-] modeh@piefed.social 1 points 3 weeks ago

Not even sure what that is, so most likely a no for me.

[-] incentive@lemmy.ml 2 points 3 weeks ago

A template for setting up your new VMs - after setting up your first template, it's a few clicks and deploy for new VMs.
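The rough shape of building such a cloud-init template from the Proxmox CLI looks like this (a sketch; the IDs, the image filename, and `local-lvm` are placeholders):

```shell
# import a downloaded cloud image (e.g. Debian 12 genericcloud) as VM 9000
qm create 9000 --name debian12-tmpl --memory 2048 \
  --net0 virtio,bridge=vmbr0 --scsihw virtio-scsi-pci
qm importdisk 9000 debian-12-genericcloud-amd64.qcow2 local-lvm
qm set 9000 --scsi0 local-lvm:vm-9000-disk-0 \
  --ide2 local-lvm:cloudinit --boot order=scsi0 --serial0 socket
qm template 9000

# from then on, a new VM is one clone (plus cloud-init user/key settings) away
qm clone 9000 123 --name myservice --full
```

The `--ide2 …:cloudinit` drive is what lets Proxmox inject the username, SSH keys, and network config into each clone.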

[-] possiblylinux127@lemmy.zip 2 points 3 weeks ago* (last edited 3 weeks ago)

Install Proxmox with ZFS

Next, configure the no-subscription repository or buy a subscription.

[-] Ron@zegheteens.nl 1 points 3 weeks ago

It depends a bit on your needs. My Proxmox setup is multiple nodes (computers) with local storage (2 drives in a ZFS mirror), and they all use a TrueNAS server as an NFS host for data storage. For some things I use containers (LXC); for other things I use VMs.

this post was submitted on 07 Sep 2025
90 points (100.0% liked)

Selfhosted
