1
164

Due to the large number of reports we've received about recent posts, we've added Rule 7 stating "No low-effort posts. This is subjective and will largely be determined by the community member reports."

In general, we let a post's fate be determined by the number of downvotes it receives. Sometimes, however, a post is so offensive to the community that removal seems appropriate. This new rule allows such action to be taken.

We expect to fine-tune this approach as time goes on. Your patience is appreciated.

2
419
submitted 2 years ago* (last edited 2 years ago) by devve@lemmy.world to c/selfhosted@lemmy.world

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

3
14

Before you scream at your screen, I am aware this setup isn't ideal, to say the least. My self-hosting setup has consisted of a laptop with a 2.5" 1 TB hard drive in a USB enclosure. I recently made up my mind about getting a couple of 4 TB server HDDs (I've heard Barracudas are relatively quiet) to run software RAID 1. Since I can't find a budget dual-bay enclosure that I can purchase locally, I've decided I'll get a couple of 3.5" USB cases and a splitter to run the power from just one brick.

My question is regarding resiliency. I get occasional blackouts and brownouts: a few times a year, but sometimes a few times in a day. I've never had hardware die because of it and I don't have a UPS, but I worry I could be risking data corruption with this new setup, because the extra power those drives need will be fed from the wall instead of from the laptop (which powers the current drive over USB alone and has a battery), and so could be cut off abruptly every now and then. Right now, the worst this has caused has been having to reboot the system because the drive got unmounted, but I've never lost data from it.

Am I worrying over nothing? Would it be just the same? Should I just put this off until (if) I can afford the drives plus a UPS? So far I've had my server basically for free, but I'm running out of space for family photos and I kinda have to upgrade.

4
18

Sup. I have Proxmox configured to start a Jellyfin LXC whenever the host (re)starts. However, the /dev/nvidia* devices do not appear until I manually run nvidia-smi (probably anything nvidia* would work) on the host, so the autostart is failing. Any ideas why I would need to run something like nvidia-smi first to get the /dev/nvidia* devices to show up?
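For context: the NVIDIA device nodes are typically created lazily on first use of the driver, and running nvidia-smi (or nvidia-modprobe) is what triggers module load and node creation. A common workaround is a oneshot systemd unit on the host that runs nvidia-smi before the guests autostart. This is a hedged sketch, not a verified fix; the unit name is made up, and the ordering against pve-guests.service is an assumption about this Proxmox setup:

```ini
# /etc/systemd/system/nvidia-devnodes.service (hypothetical unit name)
# Runs nvidia-smi once at boot so /dev/nvidia* exists before LXCs autostart.
[Unit]
Description=Create NVIDIA device nodes before guests start
Before=pve-guests.service

[Service]
Type=oneshot
ExecStart=/usr/bin/nvidia-smi

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable nvidia-devnodes.service` and check whether /dev/nvidia* exists at boot before the LXC comes up.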

5
235
6
57

Hello!! Some recent technical problems with my family's NAS gave me a big scare and finally pushed me to figure out a way to back it all up. I'm asking here specifically because I really don't know where to even start: I've got just under 50 terabytes of data stored in a 7-disk RAID 5 and would prefer to keep it cheap. What are your suggestions?

7
193
submitted 1 day ago* (last edited 1 day ago) by BonkTheAnnoyed to c/selfhosted@lemmy.world

I generated a 16-character (upper/lower) subdomain, set up a virtual host for it in Apache, and within an hour was seeing vulnerability scans.

How are folks digging this up? What's the strategy to avoid this?

I am serving it all with a single wildcard SSL cert, if that's relevant.

Thanks

Edit:

  • I am using a single wildcard cert, with no subdomains attached/embedded/however those work
  • I don’t have any subdomains registered with DNS.
  • I attempted dig axfr example.com @ns1.example.com, which returned zone transfer DENIED

Edit 2: I'm left wondering, is there an apache endpoint that returns all configured virtual hosts?

Edit 3: I'm going to go through this hardening guide and try again with a new random subdomain https://www.tecmint.com/apache-security-tips/
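One likely explanation worth checking: scanners usually aren't discovering the subdomain name at all. They sweep IP ranges and send requests with no Host header (or a bogus one), and Apache serves the first vhost defined as the default, which may well be the "hidden" one. A hedged sketch of a catch-all default vhost that swallows those requests; all names and certificate paths here are illustrative, not real configuration:

```apache
# Define this vhost FIRST so it becomes the default for IP-based scans
# and unknown Host headers; name-based vhosts stay unreachable by IP.
<VirtualHost *:443>
    ServerName default.invalid
    SSLEngine on
    # A throwaway self-signed cert is fine here (illustrative paths):
    SSLCertificateFile    /etc/ssl/private/snakeoil.crt
    SSLCertificateKeyFile /etc/ssl/private/snakeoil.key
    # Refuse everything
    <Location "/">
        Require all denied
    </Location>
</VirtualHost>
```

If scans stop hitting the real vhost after this, the "leak" was just default-vhost behaviour rather than DNS or certificate transparency.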

8
23
submitted 1 day ago* (last edited 1 day ago) by worhui@lemmy.world to c/selfhosted@lemmy.world

I wasn't sure if this was the best place to post this. A series of events happened and I recently changed up my home network.

One of the larger changes I made was to add a UniFi Cloud Gateway Ultra.

Right away my biggest challenge is that it does not accurately list all of the client devices that it has given a DHCP lease to.

Has anyone else run into this issue?

I have some IP-based security cameras that I have previously only been able to locate by looking at my ISP's DHCP lease list and finding their IPs.

So right now I have cameras on my network and I have to brute-force the IP lookup to figure out where they are.

A more minor annoyance is that the network topology map is wrong and that Ubiquiti switches are not being mapped.

9
25

Can anyone share their experiences of running a NAS with ZimaOS or similar software vs just using Debian?

I'm pretty comfortable on the command line and like having full control over my system. I'm happily running Debian and using Docker Compose to run Immich, Home Assistant, Jellyfin, and a few other services. I also use Tailscale and have https setup.

However, I am curious to learn more about these more turnkey solutions. Are they worth switching to? I guess ZimaOS comes with a mobile app? Is that useful? Does ZimaOS make it easier for end users to use? Is managing ZimaOS annoying?

Is ZimaOS even worth considering if it is not open source?

10
1386
submitted 2 days ago* (last edited 1 day ago) by h333d@lemmy.world to c/selfhosted@lemmy.world

I used to self-host because I liked tinkering. I worked tech support for a municipal fiber network, I ran Arch, I enjoyed the control. The privacy stuff was a nice bonus but honestly it was mostly about having my own playground.

That changed this week when I watched ICE murder a woman sitting in her car. Before you roll your eyes about this getting political - stay with me, because this is directly about the infrastructure we're all running in our homelabs.

Here's what happened: A woman was reduced to a data point in a database - threat assessment score, deportation priority level, case number - and then she was killed. Not by some rogue actor, but by a system functioning exactly as designed. And that system? Built on infrastructure provided by the same tech companies most of us used to rely on before we started self-hosting.

Every service you don't self-host is a data point feeding the machine. Google knows your location history, your contacts, your communications. Microsoft has your documents and your calendar. Apple has your photos and your biometrics. And when the government comes knocking - and they are knocking, right now, today - these companies will hand it over. They have to. It's baked into the infrastructure.

Individual privacy is a losing game. You can't opt out of surveillance when participation in society requires using their platforms. But here's what you can do: build parallel infrastructure that doesn't feed their systems at all. When you run Nextcloud, you're not just protecting your files from Google - you're creating a node in a network they can't access. When you run Vaultwarden, your passwords aren't sitting in a database that can be subpoenaed. When you run Jellyfin, your viewing habits aren't being sold to data brokers who sell to ICE.

I watched my local municipal fiber network get acquired by TELUS. I watched a piece of community infrastructure get absorbed into the corporate extraction machine. That's when I realized: we can't rely on existing institutions to protect us. We have to build our own. This isn't about being a prepper or going off-grid. This is about building infrastructure that operates on fundamentally different principles:

  • Communication that can't be shut down: Matrix, Mastodon, email servers you control
  • File storage that can't be subpoenaed: Nextcloud, Syncthing
  • Passwords that aren't in corporate databases: Vaultwarden, KeePass
  • Media that doesn't feed recommendation algorithms: Jellyfin, Navidrome
  • Code repositories not owned by Microsoft: Forgejo, Gitea

Every service you self-host is one less data point they have. But more importantly: every service you self-host is infrastructure that can be shared, that can support others, that makes the parallel network stronger.

Where to start if you're new:

  1. Passwords first - Vaultwarden. This is your foundation.
  2. Files second - Nextcloud. Get your documents out of Google/Microsoft.
  3. Communication third - Matrix server, or join an existing instance you trust.
  4. Media fourth - Jellyfin for your movies, Navidrome for music.
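For the "passwords first" step, a minimal Compose file is genuinely all it takes to get Vaultwarden running. This is a sketch: the image name is the official one, but the port, volume path, and settings are assumptions to adapt:

```yaml
# docker-compose.yml - minimal Vaultwarden sketch; put it behind
# HTTPS (reverse proxy or Tailscale) before storing real secrets.
services:
  vaultwarden:
    image: vaultwarden/server:latest
    restart: unless-stopped
    environment:
      - SIGNUPS_ALLOWED=true   # turn off after creating your account
    volumes:
      - ./vw-data:/data        # all your secrets live here - back it up
    ports:
      - "8080:80"
```

`docker compose up -d` and the web vault is on port 8080; the official Bitwarden clients point at it directly.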

If you're already self-hosting:

  • Document your setup. Write guides. Make it easier for the next person.
  • Run services for friends and family, not just yourself.
  • Contribute to projects that build this infrastructure.
  • Support municipal and community network alternatives.

The goal isn't purity. You're probably still going to use some corporate services. That's fine. The goal is building enough parallel infrastructure that people have actual choices, and that there's a network that can't be dismantled by a single executive order.

I'm working on consulting services to help small businesses and community organizations migrate to self-hosted alternatives. Not because I think it'll be profitable, but because I've realized this is the actual material work of resistance in 2025. Infrastructure is how you fight infrastructure.

We're not just hobbyists anymore. Whether we wanted to be or not, we're building the resistance network. Every Raspberry Pi running services, every old laptop turned into a home server, every person who learns to self-host and teaches someone else - that's a node in a system they can't control. They want us to be data points. Let's refuse.

What are you running? What do you wish more people would self-host? What's stopping people you know from taking this step?

EDIT: Appreciate the massive response here. To the folks in the comments debating whether I'm an AI: I'm flattered by the grammar check, but I'm just a guy in his mom's basement with too much coffee and a background in municipal networking. If you think "rule of three" sentences are exclusive to LLMs, wait until you hear a tech support vet explain why your DNS is broken for the fourth time today.

More importantly, a few people asked about a "0 to 100" guide - or even just a "0 to 50" for those who don't want to become full-time sysadmins. After reading the suggestions, I want to update my "Where to start" list. If you want the absolute fastest, most user-friendly path to getting your data off the cloud this weekend, do this:

The Core: Install CasaOS, or the newly released (to me) ZimaOS. It gives you a smartphone-style dashboard for your server. It's the single best tool I've found for bridging the technical gap. Its app store ecosystem is lovely to use and you can import Docker Compose files really easily.

The Photos: Use Immich. Syncthing is great for raw sync, but Immich is the first thing I’ve seen that actually feels like a near 1:1 replacement for Google Photos (AI tagging, map view, etc.) without the privacy nightmare.

The Connection: Use Tailscale. It’s a zero-config VPN that lets you access your stuff on the go without poking holes in your firewall.

I’m working on a Privacy Stack type repo that curates these one click style tools specifically to help people move fast. Infrastructure is only useful if people can actually use it. Stay safe out there.

11
26
submitted 1 day ago* (last edited 1 day ago) by anzo@programming.dev to c/selfhosted@lemmy.world

I just like the ring of these two words. Like civil disobedience against oppression... I consider self-hosting to be the epitome of these actions, in the context of global corporations "ruining" every "product" (be it privacy concerns, data brokers, pushing AI into products, etc.). Hence the concept that arises when these two words come together, Cybernetical Disobedience, is the greatest motivation (imho) to spin up some containers, be it SearXNG, Vaultwarden, Syncthing, or an arr stack. It all comes with some rebellious ideation behind it... Do any of you feel like me?

12
155

I've got two domain names set up for work and personal email, but I'm absolutely drowning in unread emails, around 4,000. Most are those annoying notifications like "Your security code is xxx," "Your parcel has shipped," and requests to rate my experience.

Right now, I've been trying out Inbox Zero with an old Gmail account. It's cool, but honestly feels a bit overkill and only works with Gmail and Outlook. I switched to my own domains to get away from Google in the first place!

So, I'm on the hunt for an email provider that has solid spam filters and can create a priority inbox without all the pesky notification clutter. Bonus points if it supports custom domains.

Any suggestions?

13
69
14
13

Recently, when I reboot my Proxmox hardware, I'm greeted with this message after the bootloader splash screen. It won't progress any further, even after letting it sit overnight.

But restarting it 5-10 times will eventually get past it.

I suspect it might have something to do with me passing a physical disk and the whole GPU to a Windows VM (my temp solution for TV gaming until I can get Sunshine issues ironed out) but I'm not sure where to start looking for the issue.

I thought it might be freezing when the VM tries to take control of the GPU, but with a continuous ping running to both the server and the VM, I never get a response from either. That makes me think the issue happens before the VM is started.

I'm basically just hoping someone could point me to a log that might have a related error message.

15
14
submitted 2 days ago* (last edited 2 days ago) by theorangeninja@sopuli.xyz to c/selfhosted@lemmy.world

Hello everyone,

I am currently trying to transition from docker-compose to podman-compose, before eventually trying out Podman Quadlets. The first couple of containers worked great, but today I tried Linkding and ran into a weird error.

Linkding can't access the data directory because permission is denied. After inspecting the container, all the directories inside belong to root. But podman runs rootless, so that must be the issue. I tried to change the owner of the data directory on the host to root, but then the data directory in the container belongs to nobody and nogroup (?). After checking Linkding's environment variable documentation, it seems there is no environment variable for a UID and GID.

I think I have a fundamental misunderstanding of how rootful and rootless containers work, so I would be very grateful if anybody could point me in the right direction on where to find a solution for this problem, or if anybody has had success running Linkding rootless.
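The "nobody/nogroup" symptom is usually user-namespace ID mapping: in a rootless container, container UID 0 maps to your own UID, and other container UIDs map into your subordinate range from /etc/subuid; host files owned outside the mapped range show up as nobody inside. A small sketch of that arithmetic (the subordinate range values below are typical assumptions for illustration, not read from any real system):

```python
def host_uid(container_uid, mappings):
    """Map a container UID to a host UID under rootless podman.

    mappings: list of (container_start, host_start, length) tuples,
    mirroring the ID map podman builds from /etc/subuid.
    """
    for c_start, h_start, length in mappings:
        if c_start <= container_uid < c_start + length:
            return h_start + (container_uid - c_start)
    raise ValueError(f"container UID {container_uid} is unmapped")

# Typical rootless mapping for a user with UID 1000 and the
# subordinate range starting at 100000 (assumed values):
maps = [(0, 1000, 1), (1, 100000, 65536)]
print(host_uid(0, maps))    # container root -> your own UID: 1000
print(host_uid(33, maps))   # e.g. UID 33 in the container -> 100032
```

In practice, `podman unshare chown -R <uid>:<gid> <dir>` run on the host changes a bind-mounted directory's ownership as seen from inside your user namespace, which is the usual fix when the image's process runs as a non-root UID.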

Thanks a lot in advance!


Edit:

I originally used bind mounts because that's what the dev used in the example compose file. Now I've tried named volumes instead and everything seems to work fine. No errors in the logs and the web UI is accessible.

16
26

I am looking for a solution to collaborative PDF editing (mostly annotations). I already have a Nextcloud installation with Office for several members so it would be great if it could be integrated, but it is not necessary.

What I mainly want is the possibility to add and view annotations made by several users on the same file at the same time.

Do you have a suggestion?

17
385
submitted 5 days ago by exu@feditown.com to c/selfhosted@lemmy.world
18
35

Hi guys!

So... yeah. I have your average Deluge/Sonarr/Radarr combo. What I'm finding increasingly annoying is that, these days, some release groups are more frequently putting their names BEFORE the rest of the filename. This makes it rather hard to even find the folders of the files being downloaded. Is there an easy way to address this? I'd like to keep the rest of the things in the filename, and maybe even the release group name... but at the end. The most important thing, the title, should come first. What's the best way to do this?
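Assuming the goal is to let Sonarr rename on import rather than fight the release naming, one approach is its rename feature (Settings → Media Management → Episode Naming). A hedged sketch of a Standard Episode Format that puts the title first and the group last; these tokens exist in Sonarr's naming scheme, but adapt the layout to taste:

```
{Series Title} - S{season:00}E{episode:00} - {Episode Title} [{Quality Full}]-{Release Group}
```

Note this only renames what Sonarr imports; the download client's folder keeps the original release name, so the Deluge side will still look group-first.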

Thanks!

19
43

Hey guys, I was wondering if any of you have experience running your services on a Lenovo ThinkCentre, specifically the ThinkCentre M70q Gen 1? From what I've read, they can be quite efficient when it comes to idle power draw.

I have the chance to buy a refurbished one for approx. €380, coming with an i5-10400T, 16 GB RAM and a 256 GB NVMe SSD. Do you think the price sounds fair?

I am mainly looking to expand my single-host Proxmox setup, consisting of an Intel N150 mini PC, with a second node as backup. Maybe down the line, if I can get my hands on another affordable mini PC, I might dabble in setting up a Kubernetes cluster. But that's a project for another day 😄

20
35
submitted 4 days ago by idriss@lemmy.ml to c/selfhosted@lemmy.world
21
30

I bought a refurbished Lenovo M720s computer last summer to use as a homelab at my house. I loaded TrueNAS onto the internal SSD and swapped out the HDD that came with it for a 10 TB drive. I also threw a 500 GB drive in the M.2 slot to use for applications within TrueNAS.

All was fine until now. Recently, though, I acquired an 8 TB HDD for cheap and figured I would throw it into the homelab for some extra storage; there was an extra SATA connector free on the motherboard anyway. I put in the drive and connected it, but then I realized there is no other SATA power connector I can use for this drive. This computer makes you connect drive power from the motherboard rather than from the power supply.

So I am at a bit of a roadblock. I know I am pushing the capabilities of this little machine, but it seems silly that they give you 3 SATA data ports but only 2 power ports for SATA drives. I guess they were probably intending for one of those ports to just be used for a DVD drive.

I went to a local computer store and they were not very helpful. I asked if I could use a splitter for the power port and they said I would fry my board.

Anyone know any solutions to this? I just need a way to power one more HDD. I will link the manual for the computer so it is easier to see what I am looking at.

Link

22
12
submitted 4 days ago* (last edited 4 days ago) by lka1988@lemmy.dbzer0.com to c/selfhosted@lemmy.world

I'm finally getting around to migrating my OMV-based NAS from its current "2014 Mac Mini + USB multi-drive enclosure" setup to a more reliable build that doesn't rely on USB. But I'm torn on CPU choice.

The "new" system is based on Intel 7th gen hardware, since that's what the majority of my whole homelab runs (with zero complaints). The motherboard is an Asus Prime Q270M-C, meant for more commercial applications, and supports Intel's vPro/AMT/ME/whatever it's called ("vPro" from here onward) OOB management setup. I would really like to utilize vPro since I'm familiar with it and most of my machines have this enabled (and not accessible from outside my LAN).

The only 7th-gen CPUs compatible with vPro are the i5-7500/T, i5-7600/T, and i7-7700/T. All are cheap (≤$50), easy to find on eBay, and I have no issue using the 35W T SKUs. That said, I have a spare, perfectly functional Pentium G4560T sitting on my desk; the only reason I haven't installed it yet is that it doesn't support vPro. I also have a 6th-gen i5 (which the Asus mobo also supports) in an unused OptiPlex 3040 SFF somewhere in my basement, but I don't think that CPU supports vPro. I should check...

Anyway, I have some options:

  1. Use the G4560T and deal with no vPro.
  2. Swap the G4560T for the i7-7700T currently installed in my HA instance (Lenovo M710q), but then deal with virtually zero CPU overhead in HA.
  3. Buy an i5-7500/7500T
  4. Buy an i5-7600/7600T
  5. Buy an i7-7700/7700T

I don't have an issue with any of these options; even losing vPro is something I can deal with. But I like having overhead, and I hate having extra hardware lying around.

What say the Lemmings?


P.S.: For those interested, this is the planned NAS build.

23
63
submitted 6 days ago* (last edited 6 days ago) by crschnick@sh.itjust.works to c/selfhosted@lemmy.world

I'm proud to share major development updates for XPipe, a connection hub that allows you to access your entire server infrastructure from your local desktop. It can make your life easier when working with any kind of servers by eliminating many of the tedious tasks that come up when interacting with remote systems, either from the terminal or from a graphical interface.

It comes with integrations for SSH, docker and other containers, various hypervisors, cloud providers, and more without requiring setup on your remote systems. You can also keep using your favourite text/code editors, terminals, password managers, shells, command-line tools, and more with it.

Hub

It has been half a year since I last posted here, so there are a lot of improvements that were implemented since then:

Netbird support

You can now list and connect to devices in your Netbird network. This works via SSH and your locally installed netbird command-line client:

Netbird

Legacy system support

Up until now, testing was done on relatively up-to-date machines that were not considered EOL. In practice, however, legacy systems are still in use. The handling of older Unix-based systems has been greatly improved, especially for systems that do not ship with GNU command-line tools.

As long as you can connect to a system via SSH somehow, it should work now regardless of how old the system is. If you're into retrocomputing, feel free to give this a try.

AIX

HP-UX

AWS support

You can now connect to your AWS systems from within XPipe. Currently, EC2 instances and S3 buckets are supported, including support for SSM. The integration is built on top of the AWS CLI, which allows it to work flexibly with any existing CLI setup you already have: you can use any IAM access keys and authentication methods with it.

AWS

SSH keygen

You can now generate new SSH keys from within XPipe. The keys are generated via the installed OpenSSH ssh-keygen CLI tool, so you can be assured that the keys are generated in a cryptographically secure manner. This keygen right now supports RSA, ED25519, and ED25519 + FIDO2:
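For reference, the underlying OpenSSH invocation for an Ed25519 key looks roughly like this (the file name and comment are illustrative, not what XPipe actually uses):

```shell
# Generate an Ed25519 keypair non-interactively; -N "" sets an empty
# passphrase (use a real one outside of demos).
ssh-keygen -t ed25519 -N "" -C "xpipe-demo" -f ./demo_key
ls -l demo_key demo_key.pub
```

The private key lands in demo_key and the public key in demo_key.pub, ready to append to a remote authorized_keys file.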

Keygen

Keys of identities can now also be automatically applied to systems, allowing you to perform a quick key rotation when needed:

Identity Apply

The process of changing the authentication configuration of a system is not always one simple step, so the dialog provides a comprehensive overview of what is needed to apply a certain identity to a remote system, with various quick-action buttons and notes. This still gives you full manual control over what should be done, and an overview of what is required before doing so.

Identity Apply Dialog

Network scan

There is now the option to automatically search the local network for any listening SSH/VNC/RDP servers and add them automatically as new connections. This also works for remote systems and their networks:

scan

VNC

Up until now, the internal VNC implementation of XPipe did a somewhat acceptable job for most connections. However, it cannot match dedicated VNC clients when it comes to more advanced features and authentication methods, and there's simply not the development capacity to maintain all of these additional VNC features. For this reason, you can now also use an external VNC client with XPipe, just as with any other tool integration:

VNC settings screenshot

Split terminals

There is now a new batch action to open multiple systems in a split terminal pane instead of individual tabs. This action is only available for terminals that support split panes, which currently includes Windows Terminal, Kitty, and WezTerm. In addition, it is also supported with any other terminal when using a terminal multiplexer like tmux or zellij.

Split Action

This allows you to also use a feature like broadcast mode of your terminal to type one command into multiple terminal panes at the same time.
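With tmux as the multiplexer, for instance, that broadcast behaviour corresponds to the synchronize-panes window option (a standard tmux option; run inside an existing session):

```
# Type into every pane of the current tmux window at once
tmux set-window-option synchronize-panes on
# ...and back to per-pane input
tmux set-window-option synchronize-panes off
```

Handy for running the same command across a fleet of servers opened via the split action.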

Split Terminal

Tags

You can now create and add tags to connection entries. This allows you to have a more structured workflow when filtering individual connections.

Tags

macOS 26 Tahoe

XPipe adopts many of the new features of macOS 26 right away. The application window now uses the new Liquid Glass theming, and the application icon has been reworked with Liquid Glass in mind. There's also support for the new Apple Containers framework:

macOS Tahoe screenshot

Windows ARM

There are now native Windows ARM builds. These releases are also available in winget and scoop.

Other

  • Add support for flatpak variants of various editors and terminals
  • The nixpkg package now also supports macOS and has been reworked as a flake
  • Add support for nushell
  • Add support for xonsh
  • Several fixes to be able to run the application in the Android Linux Terminal app without issues
  • The entire interface has been reworked to better work with screen readers and other accessibility tools
  • Various other small improvements
  • Many performance optimizations
  • A lot of bug fixes across the board

A note on the open-source model

Since it has come up a few times, in addition to the note in the git repository, I would like to clarify that XPipe is not fully FOSS software. The core that you can find on GitHub is Apache 2.0 licensed, but the distribution you download ships with closed-source extensions. There's also a licensing system in place with limitations on what kind of systems you can connect to in the community edition as I am trying to make a living out of this. I understand that this is a deal-breaker for some, so I wanted to give a heads-up.

Outlook

If this project sounds interesting to you, you can check it out on GitHub, visit the Website, or check out the Docs for more information.

Enjoy!

24
27
submitted 5 days ago* (last edited 5 days ago) by claim_arguably@lemdro.id to c/selfhosted@lemmy.world

I’m thinking about running FreshRSS on my local Linux PC, but my computer isn’t on all the time.

Basically, all I want is to have read/unread status synced between my PC and my two other phones. Could I have that? Most of the time my PC would be off and I would be reading articles on my phone; would the read status be synced to the PC once it's on?

25
43

How to test and safely keep using your janky RAM without compromising stability using memtest86+ and the memmap kernel param.
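For context, the memmap parameter referenced here marks a physical address range as reserved so the kernel never allocates from it, letting you fence off a region memtest86+ flagged as bad. A sketch with made-up example addresses (take the real range from your memtest86+ failure report):

```
# Raw kernel command line form: reserve 16M starting at 0x36a00000
memmap=16M$0x36a00000

# In /etc/default/grub the "$" must survive two layers of parsing,
# so it is commonly written with escaped backslashes:
GRUB_CMDLINE_LINUX_DEFAULT="quiet memmap=16M\\\$0x36a00000"
```

Run update-grub (or grub-mkconfig) afterwards and verify the region shows as reserved in `dmesg | grep user:` output.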


Selfhosted

54457 readers
917 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

  7. No low-effort posts. This is subjective and will largely be determined by the community member reports.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 2 years ago
MODERATORS