1
164

Due to the large number of reports we've received about recent posts, we've added Rule 7 stating "No low-effort posts. This is subjective and will largely be determined by the community member reports."

In general, we allow a post's fate to be determined by the number of downvotes it receives. Sometimes, a post is so offensive to the community that removal seems appropriate. This new rule allows such action to be taken.

We expect to fine-tune this approach as time goes on. Your patience is appreciated.

2
417
submitted 2 years ago* (last edited 2 years ago) by devve@lemmy.world to c/selfhosted@lemmy.world

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

3
31

Hi everyone,

I have been using cloudflared for DNS-over-HTTPS for the past 5 years, and it's been working pretty well. One of the reasons for using it was that my ISP was hijacking my DNS queries and redirecting them to its own DNS server.

However, I saw the news that the proxy-dns feature in cloudflared is being retired, and customers are being asked to shift to the WARP client instead.

I want to know what the community is using for encrypted DNS (DoH, DoT, DoQ).

Thanks :)
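For anyone comparing the options: under DoH (RFC 8484), the client simply POSTs an ordinary DNS wire-format message to the resolver with `Content-Type: application/dns-message`. A minimal sketch of building that payload is below; the resolver URL in the comment is a placeholder, not a recommendation.

```python
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal DNS query in wire format (RFC 1035).

    This byte string is exactly what a DoH client POSTs to a resolver
    with Content-Type: application/dns-message (RFC 8484).
    """
    # Header: ID=0 (RFC 8484 suggests 0 for cache friendliness),
    # flags=0x0100 (recursion desired), 1 question, no other records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question section: length-prefixed labels, then QTYPE and QCLASS=IN.
    qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                     for label in name.split("."))
    question = qname + b"\x00" + struct.pack("!HH", qtype, 1)
    return header + question

payload = build_dns_query("example.com")
# The actual HTTPS request (resolver URL is a placeholder) would be:
#   POST https://dns.example/dns-query
#   Content-Type: application/dns-message
#   <payload>
```

DoT and DoQ carry the same wire-format message, just over TLS on port 853 or over QUIC respectively, so the resolvers are largely interchangeable from the client's perspective.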

4
19

So after months of dealing with problems trying to get the stuff I want to host working on my Raspberry Pi and Synology, I've given up and decided I need a real server with an x86_64 processor and a standard Linux distro. To avoid running into more problems after spending a bunch more money, I want to think seriously about what I need hardware-wise. What considerations should I keep in mind?

Initially, the main things I want to host are Nextcloud, Immich (or similar), and my own Node bot @DailyGameBot@lemmy.zip (which uses Puppeteer to take screenshots—the big issue that prevents it from running on a Pi or Synology). I'll definitely want to expand to more things eventually, though I don't know what. Probably all/most in Docker.

For now I'm likely to keep using Synology's reverse proxy and built-in Let's Encrypt certificate support, unless there are good reasons to avoid that. And as much as it's possible, I'll want the actual files (used by Nextcloud, Immich, etc.) to be stored on the Synology to take advantage of its large capacity and RAID 5 redundancy.

Is a second-hand Intel-based mini PC likely suitable? I read one thing saying that they can have serious thermal throttling issues because they don't have great airflow. Is that a problem that matters for a home server, or is it more of an issue with desktops where people try to run games? Is there a particular reason to look at Intel vs AMD? Any particular things I should consider when looking at RAM, CPU power, or internal storage, etc. which might not be immediately obvious?

Bonus question: what's a good distro to use? My experience so far has mostly been with desktop distros, primarily Kubuntu/Ubuntu, or with niche distros like Raspbian. But all Debian-based. Any reason to consider something else?

5
35
submitted 12 hours ago* (last edited 10 hours ago) by mlunar@lemmy.world to c/selfhosted@lemmy.world

Happy 2026 folks!

If you're in the market for a minimalistic self-hosted photo gallery, read further :)

I'm not entirely sure whether I should post here for every release or not; what do you think? I guess more than once a year wouldn't hurt 😅

Spurred by some household needs, the big thing in v0.21 is an easier search interface with search chips, but maybe more interestingly: wildcard date search.

You can now search for e.g. all Christmas day photos with created:*-12-25 or all photos on the first day of the year with created:*-01-01.
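A wildcard date filter like this can be sketched with plain glob matching against ISO dates; this is just an illustration of the idea, not Photofield's actual implementation:

```python
from fnmatch import fnmatch

def match_created(dates: list[str], pattern: str) -> list[str]:
    """Filter ISO YYYY-MM-DD dates by a glob pattern such as
    '*-12-25' (every Christmas day) or '*-01-01' (every January 1st)."""
    return [d for d in dates if fnmatch(d, pattern)]

photos = ["2021-12-25", "2022-07-04", "2023-12-25", "2024-01-01"]
match_created(photos, "*-12-25")  # ['2021-12-25', '2023-12-25']
```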

The demo is now on v0.25, feel free to give it a spin.

Of course, to see all the photos you have on your birthdays, you'll need to try it out on your own 😁

What do you think? Any ideas or comments welcome!

6
97
submitted 19 hours ago* (last edited 19 hours ago) by fccview@lemmy.world to c/selfhosted@lemmy.world

Hi!

2025 was a big year for my open source goals, and I wanted to share some accomplishments with you. After all, I wouldn't have gotten this far with any of my projects if it weren't for the few selfhosted communities I interact with daily <3

Thank you for all your support and here's a summary of everything I've built/maintained throughout 2025. Everything is free, self-hostable and entirely open source.

p.s. - no, these are NOT vibe coded.

jotty.page - 1.3k stars - 78.3K total downloads

repo: https://github.com/fccview/jotty

Jotty is a lightweight note-taking/checklist app with a ton of features packed into a very minimal, easy-to-use UI. It features two types of encryption, drawio/excalidraw/mermaid diagrams, syntax highlighting for tons of languages in code blocks, kanban boards and a lot more.


Cr*nmaster - 903 stars - 103K total downloads

repo: https://github.com/fccview/cronmaster

Cronmaster is a UI for managing cronjobs and easily creating and scheduling scripts. It allows you to log cronjobs (including live logging).


Scatola Magica (beta) - 108 stars - 2.47K total downloads

repo: https://github.com/fccview/scatola-magica

Scatola Magica is a file upload/download system which lets you upload files in chunks for maximum speed. The cool feature is that you can drop a file literally anywhere, from anywhere, including copy/pasting into the page. If you paste text, it creates a file with the right file extension (e.g. if you paste JavaScript it'll create a .js file). It also has full WebTorrent support, but that needs to be enabled in the settings page as it's in beta.
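Chunked uploads like this generally work by splitting the payload client-side, checksumming each piece so that a failed chunk can be retried on its own, and reassembling server-side. A toy sketch of that pattern (not Scatola Magica's actual protocol):

```python
import hashlib

CHUNK_SIZE = 4  # tiny for the demo; real uploaders use multi-megabyte chunks

def split_chunks(data: bytes, size: int = CHUNK_SIZE):
    """Split a payload into fixed-size chunks, each paired with a sha256
    digest so the server can verify (and the client retry) chunks
    independently, or even upload them in parallel."""
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    return [(c, hashlib.sha256(c).hexdigest()) for c in chunks]

def reassemble(chunks):
    """Server side: verify every digest, then concatenate in order."""
    for c, digest in chunks:
        assert hashlib.sha256(c).hexdigest() == digest, "corrupt chunk"
    return b"".join(c for c, _ in chunks)

data = b"hello chunked world"
assert reassemble(split_chunks(data)) == data
```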

Here's to a 2026 full of self hosting and coding <3

7
43
submitted 23 hours ago* (last edited 23 hours ago) by early_riser@lemmy.world to c/selfhosted@lemmy.world

Bit of a followup to my previous post. I now have a VPS with nginx working as a reverse proxy to some services on my DMZ. My router (UDM pro) is running a wireguard server and the VPS is acting as a client.

I've used Let's Encrypt to get certs for the proxy, but the traffic between the proxy and the backend is still plain HTTP. Do I need to worry about securing that traffic considering it's behind a VPN? If I should secure it, is there an easier way to do self-signed certs besides spinning up your own certificate authority? Do self-signed certs work between a proxy and a backend, or would one or the other throw a fit like a browser does upon encountering a self-signed cert?

I'd rather not have to manage another set of certs just for one service, and I don't want to involve my internal domain if possible.

8
88
submitted 1 day ago* (last edited 23 hours ago) by TA_Help@piefed.social to c/selfhosted@lemmy.world

This is a throwaway account, in case I end up working with someone who reads this post.

I've been lurking on this community with my main account for a few months now. I have ideas on what I'd like to self-host but between my ADHD, perfectionism, and anxiety, I'm frozen.

I need help selecting and implementing an initial set up. I'm not an IT professional but I'm a reasonably advanced user, so I'm confident I can do the setup work and ongoing management myself. I just need someone to:

  1. Discuss the big picture of what's involved in self-hosting and help fill in gaps in my understanding;
  2. Help me decide on the best initial setup for my needs and skill level;
  3. Hold my hand during the setup phase and make sure I'm not doing anything stupid;
  4. Ideally be available long term for the occasional question.

I'm willing to pay a fair hourly rate for this assistance. If someone in this community is interested, please DM me. You might want to use a throwaway for that too, assuming this work can't be done anonymously.

Alternatively, any suggestions for good websites to find a consultant, and what skills I should be looking for, would also be greatly appreciated.

Thank you for reading. Wishing you all the best for 2026.

Edit: I appreciate all the offers for free help on this forum.

I perhaps didn't explain well enough that what I really need is a knowledgeable coach, who can get me moving and provide guidance. I bought the Official Pi-hole Raspberry Pi 4 Kit a few months ago and it's still sitting on my desk gathering dust. Embarrassing but true.

9
114
submitted 1 day ago* (last edited 20 hours ago) by BonkTheAnnoyed to c/selfhosted@lemmy.world

The instances that give the best results seem to also get throttled pretty often on the source search engines, to the point of near uselessness.

Thinking of hosting my own, but the maintenance seems pretty involved according to the docs.

What's your experience been like?

Edit: all right y'all, thanks for the feedback. I'm going to spin up an instance.

10
75
NAS decision paralysis (lemmy.dbzer0.com)

So far I have been looking at things from the sidelines, trying to learn about self hosting by osmosis, just by being part of the community and reading what people are doing. Most of the time, though, posts are too advanced for my knowledge or needs, and the rest of the time they are too simple a solution or directed at people just starting out. I guess I'm in an uncomfortable middle ground :)

Pair that with ADHD and the huge amount of options available and I have ended up with a decision paralysis that I'm just trying to finally shake off.

So with that introduction out of the way, I'll start laying down the details of what I'm looking for, what I have so far and what I wish to get from this post. Hopefully I can make it short enough without lacking in information.


WHAT I HAVE SO FAR


I have a couple of old laptops I've been using to play around with self hosting. One is running EndeavourOS (Arch-based), and I have put a few *arrs there, not even Docker-based. Also Jellyfin, Calibre, ... It was literally the first thing I set up, and of course now I'd do things differently, but I'll slowly change that with time.

I got my hands on a second laptop and decided to try a different approach and some new things. I threw stable Debian at it and installed CasaOS. I started installing a few services there: WireGuard, to try to provide secure remote access for myself (and hoping to eventually offer an access alternative to some others without it, but let's not get distracted by a different topic); calibre-web, to give myself access to Calibre on the other machine (so far I've failed at that, but I've barely tried anything to get it to work); and some other *arrs. All the services on the first laptop are also configured in CasaOS, to provide one access point to everything. I have it in mind to also set up Immich plus some file-syncing and self-hosted note applications, but I haven't done that because I lack a trustworthy storage setup.

Aside from that, I got one of Home Assistant's Green devices to support them, and that one is entirely dedicated to Home Assistant.

Well, hopefully that gives a bit of background on what I have so far and how I am just messing with things and trying different services for better or worse 🙃


WHAT I AM LACKING


Obviously, by the post title and what I have said so far, I'm looking to improve my storage system, which is... a simple and aging 8 TB external hard drive. I'm kinda scared of it breaking... I'm even thinking of getting a second external drive to keep a copy for now. But well, that's my fault and my problem. I have been postponing getting a NAS because the options are just so wildly open. I don't want anything super complex, but I also don't want to end up using some Synology and depending on their software or whatever. I want to own the hardware and set it up with some open source solution. But there are so many options! Plus, setting up a whole NAS from scratch seems quite expensive, and about to get more expensive with the storage market situation.


WHAT I HOPE TO GET IN THIS POST


I don't expect anyone to tell me what to do or what the perfect solution is, but I hope I can get some feedback and some help choosing a good path to start, and finally shake off my decision paralysis so I can take the first steps, which will likely tie me to whatever I get first for the foreseeable future.

What I think I need is a device with at least 4 bays where I can install TrueNAS or, hopefully, some simpler alternative, something that is not too expensive. I'm willing to compromise on hard drive speed and format to get a better price. Of course, I'd rather get M.2 SSD drives if someone has a cheap alternative :D

I've been looking at the different RAID levels to understand which I would need (WIP), but basically I'd hope to have some backup scheme and more space than now, with the option to expand in the future. I have no experience administering such a system, but I don't have an issue with learning on the spot when I need to increase the sizes, etc. For context, my 8 TB drive is nearly full (7 TB used); it has taken a looooong time to fill, but the size requirements will only increase with Immich and files and notes for the whole family. Maybe I would want separate hard drives for personal data and media storage... eggs in baskets and so on.

Well, thank you all for coming to my ted talk, I hope I have set up enough of the details that might help you help me help myself without boring you to death or making you give up on reading this :)

11
35

Hi all. I made a self-hosted API for CRUD-ing JSON files. Built for data storage in small personal projects, or mocking an API for development. Advantages are simplicity, interoperability and performance (using the cache system).

API is based on your JSON structure. So the example below is for CRUD-ing [geralt][city] in file.json. The value (which can be anything) is then added to the body of the request. For me, it has been really flexible and useful, so I want to share it and collect feedback!
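The `[geralt][city]` path style maps naturally onto a walk through nested dicts. Here's an illustrative sketch of that server-side logic, with hypothetical helper names (the project's actual routes and semantics may differ):

```python
import json
from pathlib import Path

def set_path(path: Path, keys: list[str], value):
    """Set a nested value in a JSON file, creating intermediate objects.
    A hypothetical stand-in for e.g. PUT /file.json/geralt/city."""
    doc = json.loads(path.read_text()) if path.exists() else {}
    node = doc
    for k in keys[:-1]:
        node = node.setdefault(k, {})
    node[keys[-1]] = value
    path.write_text(json.dumps(doc))

def get_path(path: Path, keys: list[str]):
    """Read a nested value back, mirroring GET /file.json/geralt/city."""
    node = json.loads(path.read_text())
    for k in keys:
        node = node[k]
    return node
```

A real service would add caching and file locking on top, which is presumably where the mentioned cache system earns its performance.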

12
65
submitted 2 days ago* (last edited 2 days ago) by erev@lemmy.world to c/selfhosted@lemmy.world

I have a small homelab that's not nice enough for /r/homelab but is a bit more than just self hosting. Since I'm a decently knowledgeable sysadmin and network engineer, my goal is to build an enterprise-ish environment for myself to tinker around and play inside. This means a lot of my setup is more complicated than it needs to be, and I spend a lot of time troubleshooting and debugging my overengineering, so when something breaks my first assumption is that it was something I did. I usually build my stuff to be relatively self-sufficient when I leave it alone.

But this weekend and today I simply couldn't find what I broke. I was attempting to move a clunky Let's Encrypt cert renewal job off of my DNS server to somewhere I could better manage it. Why was it on my DNS server? Because for a while now, dynamic updates have only half worked for me. My bind9 server was fully capable, and I have a custom nsupdate cronjob to update my DDNS records that I installed on my UDM-Pro. But for whatever reason, as soon as I entered my home network^1^ it wouldn't work. Since I thought it better to manage my certs from Proxmox or another internal service, I needed to figure out why this was. I looked high, I looked low, I looked in /etc, but there was no configuration error that I could find. I tested the same TSIG key on another machine in my VPC and on my UDM-Pro, and there it went without a hitch. The error was weird (NOTIMP) and I couldn't find anything relevant online. As a last resort I turned to ChatGPT^2^, but all this confirmed was that there should be no errors with my configuration. Its conclusion was that it had to be networking.

So I scoured the configuration of my UDM looking for any filtering or traffic rules I had, but nothing was clicking. This wasn't a connection issue; this was the server telling me that updates were not allowed for this zone. I was clearly hitting the DNS server, right? Well, there was nothing in the update logs on the server, so I suspected that for some reason the requests weren't making it through. So I spun up wireshark on my UDM and on my DNS server, and saw for myself that the dynamic update requests weren't even reaching the bind server. I would see the update come into the router, and a response apparently from the bind server, so what was actually responding? This was either some crazy filtering from my ISP (which I knew to be false, because updates from the router worked) or my UDM doing something. Finally, after some sleep, I came back, looked at the UDM console again, and it hit me.

Ad block.

I quickly paused it and lo and behold, it was blocking my dynamic updates. There was no record of this in the Insights tab; it was just silently absorbing my dynamic updates and masquerading as my name server. I can understand masquerading as a name server given what it's supposed to do, but I have no idea why it would steal my dynamic updates. I wouldn't have expected the DNS filtering it enables to fail closed. For a prosumer company, Ubiquiti's features always feel halfway implemented: they work in most scenarios, but full support never actually gets developed. Yes, I brought this onto myself by enabling ad-blocking (it was good while it lasted, I'll have to reimplement it in a non-stupid way), but the fact that it does zero inspection of the DNS opcode before forwarding requests feels dumb.


^1^I have two "sites", my homelab and a cloud VPC; critical infra like DNS and mail is hosted in the VPC.

^2^I minimally use AI for troubleshooting as a last resort to either turn me on a new path to the solution or as a sanity check before I blame a different component.
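The frustrating part is that telling a dynamic update apart from a query only takes reading four bits of the header: the OPCODE lives in bits 1-4 of the flags word (RFC 1035 §4.1.1), and opcode 5 means UPDATE (RFC 2136). A filter could check it like this (a sketch, not what the UDM actually does):

```python
import struct

def dns_opcode(packet: bytes) -> int:
    """Extract the OPCODE from a DNS message header (RFC 1035 §4.1.1).
    0 = standard QUERY, 5 = dynamic UPDATE (RFC 2136). A DNS filter that
    only understands queries could forward anything else untouched."""
    flags = struct.unpack("!H", packet[2:4])[0]
    return (flags >> 11) & 0xF

# A plain recursive query: flags 0x0100 (opcode 0, RD set).
query = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
# A dynamic update: opcode 5 shifted into bits 1-4 of the flags.
update = struct.pack("!HHHHHH", 0x1234, 5 << 11, 1, 0, 0, 0)

assert dns_opcode(query) == 0
assert dns_opcode(update) == 5
```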

13
56
submitted 2 days ago* (last edited 2 days ago) by eskuero@lemmy.fromshado.ws to c/selfhosted@lemmy.world
  • A different device from your home server?
  • On the same home server as the services but directly on the host?
  • On the same home server as the services but inside some VM or container?

Do you configure it manually or do you use some helper/interface like WGEasy?

I have been personally using wgeasy but recently started locking down and hardening my containers and this node app running as root is kinda...

14
126
submitted 2 days ago by otter@lemmy.ca to c/selfhosted@lemmy.world

Findroid is a third party Jellyfin app on android

cross-posted from: https://discuss.tchncs.de/post/51821147

New features

  • Fresh New Look: Enjoy a complete redesign of the main user interface for a smoother experience.
  • Jellyfin 10.11 support: Full compatibility with the latest server APIs.
  • Media Segments: Now uses the Media Segments API for skipping intros, outros, and credits.

Improvements

  • Playback now starts instantly, adding items to the playback playlist as you watch.
    • This allows you to watch very large seasons without issues.
  • Save images locally when downloading media items.
15
59

Or is that not a thing? I don't recall seeing much in the way of party games in a self hosted environment.

16
928
submitted 4 days ago by alam@lemmy.world to c/selfhosted@lemmy.world

Hi folks!

I’m the creator of BentoPDF. It is an open source PDF toolkit that runs entirely in your browser. Your documents stay private, by design.

BentoPDF started as a small side project, but over time it has grown into something much bigger. With our latest major update, BentoPDF now includes 100+ tools, all running fully client-side.

You can do the basics like merge PDFs (while preserving bookmarks), split documents, extract or delete pages, reorder files, rotate pages, and compress PDFs. There are also some advanced tools.

You can edit and annotate PDFs directly in the browser: highlight text, add comments, draw shapes, insert images, fill (including XFA) and create forms, manage bookmarks, generate tables of contents, redact, and add headers, footers, watermarks, and page numbers.

BentoPDF also supports an extensive range of file conversions. You can convert Word, Excel, PowerPoint, OpenOffice, Pages, CSV, RTF, EPUB, MOBI, comic book formats, and many more into PDFs, and also convert PDFs back into Word, Excel, images, Markdown, CSV, JSON, and plain text.

For images, BentoPDF supports a massive variety of formats, including HEIC, WebP, SVG, PSD, and JP2, as well as other formats such as EPUB and CBR/CBZ. You can convert images to PDFs, extract images from PDFs in their original format, or rasterize PDFs with full DPI control.

There are also organization and optimization tools: OCR, PDF/A conversion, booklet creation, N-up layouts, page division, attachment management, layer (OCG) editing, metadata inspection and editing, repair tools, and advanced compression algorithms that rival commercial solutions.

The latest update also includes AI ready extraction tools to export PDFs to structured JSON, extract tables as CSV/Markdown/JSON, and prepare PDFs for RAG and LLM workflows.

All of this works entirely in the browser, without accounts, uploads, or tracking.

This is my first post here and I hope you like it. Any feedback or feature requests are appreciated. Thank you.

Github Link: https://github.com/alam00000/bentopdf

17
27

I'm looking into replacing cloudflare with a VPS running a reverse proxy over a VPN, however, every solution I see so far assumes you're running Docker, either for the external reverse proxy host or the services you're self hosting.

The VPS is already virtualized (perhaps actually containerized given how cheap I am) so I don't want to put Docker on top of that. The stuff I'm self hosting is running in Proxmox containers on a 15 year old laptop, so again, don't want to make a virtual turducken.

Besides, Docker just seems like a pain to manage. I don't think it was designed for use as a way to distribute turnkey appliances to end users. It was made for creating reproducible ephemeral development environments. Why else would you have to specify that you want a storage volume to persist across reboots? But I digress.

Anyway, I want to reverse proxy arbitrary IP traffic, not just HTTP/S. Is that possible? If so, how?

My initial naive assumption is that you set up a VPN tunnel between the VPS and the various proxmox containers, with the local containers initiating the connection so port forwarding isn't necessary. You then set up the reverse proxy on the VPS to funnel traffic through the tunnel to the correct self-hosted container based on domain name and/or port.
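That naive assumption is basically right: a TCP reverse proxy is just a byte relay, so any protocol works, and routing is limited to whatever is visible without decrypting (the port, or the SNI for TLS). In practice you'd reach for nginx's stream module, HAProxy, or sslh rather than rolling your own, but a toy relay shows there's nothing Docker- or HTTP-specific about the idea:

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket):
    """Copy bytes one way until the source closes."""
    try:
        while chunk := src.recv(4096):
            dst.sendall(chunk)
    finally:
        dst.close()

def forward(listen_port: int, backend: tuple[str, int]):
    """Accept TCP connections and relay them verbatim to a backend
    address. A real setup would use nginx 'stream' or HAProxy; this only
    demonstrates that the relay is protocol-agnostic."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen()
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection(backend)
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
```

With the VPN in place, the VPS-side relay would simply use the tunnel address of the relevant Proxmox container as its backend; domain-based routing is only possible for protocols that expose a name (HTTP Host header, TLS SNI), otherwise you route by port.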

18
76
submitted 3 days ago* (last edited 3 days ago) by rook@lemmy.zip to c/selfhosted@lemmy.world

Hey everyone,

I recently built my first NAS. It was bought used, with SAS hardware. I've finally gotten past all the roadblocks and problems that were in my way (I basically bricked a whole SAS drive; a hero of a lemmy user helped me fix it).

Now, after filling the 15 TB of RAIDZ2 with around 100 GB of data, one of the drives started waving its white flag and wants to die on me.

I am a complete beginner with no experience with these things.

Is my drive dying and in need of replacement, or can it be fixed?

This is the output of the 507 errors that TrueNAS received from it, after which it labelled the vDev as degraded and the drive as faulted:

Output of zpool status and sudo smartctl -a /dev/sdd

As a beginner, it looks to me like this drive is cooked. Please let me know if it needs replacing, so I can order a new one and replace it right away.

Thank you sooo much!

Edit: SAS not SATA drives

19
29

I have Pangolin set up as a reverse proxy in my VPS and Cloudflare as a DNS provider with its free tier.

I want to migrate my setup off Cloudflare; however, I lack the requisite network knowledge to safely transition my VPS and domain to better alternatives, and I don't know where to start researching.

There are two features I intend my setup to have after the transition:

  • DDoS protection: I was considering using CrowdSec, as there is a guide for incorporating it into Pangolin, but I do not know whether it will be sufficient. I saw a post earlier listing some alternative DDoS protection solutions, but I am wary of their limited free tiers compared to Cloudflare's, and I don't wish to pay for them, as my homelab is mostly going to be used by me and a handful of friends.

  • Wildcard Certificate Generation: My domain provider has a poor DNS service and is not listed among LEGO's supported DNS providers for enabling wildcard certificate generation, and the Cloudflare one does not seem to work for some reason. I don't know of any other compatible DNS provider I could shift to, unless one is provided within the other DDoS protection services mentioned above.

Again, I don't have much knowledge in this field, but I'm willing to learn and make an informed decision. Please let me know any suitable alternatives for the above, the pros and cons of migrating, or any guide on performing such a transition from Cloudflare as you see fit.

20
108

Hi everyone, it's been a while since my last update.

Just a recap: Postiz is an open-source social media scheduling tool supporting 25 social media channels/platforms (including Lemmy).

You can craft different posts, schedule them in advance, and cross-post them to multiple platforms, and use various tools to make them better.

https://github.com/gitroomhq/postiz-app

Any star would be amazing ❤️

---

My daughter was born 3 months ago, and I felt so burned out that I thought about selling Postiz. But after a while, I suddenly found the energy to go back!

I am struggling today to maintain the open-source side. Most of the PRs I get aren't "good enough," and just checking and iterating on them is super hard (time + mentally). Sorry in advance for unanswered PRs.

I do want to say that everything that I develop every day is always open-source, I have no closed-source code.

---

There was one thing that always hit me as feedback from open-source developers I have read before: "Usually open-source is not as good as commercial products."

And I kind of agree with the notion. But because of that, I decided to stop adding new features and make the system as good as possible in both UI and UX.

---

I have contacted my designers and redesigned the entire post-creation process.

Before:

After:

So here is what's new:

  • Complete redesign, higher quality, it doesn't look "bootstrappy" anymore.
  • The schedule-post view now expands to the full size of the screen.
  • First post takes the entire screen; when you add comments, it shrinks.
  • Inner scrolling for the post list and the preview; before, the whole page scrolled, which was very uncomfortable.
  • Indicator over each social platform if you exited global mode, see the small pink circle.
  • Different previews for all the major platforms.
  • Tons of bug fixes I have found on the way.
  • Indicator about the number of characters in every channel - on the global edit.
  • Removed the option to add comments on platforms that don't support comments 🙈
  • Media library design (UI and UX improvement): When you select multiple media items, it will tell you the import order.

---

Some other new features:

  • Add a new provider: Google My Business.
  • You can disable email notifications for successful / failed posts.
  • Added a new MCP and Agent to schedule posts (AI stuff)
  • Add Listmonk as a provider - yes, you can schedule newsletters :)

---

Thank you so much for this amazing community. I hope you had a merry Christmas.

And I wish you all a Happy New Year!!

21
94

I'm looking for a self hosted Kanban board where we, as an extended family, can track things that have to be done. Since my parents are getting older and my siblings and I all live in different countries, there is more and more to do to help our parents. But it's difficult to keep track of who is doing what and what status things are in, and we're forgetting to do things, etc.

But because we will need to store confidential information there, I am not so fond of using something like Trello and would rather selfhost it.

I had a look at some of the self hosted Kanban boards, but none of them were mobile friendly. We really need it to be mobile friendly; the best case scenario would be an app for both Android and iPhone, but a PWA would also be OK. Most of the work in the tool will be done by us on mobile phones, because we are doing it mostly on the go.

We don't need much functionality. Tasks, subtasks, comments, assignees, statuses and attachments would be the most important ones; different projects would also be good.

Does anyone have an idea what I should look into?

22
37

cross-posted from: https://aussie.zone/post/28062823

I'm trying to set up Nextcloud using the AIO Docker install onto my Synology.

I got through the first stage of setup and navigated to the /containers page. It shows all containers as "Starting", with a yellow dot, except for Fulltextsearch, which is stopped (red) because I stopped it after I realised I had installed it despite my platform not supporting Seccomp (the "Optional containers" checkbox is greyed out even when it's stopped).

Many of these containers show as green/healthy in the DSM Container Manager even though the /containers page doesn't show them as such.

Logs for the different containers:

Mastercontainer logs:

Trying to fix docker.sock permissions internally...
Adding internal www-data to group root
DOCKER_API_VERSION was found to be set to '1.43'.
Please note that only v1.44 is officially supported and tested by the maintainers of Nextcloud AIO.
So you run on your own risk and things might break without warning.
WARNING: No kernel memory TCP limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
WARNING: No kernel memory TCP limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
Initial startup of Nextcloud All-in-One complete!
You should be able to open the Nextcloud AIO Interface now on port 8080 of this server!
E.g. https://internal.ip.of.this.server:8080/
⚠️ Important: do always use an ip-address if you access this port and not a domain as HSTS might block access to it later!

If your server has port 80 and 8443 open and you point a domain to your server, you can get a valid certificate automatically by opening the Nextcloud AIO Interface via:
https://your-domain-that-points-to-this-server.tld:8443/
/usr/lib/python3.12/site-packages/supervisor/options.py:13: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
  import pkg_resources
{"level":"warn","ts":1766322552.6626272,"msg":"failed to set GOMAXPROCS","error":"open /sys/fs/cgroup/cpu/cpu.cfs_quota_us: no such file or directory"}
{"level":"info","ts":1766322552.6628811,"msg":"GOMEMLIMIT is updated","package":"github.com/KimMachineGun/automemlimit/memlimit","GOMEMLIMIT":3671407411,"previous":9223372036854775807}
{"level":"info","ts":1766322552.6629462,"msg":"using config from file","file":"/Caddyfile"}
{"level":"info","ts":1766322552.6645825,"msg":"adapted config to JSON","adapter":"caddyfile"}
{"level":"info","ts":1766322552.6664238,"msg":"serving initial configuration"}
[mpm_event:notice] [pid 152:tid 152] AH00489: Apache/2.4.66 (Unix) OpenSSL/3.5.4 configured -- resuming normal operations
[core:notice] [pid 152:tid 152] AH00094: Command line: 'httpd -D FOREGROUND'
NOTICE: fpm is running, pid 157
NOTICE: ready to handle connections
NOTICE: PHP message: 404 Not Found
Type: Slim\Exception\HttpNotFoundException
Code: 404
Message: Not found.
File: /var/www/docker-aio/php/vendor/slim/slim/Slim/Middleware/RoutingMiddleware.php
Line: 76
Trace: #0 /var/www/docker-aio/php/vendor/slim/slim/Slim/Routing/RouteRunner.php(62): Slim\Middleware\RoutingMiddleware->performRouting(Object(GuzzleHttp\Psr7\ServerRequest))
#1 /var/www/docker-aio/php/vendor/slim/csrf/src/Guard.php(482): Slim\Routing\RouteRunner->handle(Object(GuzzleHttp\Psr7\ServerRequest))
#2 /var/www/docker-aio/php/vendor/slim/slim/Slim/MiddlewareDispatcher.php(178): Slim\Csrf\Guard->process(Object(GuzzleHttp\Psr7\ServerRequest), Object(Slim\Routing\RouteRunner))
#3 /var/www/docker-aio/php/vendor/slim/twig-view/src/TwigMiddleware.php(117): Psr\Http\Server\RequestHandlerInterface@anonymous->handle(Object(GuzzleHttp\Psr7\ServerRequest))
#4 /var/www/docker-aio/php/vendor/slim/slim/Slim/MiddlewareDispatcher.php(129): Slim\Views\TwigMiddleware->process(Object(GuzzleHttp\Psr7\ServerRequest), Object(Psr\Http\Server\RequestHandlerInterface@anonymous))
#5 /var/www/docker-aio/php/src/Middleware/AuthMiddleware.php(53): Psr\Http\Server\RequestHandlerInterface@anonymous->handle(Object(GuzzleHttp\Psr7\ServerRequest))
#6 /var/www/docker-aio/php/vendor/slim/slim/Slim/MiddlewareDispatcher.php(283): AIO\Middleware\AuthMiddleware->__invoke(Object(GuzzleHttp\Psr7\ServerRequest), Object(Psr\Http\Server\RequestHandlerInterface@anonymous))
#7 /var/www/docker-aio/php/vendor/slim/slim/Slim/Middleware/ErrorMiddleware.php(77): Psr\Http\Server\RequestHandlerInterface@anonymous->handle(Object(GuzzleHttp\Psr7\ServerRequest))
#8 /var/www/docker-aio/php/vendor/slim/slim/Slim/MiddlewareDispatcher.php(129): Slim\Middleware\ErrorMiddleware->process(Object(GuzzleHttp\Psr7\ServerRequest), Object(Psr\Http\Server\RequestHandlerInterface@anonymous))
#9 /var/www/docker-aio/php/vendor/slim/slim/Slim/MiddlewareDispatcher.php(73): Psr\Http\Server\RequestHandlerInterface@anonymous->handle(Object(GuzzleHttp\Psr7\ServerRequest))
#10 /var/www/docker-aio/php/vendor/slim/slim/Slim/App.php(209): Slim\MiddlewareDispatcher->handle(Object(GuzzleHttp\Psr7\ServerRequest))
#11 /var/www/docker-aio/php/vendor/slim/slim/Slim/App.php(193): Slim\App->handle(Object(GuzzleHttp\Psr7\ServerRequest))
#12 /var/www/docker-aio/php/public/index.php(200): Slim\App->run()
#13 {main}
Tips: To display error details in HTTP response set "displayErrorDetails" to true in the ErrorHandler constructor.
NOTICE: Terminating ...
NOTICE: exiting, bye-bye!
[mpm_event:notice] [pid 152:tid 152] AH00491: caught SIGTERM, shutting down

Database logs:

+ rm -rf '/var/lib/postgresql/data/*'
+ touch /mnt/data/initial-cleanup-done
+ set +ex
chmod: /var/run/postgresql: Operation not permitted
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default "max_connections" ... 100
selecting default "shared_buffers" ... 128MB
selecting default time zone ... Australia/Brisbane
creating configuration files ... ok
running bootstrap script ... ok
sh: locale: not found
[30] WARNING:  no usable system locales were found
performing post-bootstrap initialization ... ok
initdb: warning: enabling "trust" authentication for local connections
initdb: hint: You can change this by editing pg_hba.conf or using the option -A, or --auth-local and --auth-host, the next time you run initdb.
syncing data to disk ... ok


Success. You can now start the database server using:

    pg_ctl -D /var/lib/postgresql/data -l logfile start

waiting for server to start....
[36] LOG:  starting PostgreSQL 17.7 on x86_64-pc-linux-musl, compiled by gcc (Alpine 15.2.0) 15.2.0, 64-bit
[36] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
[39] LOG:  database system was shut down at 2025-12-21 23:21:07 AEST
[36] LOG:  database system is ready to accept connections
 done
server started
CREATE DATABASE


/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/init-user-db.sh
CREATE ROLE
ALTER DATABASE
+ touch /mnt/data/initialization.failed
+ psql -v ON_ERROR_STOP=1 --username nextcloud --dbname nextcloud_database
GRANT
GRANT
+ rm /mnt/data/initialization.failed

waiting for server to shut down....2025-12-21 23:21:12.597 AEST [36] LOG:  received fast shutdown request
+ set +ex
[36] LOG:  aborting any active transactions
[36] LOG:  background worker "logical replication launcher" (PID 42) exited with exit code 1
[37] LOG:  shutting down
[37] LOG:  checkpoint starting: shutdown immediate
[37] LOG:  checkpoint complete: wrote 934 buffers (5.7%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.805 s, sync=0.674 s, total=2.456 s; sync files=308, longest=0.322 s, average=0.003 s; distance=4260 kB, estimate=4260 kB; lsn=0/19163B0, redo lsn=0/19163B0
[36] LOG:  database system is shut down
 done
server stopped

PostgreSQL init process complete; ready for start up.

[14] LOG:  starting PostgreSQL 17.7 on x86_64-pc-linux-musl, compiled by gcc (Alpine 15.2.0) 15.2.0, 64-bit
[14] LOG:  listening on IPv4 address "0.0.0.0", port 5432
[14] LOG:  listening on IPv6 address "::", port 5432
[14] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
[57] LOG:  database system was shut down at 2025-12-21 23:21:15 AEST
[14] LOG:  database system is ready to accept connections
[55] LOG:  checkpoint starting: time
[55] LOG:  checkpoint complete: wrote 48 buffers (0.3%); 0 WAL file(s) added, 0 removed, 0 recycled; write=4.592 s, sync=0.911 s, total=6.666 s; sync files=13, longest=0.172 s, average=0.071 s; distance=270 kB, estimate=270 kB; lsn=0/1959CE8, redo lsn=0/1959C58
++ rm -f /mnt/data/database-dump.sql.temp
++ touch /mnt/data/export.failed
++ pg_dump --username nextcloud nextcloud_database
++ rm -f /mnt/data/database-dump.sql
++ mv /mnt/data/database-dump.sql.temp /mnt/data/database-dump.sql
++ pg_ctl stop -m fast
[14] LOG:  received fast shutdown request
[14] LOG:  aborting any active transactions
[14] LOG:  background worker "logical replication launcher" (PID 60) exited with exit code 1
[55] LOG:  shutting down
[55] LOG:  checkpoint starting: shutdown immediate
[55] LOG:  checkpoint complete: wrote 0 buffers (0.0%); 0 WAL file(s) added, 0 removed, 0 recycled; write=0.001 s, sync=0.001 s, total=0.502 s; sync files=0, longest=0.000 s, average=0.000 s; distance=0 kB, estimate=243 kB; lsn=0/1959D98, redo lsn=0/1959D98
[14] LOG:  database system is shut down
waiting for server to shut down.... done
server stopped
++ rm /mnt/data/export.failed
++ echo 'Database dump successful!'
++ set +x
Database dump successful!
Setting postgres values...
chmod: /var/run/postgresql: Operation not permitted

PostgreSQL Database directory appears to contain a database; Skipping initialization

[14] LOG:  starting PostgreSQL 17.7 on x86_64-pc-linux-musl, compiled by gcc (Alpine 15.2.0) 15.2.0, 64-bit
[14] LOG:  listening on IPv4 address "0.0.0.0", port 5432
[14] LOG:  listening on IPv6 address "::", port 5432
[14] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
[24] LOG:  database system was shut down at 2025-12-21 23:49:29 AEST
[14] LOG:  database system is ready to accept connections

Nextcloud logs:

Waiting for database to start...
Waiting for database to start...
Waiting for database to start...

Redis logs:

Memory overcommit is disabled but necessary for safe operation
See https://github.com/nextcloud/all-in-one/discussions/1731 how to enable overcommit
Redis has started
# WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
# WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.

I don’t think Redis is related to my current problem, but I suspect it may be an issue later...
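For reference, the overcommit warning in the Redis log above states its own fix, which has to be applied on the Docker host (on a Synology that means DSM itself, not inside the container). A sketch of what the warning is asking for:

```
# /etc/sysctl.conf on the Docker host (per the Redis warning above)
vm.overcommit_memory = 1

# or apply immediately without a reboot:
#   sysctl vm.overcommit_memory=1
```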

Configuration

AIO compose.yaml file:

name: nextcloud-aio
services:
  nextcloud-aio-mastercontainer:
    image: ghcr.io/nextcloud-releases/all-in-one:latest
    init: true
    restart: always
    container_name: nextcloud-aio-mastercontainer
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config
      - /run/docker.sock:/var/run/docker.sock:ro
    network_mode: bridge
    ports:
      - 8080:8080
    environment:
      APACHE_PORT: 11000
      APACHE_IP_BINDING: 127.0.0.1
      DOCKER_API_VERSION: 1.43 # As far as I can tell, this is the version supported on Synology when running "docker version | grep API"
      NEXTCLOUD_DATADIR: /volume1/nextcloud
      WATCHTOWER_DOCKER_SOCKET_PATH: /run/docker.sock
      COLLABORA_SECCOMP_DISABLED: true

volumes:
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer

Does anyone have any idea of how to get this working? Or of good troubleshooting steps to try?
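One thing that may be worth double-checking in the compose file above: recent versions of Docker Compose expect `environment` map values to be strings, so unquoted YAML booleans and numbers (like `COLLABORA_SECCOMP_DISABLED: true`) can trip validation on some setups. A sketch of the same environment block with the values quoted (everything else unchanged):

```yaml
    environment:
      APACHE_PORT: "11000"
      APACHE_IP_BINDING: "127.0.0.1"
      DOCKER_API_VERSION: "1.43"
      NEXTCLOUD_DATADIR: /volume1/nextcloud
      WATCHTOWER_DOCKER_SOCKET_PATH: /run/docker.sock
      COLLABORA_SECCOMP_DISABLED: "true"
```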

23
19
submitted 3 days ago* (last edited 3 days ago) by Olgratin_Magmatoe@slrpnk.net to c/selfhosted@lemmy.world

I'm looking to expand and further secure my home server. I've been poking around the FUTO self-hosting guide, and as a result I'm looking to host OpenVPN and then connect to my services through it.

However, is it safe to have the machine running OpenVPN connected to my router, with my router operating normally, but forwarding the port to the OpenVPN server?

Then once I'm into that, I'd connect to what I'd like. Unless I'm misunderstanding, this would offer me sufficient security, correct?

I do have a backup RPi that I might end up turning into a router as the FUTO guide suggests, but I'd rather not mess with my network where possible, plus I'd need to buy a switch.
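For what it's worth, the setup described above (router operating normally, a single port forwarded to the OpenVPN box) is the standard "road warrior" pattern. A minimal server-side sketch, assuming the default UDP port 1194 and certificates already generated (e.g. with easy-rsa) — the file paths and subnets here are illustrative, not prescriptive:

```
# /etc/openvpn/server.conf (minimal sketch; paths and subnets are illustrative)
port 1194
proto udp
dev tun
ca   /etc/openvpn/ca.crt
cert /etc/openvpn/server.crt
key  /etc/openvpn/server.key
dh   /etc/openvpn/dh.pem
server 10.8.0.0 255.255.255.0           # VPN subnet handed out to clients
push "route 192.168.1.0 255.255.255.0"  # expose the LAN (adjust to your subnet)
keepalive 10 120
persist-key
persist-tun
```

With this shape, only 1194/udp is forwarded at the router; everything else stays reachable solely through the tunnel.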

24
45

cross-posted from: https://lemmy.world/post/40805695

I have two machines:

  • 2014 Mac Mini
  • HP Pavilion g7

Mac Mini 2014:

Very slow, probably can no longer be updated, nor can it run worthwhile programs.

HP Pavilion g7

Extremely bulky, chunky, and doesn't even turn on unless it's plugged in. It's basically a desktop since the battery doesn't hold a charge.

I put Linux on it (Mint I think) a few months ago as a weekend experiment.

Question:

What should I do with them? Are they worth salvaging? Should I simply donate or recycle them?

I was thinking I could use at least one of them as a home media server or something so that I can disconnect my Smart TV from the internet, but I'm not sure if they will hold up or how I would even control them from my phone (Android) if I'm sitting on the couch.

Open to all ideas. I'm somewhat technical (perhaps far less than the Lemmy community), but I don't know much about Linux or the command line unless I'm given step by step instructions on how to do something.

25
31
Proxmox with arr (lemmy.dbzer0.com)

Howdy selfhosters

I’ve got a bit of an interesting one that started as a learning experience, but I think I got in a bit over my head. I had been running the arr stack via docker-compose on my old Ubuntu desktop PC. I got lucky with a recycler and managed to get a decent old workstation, and my company tossed out some 15 SAS HDDs. Thankfully those worked. I finally got Proxmox set up and have a few drives mounted in a ZFS pool that Plex presently reads from. Unfortunately I failed to save a final backup of the old stack, though I’ll admit it was a bit messy: it used Gluetun with a VPN tied to a German server for P2P. I did preserve a lot of my old data, though, as a migration base for the media libraries.

I’m open to suggestions for getting the stack running again on Proxmox on the workstation. I’m not sure how best to go about it, since host mount points are only directly accessible from LXC containers and I can’t figure out how to pass the ZFS storage through to a VM. I feel like I’m overcomplicating this, but I need to maintain a secure connection, since burgerland doesn’t make for the best arr-stack host in my experience. It’s felt a bit daunting as I’ve tackled it; I asked a few LLMs to write up some guidelines to make it easier, but I couldn’t get that to work as a way to teach me.
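On the mount-point question above: a common pattern is to bind-mount the ZFS dataset into an LXC container that runs the arr stack (or Docker inside it), rather than a full VM. A sketch, where the dataset path `/tank/media` and container ID `101` are hypothetical placeholders:

```
# /etc/pve/lxc/101.conf — bind-mount a host path into the container
mp0: /tank/media,mp=/media

# For a VM instead, the usual routes are an NFS/SMB share exported
# from the host, or virtiofs on newer Proxmox versions.
```

The same mount can also be added from the shell with `pct set 101 -mp0 /tank/media,mp=/media`; for unprivileged containers, UID/GID mapping between host and container is the part that usually needs extra attention.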


Selfhosted

54099 readers

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

  7. No low-effort posts. This is subjective and will largely be determined by the community member reports.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 2 years ago