submitted 1 week ago* (last edited 1 week ago) by TheHobbyist@lemmy.zip to c/selfhosted@lemmy.world

Hi folks,

TL;DR: my remaining issue seems to be Firefox-specific; I've otherwise made it work in other browsers and on other devices, so I'll consider this issue resolved. Thank you very much for all your replies and help! (Edit: this was also solved, see EDIT-4.)

I'm trying to set up HTTPS for my local services on my home network. I've gotten a domain name, mydomain.tld, and my home server is running at home on, let's say, 192.168.10.20. I've set up Nginx Proxy Manager (NPM) and forwarded ports 80 and 443 to it, so I can access it using its local IP address: when I navigate on my computer to http://192.168.10.20/ I am greeted with the NPM default Congratulations screen confirming that it's reachable. Great!

Next, I've set up an A record at my registrar pointing to 192.168.10.20. I think I've been able to confirm this works: when I check with an online DNS lookup tool like https://centralops.net/CO/Traceroute, it says 192.168.10.20 is a special address that is not allowed for this tool. Great!

Now, what I'm having trouble with is the following: making it so that when I navigate to http://mydomain.tld/ I get the NPM welcome screen served at http://192.168.10.20/. When I try this, I get the Firefox message:

Hmm. We’re having trouble finding that site.
We can’t connect to the server at mydomain.tld.

Strangely, whenever I try to navigate to http://mydomain.tld/, it redirects me to https://mydomain.tld/. So I've tried solving this with a certificate: using the DNS-01 challenge from NPM, I got a wildcard certificate and set up a reverse proxy from https://mydomain.tld/ to http://192.168.10.20/, but it hasn't changed anything.

I'm unsure how to keep debugging from here. Any advice or help? I'm clearly missing something in my understanding of how this works. Thanks!

EDIT: It seems several of you are confused by my use of internal IP addresses in this way; yes, it is entirely possible. Multiple people report using exactly this kind of setup, here are some examples.

EDIT-2: I've made progress. It seems I'm having two issues simultaneously. The first was that I was testing my NPM instance by trying to reach the Congratulations page, served on port 80. That ended in an infinite resolution loop, so exposing the admin page instead, on its default port 81, works in some cases. And that's down to the second issue: on some browsers / with some DNS settings the endpoint can be reached, but not on others. For some reason I'm unable to make it work on Firefox, but on Chromium (or even Vanadium on my phone) it works just fine. I'm still trying to understand what's preventing it from working on Firefox; I've tried multiple DNS settings, but something else seems to be at play as well.

EDIT-3: While I have not made it work in every situation I wanted, I will consider this "solved", because I believe the remaining issue is Firefox-specific. My mistakes so far, which I've addressed: I could not test by exposing the NPM Congratulations page on port 80, because that led to a resolution loop; exposing the actual admin page on port 81 was a more realistic test of whether it worked. Then I needed to set up the forwarding of that page using something like https://npm.mydomain.tld/, linking it to the internal IP address of my NPM instance on port 81, while using the wildcard certificate for my public domain. Finally, I had been testing exclusively on Firefox. While I also made no progress with dig, curl or host, as suggested in the comments (which are still useful tools in general!), I managed to access my NPM admin page using other browsers and other devices, all from my home network (the only use case I was interested in). I'll keep digging to figure out what specific issue remains with my Firefox. I've tried multiple things; changing the DNS in Firefox seems not to work, showing "Status: Not active (TRR_BAD_URL)" in the Firefox DNS page (e.g. with base.dns.mullvad.net), yet LibreWolf works just fine when changing DNS. Go figure...
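For reference, the proxy host described in this edit corresponds roughly to an nginx server block like the following. This is a hand-written sketch, not NPM's literal generated config, and the certificate paths are hypothetical (NPM manages its own):

```nginx
# Hypothetical equivalent of the NPM proxy host from EDIT-3
server {
    listen 443 ssl;
    server_name npm.mydomain.tld;

    # wildcard cert for *.mydomain.tld obtained via the DNS-01 challenge
    ssl_certificate     /etc/letsencrypt/live/mydomain.tld/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mydomain.tld/privkey.pem;

    location / {
        proxy_pass http://192.168.10.20:81;   # NPM admin UI on port 81
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```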

EDIT-4: I have now solved it in Firefox too, thanks to @non_burglar@lemmy.world! So it turns out Firefox has a validation system for DNS settings, called TRR (Trusted Recursive Resolver). You can read more about it here: https://wiki.mozilla.org/Trusted_Recursive_Resolver Firefox ships a number of TRR defaults that prevent full customization of DNS, and one of them blocked my use case. Open the Firefox config page at about:config, search for network.trr.allow-rfc1918, and set it to true. This solved it for me: it allows DNS answers that resolve to local IP addresses. You can read more about RFC1918 here: https://datatracker.ietf.org/doc/html/rfc1918 I'll probably still look into making other DNS servers usable, such as base.dns.mullvad.net, which is impossible to use on Firefox by default...
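The RFC1918 range check that TRR performs is easy to reproduce with Python's standard ipaddress module, using this post's example address:

```python
# Confirm that the example address is in RFC1918 private space: this is the
# class of DNS answer Firefox's TRR refuses unless network.trr.allow-rfc1918
# is set to true.
import ipaddress

addr = ipaddress.ip_address("192.168.10.20")
print(addr.is_private)                                 # True
print(addr in ipaddress.ip_network("192.168.0.0/16"))  # True: one of the RFC1918 blocks
```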

top 50 comments
[-] 30p87@feddit.org 14 points 1 week ago

The obvious question: Do you want to access your server only from within your network or also from anywhere else?

[-] TheHobbyist@lemmy.zip 5 points 1 week ago

Good question. I'm only interested in accessing it from my home network and through my tailscale network.

[-] Agility0971@lemmy.world 3 points 1 week ago

Then you don't need to inform the rest of the world about your domain. Just use the hostname of the server on your tailnet and it should work all the time

[-] TheHobbyist@lemmy.zip 3 points 1 week ago

Wouldn't that require me to use Tailscale even at home on my home network? It also doesn't provide HTTPS unless you maybe use MagicDNS, but then we're back to using a public domain, I guess.

[-] 0x01@lemmy.ml 9 points 1 week ago

You set the A record to your internal ip address from within your router?

Nginx configs have a lot of options, you can route differently depending on the source context

So a couple questions:

  1. Do you only want to access this from your local network? If so, setting up a domain name on the broader internet makes no sense; you're telling the whole world which local IP behind your switch/router is your server. Run your own DNS or something similar if you just want an easier way to hit your local resources.
  2. Do you want to access this from the internet, like when you're away from home? Then the IP address you add to your A record should be the public IP address your ISP assigned you (it will not start with 192.168), and you have your modem forward the port to your local system (nginx).

Unless you know what you are doing and have a good firewall setup, do not make this service public: you will receive tons and tons of attacks just for creating a public A record.

[-] Mitchie151@lemmy.world 6 points 1 week ago

You can't point to 192.168.X.X; that's your local network IP address. You need to point to your public IP address, which you can find by just searching 'what is my IP'. Note that you can't be behind CGNAT for this, and you either need a static IP or a dynamic DNS configuration. Be aware of the risks involved in exposing your home server to the internet in this manner.

[-] slazer2au@lemmy.world 6 points 1 week ago

You can't point to 192.168.X.X; that's your local network IP address. You need to point to your public IP address

That's not true at all. That is exactly how I have my setup. A wildcard record at Porkbun pointing to the private IP of my home server so when I am home I have zero issues accessing things.

[-] HelloRoot@lemy.lol 4 points 1 week ago

A wildcard record at Porkbun pointing to the private IP of my home server

Which can not be 192.168.X.X

read: https://en.wikipedia.org/wiki/IP_address#Private_addresses

[-] slazer2au@lemmy.world 9 points 1 week ago

And yet, that is exactly what I am doing and it is working.

Rfc1918 address are absolutely usable with DNS in this fashion.
If I were to try to access it while I wasn't home it absolutely wouldn't work but that is not what I do.

[-] HelloRoot@lemy.lol 5 points 1 week ago* (last edited 1 week ago)

You are technically correct. I assumed it was for external access, because why would you pay Porkbun for something internal?

You can just self-host a DNS server with that entry, like https://technitium.com/dns/ (near the bottom of the feature list); it has a web UI that lets you manage DNS records.

[-] slazer2au@lemmy.world 4 points 1 week ago

That's true, but then I would have to deal with PKI, cert chains, and DNS. Whereas now all I need to do is have Traefik grab a wildcard Let's Encrypt cert and everything is peachy.

[-] frongt@lemmy.zip 3 points 1 week ago

No, you'd just need to deal with running DNS locally, you can still use LE for internal certs.

But you still need to pass one of their challenges. Public DNS works for that. You don't need to have any records in public DNS though.

[-] HelloRoot@lemy.lol 3 points 1 week ago

That doesn't make any sense

[-] ShellMonkey@piefed.socdojo.com 4 points 1 week ago

I think I can see where they're going with it, but it is a bit hard to write out

Say I set up my favorite service in-house, and said service has a client app. If I create my own DNS at home and point the client to the entry, and the service is running an encrypted connection with a self-signed cert, it can give the client app fits for being untrusted.

Compare that to putting NPM in front of the app and using it to get a Let's Encrypt cert via the DNS record option (no need for LE to reach the service publicly): now you have a trusted cert signed by a public CA for the client app to connect to.

I actually do the same for a couple of internal things where I want the local traffic secured, because I don't want creds to be sniffable on the wire, but they're not public-facing. I already have a domain for other public things, so it doesn't cost anything extra to do it this way.

[-] TheHobbyist@lemmy.zip 2 points 1 week ago

You sure can. You can see someone doing just that here successfully:

https://www.yewtu.be/watch?v=qlcVx-k-02E

[-] quaff@lemmy.ca 5 points 1 week ago

This is a really good idea that I see dismissed a lot here. People should not access things over their LAN via HTTP (especially if you connect to and use these services via WG/Tailscale). If you're self-hosting a vital service that requires authentication, your details are transmitted in plaintext. Imagine the scenario where you lose your Tailscale connection on someone else's WiFi and your clients try to make a connection over HTTP. This is terrible opsec.

Setting up letsencrypt via DNS is super simple.

Setting up an A record to your internal IP address is really easy; it can be done via /etc/hosts, on your router (if it supports it, most do), in your tailnet DNS records, or on a self-hosted DNS resolver like Pi-hole.
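For instance, the /etc/hosts route is one line per name; a hypothetical override using this thread's example address and subdomain:

```
# /etc/hosts (per machine): resolves the names locally, no public DNS involved
192.168.10.20   mydomain.tld
192.168.10.20   npm.mydomain.tld
```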

After this, you'd simply access everything via HTTPS after reverse proxying your services. Works well locally, and via Tailscale.

[-] aaravchen@lemmy.zip 9 points 1 week ago

People sleep on the DNS-01 challenge option for TLS. You don't need an internet-accessible site to generate a Let's Encrypt/ZeroSSL certificate if you can use DNS-01 challenges instead. And a lot of common DNS providers (often also your domain registrar by default) are supported by the common tools for doing this.
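To make the mechanics concrete: per RFC 8555 §8.4, an ACME client proves control of a domain for DNS-01 by publishing a TXT record at _acme-challenge.&lt;domain&gt; whose value is the unpadded base64url SHA-256 digest of the key authorization. A sketch of that computation (the token and account thumbprint below are made-up placeholders; a real client gets the token from the ACME server and derives the thumbprint from its account key):

```python
import base64
import hashlib

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    """TXT value for _acme-challenge.<domain>:
    base64url(SHA-256(token "." thumbprint)), without padding."""
    key_authorization = f"{token}.{account_thumbprint}".encode("ascii")
    digest = hashlib.sha256(key_authorization).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Hypothetical values, for illustration only
print("_acme-challenge.mydomain.tld TXT", dns01_txt_value("fake-token", "fake-thumbprint"))
```

The tooling (NPM, certbot, lego, acme.sh) creates and removes this record through your DNS provider's API, which is why no inbound connection to your server is ever needed.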

Whether you're doing purely LAN connections or a mix of both LAN and internet, it's better to have TLS setup consistently.

[-] quaff@lemmy.ca 3 points 1 week ago* (last edited 1 week ago)

💯 Generally I see the dismissal from people who use their services purely through LAN. But I think it's good practice to just set up HTTPS/SSL/TLS for everything. You never know when your needs might change to where you need to access things via VPN/WG/Tailnet, and the moment you do, without killswitches everywhere, your OPSEC has diminished dramatically.

[-] aaravchen@lemmy.zip 2 points 1 week ago

I usually combine that with client certificate authentication for anything that isn't supposed to be world-accessible, just internet-accessible for me. Even if the site has its own login.

[-] frongt@lemmy.zip 5 points 1 week ago* (last edited 1 week ago)

Try a different browser, or the curl command in another comment (but while on the LAN). Your understanding so far is correct, though unusual; typically it's not recommended to put LAN records in WAN DNS.

But if you've ever used HTTPS there before, Firefox might remember that and try to use it automatically. I think there's a setting in Firefox for that. You might also try the function to forget site information, both for the name and the IP. I assume you haven't turned on any HTTP-to-HTTPS redirect in nginx.

Also verify that nginx is set up with a site for that name, or has a default site. If it doesn't, then it doesn't know what to do and will fail.

[-] mhzawadi@lemmy.horwood.cloud 4 points 1 week ago

Your issue is using a non-routable IP on a public DNS provider; some home routers and resolvers will assume it's a misconfiguration (or a DNS-rebinding attempt) and drop the answer.

If you're only going to use the domain over a VPN and the local network, I would use something like Pi-hole to do the DNS.

If you want access from the internet at large, you will need your public IP in your DNS provider.

[-] TangledRockets@lemmy.world 4 points 1 week ago

The IP address you've used as an example would not work. That is a 'local' address, i.e. a home address. If you want DNS to resolve your public domain name to your home server, you need to set the A record to your 'public' IP address, i.e. the external address of your modem/router. Find this by going to whatismyip.com or something similar.

That will connect your domain name with your router. You then set up port forwarding on the router to pass requests to the server.

[-] Evilschnuff@feddit.org 4 points 1 week ago

Why do you need a domain on an internet facing dns if you can just define it with your local dns? Unless you want to access your services via internet, in which case you would need a public ip.

[-] TheHobbyist@lemmy.zip 5 points 1 week ago

To have HTTPS without additional setup on all the devices I use to access my services, and without having to set up my own DNS server.

[-] whereyaaat@lemmings.world 3 points 1 week ago* (last edited 1 week ago)

You don't need a domain name for HTTPS.

192.168.x.x is always an IP address that is not exposed to the internet.

If you're trying to make your server accessible on the internet, you need to open a port (it doesn't have to be 80 or 443) and have a reverse proxy direct connections to the services running on it.

Here's a post that explains the basics of how to set this up: https://lemmy.cif.su/post/3360504

Combine that with this and you should be good to go: https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-on-debian-10

[-] Jumuta@sh.itjust.works 2 points 1 week ago* (last edited 1 week ago)

I do this exact thing on my network, so I know it works. But why are you trying to downgrade HTTPS to HTTP? If you've set up DNS-01 properly, it should just work with HTTPS.

how did you configure dns-01?

[-] TheHobbyist@lemmy.zip 2 points 1 week ago* (last edited 1 week ago)

Yes, that was an attempt at doing one step at a time, but I realize I've been able to make it work in some browsers and with some DNS settings using HTTPS, as hoped. I'm now mostly trying to solve specific DNS issues, trying to understand why there are cases where it's not working (i.e. in Firefox regardless of DNS setting, or when calling dig, curl or host).

[-] 30p87@feddit.org 2 points 1 week ago

Do a curl http://mydomain.tld/ -i with your server off, or while off your network.

Your registrar probably has a service that rewrites HTTP accesses to HTTPS automatically. curl -i shows the headers, which will probably confirm that you're being redirected without even connecting to anything on your network.

[-] TheHobbyist@lemmy.zip 2 points 1 week ago

I tried, it just gave me the following:

curl: (6) Could not resolve host: mydomain.tld

Which is surprising. I got something similar when I tried traceroute earlier.

Yet when I look at my registrar's records, all seems fine, and it seems to be confirmed by the nslookup I mentioned in the OP. So I'm a bit confused.

[-] sucoiri@lemmy.world 4 points 1 week ago* (last edited 1 week ago)

dig mydomain.tld to see why your machine can't find the DNS record

FWIW, most home networks use a DNS server on the router by default. Your devices should be able to resolve an address with a DNS record set statically there instead of on the WAN.

[-] TheHobbyist@lemmy.zip 2 points 1 week ago

I'm getting the following:

; <<>> DiG 9.18.39 <<>> mydomain.tld
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16004
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;mydomain.tld.		IN	A

;; Query time: 3 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Sun Oct 05 14:23:20 CEST 2025
;; MSG SIZE  rcvd: 44

I guess your proposal would be a last resort, but I haven't seen any mention of this approach being necessary for others achieving what I'm trying to do.

[-] atzanteol@sh.itjust.works 3 points 1 week ago* (last edited 1 week ago)

It's not resolving; play around with dig a bit to troubleshoot: https://phoenixnap.com/kb/linux-dig-command-examples

I'd start with "dig @your.providers.dns.server your.domain.name" to query the provider's servers directly and see if the provider actually responds for your entry.

If so, then it may be that you haven't properly configured the provider to be authoritative for your domain. Query @8.8.8.8 or one of the root servers. If they don't resolve it, then they don't know where to send your query.

If they do, the problem is probably closer to home: either your local network or your internet provider.

[-] TheHobbyist@lemmy.zip 2 points 1 week ago

If I put my registrar's DNS, or cloudflare or google, it works just fine in dig, here with google:

; <<>> DiG 9.18.39 <<>> @8.8.8.8 mydomain.tld
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1301
;; flags: qr rd ra ad; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;mydomain.tld.		IN	A

;; ANSWER SECTION:
mydomain.tld.	3600	IN	A	192.168.10.20

;; Query time: 34 msec
;; SERVER: 8.8.8.8#53(8.8.8.8) (UDP)
;; WHEN: Sun Oct 05 15:51:47 CEST 2025
;; MSG SIZE  rcvd: 60
[-] atzanteol@sh.itjust.works 2 points 1 week ago

Something that can make troubleshooting DNS issues a real pain is that there can be a lot of caching at multiple levels. Each DNS server can cache, the OS can cache (nscd), the browsers cache, etc. Flushing all those caches can be a real nightmare. I recently had nscd causing issues kinda like what you're seeing. You may or may not have it installed, but purging it if you do may help.

[-] humanamerican@lemmy.zip 2 points 1 week ago

Have you considered using a mesh VPN instead of opening a port to the public? Nebula and Tailscale are both great options with a free tier that is more than enough for most home use cases. With Nebula you can even self-host your discovery node so nothing is cloud-based, but then you're back to opening firewall ports again.

Anyway, it's going to be more secure than even a properly configured reverse proxy setup, and way less hassle.

[-] BaroqueInMind@piefed.social 2 points 1 week ago

One thing you probably forgot to check is whether your TLD registrar supports dynamic DNS and whether you have it set on both sides of the route.

[-] jabberwockiX@piefed.social 2 points 1 week ago

Sorry, this will most definitely not work with your local IP address on an external DNS; that is not routable over the internet. I have a 192.168.10.20 IP address in my home network as well. You need to go to whatsmyip.com or ipchicken.com, get your external IP, and put that in the DNS at your registrar. Most likely you will need a dynamic DNS provider, as your ISP probably gives you a dynamic public IP address that will change occasionally.

If you just want to resolve mydomain.tld INTERNALLY so you can use a mydomain.tld HTTPS certificate, then you just need to add mydomain.tld to your INTERNAL DNS server, pointing at your INTERNAL IP address for your server. Likely your router is set up as a DNS server, but it just forwards all requests to external DNS, which is why you get sent to the public mydomain.tld record instead of your internal server.

[-] TheHobbyist@lemmy.zip 2 points 1 week ago

It does work. In my first edit I share multiple examples of others making it work, and I've made it work in some cases, as I explain in my second edit. I'm not using an HTTP challenge but a DNS challenge, which is not specific to any IP address and does not require the IP address to be reachable from outside my network. I only care about accessing the endpoint from within my home network. Using a real domain lets me make use of the public chain of trust and public DNS, so I can reach my home server from any device without setting up a local DNS server or installing a custom certificate on any of my devices.

[-] princessnorah 2 points 1 week ago* (last edited 1 week ago)

It's very likely that DNS servers aren't going to propagate that A record because it's an internal IP. What DNS settings are you using for Tailscale? You could also check that the address resolves locally with the command host mydomain.tld, which should return mydomain.tld has address 192.168.10.20 if things are set up correctly.

Edit: you can also do a reverse lookup with host 192.168.10.20 which should spit out 20.10.168.192.in-addr.arpa domain name pointer mydomain.tld.
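The in-addr.arpa name used by that reverse lookup can be derived with Python's standard ipaddress module, for this thread's example address:

```python
# The reverse-DNS (PTR) query name for an IPv4 address reverses the octets
# and appends in-addr.arpa; ipaddress computes it directly.
import ipaddress

addr = ipaddress.ip_address("192.168.10.20")
print(addr.reverse_pointer)  # 20.10.168.192.in-addr.arpa
```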

[-] Davel23@fedia.io 2 points 1 week ago

The easy answer is to enable NAT loopback (also sometimes called NAT hairpinning) on your edge router.

this post was submitted on 05 Oct 2025
64 points (100.0% liked)