[-] nico@r.dcotta.eu 2 points 10 months ago

Good luck on your Nix journey! Happy to help if you have questions.

Of all the tech I use, I think Nix is the most 'avant-garde' in that it is super different from the usual methods (scripting, stateful things), but it works very well once you get past the paradigm shift and the learning curve that entails.

[-] nico@r.dcotta.eu 2 points 10 months ago

The problem with using seaweedfs to back your DBs is more about the filesystem itself than the implementation of POSIX features. When you are writing to a file and the connection to seaweedfs breaks (container restart, wifi, you name it), you might end up with a half-written file. If you are uploading pictures this is unlikely, but DBs usually do several writes per second, so it is much more likely one of those gets interrupted. In my case, my grafana sqlite DB would get corrupted every other week.

What I recommend is running your DBs natively on your node's filesystem, and backing them up to seaweedfs periodically instead. That way your DBs 'just work', you can still get them running again from a backup, and that backup is replicated in the distributed filesystem.
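
For a sqlite DB like grafana's, the periodic backup can be as simple as a cron job; a minimal sketch, assuming a FUSE mount of seaweedfs at /mnt/seaweedfs and grafana's DB at /var/lib/grafana/grafana.db (adjust both paths to your setup):

# snapshot the sqlite DB safely, then copy the snapshot onto the seaweedfs mount
sqlite3 /var/lib/grafana/grafana.db ".backup '/tmp/grafana-backup.db'"
cp /tmp/grafana-backup.db /mnt/seaweedfs/backups/grafana/grafana-$(date +%F).db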

[-] nico@r.dcotta.eu 2 points 10 months ago* (last edited 10 months ago)

Good question! It depends, but TL;DR: imo it's worth it (or at least fine), and it's easy to try yourself and see.

most services in their docs will show how to deploy with kubernetes or docker, but rarely Nomad

You are absolutely correct, but I do find that for the large majority of things, either you can find a Nomad config online, or the Nomad config is easy enough to translate from the Docker compose one. Only some complicated, larger deployments (think Immich) are harder to translate, but even then it just takes some trial and error. I really do think the extra trouble of translating is very much worth the pain you save yourself by not deploying k8s. You might spend a bit longer typing out the Nomad job file yourself, but in exchange you are thankfully not maintaining a k8s cluster.
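
To give an idea of how mechanical the translation usually is, here is a rough sketch (the image, names and ports are just placeholders): a compose service with an image, a port mapping and an env var becomes a Nomad job along these lines:

job "whoami" {
  datacenters = ["dc1"]

  group "whoami" {
    # compose "ports: 8080:80" -> a network port mapping
    network {
      port "http" {
        static = 8080
        to     = 80
      }
    }

    task "whoami" {
      driver = "docker"

      # compose "image:" -> docker driver config
      config {
        image = "traefik/whoami"
        ports = ["http"]
      }

      # compose "environment:" -> env block
      env {
        TZ = "Etc/UTC"
      }
    }
  }
}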

As far as Nomad-specific documentation goes, I think the official docs are more than good enough.

You mentioned compatibility. So far I have not found anything I really wanted that was not possible to set up in Nomad. Nomad supports CNI and CSI, the same APIs k8s uses, so things that work there will work for Nomad too. Other things you would use with docker compose or k8s don't work with Nomad, but you don't need them (for example portainer or metrics exporters) because Nomad has them natively already (this blog discusses that).

As you can see I am pretty opinionated towards Nomad - I used it in prod at my previous job, I have been running it in my home-lab for a year now, and I am very happy with it. If you would like to read more I recommend this blog post. For Nomad on NixOS I wrote this one.

For now my advice is: just try Nomad yourself (it's as simple as running nomad agent -dev on your laptop), run the tutorial, and see if it is easy enough that you see yourself using it for the rest of your containers. If you need more help you are welcome to DM me :)
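
A minimal sketch of what that first try looks like with the standard CLI (the file name comes from nomad job init; it may be example.nomad.hcl on newer versions):

nomad agent -dev              # throwaway single-node cluster, UI at http://localhost:4646
nomad job init                # writes an example job spec (example.nomad) to the current dir
nomad job run example.nomad   # schedule it
nomad job status example      # watch it run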

[-] nico@r.dcotta.eu 2 points 10 months ago

I struggled a bit to get it up and running well, but now I am happy with it. It's not too hard to deploy (at least easier than the alternatives), it has CSI which for me was big, and it has erasure coding. The dev that maintains it (yes, the one dev) is very responsive.

It has trade-offs, so whether I recommend it depends on your needs. Backing store for stateful workloads like postgres DBs? Absolutely not. Large S3 store (with an option for a filesystem mount) for storing lots of files? Yes! In that regard it's good for stuff like Lemmy's pictrs or Immich. I use it as my own Google Drive. You can easily replicate it within your own cluster, or back it up to an external cloud provider. You can mount it via FUSE on your personal machine too.
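
The FUSE mount is one command; a sketch assuming a filer running at 10.10.0.5:8888 (swap in your own filer address and mount point):

weed mount -filer=10.10.0.5:8888 -dir=/mnt/seaweedfs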

Feel free to browse through my setup - if you have specific questions I am happy to answer them.

[-] nico@r.dcotta.eu 2 points 1 year ago* (last edited 1 year ago)

I think there are two approaches to infrastructure as code (and even code in general):

  • as steps (ansible, web UI like pihole...)
  • declarative (nix, k8s, nomad, terraform...)

Both should scale (in my company we use templating a lot) but I find the latter easier to debug, because you can 'see' the expected end result. But it boils down to personal preference really.

As for your case, ideally you don't write custom code to generate your template (I agree with you that it's tedious!); instead you use the templating tool of your framework of choice. You can see this example - it's for grimd (which I forked leng from) and Nomad, but it might be useful to you.
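
In Nomad's case that tool is the template stanza; a rough sketch of the idea (the image, config key, record values and reload signal are illustrative only - check leng's docs for the real ones):

task "leng" {
  driver = "docker"

  config {
    image = "leng"   # placeholder, use the actual leng image
  }

  template {
    destination   = "local/leng.toml"
    change_mode   = "signal"
    change_signal = "SIGUSR1"   # placeholder, use leng's actual reload signal
    data          = <<EOF
# one custom record per service registered in Consul, all pointing at an
# illustrative ingress IP - the key name may differ in leng's config
customdnsrecords = [
{{- range services }}
  "{{ .Name }}.home.lan IN A 10.10.0.1",
{{- end }}
]
EOF
  }
}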

P.S. I also added this to the docs on signal reloading here

[-] nico@r.dcotta.eu 1 points 1 year ago

Leng will cache each step of the recursion, and it also relies on upstream resolvers to do recursion for it (like grimd does), so you should not be seeing 200ms resolutions in any scenario.

I am keen for you to give it a shot - if you do, please open an issue if it's not behaving like you were hoping for.

[-] nico@r.dcotta.eu 1 points 1 year ago

I think the answer is yes (as leng is recursive) but can you explain your use-case and expected behaviour a bit so I can get a better idea of what you want unbound to do that blocky is not doing?

[-] nico@r.dcotta.eu 1 points 1 year ago

If you mean CNAME flattening, I have an issue open for it. If you mean recursively resolving a CNAME until the end record is found, it does support that.

For example, if you set a custom record mygoogle.lol IN CNAME google.com, Leng will return a response with an A record containing a google.com IP address when you visit mygoogle.lol.
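
You can check this yourself with dig; a quick sketch, assuming leng is listening on 127.0.0.1:53:

dig @127.0.0.1 mygoogle.lol A +short
# prints the CNAME target (google.com.) followed by one of its A records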

[-] nico@r.dcotta.eu 1 points 1 year ago

What you described is correct! How to replicate this will depend heavily on your setup.

In my specific scenario, I make the containers of all my apps use leng as their DNS server. If you use plain docker see here; if you use docker compose, you can do:

version: "2"
services:
  application:
    dns: [10.10.0.0] # address of the leng server here!

Personally, I use Nomad, so I specify that in the job file of each service.
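
For reference, a sketch of what that looks like in a Nomad job (the group name is a placeholder, the DNS server address matches the compose example above):

group "application" {
  network {
    mode = "bridge"

    # point the allocation's resolver at leng
    dns {
      servers = ["10.10.0.0"]
    }
  }
}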

Then I use wireguard as my VPN and, on my personal devices, I set the DNS field to the address of the leng server. If you would like more details I can document this approach better in leng's docs :). But like I said, the best way to do this won't be the same if you don't use docker or wireguard.

If you are interested in Nomad and calling services by name instead of IP, you can see this tangentially related blog post of mine as well

[-] nico@r.dcotta.eu 2 points 1 year ago

Including SRV records? I found that some servers (blocky as well) only support very basic CNAME or A records, without being able to specify parameters like TTL, etc.

I also appreciate being able to define this in a file rather than a web UI
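
For example, being able to write full zone-style records straight into the config file, something like this (the values are purely illustrative):

_ldap._tcp.example.lan. 3600 IN SRV 10 5 389 ldap.example.lan.
printer.example.lan.    300  IN A   10.10.0.42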

[-] nico@r.dcotta.eu 1 points 1 year ago
  • Can you show the diff with your previous WG config?
  • Is 10.11.12.0/24 also on enp3s0?

I am able to connect and can ping 10.11.12.77, the IP address of the server, but nothing else

Including the wider internet, if you set your phone's AllowedIPs to 0.0.0.0/0? This makes me think it's a problem with the NAT, not so much with wireguard. Also make sure ipv4 forwarding is enabled:

sysctl -w net.ipv4.conf.default.forwarding=1
sysctl -w net.ipv4.conf.enp3s0.forwarding=1
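
If forwarding is already on, the other usual culprit is a missing NAT rule on the server; a sketch, assuming your WG subnet is 10.11.13.0/24 and enp3s0 is the interface towards your LAN/internet:

iptables -t nat -A POSTROUTING -s 10.11.13.0/24 -o enp3s0 -j MASQUERADE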

Reading this article might help! I know this is not what you asked, but otherwise, my approach to accessing devices on my LAN is to also include them in the WG VPN - so that they all have an IP address on the VPN subnet (in your case 10.11.13.0/24). Bonus points for excluding your LAN guests from your selfhosted subnet.

[-] nico@r.dcotta.eu 1 points 1 year ago* (last edited 1 year ago)

Yep I am using traefik -> nginx. I simply add the traefik tags to the nginx service. I didn't include that in the example file to keep it simple.
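
Concretely, the tags go on the service block of the nginx task; a sketch with a hypothetical hostname:

service {
  name = "lemmy-nginx"
  port = "http"

  tags = [
    "traefik.enable=true",
    # replace with your own domain
    "traefik.http.routers.lemmy.rule=Host(`lemmy.example.com`)",
  ]
}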

As for the storage, I use SeaweedFS (it has a CSI plugin, it's really cool, and it works well with nomad), but as a CSI volume it's not suitable for backing postgres' filesystem: the lookups are so noticeably slower that your Lemmy instance will be laggy. So I decided to use a normal host volume instead, so the DB writes to disk directly, and you can back that up to an S3-compatible storage with this (also cool). Could be SeaweedFS, AWS, Backblaze...
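
A sketch of the host volume wiring, in case it helps (the paths and names are placeholders):

# on the Nomad client (agent config): expose a directory as a host volume
client {
  host_volume "postgres" {
    path      = "/opt/postgres-data"
    read_only = false
  }
}

# in the Lemmy job: claim the volume and mount it into the postgres task
group "postgres" {
  volume "pg" {
    type   = "host"
    source = "postgres"
  }

  task "postgres" {
    driver = "docker"

    volume_mount {
      volume      = "pg"
      destination = "/var/lib/postgresql/data"
    }

    config {
      image = "postgres:16"
    }
  }
}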

I think SeaweedFS is suitable for your pictrs storage though, be it through its S3 API (supported by pictrs) or through a SeaweedFS CSI volume that stores the files directly.

I hope that answers it! Do let me know what you end up with
