[-] koala@programming.dev 3 points 2 months ago

https://charity.wtf/2021/08/09/notes-on-the-perfidy-of-dashboards/

Graphs and dashboards can be useful for capacity planning or for spotting trends, but most likely you need neither.

If you want to know when something is down (and you might not need to know), set up alerts. (And do it well: you should only receive "actionable" alerts. And after setting up alerts, you should work on reducing how many actionable things you have to do.)

(I did set up Nagios to send performance data to ClickHouse, plotted with Grafana. But mostly because I wanted to learn a few things and... I was curious about network latencies and wanted to plan storage a bit longer term. I could live perfectly well without those.)

[-] koala@programming.dev 2 points 3 months ago

> But now with the end of Windows 10 looming, I need to upgrade a family member’s computer to Linux.

Why?

Did they ask for Linux? Do you have authority over them?

> So this needs to be something that both is not going to break on its own (e.g. while doing automatic updates) and also won’t be accidentally broken by the users. ... There’s no way I’m going to be able to handle long-distance tech support if things break more than once in a blue moon.

Issues will appear. I would focus more on setting up remote access than on choosing a distro.

I'd choose something LTS that has been around for a while (Debian, Ubuntu, RHEL-derivatives, SuSE if there's a freely-available LTS, etc.).

If you are not against the use of Google products, ChromeOS devices are about the best-designed, lowest-maintenance operating systems around. (Not ChromeOS Flex; an actual ChromeOS device.) But you would be sacrificing Firefox and LibreOffice, which might not be an option. (And technically, it's running a Linux kernel, if I remember correctly.)

[-] koala@programming.dev 3 points 3 months ago

I think Cloudflare Tunnels will require a different setup on k8s than on regular Linux hosts, but it's such a popular service among self-hosters that I have little doubt that you'll find a workable process.

(And likely you could cheat, and set up a small Linux VM to "bridge" k8s and Cloudflare Tunnels.)

Kubernetes is different, but it's learnable. In my opinion, K8S only comes into its own in a few scenarios:

  • Really elastic workloads. If you have stuff that scales horizontally (uncommon), you really can tell Amazon to give you more Kubernetes nodes when load grows, and destroy them when load drops. But this is not really applicable to self-hosting, IMHO.

  • Really clustered software. Setting up, say, a PostgreSQL cluster is a ton of work. But people create K8S operators that you feed a declarative configuration (I want so many replicas, I want backups at this rate, etc.) and that work everything out for you... in a way that works on every K8S implementation! This is also very cool, but I suspect there's not a lot of this in self-hosting.

  • Building SaaS platforms, etc. This is something that might be more reasonable to do in a self-hosting situation.
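To illustrate the operator bullet above: you describe the cluster you want and the operator does the work. The manifest below is purely hypothetical (invented `apiVersion`, `kind`, and field names, not any real operator's schema), just to show what "declarative configuration" looks like:

```yaml
# Hypothetical custom resource for an imaginary Postgres operator.
# Field names are illustrative, not a real CRD schema.
apiVersion: example.io/v1
kind: PostgresCluster
metadata:
  name: main-db
spec:
  replicas: 3               # "I want so many replicas"
  backup:
    schedule: "0 3 * * *"   # "I want backups at this rate"
    retention: 14d
```

You `kubectl apply` something like this, and the operator creates the pods, wires up replication, and schedules the backups.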

Like the person you're replying to, I also run Talos (as a VM in Proxmox). It's pretty cool. But in the end, I only run 4 apps there that I've written myself, so I'm using K8S as a kind of SaaS... plus one more application, https://github.com/avaraline/incarnator, which is basically distributed as container images and which I was too lazy to deploy in a more conventional way.

I also do this for learning. Although I'm not a fan of how Docker Compose is becoming dominant in the self-hosting space, I have to admit it makes more sense than K8S for self-hosting. But K8S is cool and might get you a cool job, so by all means play with it; maybe you'll have fun!

[-] koala@programming.dev 2 points 4 months ago

If you speak Spanish: a month or so ago I was pointed at https://foro.autoalojado.es/, which might be interesting for discussing the in-person stuff, although it doesn't seem to be reaching a critical mass of activity :(

[-] koala@programming.dev 3 points 4 months ago

Yup, came here to mention PaperWM. I used xmonad in the past, but I ran it on top of MATE to have an "easy" desktop environment.

Nowadays, Gnome extensions that provide tiling are the equivalent "easy" method. Gnome is not for everyone, but it works out of the box; then you add the fancy tiling window management on top.

For people who have bounced off systems that require much more setup, I think they are a good option.

[-] koala@programming.dev 2 points 5 months ago

Incus has a great selection of images that are ready to go, plus it gives you scripted access to VMs (and LXC containers) very easily; after `incus launch` to create a VM, `incus exec` can immediately run commands as root for provisioning.
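As a sketch of that launch-then-exec workflow (the image alias, VM name, and packages here are just examples):

```shell
# Create a VM (drop --vm for an LXC container) from Incus's image server
incus launch images:debian/12 dev-vm --vm

# Provision immediately: exec runs commands as root inside the guest
incus exec dev-vm -- apt-get update
incus exec dev-vm -- apt-get install -y nginx

# Or push a local provisioning script into the guest and run it
incus file push ./provision.sh dev-vm/root/provision.sh
incus exec dev-vm -- sh /root/provision.sh
```

That's the whole loop: no SSH keys or cloud-init needed just to get a root shell for scripting.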

[-] koala@programming.dev 3 points 6 months ago

IMHO, it really depends on the specific services you want to run. I guess you are most familiar with Docker and everything that you want to run has a first-class Docker image for it. It also depends on whether the services you want to run are suitable for Internet exposure or not (and how comfortable you are with the convenience tradeoff).

LXC is very different. Although you can run Docker nested within LXC, you gotta be careful: IIRC, some setups used to not work so well (maybe it works better now, but Docker nested in LXC on top of a ZFS file system used to be a problem).

I like that Proxmox + LXC + ZFS means it's ZFS file systems all the way down, which gives you a ton of flexibility. With VMs and volumes, you need to assign sizes to them and resize them when needed; with ZFS file systems you can set quotas, and changing a quota is much less fuss. But that would likely require much more effort from you. This is what I use, but I think it's not for everyone.
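To make the quota-vs-resize point concrete (dataset names below are made up for illustration):

```shell
# One ZFS file system per container; capping its size is a property change
zfs create tank/ct/nextcloud
zfs set quota=50G tank/ct/nextcloud

# "Resizing" later is just setting a new value, instantly, online
zfs set quota=80G tank/ct/nextcloud

# Compare with a zvol-backed VM disk: growing it means changing volsize
# AND then resizing the partition/file system inside the guest
zfs set volsize=80G tank/vm/somevm-disk0
```

The quota route has no guest-side step at all, which is the flexibility being described.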

[-] koala@programming.dev 3 points 7 months ago

Eh, my Nextcloud LXC container idles at less than 4.5% CPU usage ("max over the week" from Proxmox). I use PostgreSQL as the backend in a separate LXC container that peaks at 9% CPU usage, but is normally at 5% too.

I only have two users, though, and both containers have barely any I/O activity.

[-] koala@programming.dev 2 points 7 months ago

Web-accessible Emacs? What are you using?

[-] koala@programming.dev 2 points 7 months ago

YunoHost is a non-profit. Things could change, of course, but I'd fear more that YunoHost dies than it tries to monetize.

TrueNAS is backed by a for-profit company that so far has a good track record and looks pretty sustainable. Plus, while YunoHost might be a bit more troublesome to migrate away from, TrueNAS Scale is pretty much based on "open" things; their app catalog is basically Helm charts, for example.

Docker Compose is quite portable too, but if you are re-using YAML compose definitions from the Internet, or unofficial container images from third parties, there are also risks involved; not everything is easy to migrate! I prefer a very hands-on approach to my personal infra (I package some RPMs!), so I wouldn't personally use YunoHost, but I feel somewhat comfortable recommending it to others.

[-] koala@programming.dev 2 points 7 months ago

Nope, just tested. There are hardware OTP devices that have no Internet connectivity. As far as I know, all OTP protocols are offline-friendly.
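The offline-friendliness falls out of how TOTP works: both sides derive the code from a shared secret plus the current time, so no network is involved at all. A minimal sketch of RFC 6238 TOTP using only the Python standard library (variable names are mine):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 over the count of 30-second intervals
    since the Unix epoch, then RFC 4226 dynamic truncation."""
    key = base64.b32decode(secret_b32, casefold=True)
    now = time.time() if t is None else t
    counter = int(now // step)                      # time step counter T
    msg = struct.pack(">Q", counter)                # 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Against the RFC 6238 Appendix B test vector (the ASCII secret `12345678901234567890`, base32-encoded), `totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8)` yields `94287082`. Nothing in there touches the network, which is why hardware tokens and air-gapped authenticators work fine.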

[-] koala@programming.dev 2 points 7 months ago

My crazy idea is: write software so that Flatpaks can run on Windows and macOS. Plus, make high-quality Flatpak-building templates available for as many programming languages, UI toolkits, etc. as possible.

Because everything that Flatpak provides is OSS, making shims for Windows and macOS compatibility would be tedious, but doable.

The same goes for cross-compiling: compared to the difficulty of cross-compiling for Windows or macOS from any other OS, multiplatform Flatpaks should be feasible to cross-compile.

So this would lead to a world where a very convenient way to package for Windows and macOS... is creating a Flatpak that works on Linux!

