
What other approaches do folks use to deterministically customize Linux?

[-] bjoern_tantau@swg-empire.de 7 points 20 hours ago

What other approaches do folks use to deterministically customize Linux?

It's just not something I need or desire.

[-] onlinepersona@programming.dev 17 points 1 day ago* (last edited 1 day ago)

NixOS could be the future if it had a better community, documentation, and a user interface to manage the system. Right now, it's still completely unusable for even tech-literate folk. In fact, it's unusable for anyone without time to spare.

If NixOS is to become the future, it has to become more user friendly. Not only as a system but as a community. A community that ridicules people asking questions or responds with "just read the source code" might as well just continue believing in "self-documenting" code.

And let's not even dive into the closed-source source-forge dependency they have.

Anti Commercial-AI license

[-] ruffsl@programming.dev 5 points 15 hours ago

As a prior proponent of graphical programming interfaces, I've been thinking there'd be a good use case for a GUI-based control panel for NixOS: something that could transcompile standard user-selected options down to a Nix config that could be abstracted away from most users, much like a game save file.

Given that all options and packages in nixpkgs are already machine readable and indexed, supplying a GUI-based tool to procedurally generate Nix code doesn't initially seem daunting, but past discussions around this idea perhaps prove the contrary:

Although SnowflakeOS in particular looks promising:

SnowflakeOS: Simple, Immutable, Reproducible. SnowflakeOS is a NixOS-based Linux distribution focused on beginner friendliness and ease of use.

https://snowflakeos.org/

[-] onlinepersona@programming.dev 3 points 4 hours ago* (last edited 4 hours ago)

I think SnowflakeOS is dead. The NixOS community didn't really get behind it, for whatever reason. My guess is that many people don't understand enough about NixOS to contribute, while those who do are so proud of their ability to understand it that making it easier for others would seemingly devalue that ability. I have noticed that many nerds attach a sense of self-worth to understanding difficult things, and will fight tooth and nail when those things are made simpler, as it diminishes that self-worth.

Given that all options and packages in nixpkgs are already machine readable and indexed, supplying a GUI-based tool to procedurally generate Nix code

Nix can create attribute sets from JSON, so there isn't a need to generate Nix code. Projects like npins do this, albeit for another purpose (locking dependencies without flakes).
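
For example, a minimal sketch (the JSON shape here is made up):

```nix
# Nix can consume machine-generated JSON directly, so a GUI tool
# could emit plain JSON rather than generating Nix code.
let
  json = ''{ "hostname": "desktop", "packages": [ "btop", "git" ] }'';
  settings = builtins.fromJSON json;
in
  settings.hostname  # evaluates to "desktop"
```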

Anyway, NixOS is light-years away from a noob-friendly OS because it's also terribly far away from a dev-friendly OS.

Anti Commercial-AI license

[-] ruffsl@programming.dev 2 points 3 hours ago

Ah, that's a shame. Thanks for the context though.

I did feel a little of that dismissal or elitism in the thread I linked above about the graphical installer ISO. Although I think the relative surge of new users after the graphical ISO's implementation did end up changing some minds on the merits of its continued development.

It seems like some tools never fully realize their potential market demand until they're finally implemented and consequently adopted. Quite the catch-22.

I also wonder if there's a bit of a motivational aspect for individual contributors, in that the demand would mostly originate from novice users who've yet to master the Nix language, yet by the time one has gained enough experience to contribute to SnowflakeOS, one has kind of outgrown the need for it. That reflects my own experience with graphical programming: as I became more familiar with various languages, my inclination toward a graphical representation of control flow gradually waned.

Still, I think lowering the barrier to adoption best serves the community in the long run and helps sustain new contributors. Sort of like the old Greek proverb:

A society grows great when old men plant trees whose shade they know they shall never sit in.


Nix can create attribute sets from JSON, so there isn't a need to generate Nix code.

Is there a good way of mixing and mashing JSON attribute sets with conventional Nix config files? Perhaps relegating some config to machine-generated JSON, while keeping other configs hand-crafted?
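
Something like this, perhaps (an untested sketch; gui-settings.json is a hypothetical file such a tool might emit)?

```nix
# Merging machine-generated JSON with hand-written Nix in one module.
{ lib, pkgs, ... }:
let
  guiSettings = lib.importJSON ./gui-settings.json;
in
{
  # Values sourced from the machine-generated JSON...
  networking.hostName = guiSettings.hostname;
  environment.systemPackages = map (name: pkgs.${name}) guiSettings.packages;

  # ...alongside ordinary hand-crafted config.
  services.openssh.enable = true;
}
```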

[-] Auth@lemmy.world 14 points 1 day ago

I think ublue will win over Nix in the long run. Its layered approach to system design seems far more sane. Being able to take a base image, apply gaming patches over it, then apply your own personal layer is such a great way for multiple people to build off the same base and not have to reinvent the wheel every time.

NixOS is more fun for the user from a tinkering point of view, but ublue is better for distributors and for non-tinkering end users.

[-] ruffsl@programming.dev 6 points 1 day ago

Oh, neat! Is this the project you're referring to?

Looks like Bazzite is listed as an example derivative image. I've heard good things about that OS from newer Linux users' perspectives. But is ublue something an individual user could personally customize, or is it more like something a development team or community project would build up from?

The landing page references layers and the Open Container Initiative, so is this more like a bootable container using overlay filesystem drivers?

One attraction I appreciate with Nix is the ability to overlay or override default software options from base packages without having to repeat/redefine everything else upstream, e.g. enabling Nvidia support in btop to visualize GPU utilization via a simple CUDA flag. Replicating that level of lazy evaluation with something like BuildKit ARGs would be hectic, so do they have their own Dockerfile/Containerfile DSL?
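
For reference, that kind of tweak is a small overlay in Nix (a sketch, assuming nixpkgs' btop exposes a cudaSupport flag):

```nix
# Override one build flag without redefining the rest of the
# derivation; everything else is inherited from the base package.
final: prev: {
  btop = prev.btop.override { cudaSupport = true; };
}
```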

[-] Neptr 2 points 15 hours ago

Individuals can make their own custom images using BlueBuild and templates for the starting image.

[-] ruffsl@programming.dev 1 points 14 hours ago

Oh, I see. Looks like one can use this method to create custom forks of downstream images such as Bazzite:

https://docs.bazzite.gg/Advanced/creating_custom_image/

[-] boredsquirrel@slrpnk.net 15 points 1 day ago

NixOS is great, when it works

[-] HelloRoot@lemy.lol 14 points 1 day ago* (last edited 1 day ago)

and also only once you've invested the multiple weekends it takes to migrate your whole setup and config to a completely new syntax/concept, and invested the necessary time and brainpower to learn everything related.

[-] Oinks 4 points 20 hours ago

That's not entirely true, unless you choose to nixify everything. You can just have a basic Nix configuration that installs whatever programs you need, then use stow (or whatever symlink manager you prefer) to manage the rest of your config.

You can't entirely forget that you're on NixOS because of FHS noncompliance but even then getting nix-ld to work doesn't require a lot of effort.
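
For reference, a minimal sketch (the library list is just an illustration; extend it per binary):

```nix
{ pkgs, ... }:
{
  # nix-ld lets unpatched dynamic binaries resolve shared libraries
  # that NixOS doesn't place in the usual FHS locations.
  programs.nix-ld.enable = true;
  programs.nix-ld.libraries = with pkgs; [
    zlib
    openssl
    stdenv.cc.cc  # libstdc++ for typical C++ binaries
  ];
}
```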

[-] ruffsl@programming.dev 4 points 16 hours ago

nix-ld has been really helpful. I wish there were some automated tools where you could feed it a binary, or a directory of binaries, and it would just return all of the Nix package names you should include with nix-ld.

Also, if there were some additional flags to filter out packages made redundant by overlapping recursive dependencies, or to suggest a decently scoped meta-package to start with for desktop environments, that'd be handy.

[-] phaer@programming.dev 3 points 14 hours ago

I wish there were some automated tools where you could feed it a binary, or a directory of binaries, and it would just return all of the Nix package names you should include with nix-ld.

https://github.com/Lassulus/nix-autobahn and specifically its nix-autobahn-find-libs comes pretty close at least? Were you aware of that already and is there something missing?

[-] ruffsl@programming.dev 3 points 13 hours ago

Indeed, I was unaware of this project. The commit history looks inactive, but I'm guessing it's feature-complete? Looks like someone has rewritten it with an added TUI:

[-] phaer@programming.dev 2 points 13 hours ago

I'd say it's fairly feature-complete, but not super polished, as are so many one-person projects. I still find it very useful; the author is also still active in the community and pretty responsive :)

[-] boredsquirrel@slrpnk.net 5 points 1 day ago* (last edited 1 day ago)

Pretty accurate

But it is worth it for me

Also if people honestly try to help and share understandable configs, it is way easier. Some people escalate quite a bit and make a computer program from their configs XD

codeberg.org/boredsquirrel/NixOS-Config

[-] ruffsl@programming.dev 7 points 1 day ago

It's a steep learning curve, but because much of the community publishes their personal configs, I find it a lot simpler to browse public code repos with complete declarative examples to achieve a desired setup than it is to follow meandering tutorials that subtly gloss over steps or omit prerequisite assumptions and initial conditions.

There are also plenty of outcroppings and plateaus buttressing the learning cliff that one can comfortably camp at. Once you've got a working MVP to boot and play from, you can experiment and explore at your own pace, and just reach for escape hatches like dev containers, Flatpaks, or AppImages when you don't feel like the juice is worth the squeeze just yet.

[-] sukhmel@programming.dev 2 points 23 hours ago

The community publishing their configs sometimes confuses things even more, because everyone does the same things differently, and some approaches are deprecated, and some are experimental; I was lost more than once while trying to make sense of it all.

I like Nix, and I use it on my Mac and in production for cross-compiling a service, but man, is it a pain to fix issues. And that's beside the fact that, for some reason, Nix behaves a bit differently on my machine and on my co-workers', when the only thing I wanted from it was to be absolutely reproducible.

[-] ruffsl@programming.dev 2 points 15 hours ago

Yep, with a Turing-complete DSL, there's never just one way to do something in Nix. I find the interaction between modules and overlays particularly quirky, and tricky to replicate from public configs that make advanced use of both.

That said, I do appreciate being able to git blame public configs, as most will include insightful commit messages or references to ticketed issues with more discussion from informed community members you can follow up with. Being able to peek at how others fixed something before and after helps give context, and since the commits are timestamped, it also helps gauge current relevance and establish a chronological order to correlate with upstream changelogs.

Are you using flakes with lock files, or npins to pin down the hashes of your Nix channel inputs? I like pinning my machines to the same exact inputs so that my desktop can serve as a warm local cache when upgrading my laptop.
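
For context, mine looks roughly like this (a simplified sketch; the host module paths are hypothetical):

```nix
# One flake.lock pins every host to the same nixpkgs revision, so a
# freshly upgraded desktop can act as a warm cache for the laptop.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }: {
    nixosConfigurations = {
      desktop = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [ ./hosts/desktop.nix ];
      };
      laptop = nixpkgs.lib.nixosSystem {
        system = "x86_64-linux";
        modules = [ ./hosts/laptop.nix ];
      };
    };
  };
}
```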

[-] sukhmel@programming.dev 2 points 13 hours ago

Personally I use flakes.

At work we use an abomination that creates flake.lock but then parses it and uses it to pin versions. It took me a while to realise that this is why setting a flake input to something local never seemed to have any effect, for instance.
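
For reference, the kind of local override that got silently ignored looks something like this (the path is hypothetical):

```nix
# Normally this redirects a flake input to a local checkout; with the
# lock file being parsed out-of-band, it never took effect for us.
{
  inputs.internal-lib.url = "path:/home/me/src/internal-lib";
}
```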

[-] ruffsl@programming.dev 1 points 13 hours ago

I'm using flakes as well, so that abomination sounds terrifying...

[-] sukhmel@programming.dev 3 points 13 hours ago

I think it's based on an old flake-compat package or something. It's not inherently bad, but it displays what I dislike most about Nix's design: it's very opaque and magical until you go out of your way to understand it.

The globals are another example of this. I know I can do `with something; [ other ]`, but I am never sure if `other` comes from `something` or not. And if it's a package parameter, the values also come seemingly out of nowhere.
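
A concrete case of the scoping I mean:

```nix
# let-bound names silently shadow attributes introduced by `with`:
let
  other = "from-let";
  something = { other = "from-attrset"; };
in
  with something; [ other ]
# evaluates to [ "from-let" ], not [ "from-attrset" ]
```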

[-] dustyData@lemmy.world 10 points 1 day ago

And your extraordinary result after all that is… exactly what you would've gotten in a few minutes downloading another distro.

[-] ruffsl@programming.dev 5 points 1 day ago

However, you then don't have to remember every change you made when you eventually migrate to a new machine or replicate your setup across your laptop and desktop while keeping them synchronized. It takes me a few hours to set up and verify that everything is how I need it on a normal distro, though that may be a byproduct of my system requirements. Re-patching and packaging kernel modules on Debian for odd hardware is not fun, nor is manually fixing udev and firewall rules for the same projects again and again.

[-] dinckelman@lemmy.world 8 points 1 day ago

This is what people don't fully understand. Last week I was setting up a new machine. All it took was one command, and it was in a fully identical state to my main machine not even 10 minutes later. No manual dotfiles, no install scripts, no anything.

[-] Sxan@piefed.zip 3 points 1 day ago

Þis is such an interesting use case which I completely don't understand.

Every time I set up a new machine, it has different configurations. I'm not setting up postfix or Caddy on every server I stand up; I certainly don't want all of þe software I install on my desktop to be installed on my servers, and my desktop has a wildly different configuration þan my laptop (which is optimized for battery life). Even in corporate, "cloning" systems are an exception raþer þan a rule, IME.

I have an rsync config for þe few $HOME þings þat get cloned, but most of þose experience drift based on demands of þe system. Sure, .gnupg and .ssh are invariable, but .zshrc and even .tmux.conf are often customized for þe machine. Oþer þan þat, þere are only a handful of software packages I consistently install everywhere: yay, helix, zsh, mosh, tmux, ripgrep, fd, gnupg, Mercurial, and Go. I mean, maybe a couple more, but no more þan a dozen; I've never felt a need for an entire OS to run a single yay -S command.

Firewalls differ on nearly every machine. Wireguard configs absolutely differ on every machine. Þe differences are more common þan þe similarities.

I completely believe þat you find cloning useful; I struggle to imagine how, where puppet wouldn't work better. Can you clarify how your environment benefits from cloning like þis? I feel as if I'm missing a key puzzle piece.

[-] ruffsl@programming.dev 2 points 16 hours ago

Let's say you're building a gaming desktop, and after a day of experimentation with Steam, Wine, and mods, you finally have everything running smoothly except for HDR and VRR. While you still remember all your changes, you commit your setup commands to a Puppet or Chef config file. Later you use Puppet to clone your setup onto your laptop, only to realize that installing gamescope and some other random packages was the source of the VRR issues, as your DE also handles HDR fine natively. So you remove them from the package list in the Puppet file, but then you have to express some complex logic to opportunistically remove the set of conflicting packages if already installed, so that you don't have to manually fix every machine you apply your Puppet script to. Rinse and repeat for every other small paper cut.

I find a declarative DSL easier for managing system state than a sequence of instructions applied from arbitrary initial conditions, as removing a package or module from a Nix config effectively reverts it from your system, making experimentation much simpler and free of unforeseen side effects. I don't even use Nix to manage my home directory or dotfiles yet; simply having a deterministic system install to build on top of has been helpful enough.
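
In the gaming scenario above, reverting is just deleting a line (an illustrative sketch):

```nix
{ pkgs, ... }:
{
  # Removing a package from this list and rebuilding effectively
  # reverts it from the system; no uninstall logic to write.
  environment.systemPackages = with pkgs; [
    mangohud
    # gamescope  # dropped once it turned out to break VRR
  ];
}
```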

[-] Sxan@piefed.zip 1 points 15 hours ago

Interesting. I mostly handle þis sort of stuff wiþ a combination of snapper and Stow. I can see how you might prefer doing all of þat work up front, þough.

[-] dinckelman@lemmy.world 4 points 22 hours ago

You have another misconception entirely misleading your understanding of what's possible here. Just because I said I've set up an exact clone, it doesn't mean that's the only way to set it up. My configuration manages 6 different machines, all with different options.

[-] dustyData@lemmy.world 4 points 1 day ago

I was mostly joking, of course. I appreciate the use case. It's just that 99% of people are spinning up new machines once every decade. Having a reproducible setup is of interest to a very narrow band of system managers.

I truly believe that for those who are spinning up new hardware every day and need an ideal setup every time, a system image is far more practical, with much more robust tooling available. I've read the other replies, and for all of them I notice that using Universal Blue to package and deploy a system image would take a tiny fraction of the time it takes just to learn Nix's basic syntax. It's so niche it seems almost not worth the effort to learn.

[-] sukhmel@programming.dev 2 points 13 hours ago

Sometimes it's also about updates: rolling back a failed update is much simpler with Nix, even if that took some elaborate set-up. This might not be wildly useful, but it happens more often than spinning up a new machine entirely.

[-] ruffsl@programming.dev 2 points 16 hours ago

I think the other 99% would appreciate having some deterministic config too, and not necessarily by using Nix either.

I'm kind of perplexed as to why no other distro has already supported something similar. Instead of necessitating filesystem-level disk snapshots, if the OS is already fully aware of which packages the user has installed, which cron jobs and systemd services they've scheduled, and their desktop environment settings and XDG dotfiles, then any Debian- or Fedora-based distro could already export something like an archive tarball that encapsulates 99% of your system, and would still probably fit on a floppy disk. Users could back that file up regularly with their other photos and documents, simplifying system restoration if their laptop is ever stolen or their hard drive crashes.

I think Apple and Android ecosystems already support this level of system restoration natively, and I think it'd be cool if Linux desktops in general could implement the same user ergonomics.

[-] dustyData@lemmy.world 1 points 13 hours ago

That would be super rad. But it is also the kind of thing that only a tiny group of people like us enjoy tinkering with. The average computer user has no interest whatsoever in being a sysadmin. If the service is offered and neatly packaged, they will use and enjoy it. But Nix manages to be even more user-hostile than old-style package management.

[-] gudu@programming.dev 3 points 1 day ago

Same story. The SSD of my work machine crashed, and after the replacement I was ready for work, with everything customized and configured, 30 minutes later.

A new node for my cluster arrives? 30 minutes later the new one is set up and integrated into my k8s home setup, reusing complete profiles combined with files for hardware specifics.

I can even upgrade major versions fearlessly, and I've had zero problems over the last few years.

[-] boredsquirrel@slrpnk.net 1 points 1 day ago

This makes no sense

[-] hisao@ani.social 7 points 1 day ago

For me, NixOS feels like something from the 2010s. I used it a bit about a decade ago. It’s great and powerful, but still pretty niche and not for everyone. Right now I’m on Bazzite, which seems to aim for the same goals but in a much easier and more forgiving way.

If I really need to overlay something onto the system, I can use rpm-ostree, but that’s rare since almost everything I need runs fine in toolbox or distrobox. Using those is super easy and forgiving—it’s basically like having super-efficient containers where you can mess around without worrying about breaking the host OS.

Personally, I mostly stick to a single Ubuntu distrobox, where I build graphical/audio/gaming apps from source and just launch them directly from the container; they work perfectly. Distrobox feels like having as many Debians, Arch installs, or Fedoras as you want, all running at near-native efficiency. Toolbox is similar, but I use it more for system-level stuff that would otherwise require rpm-ostree, like being able to run dnf in a sandboxed way that can't mess anything up.

[-] ruffsl@programming.dev 3 points 1 day ago

How does distrobox implement display forwarding? Does it support Wayland, or is it using bind mounts for xauth and X11 unix sockets?

What approach does it use for hardware acceleration? Does it abstract over the Open Container Initiative's plugin system, e.g. the Nvidia Container Toolkit or AMD's equivalent?

Is it inconvenient if any of your applications use shared memory, like many middleware transports used for robotics or machine learning software development?

I'm more familiar with plain docker and dev containers, but am interested in learning more about distrobox (and toolbox?) as another escape hatch while working with NixOS.

[-] hisao@ani.social 3 points 21 hours ago

Distrobox uses bind mounts by default to integrate with the host: X11 and Wayland sockets for display, PulseAudio/PipeWire sockets for audio, /dev/dri for GPU acceleration, and /dev/shm for shared memory. On NVIDIA systems it relies on the standard NVIDIA container toolkit, while AMD/Intel GPUs just work with Mesa. Compared to plain Docker, where you usually have to manually mount X11/Wayland sockets, Pulse/PA sockets, /dev/shm, and GPU devices, Distrobox automates all of this so GUI, audio, and hardware-accelerated apps run at near-native efficiency out of the box. Toolbox works the same way but is more tailored for Fedora/rpm-ostree systems, while Distrobox is distro-agnostic and more flexible.

[-] ruffsl@programming.dev 2 points 16 hours ago

Thank you for the detailed reply, much appreciated!

Any rough edges you've encountered yet? Like using USB peripherals, or networking shenanigans? I'm assuming it's using the host network driver by default, and maybe bind-mounting /dev/bus/usb for USB passthrough?

Think I'll really dig into distrobox today.

[-] hisao@ani.social 2 points 15 hours ago

Any rough edges you’ve encountered yet?

No problems so far, but I haven't tried anything USB-related. Two of the more interesting programs I actively use it for are an Ubuntu distrobox for Ultimate Doom Builder (a level editor; works with the GPU) and a toolbox for natpmpc (a port-forwarding utility). I made a systemd service on my host system that calls toolbox run natpmpc -a 1 0 tcp 60 -g "$GATEWAY" 2>/dev/null in a loop to keep port-forwarding established for my ProtonVPN connection (running on the host, of course), parses the assigned port, and calls qBittorrent's web API to set the forwarded port there.
