Why NixOS is the Future - YouTube
(youtube.com)
For me, NixOS feels like something from the 2010s. I used it a bit about a decade ago. It’s great and powerful, but still pretty niche and not for everyone. Right now I’m on Bazzite, which seems to aim for the same goals but in a much easier and more forgiving way.
If I really need to overlay something onto the system, I can use rpm-ostree, but that’s rare since almost everything I need runs fine in toolbox or distrobox. Using those is super easy and forgiving—it’s basically like having super-efficient containers where you can mess around without worrying about breaking the host OS.
Personally, I mostly stick to a single Ubuntu distrobox, where I build graphical/audio/gaming apps from source and just launch them directly from the container—they work perfectly. Distrobox feels like having as many Debians, Arch installs, or Fedoras as you want, all running at near-native efficiency. Toolbox is similar, but I use it more for system-level stuff that would otherwise require rpm-ostree, like being able to run dnf in a sandboxed way that can't mess anything up.
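For anyone who hasn't tried it, the day-to-day workflow is just a few commands (the container name and image here are only examples):

```
# Create a persistent Ubuntu container integrated with the host
distrobox create --name ubuntu-box --image ubuntu:24.04

# Drop into a shell inside it; $HOME, display, audio, etc. are shared
distrobox enter ubuntu-box

# From inside the box, after installing an app with apt or building it
# from source, you can expose it to the host's application menu:
distrobox-export --app some-app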
How does distrobox implement display forwarding? Does it support Wayland, or is it using bind mounts for xauth and X11 unix sockets?
Which approach does it use for hardware acceleration? Does it abstract over the Open Container Initiative's plugin system, e.g. the NVIDIA Container Toolkit or AMD's equivalent?
Is it inconvenient if any of your applications use shared memory, like many middleware transports used for robotics or machine learning software development?
I'm more familiar with plain docker and dev containers, but am interested in learning more about distrobox (and toolbox?) as another escape hatch while working with NixOS.
Distrobox uses bind mounts by default to integrate with the host: X11 and Wayland sockets for display, PulseAudio/PipeWire sockets for audio, /dev/dri for GPU acceleration, and /dev/shm for shared memory. On NVIDIA systems it relies on the standard NVIDIA container toolkit, while AMD/Intel GPUs just work with Mesa. Compared to plain Docker, where you usually have to manually mount X11/Wayland sockets, Pulse/PA sockets, /dev/shm, and GPU devices, Distrobox automates all of this so GUI, audio, and hardware-accelerated apps run at near-native efficiency out of the box. Toolbox works the same way but is more tailored for Fedora/rpm-ostree systems, while Distrobox is distro-agnostic and more flexible.
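For comparison, here's roughly what the equivalent manual setup looks like with plain docker (paths, environment variables, and the image name are illustrative; the exact set varies by distro and compositor):

```
# X11/Wayland sockets, audio sockets, the GPU device, and shared
# memory all have to be wired up by hand:
docker run -it --rm \
  --net=host --ipc=host \
  -e DISPLAY -e WAYLAND_DISPLAY -e XDG_RUNTIME_DIR \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v "$XDG_RUNTIME_DIR":"$XDG_RUNTIME_DIR" \
  --device /dev/dri \
  ubuntu:24.04 bash
```

Distrobox effectively generates something like this (plus home-directory and user integration) for you when you run `distrobox create`.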
Thank you for the detailed reply, much appreciated!
Any rough edges you've encountered yet? Like using USB peripherals, or networking shenanigans? I'm assuming it uses the host network driver by default, and maybe bind-mounts
/dev/bus/usb
for USB passthrough? Think I'll really dig into distrobox today.
No problems so far, but I haven't tried anything USB-related. Two of the more interesting programs I actively use it for are an Ubuntu distrobox for Ultimate Doom Builder (a level editor; works with the GPU) and a toolbox for natpmpc (a port-forwarding utility). I made a systemd service on my host system that calls
toolbox run natpmpc -a 1 0 tcp 60 -g "$GATEWAY" 2>/dev/null
in a loop to establish port-forwarding for my ProtonVPN connection (running on the host, of course), parses the assigned port, and calls qBittorrent's Web API to set the forwarded port there.
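The parsing step could look something like this sketch. The exact natpmpc output line ("Mapped public port ...") and the qBittorrent endpoint (`/api/v2/app/setPreferences`) are assumptions based on typical setups, so adjust to what your versions actually emit:

```shell
#!/bin/sh
# Sketch: extract the port natpmpc assigned and push it to qBittorrent.

# Parse "Mapped public port NNNNN protocol TCP ..." (assumed format)
# from natpmpc output; prints just the public port number.
parse_natpmp_port() {
    awk '/Mapped public port/ { print $4; exit }'
}

# In the real service this would be something like:
#   PORT=$(toolbox run natpmpc -a 1 0 tcp 60 -g "$GATEWAY" | parse_natpmp_port)
#   curl -s -X POST "http://localhost:8080/api/v2/app/setPreferences" \
#        --data-urlencode 'json={"listen_port": '"$PORT"'}'

# Self-contained demo on a sample output line:
echo 'Mapped public port 61234 protocol TCP to local port 12345 lifetime 60' \
    | parse_natpmp_port
```

Running the demo prints `61234`; in the loop you'd only re-push the port to qBittorrent when it changes.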