
Some folks on the internet were interested in how I had managed to ditch Docker for local development. This is a slightly overdue write up on how I typically do things now with Nix, Overmind and Just.

[-] CodeBlooded@programming.dev 23 points 1 year ago

You can have my Docker development environment when you pry it from my cold dead hands!

[-] chickenf622@sh.itjust.works 6 points 1 year ago

And keep it that way! We need people on both sides to further spur progress. Plus I'm jealous cause I still don't have a firm grasp on docker.

[-] CodeBlooded@programming.dev 5 points 1 year ago

Fair!

Python, and its need for virtual environments, is what really drove me to master Docker.

[-] astral_avocado@programming.dev 18 points 1 year ago

I wish he had written why he's so anti-container/docker. That's a pretty unusual stance I haven't been exposed to yet.

[-] LGUG2Z@lemmy.world 18 points 1 year ago

Hi!

First I'd like to clarify that I'm not "anti-container/Docker". 😅

There is a lot of discussion on this article (with my comments!) going on over at Tildes. I don't wanna copy-paste everything from there, but I'll share the first main response I gave to someone who had very similar feedback to kick-start some discussion on those points here as well:

Some high level points on the "why":

  • Reproducibility: Docker builds are not reproducible, and especially in a company with more than a handful of developers, it's nice not to have to worry about a docker build command in the on-boarding docs failing inexplicably (from the POV of the regular joe developer) from one day to the next

  • Cost: Docker licenses for most companies now cost $9/user/month (minimum of 5 seats required) - this is very steep for something that doesn't guarantee reproducibility and has poor performance to boot (see below)

  • Performance: Docker performance on macOS (and Windows), especially storage mount performance remains poor; this is even more acutely felt when working with languages like Node where the dependencies are file-count heavy. Sure, you could just issue everyone Linux laptops, but these days hiring is hard enough without shooting yourself in the foot by not providing a recent MBP to new devs by default

I think it's also worth drawing a line between containers as a local development tool and containers as a deployment artifact, as the above points don't really apply to the latter.
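For context on the approach from the article: a dev environment of this kind is typically declared in a flake.nix at the project root. This is an illustrative sketch only (the package names and platform are assumed, not taken from the article):

```nix
# Minimal flake.nix dev shell sketch: pinned nixpkgs input, a dev shell
# providing the tools the article mentions (Overmind, Just) plus a runtime.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-23.05";

  outputs = { self, nixpkgs }:
    let
      pkgs = nixpkgs.legacyPackages.x86_64-linux;
    in {
      devShells.x86_64-linux.default = pkgs.mkShell {
        packages = [ pkgs.nodejs_18 pkgs.overmind pkgs.just ];
      };
    };
}
```

Entering the shell with `nix develop` then gives every developer the same pinned toolchain, which is the reproducibility point above.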

[-] Hexarei@programming.dev 14 points 1 year ago

If your dev documentation includes your devs running docker build, you're doing docker wrong.

The whole point is that you can build a working container image and then ship it to a registry (including private registries) so that your other developers/users/etc don't have to build them and can just run the existing image.

Then for development, you simply use a bind mount to ensure your local copy of the code is available in the container instead of the copy the container was built with.

That doesn't solve the performance issues on Windows and Mac, but it does prevent the "my environment is broke" issues that Docker is designed to solve.
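The prebuilt-image-plus-bind-mount workflow described above might look like this (hypothetical registry, paths, and ports):

```yaml
# docker-compose.yml sketch: the image is pulled from a registry,
# never built by individual developers.
services:
  app:
    image: registry.example.com/team/app:latest
    volumes:
      - ./:/app          # bind mount: the local checkout shadows the copy baked into the image
    ports:
      - "3000:3000"
```

With this, `docker compose up` runs the shared image against whatever code is currently on disk.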

[-] LGUG2Z@lemmy.world 2 points 1 year ago

The whole point is that you can build a working container image and then ship it to a registry (including private registries) so that your other developers/users/etc don’t have to build them and can just run the existing image.

Agreed, we still do this in the areas where we use Docker at day job.

I think the mileage with this approach can vary depending on the languages in use and the velocity of feature iteration (ie. if the company is still tweaking product-market fit, pivoting to a new vertical, etc.).

I've lost count of the number of times where a team decides they need to npm install something with a heavy node-gyp step to build native modules which require yet another obscure system dependency that is not in the base layer. 😅

[-] firelizzard@programming.dev 13 points 1 year ago

Cost: Docker licenses for most companies now cost $9/user/month

Are you talking about Docker Desktop and/or Docker Hub? Because plain old docker is free and open source, unless I missed something big. Personally, I've never had much use for Docker Desktop, and I use GitLab, so I have no reason to use Docker Hub.

[-] LGUG2Z@lemmy.world 6 points 1 year ago

I believe this is the Docker Desktop license pricing.

On an individual scale and even some smaller startup scales, things are a little bit different (you qualify for the free tier, everyone you work with is able to debug off-the-beaten-path Docker errors, knowledge about fixes is quick and easy to disseminate, etc.), but the context of this article and the thread on Mastodon that spawned it was a "unicorn" company with an engineering org comprised of hundreds of developers.

[-] firelizzard@programming.dev 9 points 1 year ago

My point is that Docker Desktop is entirely optional. On Linux you can run Docker Engine natively, on Windows you can run it in WSL, and on macOS you can run it in a VM with Docker Engine, or via something like hyperkit and minikube. And Docker Engine (and the CLI) is FOSS.

[-] LGUG2Z@lemmy.world 5 points 1 year ago

I understood your point, but at the context and scale of hundreds of developers, who mostly don't have any real Docker knowledge (let alone enough to set up and maintain alternatives to Docker Desktop) and who work almost exclusively on macOS, the only practical option becomes paying the licensing fees to enable the path of least resistance.

[-] mundane@feddit.nu 10 points 1 year ago* (last edited 1 year ago)

We are over 1000 developers and use Docker CE just fine. We use a self-hosted repository for our images, and IT configures new computers to use this internal Docker repository by default, so new employees don't even have to know about it to do their first docker build.

We all use Linux on our workstations and laptops. That might make it easier.

[-] LGUG2Z@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

We all use Linux on our workstations and laptops. That might make it easier.

You are living my dream!

I think this is the key piece; the experience of Docker on Linux (including WSL, if it's not hooking into Docker Desktop on Windows) and on macOS is just so wildly different when it comes to performance, reliability and stability.

[-] CodeBlooded@programming.dev 11 points 1 year ago

Docker builds are not reproducible

What makes you say that?

My team relies on Docker because it is reproducible…

[-] LGUG2Z@lemmy.world 5 points 1 year ago

Highly recommended viewing if you'd like to learn more about the limits of reproducibility in the Docker ecosystem.

[-] PipedLinkBot@feddit.rocks 3 points 1 year ago

Here is an alternative Piped link(s): https://piped.video/watch?v=pfIDYQ36X0k

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source, check me out at GitHub.

[-] CodeBlooded@programming.dev 1 points 1 year ago

I’m going to give it a watch. Thanks for sharing!

[-] uthredii@programming.dev 3 points 1 year ago* (last edited 1 year ago)

You might be interested in this article that compares nix and docker. It explains why docker builds are not considered reproducible:

For example, a Dockerfile will run something like apt-get update as one of the first steps. Resources are accessible over the network at build time, and these resources can change between docker build commands. There is no notion of immutability when it comes to source.

and why nix builds are reproducible a lot of the time:

Builds can be fully reproducible. Resources are only available over the network if a checksum is provided to identify what the resource is. All of a package's build time dependencies can be captured through a Nix expression, so the same steps and inputs (down to libc, gcc, etc.) can be repeated.

Containerization has other advantages though (security) and you can actually use nix's reproducible builds in combination with (docker) containers.
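A rough sketch of that difference (the URL is a placeholder, not from the article): where a Dockerfile can freely fetch mutable resources at build time, a Nix fetch must declare a content hash up front, and the build fails on any mismatch.

```nix
# Hypothetical fetch inside a Nix expression: the sha256 pins the exact
# bytes, so a silently changed upstream tarball fails the build loudly.
pkgs.fetchurl {
  url = "https://example.com/some-dependency-1.2.3.tar.gz";
  # lib.fakeSha256 is the conventional placeholder; the first build fails
  # and reports the real hash, which you then paste in.
  sha256 = pkgs.lib.fakeSha256;
}
```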

[-] nickwitha_k@lemmy.sdf.org 10 points 1 year ago

That seems like an argument for maintaining a frozen repo of packages, not against containers. You can only have a truly fully-reproducible build environment if you setup your toolchain to keep copies of every piece of external software so that you can do hermetic builds.

I think this is a misguided way to workaround proper toolchain setup. Nix is pretty cool though.

[-] uthredii@programming.dev 4 points 1 year ago* (last edited 1 year ago)

That seems like an argument for maintaining a frozen repo of packages, not against containers.

I am not arguing against containers, I am arguing that nix is more reproducible. Containers can be used with nix and are useful in other ways.

an argument for maintaining a frozen repo of packages

This is essentially what nix does. In addition it verifies that the packages are identical to the packages specified in your flake.nix file.

You can only have a truly fully-reproducible build environment if you setup your toolchain to keep copies of every piece of external software so that you can do hermetic builds.

This is essentially what Nix does, except Nix verifies the external software is the same with checksums. It also does hermetic builds.

[-] nickwitha_k@lemmy.sdf.org 4 points 1 year ago

Nix is indeed cool. I just see it as less practical than maintaining a toolchain for devs to use. Seems like reinventing the wheel, instead of airing-up the tires. I could well be absolutely wrong there - my experience is mainly enterprise software and not every process or tool there is used because it is the best one.

[-] huantian@fosstodon.org 2 points 1 year ago

@nickwitha_k @uthredii I’d like to think a better analogy would be that nix is like using a 3D model of a wheel instead of a compass and a straightedge to make wheels hehe 🙃

[-] CodeBlooded@programming.dev 2 points 1 year ago

I’ll certainly give this a read!

Are you saying that nix will cache all the dependencies within itself/its “container,” or whatever its container replacement would be called?

[-] astral_avocado@programming.dev 4 points 1 year ago

Appreciate the in-depth response! I've always been interested in Nix, but I'm scared of change lol. And I'm a single systems administrator on a team of mostly non-technicals, so large changes like that are... less necessary. Plus, you know, mostly dealing with enterprise software on Windows, unfortunately. One of these days.

[-] Dasnap@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

Docker performance on macOS (and Windows), especially storage mount performance remains poor

I remember when I first got a work Macbook and was confused why I had to install some 'Docker Desktop' crap.

I also learnt how much Docker images care about the silicon they're built on... Fucking M1 chip can be a pain...

[-] CodeBlooded@programming.dev 7 points 1 year ago

Docker is like, my favorite utility tool, for both deployment AND development (my replacement for Python virtual environments). I wanted to hear more of why I shouldn’t use it also.

[-] astral_avocado@programming.dev 3 points 1 year ago

Right? If it's about ease of insight into containers for debugging and troubleshooting, I can kinda see that. Although I'm so used to working with containers it isn't a barrier really to me anymore.

[-] sip@programming.dev 4 points 1 year ago* (last edited 1 year ago)

yup. it's a breeze especially for interpreted langs. mount the source code, expose the ports and voila. need a db?

services:
  pg:
    image: postgres
[-] huantian@fosstodon.org 6 points 1 year ago

@astral_avocado @LGUG2Z That definitely would’ve been helpful for readers new to the Nix scene, but I don’t think that’s the purpose of this article. It’s written as more of an example of a way to move to Nix, rather than an opinion piece on why you should move away from Docker.

I won’t try to argue why you should switch. However, I would recommend you look into the subject more. Docker is a great tool, but Nix is on a different level 🙃

[-] mdhughes@lemmy.ml 10 points 1 year ago

I know you won't believe this, but you don't need any of these GTOS (giant towers of shit) to write & ship code. "Replace one GTOS with another" is a horizontal move to still using a GTOS.

You can just install the dev tools you need, write code & libraries yourself, or maybe download one. If you don't go crazy with the libraries, you can even tell a team "here's the 2 or 3 things you need" and everyone does it themselves. I know Make is scary, with the mandatory tabs, but you can also just compile with a shell script.

Deployment is packing it up in a zip and unzipping it on your server.

[-] LGUG2Z@lemmy.world 14 points 1 year ago

Lots of (incorrect) assumptions here, and generally a very poorly worded post that doesn't make any attempt to engage in good faith. These are the reasons for what I believe is my very first down-vote of a comment on Lemmy.

[-] mdhughes@lemmy.ml 6 points 1 year ago

You're advocating switching to another OS with a complex package manager, to avoid using a package manager that's basically a whole new OS. Giant Tower of Shit may be too generous for that.

But I was of course correct, I said you wouldn't believe it.

[-] yogsototh@programming.dev 9 points 1 year ago

Nix does not need NixOS to run, but it is a complex package manager. At least for me, it doesn't seem more complex than the Docker ecosystem.

I personally use nix to take care of downloading compatible dependencies in isolation for me. And the rest of the code is really, just basic script shell or Makefile too.

I also could add a fancy mergeShells function I have written in nix to support a docker-compose-like composition of nix-shell files. But you could go a very long way with nix before you even want to do something like this.

[-] LGUG2Z@lemmy.world 5 points 1 year ago

Tutorial != advocacy. As I said, no attempt to engage in good faith.

[-] gdrhnvfhj@lemmynsfw.com 9 points 1 year ago

Try developing, on a system that just has node16, 3 different projects at the same time that each require different Node versions. Nix rocks.
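The per-project pinning described above is usually just one small file per project; a sketch (package attribute names assume a recent nixpkgs):

```nix
# shell.nix for one project, pinning its own Node version; a sibling
# project's shell.nix might pin pkgs.nodejs_20 instead, with no conflict.
{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  packages = [ pkgs.nodejs_16 ];
}
```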

[-] sip@programming.dev 2 points 1 year ago
[-] gdrhnvfhj@lemmynsfw.com 3 points 1 year ago

And now these need different GCC compilers. And building should be easy and reproducible.


Sometimes you need complex tools for complex problems. We just have a homegrown GTOS at my work instead; I wish we had something that made as much sense as Nix!

[-] yogsototh@programming.dev 8 points 1 year ago* (last edited 1 year ago)

I use a similar approach, but I went further by creating a system that compose like docker-compose would. The trick was to write my own nix function mergeShells.

https://her.esy.fun/posts/0024-replace-docker-compose-with-nix-shell/index.html

For now, I am pretty happy with it. Also, I put the init script inside nix-shell and not in external files and use exit signal to cleanup the state.
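The exit-signal cleanup mentioned above is the standard shell trap pattern; a stripped-down sketch outside of Nix (the real version would live in a nix-shell shellHook):

```shell
#!/bin/sh
# Cleanup-on-exit sketch: a subshell creates some state, and an EXIT trap
# removes it when that shell closes, so nothing is left behind.
STATE_DIR=$(mktemp -d)

(
  cleanup() {
    rm -rf "$STATE_DIR"
    echo "cleaned up state"
  }
  trap cleanup EXIT
  echo "dev environment ready (state in $STATE_DIR)"
)
# by the time the subshell exits, the trap has removed the state directory
```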

[-] LGUG2Z@lemmy.world 2 points 1 year ago

Thanks for sharing this! Added to my weekend inspiration/reading pile. 🙏

[-] malloc@programming.dev 8 points 1 year ago

Kind of cool if your production infrastructure can match. But for most companies (ie, Fortune 500 and some medium companies) implementing this would need a force majeure.

Decades of software rot, change in management, change in architecture, waxing and waning of software and hardware trends, half assed implementations, and good ole bottom tier software consultation/contractors brought into the mix make such things impossible to implement at scale.

Once worked at a company where their onprem infra was a mix of mainframe, ibm / dell proprietary crap, Oracle vendor locked, and some rhel/centos servers. Of course some servers were on different versions of the OS. So it was impossible to setup a development environment to replicate issues.

For the most part, that’s why I still use docker for most jobs. Much easier to pull in the right image, configure app deployment declaratively, and reproduce the bug(s). I would say 90% of the time it was reproducible. Before docker/containerization it was much less than that and we had to reproduce in some non production environment that was shared amongst team.

[-] Planet9@lemmy.world 3 points 1 year ago

I’m surprised no one has mentioned either of the following solutions as alternatives to this explanation (and docker) that still uses Nix:

[-] uthredii@programming.dev 3 points 1 year ago

Related, this article talks about combining nix and direnv: https://determinate.systems/posts/nix-direnv

Using these tools you are able to load a reproducible environment (defined in a nix flake) by simply cding into a directory.
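Assuming direnv and nix-direnv are already installed and hooked into the shell, the per-directory setup is tiny (a minimal, illustrative .envrc):

```shell
# .envrc at the project root: direnv loads the flake's dev shell on cd
use flake
```

After a one-time `direnv allow`, entering the directory activates the environment automatically, and leaving it unloads everything.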

this post was submitted on 21 Jul 2023