[-] koorogi@kbin.social 1 points 1 year ago

I'm not knocking the idea of running various maintenance tasks while the computer is asleep. The original post mentioned installing updates, and I agree that both it and your ideas make a lot of sense. It's not even a very new idea; I seem to remember the Wii would download updates using its ARM processor while the console was asleep.

OP specifically mentioned "discord or slack showing [them] online", and that's the use case I was questioning.

Even for legitimately useful features, though, I'd still want the ability to turn this off. No matter how low the power draw, there may be times when I need to stretch my battery life a little longer, and I'm in a better position to know and plan for that than the OS is.

[-] koorogi@kbin.social 2 points 1 year ago

Sure, there are things that make sense to do in the background. The example of installing updates was a good one. But I was asking specifically about the example that was given of making you appear online on a chat service, because I just can't see the use case for that.

[-] koorogi@kbin.social 5 points 1 year ago

For a phone, which I'm more likely than not to have with me, I could understand it. But for a laptop, and especially for a desktop, if the machine is asleep, I'm not at it. Why is it great for a computer I don't have with me to show me as online in Discord or Slack?

[-] koorogi@kbin.social 11 points 1 year ago

I disagree with so much of this.

You might not care about the extra disk space, network bandwidth, and install time required when each application packages up duplicate copies of all the libraries it depends on, but I sure do. Memory use is also higher: with separate copies of common libraries, each copy has to be loaded into memory separately, and that memory can't be shared across processes. I also trust my distribution to stay on top of security updates far more than I trust every random application developer shipping Flatpaks.
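To make that concrete: on Linux you can actually watch the sharing happen. This is just a quick sketch (it assumes Linux with /proc and a dynamically linked glibc, and the string match is naive), but every process that prints these mappings is pointing at the same on-disk libc, and the kernel backs the read-only code pages with the same physical memory:

```c
/* Minimal Linux-only sketch: print how libc is mapped into this
 * process. Every dynamically linked process maps the same on-disk
 * libc.so, so its read-only code pages are shared system-wide. */
#include <stdio.h>
#include <string.h>

int main(void) {
    FILE *maps = fopen("/proc/self/maps", "r");  /* this process's memory map */
    if (!maps) { perror("fopen"); return 1; }
    char line[512];
    while (fgets(line, sizeof line, maps)) {
        if (strstr(line, "libc"))  /* keep only the libc mappings */
            fputs(line, stdout);
    }
    fclose(maps);
    return 0;
}
```

Run that from a couple of different programs and the file paths line up. Give each app its own bundled copy and that sharing disappears.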

But tbh, even if you do want each application to bundle its own libraries, there's already a solution that has been around forever: static linking. I've never understood why we're now building systems that look like static linking but use dynamic linking to get there.
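To spell out what I mean, here's a trivial sketch, assuming gcc on Linux (with glibc, `-static` spits out some warnings about NSS, but it works):

```c
/* hello.c -- the same program, two linking styles.
 *
 * Dynamic (default):  gcc hello.c -o hello
 *   -> small binary; libc is resolved from the system at run time.
 * Static:             gcc -static hello.c -o hello
 *   -> self-contained binary; libc is baked in at link time.
 *      That's the "bundle everything" model, with no extra
 *      runtime or packaging machinery needed.
 */
#include <stdio.h>

int main(void) {
    puts("hello, linking");
    return 0;
}
```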

I think it's convenient for developers to be able to know or control what gets shipped to users, but I think users' freedom to decide what runs on their own systems is much more important.

I think the idea that it's not practical for different software to share the same libraries is overblown. Most common libraries are very good about maintaining backwards compatibility within a major version, and different major versions can be installed side by side.

I run Gentoo on my machines, and with the configurability its package manager exposes, I'd wager that no two Gentoo installations are alike, either in the versions of the packages installed or in the options those packages are built with. And for a lot of software that tries to vendor its own copies of libraries, Gentoo packages often give the option of forcing the system copy to be used instead. And you know what? It actually works almost all the time. If Gentoo can make it work across the massive variability of its installs, a distribution that offers less configurability should have virtually no problem.
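The mechanism that makes side-by-side major versions work is the soname: the major version is baked into the library's file name, so different major versions are simply different files. A little sketch, where libfoo is a made-up name for illustration (build with `gcc demo.c -ldl` on Linux):

```c
/* demo.c -- two major versions of one library can coexist because
 * each soname (libfoo.so.1, libfoo.so.2) is a distinct file on disk.
 * libfoo is hypothetical; substitute any real library you have. */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *v1 = dlopen("libfoo.so.1", RTLD_NOW);
    printf("libfoo.so.1: %s\n", v1 ? "loaded" : dlerror());

    void *v2 = dlopen("libfoo.so.2", RTLD_NOW);
    printf("libfoo.so.2: %s\n", v2 ? "loaded" : dlerror());

    if (v1) dlclose(v1);
    if (v2) dlclose(v2);
    return 0;
}
```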

You are right that some applications are a pain to package, and that the traditional distribution model involves some duplication of effort. But I don't think it's as bad as it's made out to be. Distributions push a lot of patches upstream, where other distributions get that work for free. And even for things that aren't ready to go upstream, there's still a lot of sharing across distributions. My system runs musl for its C library instead of the more common glibc. There aren't many musl-based distributions out there, and some software needs patching to work (though far less than used to be the case, thanks to the distributions' work). But it's pretty common for other musl-based distributions to look at what Alpine or Void have done when packaging software and use it as a starting point.

In fact, I'd say the most important role distributions play is finding and fixing bugs and getting those fixes upstreamed. Different distributions are on different versions of libraries at different times, so they run into different bugs. You could argue that by using the software author's "blessed" version of each library, everybody gets a consistent experience with the software. I'd argue it also means bugs get found and fixed more slowly: a rolling-release distro packaging libraries on the bleeding edge can hit and fix bugs long before they'd ever surface in the Flatpak version.

The one thing I've heard about Flatpak/Snap/etc that sounds remotely interesting to me is the sandboxing.

[-] koorogi@kbin.social 2 points 1 year ago

My employer already requires that I live within a 45 minute commute.

[-] koorogi@kbin.social 1 points 1 year ago

There were even digital cameras that used floppies to store the photos they took.
