
Linux is a branch of development of the old Unix class of systems. Unix is not necessarily open and free; FOSS is what we classify as open and free software. Unix has been deeply tied to specific private industrial interests since its inception, and we should not forget that while we examine the use of Linux by left-minded activists. FOSS is nice and cool, but nearly 99.99% of it runs on non-open and non-free hardware. Apolitical crowd-funding proposals and DIY construction attempts have led to ultra-expensive idealist solutions reserved for the very few and for eccentric, affluent experimenters.

Linux vs Windows is cool and trendy, isn't it? But does the choice by itself contain any political content? If there is such content, what is it? Let's examine it from the base.

FOSS: people, as small teams or individuals, "producing as much as they can and want", offering what they produce to be shared, used, and modified by anyone, "as much as they need". This is as close to a communist system of production and consumption as we have experienced in the entirety of modern history: no exchange whatsoever, collective production according to ability and collective consumption according to need.

BUT we also have corporations, some of them multinational mega-corps that nearly monopolize sectors of the computing market, creating R&D departments specifically to produce and offer open and free (or conditionally free) code. Why? Firstly because others will join their projects and contribute further development (labor) for "free", while the corporations retain the leadership and ownership of the project. Somehow, using their code without asking why they were willing to offer it in the first place is considered cool, as long as we can say we are anti-MS and Windows-free.

Like a case of false class consciousness, we have fan-boys of IBM, Google, Facebook, Oracle, Qt, HP, Intel, AMD, ... products lined up against MS.

Back when Unix would only run on ultra-expensive large-scale enterprise systems and costly workstations (remember DEC, Sun, and SGI workstations, each priced like two brand-new sports cars) and the PC market was restricted to MS or the alternative Apple crap, people tried again and again to port forms of Unix to the PC. Some truly gifted hackers achieved such marvels, but the results were so hardware-specific that they couldn't be generalized and adopted on a mass scale.

Then this genius Finn and his friends devised a kernel that could drive most of the PC hardware available, and a Unix-like system with a Linux kernel could boot and run.

IBM eventually saw a way back into the PC market it had lost by handing DOS to its subcontractor (MS), and saw an opportunity to take over and steer this "project" by promoting Red Hat. After two decades of behind-the-scenes guidance, once the projected outcome had succeeded in cornering the market, IBM openly bought Red Hat.

Are we all still anti-MS and pro-IBM, Google, Oracle, Facebook, Intel/AMD?

The bait thrown to the dumb fish was an automated desktop that looked and behaved just like the latest MS Windows edition.

What is the resistance?

Linus Torvalds and the few others who sign off on the kernel today make six-figure salaries, ALL paid by a handful of computing giants which, by offering millions to the foundation, control what it does. Traps like Rust, telemetry, and other "options" are shoved into the kernel daily to satisfy the paying clients' demands and wishes.

And we on the left are fans of a multimillionaire's "team" against a trillionaire's "team". This is not football, cricket, or F1; this is your data in the hands of multinationals and their fellow customers and agencies. Don't forget which welfare system maintains the hierarchy of those industries, whether the market is rosy or gray. Do I need to spell out the connection?

Beware of multinationals bearing gifts.

Yes, there are healthier alternatives that require a little more work and study to employ; the quick and easy path has a "cost" even when it is FOSS.


[-] m532@lemmygrad.ml 22 points 1 year ago
  1. Unix is not linux
  2. All of this does not have anything to do with windows. What is this both-sides-bad liberalism? Windows is clearly so much worse at all of this it isn't even worth talking about in this context.
  3. It's not the fault of FOSS devs that many live in bourgeois dictatorships. Of course the bourgeoisie will steal the code. Of course they will subvert the license.
  4. Are you expecting FOSS devs to somehow conjure their own chip fabs? That's not how material conditions work.
[-] Prologue7642@lemmygrad.ml 10 points 1 year ago

I don't really get the point of this post. If you want to say that quite a lot of FOSS code is funded by huge corporations, then yeah, sure; most people, I would assume, know that. But I'm not really sure what that has to do with the title. Even if Linux is mostly run by corporations, it is still much better than the alternatives.

Also, not really sure what you mean by traps like Rust and telemetry. There is no telemetry in Linux, and the only reason I can think of for you including it is the recent Go telemetry, which I don't really see as relevant. With Rust I also don't get it: Rust wasn't added because some company wanted it or whatever, it was added because it is a popular (and extremely well-liked) language that is suitable for kernel development. Not many people nowadays want to code in C.

[-] iriyan@lemmygrad.ml 2 points 1 year ago

Linux and Unix were built on alternatives. If you don't like a piece of code offered as a tool to do something, you write something better and offer it, share it with others; so you as a user have a choice among similar tools. Even the most basic ones, like the GNU utilities, have busybox and other specific alternatives.

The latest trend is to have NO alternatives, to get everyone onto one core system. So instead of diverging as a family of systems (as some of the BSD Unix projects did), Linux is showing a tendency to converge into one system (Fedora, Debian, Arch) with few differences among them.

You get corporate media publishing articles on the "top ten" Linux distributions, or "top ten" desktops, all based on the very same edition of IBM software, without exception, as there is none. This is marketing, steering the public in a single direction. The question you should answer for yourself is why, without someone spelling it out for you and drawing the attention of three-lettered agencies.

[-] Prologue7642@lemmygrad.ml 4 points 1 year ago

That just depends on what you use. There are loads of distros that let you use whatever you want. There are only so many ways to do things, and it doesn't make much sense to differentiate if you don't have a reason to. And you have some genuinely diverging distros, like NixOS, that are significantly different.

Not really sure what corporate media you read. In my experience, most of those lists are just popularity contests, and they usually include non-corporate distros like Arch, Debian, etc. As for desktops, I'm not even sure there are ten desktop environments (at least with a reasonable number of users).

[-] maard@lemmygrad.ml 3 points 1 year ago* (last edited 1 year ago)

Okay, I can totally see why you wouldn't like Linux as a whole becoming "one thing", but what is your opinion on the growth of Linux on the desktop? By far the biggest factor pushing people away, in my opinion (consumers as well as devs), is having to deal with so many different distros, packaging apps with different libraries on so many different systems. Having standards that aim to reduce that load can only be beneficial for the masses adopting an objectively better, even if not perfect, operating system, wouldn't it? I.e., the rise of AppImages and Flatpaks as a means to curb that issue is, to me, a good thing, even if not "the most optimal way of doing things".

[-] Prologue7642@lemmygrad.ml 3 points 1 year ago

I always wonder whether that is an actual issue. Apart from some duplicated effort in things like packaging for different distros (which is something distro maintainers do anyway), I don't really get this point. To me, it only makes sense for proprietary packages, not for open source.

Apart from some small differences in how you install packages, using most distros is basically the same.

I am always confused by this point because I see it repeated everywhere, but never with a good argument supporting it.

[-] FuckBigTech347@lemmygrad.ml 3 points 1 year ago* (last edited 1 year ago)

I only ever see people who work on proprietary software make this argument. For FOSS this is a non-issue. If you have the source code available, you can just compile it against the libs on your system, and in most cases it will just work, unless there was a major change in some lib's API. And even then you can make some adjustments yourself to get it working; distro maintainers tend to do this.

[-] maard@lemmygrad.ml 2 points 1 year ago

For many admittedly smaller apps, it's always a bit of a pain to have to install them manually because the dev simply gave up trying to package for "the big 3", and distro maintainers can't care about every small program, although the current system works well enough for most software.

However i am not a developer, so i can't speak firsthand about the difficulty of packaging and maintaining your app on different distros across years, and i'm not sure if the brunt of maintaining all these apps should fall onto distro maintainers.

About users and using distros, I can agree that it's roughly the same either way, with the only real difference most of the time being "do you use apt or pacman to install packages".

[-] Prologue7642@lemmygrad.ml 2 points 1 year ago* (last edited 1 year ago)

Fair enough, but I only see that for some niche projects, and at that point you are probably not a regular user and can do it yourself.

There is an issue on the other side: if you only provide an AppImage/Flatpak, it is much less customizable. You can't optimize the software for your CPU, and you can't mix and match the versions of the libraries it uses. Personally, I think it is always a good idea to provide a Flatpak alternative for those who want it, but I don't see it as a replacement for regular packaging.

Edit: I would much rather see something like Nix being used to describe the dependencies. That is, in my opinion, the best solution, and it also lets you port software to other systems more easily.

[-] maard@lemmygrad.ml 3 points 1 year ago

Ideally, it'd be good enough to simply have, say, an AppImage/Flatpak plus the source code, and then let distro maintainers/end users build it how they want or need to. I had the pleasure of trying to get NVENC working in OBS under Debian 10, and that was a massive pain: due to outdated nvidia drivers, I had to recompile ffmpeg with the right flags, and that would break after every update. The easiest way was to get an OBS Flatpak that came prebuilt with it all, IIRC. I guess my problems were mainly because I used Debian stable at the time; it's probably not as much of a pain now that I'm on sid.

I don't know anything about Nix. I've heard a lot of good things about it and how it's "all config files" or something, but the prospect of learning a whole new world scares me. I trust your judgment on that, though. I'll stick to what I know on my boring-ass Debian sid :D

[-] FuckBigTech347@lemmygrad.ml 3 points 1 year ago

The real problem in your specific example is the fact that NVidia only distributes proprietary drivers and user-space libraries. If their driver were open like Intel's and AMD's, then it too would be in the kernel tree and abstracted by the same interfaces, and at that point you wouldn't have to worry about incompatibilities like the one you're describing.

[-] maard@lemmygrad.ml 2 points 1 year ago

You're right, that was just the only example that came to my mind when thinking about the one time that a flatpak was more convenient to me than the alternative.

[-] Prologue7642@lemmygrad.ml 1 points 1 year ago

I would imagine that if you weren't on Debian stable, it would be much better. From what I've seen, dealing with anything Nvidia on stable distros is pain.

I just recently started working with it, and it is really nice. With NixOS, you can define basically everything with Nix config files. You want to run MPD on some port? Sure, just add the option, and it will create the config file and put it in the right place. It is really easy to define your entire system, with all its options, in one place. I don't think I've ever had to change anything in /etc; I just change an option in my system config. I think something like this is probably the future of Linux.
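For illustration, the MPD example as a NixOS module snippet — a sketch, not a full config: the option names are assumed to follow the `services.mpd` module, and the port value is just an example.

```nix
{
  # Hypothetical NixOS configuration fragment: from these two options,
  # NixOS generates the daemon's config file and its systemd service.
  services.mpd = {
    enable = true;
    network.port = 6600;  # example port
  };
}
```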

Nix by itself is just a language used to configure things. You can do things like define all the dependencies for your project with it, so it is easy to build by anyone with Nix (which you can install basically anywhere). By doing it this way, you can be sure all the dependencies are defined, so it is really easy to port the software to other distros even if they aren't using Nix.

[-] maard@lemmygrad.ml 3 points 1 year ago

Sounds amazing. I heard you could walk around with a USB stick and end up reinstalling your entire system just the way you backed it up, using NixOS. Maybe some day I'll give it a shot: nuke whatever is currently on my old ThinkPad (of course) and try this NixOS thing out.

[-] Prologue7642@lemmygrad.ml 3 points 1 year ago

Yep, basically. It is really nice. The only issue is that it is still a pretty niche distro, and if something is not supported it can be a bit annoying (though not much worse than on a normal distro). The documentation is also rather lacking, but both of these are things I hope will improve with time and more users.

[-] maard@lemmygrad.ml 1 points 1 year ago

Oh wow, I had misread your initial statement. Yeah, I wasn't arguing for a Flatpak/AppImage-only distro concept like Silverblue or anything, lmao. I just like the possibility of having something that's distro-agnostic.

[-] iriyan@lemmygrad.ml 1 points 1 year ago

https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/main/config

This is a pretty vanilla, standard config file with which you compile the kernel (6.3 in the above example). Search for words like telemetry, rust, IFS ... and tell me what Linux you use without them.

[-] Prologue7642@lemmygrad.ml 8 points 1 year ago

The fact that it has the word telemetry in it doesn't mean it spies on you:

- CONFIG_WILCO_EC_TELEMETRY -> lets you read telemetry from some Chromebook-specific hardware
- CONFIG_INTEL_PMT_TELEMETRY -> lets you access the telemetry that Intel Platform Monitoring Technology provides
- CONFIG_INTEL_TELEMETRY -> lets you configure telemetry and query some events from Intel hardware

None of these options spy on you or do anything nefarious. It just means that you can have an application that queries some data from them, nothing more.
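As a quick way to see what those options actually are, you can grep a kernel config for them. A minimal sketch — the config lines below are an illustrative fragment, not your machine's config; on a real system you would grep /proc/config.gz or /boot/config-$(uname -r) instead:

```shell
# Write an example .config fragment and filter it for TELEMETRY.
cat > config-fragment <<'EOF'
CONFIG_WILCO_EC_TELEMETRY=m
CONFIG_INTEL_PMT_TELEMETRY=m
CONFIG_INTEL_TELEMETRY=m
CONFIG_EXT4_FS=y
EOF
grep TELEMETRY config-fragment   # these are driver options; nothing here transmits data
```

Each match is a kernel module that exposes hardware counters locally; whether anything reads them is up to userspace.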

Again, not sure what your issue with Rust is.

And with IFS it is the same as above; someone here already linked you an article on it.

[-] rostselmasch@lemmygrad.ml 3 points 1 year ago

And that is exactly the point. Working with many servers means that you have to collect data; how else am I supposed to know when it's time to replace something, and so on? I remember my boss not wanting to spend money on Nagios (servers etc.) until one day everything blew up and no one could work for two days. After that, the idiot finally spent money on a monitoring system, and you could finally see when a RAID failed.

[-] Prologue7642@lemmygrad.ml 3 points 1 year ago

Exactly. If anything, I want more telemetry in my system that is more easily accessible. I can't imagine living without SMART, for example.

[-] relay@lemmygrad.ml 9 points 1 year ago

Use Redox OS. Completely community-run. I recommend it because I see how much you love Rust.

https://www.redox-os.org/

[-] ComradeChairmanKGB@lemmygrad.ml 6 points 1 year ago

I'm anti-Windows because of its structural weaknesses.

[-] silent_clash@lemmygrad.ml 4 points 1 year ago* (last edited 1 year ago)

The existence of enterprise contributors to Linux is symbiotic with volunteer devs and helps drive development. There are benefits to having full time talented devs and engineers paid for their time working on Linux, and for the most part, the whole community is better for it.

[-] CompadredeOgum@lemmygrad.ml 3 points 1 year ago

"rust, telemetry"

Is there telemetry in the kernel?

Why is Rust a trap?

[-] rostselmasch@lemmygrad.ml 1 points 1 year ago

There is no telemetry in the sense that something is sent to Intel. Look here. And this is quite handy if you use Linux in a datacenter.

[-] iriyan@lemmygrad.ml 1 points 1 year ago

Not on 5.10, but most 6.x kernels do have telemetry, and very few distros disable it, where possible.

[-] rostselmasch@lemmygrad.ml 5 points 1 year ago

Show me the code where something is being sent to Intel, not just that a module is loaded. Telemetry also means collecting data on your own assets in a datacenter. I couldn't find anything in the code of Intel PMT.

[-] iriyan@lemmygrad.ml 1 points 1 year ago

Rust, and maybe Go, in a way evade what open and free code really meant (which includes the characteristic of being self-contained). Much Rust-written software demands to-the-minute releases of dependencies, automatically fetched and used while you compile the piece of software. First, there is no way you can audit this; then, at any given moment, the fetched code can change, affecting what you compiled and making it exponentially harder to audit and certify as secure. It also transfers responsibility for what the code contains to second and third parties, making it legally impossible to hold anyone responsible or accuse them of creating back-doors and other weaknesses in the software.

But it is modern, and it is being pushed everywhere. In general, when you hear buzzwords, terms, and technologies making noise and being utilized everywhere, beware of the Trojan horse.

Facebook, which had contributed nothing to the FOSS community, suddenly released zstd, which they bought from someone (or so they say), making him rich. Within months this FOSS was incorporated and utilized all across the Linux community on very dubious data supporting its superiority, like published comparisons pitting multi-threaded software against a competitor mandated to run single-threaded. In the end, nobody even used the optimized conditions under which zstd has a tiny superiority in speed while still lagging in compression ratio.

Someone and something drives this "rush", like the gold in the Columbia River advertised by tool merchants to gold diggers.

At least on the left, we should have a bit more critical a tendency than the anti-Windows fan-boy clubs. The price you pay to have a USB stick automounted read-write the moment you insert it is one of security and privacy. All this overhead instead of five lines of script.
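For what it's worth, the "few lines of script" alternative might look something like this — a sketch, not a drop-in replacement for a desktop automounter (the device name and mountpoint are examples, and the script must run as root on real hardware, so here it is only written out and syntax-checked):

```shell
# Hypothetical minimal stand-in for an automounter: mount a stick by hand.
cat > mount-usb.sh <<'EOF'
#!/bin/sh
set -eu
dev="${1:?usage: mount-usb.sh /dev/DEVICE}"   # e.g. /dev/sdb1, check lsblk first
mnt=/mnt/usb
mkdir -p "$mnt"
mount -o rw,nosuid,nodev "$dev" "$mnt"
echo "mounted $dev on $mnt"
EOF
sh -n mount-usb.sh && echo "script parses"
```

A real automounter additionally handles hotplug events, per-user permissions, and unprivileged access, which is where the extra "overhead" goes.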

[-] Prologue7642@lemmygrad.ml 5 points 1 year ago

Most of the code written nowadays isn't self-contained, and it is basically impossible for it to be. I mean, I guess you have some exceptions, like the Linux kernel itself and some low-level utilities, but you use libraries and other people's code everywhere. In that respect, Rust is much better than most other options, because it at least lets you pin your dependencies really easily. The idea that everyone who uses some code is auditing it is just ridiculous. You should be able to, sure, and in some cases it might be a good idea to do so, at least for parts of your code. But if you are using Linux, did you audit the entire Linux source code? What about the C standard libraries? Even just that would take a ridiculous amount of time.

I would also argue that Rust isn't pushed everywhere; people just like it because it is a wonderful language. There are far more people who use it in their own projects than professionally, for example.

I could understand your argument if it were based on how Rust is run, what licenses it uses, etc. But this is rather baffling to me; basically the only thing you mention is the issue of statically vs. dynamically linked dependencies.

With zstd, again, I'm not really sure what you are even trying to say. That Facebook had an impact on what is used? OK, so? Zstd is completely open source, and if someone decided to use it, that is up to them. I am pretty sure that every piece of software I use that supports zstd also lets me use other compression algorithms. And from what I've found, zstd is in some cases superior to the alternatives, but feel free to provide sources; I am sure that I could be incorrect.

[-] iriyan@lemmygrad.ml 1 points 1 year ago

Yes, you always have some dependencies; even the lowest-level Linux utilities usually depend on a C library (glibc or musl). But those dependencies you choose and provide, and they are specific. Here we have a dynamic process that sometimes (not always) pulls the latest commit from someone's git as a dependency. A minute later I try to build the same package, someone pushes a commit replacing the previous change, and my package builds as well. The two results are not identical; one may contain a backdoor, and we didn't even notice a difference.

When you build from glibc 2.3.4 and I build from the same, it IS the same.

[-] Prologue7642@lemmygrad.ml 5 points 1 year ago

Basically no one uses Rust with dependencies straight from git, except in some cases when you are working with very unstable software. Everyone just uses versions that are published to crates.io. If you are concerned about reproducibility, that is a valid concern, but for that cargo is pretty good, or you can use things like Nix.
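As a sketch of the pinning being described (the crate name and version below are arbitrary examples, not a recommendation):

```toml
# Hypothetical Cargo.toml fragment: "=" requests one exact published
# version. Cargo.lock then records that release's checksum, so every
# build of the project resolves identical sources from crates.io.
[dependencies]
serde = "=1.0.188"
```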

[-] Prologue7642@lemmygrad.ml 4 points 1 year ago

Who says the distribution of glibc 2.3.4 you and I have is the same? It depends only on where you got it from, and even then we can build it with different flags, etc. I'm not really sure how Rust is worse in that respect. On the contrary, when you build software in C/C++ you usually link dynamically, so you have no idea what versions of the libraries someone is using or where they got them. In that sense, Rust's approach is actually safer.

[-] maard@lemmygrad.ml 2 points 1 year ago

What kind of telemetry is pushed into the Linux kernel, exactly?

[-] iriyan@lemmygrad.ml 1 points 1 year ago

Wilco, Intel, and possibly hidden AMD options. There is also this Intel IFS, which is pushed as "good telemetry", or telemetry you want as a super-enterprise admin so you know when to replace equipment.

https://gitlab.archlinux.org/archlinux/packaging/packages/linux/-/blob/main/config

Many of those things didn't exist in pre-6 editions; they have crept in due to pressure from manufacturers. The current 6.x kernels are more than double the size of 5.10-LTS and nearly double 5.15-LTS. Much of the included firmware is not even for production hardware, but for alpha/beta versions still under testing by manufacturers.

What do users commonly do? They seek the latest and newest release, without ever reading release notes and changelogs. "Continuous development and modern equipment and code are always better."

Critical abilities are now characteristics of "toxic personalities", another capitalist buzzword incorporated uncritically by the masses.

[-] rostselmasch@lemmygrad.ml 3 points 1 year ago

I don't really understand your point. What is so bad about those telemetry drivers? They have to be loaded, and there is no use for them for ordinary users.

[-] iriyan@lemmygrad.ml 1 points 1 year ago

When telemetry is enabled, it is not the user utilizing it but a manufacturer drawing data from the user's machine.

[-] maard@lemmygrad.ml 2 points 1 year ago

A genuine question, then: do the distros shipping these kernels disable this telemetry on installation, should you tick the "no telemetry, please" option during the installation process?

this post was submitted on 20 Jun 2023
8 points (100.0% liked)

Linux for Leftists


A community for all leftists wanting to join and be part of a community that talks about Linux, Unix, and the Free Software Community
