64 points · submitted 1 week ago by Amaterasu@lemmy.world to c/linux@lemmy.ml

I have this question. I see people, with some frequency, sugar-coating the Nvidia GPU marriage with Linux. I get that if you already have an Nvidia GPU, or you need CUDA, or you work with AI and want to use Linux, that is possible. Nevertheless, this is still a very questionable relationship.

Shouldn't we be raising awareness for anyone planning to play titles that use DX12? I mean, a 15% to 30% performance loss using Nvidia compared to Windows, versus 5% to 15% (and sometimes equal or better performance) using AMD, isn't that something we should be alerting others to?

I know we wanna get more people on Linux, and NVIDIA’s getting better, but don’t we need some real talk about this? Or is there some secret plan to scare people away from Linux that I missed?

Am I misinformed? Is there some strong reason to buy an Nvidia GPU if your focus is gaming on Linux?

top 41 comments
[-] data1701d@startrek.website 48 points 1 week ago

I feel like most people who use Nvidia on Linux just got their machine before they were Linux users, with a small subset for ML stuff.

Honestly, I hear ROCm may finally be getting less horrible, is getting wider distro support, and supports more GPUs than it used to, so I really hope AMD will become as livable an ML dev platform as it is a desktop GPU.

[-] SkabySkalywag@lemmy.world 10 points 1 week ago

That is correct in my case; I started with Linux earlier this year. I will be switching to AMD for the next upgrade.

[-] warmaster@lemmy.world 5 points 1 week ago

I did this.

From:

Intel i7-14700K + RTX 3080 Ti

To:

Ryzen 7 7700X + RX 7900 XTX

The difference on Wayland is very big.

[-] Nerdulous@lemmy.zip 1 points 1 week ago

Did you see any performance change? Because that setup seems pretty equivalent to me.

[-] Uebercomplicated@lemmy.ml 1 points 1 week ago

The 7900 XTX has 24 gigabytes of video memory vs. the 3080 Ti's 12. That's a big difference...

Here are a couple of Linux benchmarks by Phoronix (don't take these as literal game performance, rather as relative performance between different cards): https://www.phoronix.com/review/rx7900xt-rx7900xtx-linux/6

As you can see, the 7900 XTX performs much better.

[-] utopiah@lemmy.ml 2 points 1 week ago

Yep, that'd be me. That said, if I were to buy a new GPU today (well, tomorrow; I'm waiting on Valve's announcement of its next HMD), I might still get an NVIDIA card. Even though I'm convinced 99% of LLM/GenAI is pure hype, if the remaining 1% turns out to be useful, built ethically, and able to run on my hardware, I'd be annoyed if it couldn't because ROCm is just a tech demo or too far behind performance-wise. Then again, that percentage is so ridiculously low that I'd probably pick the card that treats the open ecosystem best.

[-] MalReynolds@piefed.social 4 points 1 week ago* (last edited 1 week ago)

ROCm works just fine on consumer cards for inferencing, is competitive or superior in $/token/s, and beats NVIDIA on power consumption. ROCm 7.0 seems to be giving a >2x uplift on consumer cards over the 6.x series, so that's lovely. I haven't tried 7 myself yet (waiting for the dust to settle), but I have no issues with image gen, text gen, image tagging, video scanning, etc. using containers and distroboxes on Bazzite with a 7800 XT.

Bleeding edge and research tend to be CUDA, but mainstream use cases are getting ported reasonably quickly. TL;DR: unless you're training or researching (unlikely on consumer cards), AMD is fine and performant, plus you get stable Linux and great gaming.
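If you want a quick sanity check that a setup like that is actually using the GPU, a minimal sketch with a ROCm build of PyTorch (assuming the ROCm wheel is installed inside the container; adjust for your own stack) would be something like:

```python
# Minimal sanity check: confirm a ROCm build of PyTorch sees an AMD GPU.
# ROCm-enabled PyTorch reuses the torch.cuda API, so this also works on NVIDIA.
import torch

if torch.cuda.is_available():
    # On ROCm this prints the AMD device name, e.g. a Radeon RX 7800 XT.
    print("GPU:", torch.cuda.get_device_name(0))
    device = torch.device("cuda")
else:
    print("No GPU acceleration found, falling back to CPU")
    device = torch.device("cpu")

# Tiny inference-style workload to confirm kernels actually run on the device.
x = torch.randn(1, 3, 224, 224, device=device)
w = torch.randn(8, 3, 3, 3, device=device)
y = torch.nn.functional.conv2d(x, w)
print(y.shape)
```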

[-] data1701d@startrek.website 2 points 1 week ago

From what I've heard, ROCm may finally be getting out of its infancy; at the very least, I think by the time we get something useful, local, and ethical, it will be pretty well-developed.

Honestly, though, I'm in the same boat as you and actively try to avoid most AI stuff on my laptop. The only "AI" thing I use is the occasional image upscale. I find it kind of useless on photos, but it's sometimes helpful when doing vector traces of bitmap graphics with flat colors; Inkscape's results aren't always good with lower-resolution images, so putting that specific kind of graphic through a "cartoon mode" upscale sometimes improves results dramatically for me.

Of course, I don't have GPU ML acceleration, so it just runs on the CPU; it's a bit slow, but still less than 10 minutes.

[-] Junglist@lemmy.zip 1 points 1 week ago

Ollama works just fine for me with an AMD GPU.

[-] nfreak@lemmy.ml 2 points 1 week ago

I completely upgraded my desktop like a month before I decided to make the switch. If I'd planned ahead just a bit more, I would've gone with an AMD card for sure. This 4090 is still new enough that I can probably trade it in, but that's such a pain in the ass.

[-] Pika@sh.itjust.works 1 points 1 week ago

I fall into this category. Went Nvidia back in '16 when I built my gaming rig, expecting I'd be on Windows for a while since gaming on Linux still wasn't the greatest at that point. A few years later I decided to try a 5700 XT (yeah, piss-poor decision, I know) because I wanted to future-proof in case I swapped to Linux. The 5700 XT had the worst driver reliability I've ever seen in a graphics card, and eventually I got so sick of it that I went back to Nvidia with a 4070. Since then my life opened up more, so I had the time to swap my gaming rig to Linux, and here we are.

Technically I guess I could still put the 5700 XT back in, and it would probably work better than it does in my media server, since Nvidia seems to have better isolation support in virtualized environments. But I haven't bothered, mostly because getting the current card to work on my rig was a pain and I don't feel like taking apart two machines to play hardware musical chairs.

[-] megopie 36 points 1 week ago

I'd say in general, the advantages of Nvidia cards are fairly niche even on Windows. Like, multi-frame generation (fake frames) and upscaling are kind of questionable in terms of value added most of the time, and most people probably aren't going to be doing any ML stuff on their computer.

AMD in general offers better performance for the money, and that’s doubly so with Nvidia’s lackluster Linux support. AMD has put the work in to get their hardware running well on Linux, both in terms of work from their own team and being collaborative with the open source community.

I can see why some people would choose Nvidia cards, but I think, even on windows, a lot of people who buy them probably would have been better off with AMD. And outside of some fringe edge cases, there is no good reason to choose them when building or buying a computer you intend to mainly run Linux on.

[-] filister@lemmy.world 8 points 1 week ago

Even though I hate Nvidia, they have a couple of advantages:

  • CUDA
  • Productivity
  • Their cards retain higher resale values

So if you need the card for productivity and not only gaming, Nvidia is probably better; if you buy second-hand or strictly for gaming, AMD is better.

[-] megopie 3 points 1 week ago* (last edited 1 week ago)

It depends on the type of productivity, TBH. Like, sure, some productivity use cases need CUDA, but a lot of them are just using the card as a graphics card. The places where you need CUDA are real, but not ubiquitous.

And “this is my personal computer I play games on, but also the computer I do work on, and that work needs CUDA specifically” is very much an edge case.

[-] filister@lemmy.world 2 points 1 week ago

As far as I am aware, they are also better at video encoding, and if you want to use Blender or similar software, yes, it is niche, but it's a credible consideration. As always, it really depends on the use case.

[-] Mihies@programming.dev 4 points 1 week ago* (last edited 1 week ago)

From just hardware perspective, Nvidia cards are more energy efficient.

Edit: I stand corrected, the 9070 series is much more energy efficient.

[-] iopq@lemmy.world 9 points 1 week ago* (last edited 1 week ago)

That's not quite true. AMD cards just get clocked higher from the factory, so when a 9070 XT beats a 5070 by an average of 17%, you can easily cap the power limit to match the performance. That's with more VRAM, which of course increases the power requirements.

The prices don't quite match up though, since the 9070 XT sits between the 5070 and the 5070 Ti (although in the US it's often more expensive for some reason).

The problem is that AMD is selling the chips to OEMs at a price that's too high for them to sell at MSRP, while giving a discount on small batches of MSRP models. It becomes a lottery where the quickest people can get $600 models by refreshing ever-rarer restocks.

One of the reasons is... tariffs, but I'm not sure how Nvidia got the prices down on its models

[-] Mihies@programming.dev 3 points 1 week ago

While measuring power efficiency is not an easy task, I doubt that AMD is better. Here is an older article: https://www.tomshardware.com/news/geforce-rtx-4070-vs-radeon-rx-6950-xt-which-gpu-is-better But you are right, things seem to be changing: https://gamersnexus.net/gpus/incredibly-efficient-amd-rx-9070-gpu-review-benchmarks-vs-9070-xt-rtx-5070 Hopefully AMD improves on efficiency in the future.

[-] eldebryn@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

The 9070 XT really is a fine exception for efficiency. I was able to cap mine at 230 W with a minor underclock, and it barely lost 5% performance compared to stock.

Even base models were clocked way too high, probably to help ensure a value-for-money victory over NVIDIA's midrange when reviewers were looking at it.

[-] Mihies@programming.dev 1 points 1 week ago

That's great.

[-] mybuttnolie@sopuli.xyz 30 points 1 week ago

Yes, HDMI 2.1. If you use a TV as a monitor, you won't get 4K 120 Hz with AMD cards on Linux because the HDMI Forum are assholes.

[-] Amaterasu@lemmy.world 6 points 1 week ago

That is a fair reason and a good reminder actually. Thanks!

[-] Core_of_Arden@lemmy.ml 29 points 1 week ago

I use AMD wherever possible, simply because they support Linux. There's really no other reason needed. I don't care about CUDA or anything else that isn't relevant to me. I'd rather drive a mid-range car that gives me freedom than a high-end car that ties me down.

[-] LeFantome@programming.dev 17 points 1 week ago

I think the answer is yes if you are shooting for the high end. AMD is better on cost/performance, but NVIDIA is still unchallenged for absolute performance if budget is not a consideration.

And if you need CUDA…

[-] Amaterasu@lemmy.world 4 points 1 week ago

I agree with that, because there is no offering from AMD that competes at the absolute high end.

[-] Lukemaster69@lemmy.ca 12 points 1 week ago

It is better to go with AMD because the AMD drivers are built into the ISO, so it's less of a headache for gaming.

[-] filister@lemmy.world 6 points 1 week ago

CUDA acceleration.

[-] LeFantome@programming.dev 6 points 1 week ago

Two pretty massive facts for anybody trying to answer this question:

  1. Since driver version 555, explicit sync has been supported. This makes a massive difference to the experience on Wayland. Most of the problems people report (e.g. black screens and flicker) are with drivers earlier than this.

  2. Since driver version 580, NVIDIA uses open-source modules to interact with the kernel. These are not open-source drivers; they are NVIDIA's proprietary drivers that should now "just work" across kernel upgrades (as AMD's have forever). This solves perhaps the biggest hassle of dealing with NVIDIA on Linux.

Whether you get to enjoy these significant improvements depends on how long it takes stuff to make it to your distribution. If you are on Arch, you have this stuff today. If you are on Debian, you are still waiting (even on Debian 13).

This is not an endorsement of either distro. They are simply examples of the two extremes regarding how current the software versions are in those distros. Most other distros fall somewhere in the middle.

All this stuff will make it to all Linux users eventually. They are solved problems. Just not solved for everyone.

[-] UntouchedWagons@lemmy.ca 3 points 1 week ago

Does KMS work with an nvidia gpu now? I remember ages ago the boot sequence would be stuck at 640x480 until X started.

[-] melfie@lemy.lol 4 points 1 week ago* (last edited 1 week ago)

NVIDIA definitely dominates for specialized workloads. Look at these Blender rendering benchmarks and notice AMD doesn't appear until page 3. I wish there were an alternative to NVIDIA OptiX that was as fast for path tracing, but there unfortunately is not. Buy an AMD card if you're just gaming, but you're unfortunately stuck with NVIDIA if you want to do path-traced rendering cost-effectively:

https://opendata.blender.org/benchmarks/query/?compute_type=OPTIX&compute_type=CUDA&compute_type=HIP&compute_type=METAL&compute_type=ONEAPI&group_by=device_name&blender_version=4.5.0

Edit:

Here’s hoping AMD makes it to the first page with next generation hardware like Radiance Cores:

https://wccftech.com/amd-unveils-radiance-cores-neural-arrays-universal-compression-next-gen-rdna-gpu-architecture/
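For reference, switching Cycles between those backends is just a preferences toggle; a rough sketch (assuming Blender 4.x's Python API, run headless with blender --background --python) might look like this:

```python
# Rough sketch: point Cycles at a GPU backend for headless rendering.
# "OPTIX" is the fast NVIDIA path-tracing backend from the benchmarks above;
# "HIP" is the AMD equivalent that shows up further down the list.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"  # use "HIP" on an AMD card
prefs.get_devices()                  # refresh the detected device list
for dev in prefs.devices:
    dev.use = True                   # enable every detected device

bpy.context.scene.cycles.device = "GPU"
bpy.ops.render.render(write_still=True)
```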

[-] Admetus@sopuli.xyz 3 points 1 week ago

I only play older games, open-source games (like Pioneer Space Sim, Luanti), and mostly emulate PS2 (could do PS3/PS4, you bet), so AMD is fine for my use case and works out of the box. I know Nvidia Linux support has improved, which means the latest graphics cards also pretty much work out of the box too. But on principle, I support AMD for the work they put into Linux.

[-] reagansrottencorpse@lemmy.ml 3 points 1 week ago

I'm putting together my new Nvidia PC build tonight. I was planning on putting Bazzite on it; should I just use Windows then?

[-] prole 4 points 1 week ago

No, you should at least try Bazzite first. I've seen people recently talking about how they have no issues with Nvidia and Linux.

[-] Turtle@aussie.zone 4 points 1 week ago

Nvidia cards work just fine on Linux; old issues are parroted around by people who don't know any better.

[-] cevn@lemmy.world 1 points 1 week ago

My Nvidia 1080 just failed driver upgrades on the most recent release of Fedora. Can't parrot that myself...

[-] notthebees@reddthat.com 2 points 1 week ago

Literally only CUDA. ROCm mostly works.

[-] pineapple@lemmy.ml 1 points 1 week ago

I am probably an anomaly here, but I had a really bad experience with Linux on an AMD card. The card would not output at all whenever there was something Linux-related going on. I could not fix the issue, and it did not matter which distro I tried. Windows worked totally fine, and the BIOS worked fine. I switched to my backup Nvidia card and all of a sudden Linux worked a treat.

[-] DarkAri 1 points 1 week ago

If you are buying, then you probably want AMD for Linux, but Nvidia seems to be working fine in my laptop, which has a 1070. The only thing I think people really should know is to use the proprietary driver, because the performance and stability are supposedly much better.

[-] daggermoon@lemmy.world 1 points 1 week ago* (last edited 1 week ago)

~~If you want to use Linux, please choose AMD. I helped install CachyOS on my sister's RTX 5080 system and it's horrible. 40% performance loss. She's going back to Windows.~~

Edit: Not entirely accurate
