submitted 2 days ago by SnotFlickerman to c/techtakes@awful.systems

Sam "wrong side of FOSS history" Altman must be pissing himself.

Direct Nitter Link:

https://nitter.lucabased.xyz/jiayi_pirate/status/1882839370505621655

[-] iltg@sh.itjust.works 1 points 10 hours ago

your statement is so extreme it becomes nonsensical too.

compilers will usually produce more highly optimized asm than you'd write by hand, but there's usually still some room to improve. it's not impossible that the deepseek team got some performance gains by hand-writing some hot sections directly in assembly. llvm has to "play it safe" because it doesn't know your use case; you do, so you can skip safety checks (stack canaries, overflow checks) or cleanups (eg, use memory arenas rather than realloc). you can tell LLVM not to emit those, but that tends to apply to the whole binary, which may not be desirable
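
rough sketch of the arena idea (toy code, ignoring alignment and error handling, names made up, nothing to do with whatever deepseek actually shipped):

```
#include <stdlib.h>
#include <stddef.h>

/* toy bump-allocator arena: one big allocation up front, then each
 * "alloc" is just a pointer bump -- no per-allocation bookkeeping,
 * no realloc, and the whole thing is freed in one shot at the end */
typedef struct {
    char  *base;
    size_t used;
    size_t cap;
} Arena;

static Arena arena_new(size_t cap) {
    Arena a = { (char *)malloc(cap), 0, cap };
    return a;
}

static void *arena_alloc(Arena *a, size_t n) {
    if (a->used + n > a->cap) return NULL;  /* caller sized the arena up front */
    void *p = a->base + a->used;
    a->used += n;
    return p;
}

static void arena_free(Arena *a) {
    free(a->base);  /* one free covers every allocation made from it */
    a->base = NULL;
    a->used = a->cap = 0;
}
```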

claiming c# gets faster than C because of jit is just ridiculous: you have to compile just in time! how would the runtime cost of jitting plus the resulting code end up faster than something compiled ahead of time? even if c# could reach the same optimization levels (and it can't: oop and the .net runtime), you still pay the jit cost, which ahead-of-time compiled code doesn't. also what are you on about with PGO, as if that buzzword suddenly makes everything as fast as C? the example they give is "devirtualization" of interfaces. it seems like C just doesn't have interfaces and does direct calls anyway? how would optimizing up to C's level make it faster than C?
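
to be concrete about what "devirtualization" is chasing vs what C code typically does anyway (toy example, names made up):

```
/* an interface/virtual call boils down to an indirect call through a
 * pointer; "devirtualization" tries to turn that back into a direct call.
 * in C you mostly just write the direct call to begin with. */

int add(int a, int b) { return a + b; }

typedef int (*binop_fn)(int, int);

int call_indirect(binop_fn f, int a, int b) {
    return f(a, b);       /* indirect: target only known at runtime */
}

int call_direct(int a, int b) {
    return add(a, b);     /* direct: the compiler can inline this */
}
```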

you just come off as a bit entitled and captured by MS bullshit claims

[-] bitofhope@awful.systems 7 points 8 hours ago

GPU programs (specifically CUDA, although other vendors' stacks are similar) combine code for the host system, written in a conventional programming language (typically C++), with code for the GPU, written in the CUDA language. Even if the C++ code for the host system can be optimized with hand-written assembly, that's not going to lead to significant gains when the performance bottleneck is on the GPU side.
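
A minimal sketch of that split, just to show where the code lives (toy kernel, invented sizes, nothing to do with DeepSeek's actual code):

```
#include <cstdio>
#include <cuda_runtime.h>

// Device code: runs on the GPU. nvcc compiles this to PTX, which the
// driver/toolchain lowers to the GPU's native instruction set.
__global__ void scale(float *x, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= factor;
}

// Host code: ordinary C++ running on the CPU. Hand-optimizing this part
// buys very little if the kernel above is where the time is spent.
int main() {
    const int n = 1 << 20;
    float *x = nullptr;
    cudaMallocManaged(&x, n * sizeof(float));
    for (int i = 0; i < n; ++i) x[i] = 1.0f;

    scale<<<(n + 255) / 256, 256>>>(x, 2.0f, n);
    cudaDeviceSynchronize();

    printf("x[0] = %f\n", x[0]);
    cudaFree(x);
    return 0;
}
```

All the interesting performance work happens in (and around the launch parameters of) the kernel, not in main().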

The CUDA compiler translates the high-level CUDA code into something called PTX, machine code for a "virtual ISA", which is then translated by the GPU driver into native machine language for the proprietary instruction set of the GPU. This is roughly comparable to a compiler intermediate representation, such as LLVM IR. It's plausible that hand-written PTX assembly/IR could have been used to optimize parts of the program, but that would be somewhat unusual.
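
For what it's worth, CUDA does let you embed hand-written PTX in device code via inline assembly, which is probably the most plausible form "hand-written assembly" would take here. A trivial example of the mechanism (not useful in itself):

```
// Reads the %laneid special register via a hand-written PTX instruction
// embedded in CUDA C++ device code.
__device__ unsigned int lane_id() {
    unsigned int lane;
    asm volatile("mov.u32 %0, %%laneid;" : "=r"(lane));
    return lane;
}
```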

For yet another layer of assembly/machine language, they could technically have reverse engineered the actual native ISA of the GPU core and written machine code for it, bypassing the compiler in the driver. This is also quite unlikely, as it would practically mean writing their own driver for latest-gen Nvidia cards that vastly outperforms the official one, and that would be at least as big a news story as Yet Another Slightly Better Chatbot.

While a JIT and a runtime do have overhead compared to directly compiled native machine code, that overhead is relatively small, approximately constant, and easily amortized if the JIT is able to optimize a tight loop. For car analogy enjoyers, imagine a racecar that takes ten seconds to start moving from the starting line in exchange for completing a lap one second faster. If the race is more than ten laps long, the tradeoff is worth it, and even more so the longer the race. Ahead-of-time optimizations can do the same thing at the cost of portability, but unless you're running Gentoo, most of the C programs on your computer are likely compiled for the lowest common denominator of whatever x86/AMD64/ARM instruction sets your OS happens to support.
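
To put numbers on the racecar analogy (all of them invented, purely illustrative):

```
#include <stdio.h>

int main() {
    const double jit_startup = 10.0;  // seconds lost up front
    const double lap_aot     = 60.0;  // seconds per lap, ahead-of-time build
    const double lap_jit     = 59.0;  // seconds per lap once the JIT has warmed up

    for (int laps = 5; laps <= 15; ++laps) {
        double aot = laps * lap_aot;
        double jit = jit_startup + laps * lap_jit;
        const char *ahead = jit < aot ? "JIT ahead" : (jit > aot ? "AOT ahead" : "tie");
        printf("%2d laps: AOT %.0fs, JIT %.0fs (%s)\n", laps, aot, jit, ahead);
    }
    return 0;
}
```

Break-even lands at ten laps, and past that the ten-second handicap just keeps shrinking relative to the total.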

If the overhead of a JIT and runtime is significant in the overall performance of the program, it's probably a small program to begin with. No shame in small programs, but unless you're running it very frequently, it's unlikely to matter whether the execution takes five or fifty milliseconds.

[-] froztbyte@awful.systems 4 points 8 hours ago

For yet another layer of assembly/machine language, they could technically have reverse engineered the actual native ISA of the GPU core and written machine code for it, bypassing the compiler in the driver. This is also quite unlikely, as it would practically mean writing their own driver for latest-gen Nvidia cards that vastly outperforms the official one

yeah, and it'd be a pretty fucking immense undertaking, since it'd mean the driver and the application code and everything else (scheduling, etc etc). again, it's not impossible, and there's been significant headway across multiple parts of the industry to make this kind of thing more achievable... but it's also an extremely niche, extremely focused, hard-to-port thing, and I suspect that if they had actually done this they'd be shouting about it loudly in every possible PR outlet

a look at other high-optimisation fields, from the mechanical-sympathy crowd that came out of HFT all the way through to modern use of FPGAs in high-perf runtime environments, gives a good backgrounder on the kind of effort this shit costs, and thus gives me some extra reasons to doubt the claims kicking around (along with the fact that everyone seems to just be making shit up)

[-] skillissuer@discuss.tchncs.de 5 points 7 hours ago

yeah, would you look at this https://www.tomshardware.com/tech-industry/artificial-intelligence/deepseek-might-not-be-as-disruptive-as-claimed-firm-reportedly-has-50-000-nvidia-gpus-and-spent-usd1-6-billion-on-buildouts

However, industry analyst firm SemiAnalysis reports that the company behind DeepSeek incurred $1.6 billion in hardware costs and has a fleet of 50,000 Nvidia Hopper GPUs

[-] froztbyte@awful.systems 3 points 6 hours ago

yep, a completely normal amount of non-specialist hardware that basically everyone has in their back shed. you just don't turn it on all the time because the neighbours keep complaining about the fan noise. practically anyone could do this!

[-] froztbyte@awful.systems 5 points 10 hours ago

for the love of god read the sidebar
