[-] lagrangeinterpolator@awful.systems 9 points 2 days ago* (last edited 2 days ago)

This is what happens when your worldview is based on anime.

(A lot of anime has heavy themes, but most people understand that it's not real life, just like all such art. Unlike Yud, most people's worldviews on coding and math are based on actual coding and math.)

We can see that one 9 of availability is 90% = 0.9, two 9s is 99% = 0.99, three 9s is 99.9% = 0.999, etc. In general, for positive integers n, n 9s of availability is 1 - (1/10)^n, and we can extrapolate that to non-integer values of n. The value γ needed for 87.5% availability is the solution to 1 - (1/10)^γ = 7/8, or γ = log_10(8) = 0.903089987. γ is transcendental by Gelfond-Schneider (see this for a reference proof).
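The extrapolation is easy to sanity-check in a couple of lines of Python (a throwaway sketch, nothing official):

```python
import math

# Availability with n nines of uptime: 1 - (1/10)**n
def availability(n: float) -> float:
    return 1 - 10 ** (-n)

# Solving 1 - 10**(-gamma) = 7/8 gives 10**gamma = 8, i.e. gamma = log10(8).
gamma = math.log10(8)

print(availability(1))      # 0.9  (one nine)
print(availability(2))      # 0.99 (two nines)
print(gamma)                # ≈ 0.903089987
print(availability(gamma))  # ≈ 0.875
```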

Right now, Sora is at zero 9s of availability.

[-] lagrangeinterpolator@awful.systems 8 points 3 days ago* (last edited 3 days ago)

By far the dumbest "feature" in the codebase is this thing called "Buddy" (described in a few places such as here). Honestly, I don't really know what it's for or what the point is.

BUDDY - A Tamagotchi Inside Your Terminal

I am not making this up.

Claude Code has a full Tamagotchi-style companion pet system called "Buddy." A deterministic gacha system with species rarity, shiny variants, procedurally generated stats, and a soul description written by Claude on first hatch like OpenClaw.

...

On top of that, there's a 1% shiny chance completely independent of rarity. So a Shiny Legendary Nebulynx has a 0.01% chance of being rolled. Dang.
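For the record, the quoted 0.01% figure is just independent probabilities multiplying. The docs only give the combined number, so the 1% Legendary base rate below is inferred, not stated:

```python
# Inferred base rate: 0.01% combined / 1% shiny implies Legendary is 1%.
P_LEGENDARY = 0.01
P_SHINY = 0.01  # stated to be independent of rarity

# Independent events multiply:
p_shiny_legendary = P_LEGENDARY * P_SHINY
print(p_shiny_legendary)  # ≈ 0.0001, i.e. 0.01%, about 1 in 10,000 rolls
```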

Great, so they were planning on a gacha system where you can get an ASCII virtual pet that, uhh, occasionally makes comments? Truly a serious feature for a serious tool for the serious discipline of software engineering. Imagine if IntelliJ decided to pull this bullshit.

But also, Claude Code is leaning hard into the mechanics of gambling addiction, the “Hooked” model: give the user an intermittent, variable reward, and they keep coming back hoping for the big win. It turns them into gambling addicts.

The Onion could not have come up with a better way to illustrate this very point.

Good luck telling the promptfondlers that LLMs are only useful for entertainment and not for any useful work.

I'm sure these English instructions work because they feel like they work. Look, these LLMs feel really great for coding. If they don't work, that's because you didn't pay $200/month for the pro version and you didn't put enough boldface and all-caps words in the prompt. Also, I really feel like these homeopathic sugar pills cured my cold. I got better after I started taking them!

No joke, I watched a talk once where some people used an LLM to model how certain users would behave in their scenario given their socioeconomic backgrounds. But they had a slight problem, which was that LLMs are nondeterministic and would of course often give different answers when prompted twice. Their solution was to literally use an automated tool that would try a bunch of different prompts until they happened to get one that would give consistent answers (at least on their dataset). I would call this the xkcd green jelly bean effect, but I guess if you call it "finetuning" then suddenly it sounds very proper and serious. (The cherry on top was that they never actually evaluated the output of the LLM, e.g. by seeing how consistent it was with actual user responses. They just had an LLM generate fiction and called it a day.)
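The jelly-bean effect here is easy to simulate. Below, every candidate "prompt" is pure noise, yet searching over enough of them still turns up one that looks impressively consistent on a small dataset (a toy sketch; the prompt and question counts are made up):

```python
import random

random.seed(0)

# Toy model of "try prompts until one gives consistent answers":
# every prompt is a coin flip per question, so none is actually reliable.
N_PROMPTS, N_QUESTIONS = 200, 20

def run(prompt_id: int) -> list[int]:
    # Two independent runs of the same prompt give unrelated answers.
    return [random.randint(0, 1) for _ in range(N_QUESTIONS)]

def consistency(prompt_id: int) -> float:
    # Fraction of questions where two runs happen to agree.
    a, b = run(prompt_id), run(prompt_id)
    return sum(x == y for x, y in zip(a, b)) / N_QUESTIONS

scores = [consistency(p) for p in range(N_PROMPTS)]
print(sum(scores) / N_PROMPTS)  # ~0.5: any single prompt is a coin flip
print(max(scores))              # the "winning" prompt looks far better
```

Select the best-scoring prompt from enough candidates and it will look consistent by construction, which says nothing about whether it models anything.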

[-] lagrangeinterpolator@awful.systems 18 points 2 weeks ago* (last edited 2 weeks ago)

The article's entire premise is Musk saying some random shit. Remember how, 13 years ago, Musk said he would land a man on Mars within 10 years? Honestly, I am incensed that people like Musk and Trump can just say shit and so many people will simply accept it. I can no longer tolerate it.

Putting aside the very real human ability to screw up such a concept and turn any fair system into an unfair one, ...

He says this after mentioning UBI. He really doesn't want to confront the unfortunate fact that UBI is entirely a political issue. Whatever magical beliefs one may have about how AI can create wealth, the question of how to distribute it is a social arrangement. What exactly stops the wealthy from consolidating all that wealth for themselves? The goodness of their hearts? Or is it political pushback (and violence in the bad old days), as demonstrated in every single example we have in history?

I'd say the problem is even worse now. In previous eras, some wealthy people funded libraries and parks. Nowadays we see them donate to weirdo rationalist nonsense that is completely disconnected from reality.

No getting up early and commuting on public transit. ...

This is followed by four whole paragraphs about how the office sucks and wouldn't it be wonderful if AI got rid of all that. Guess what, we have remote work already! Remember how, during COVID, many software engineering jobs went fully remote, and it turned out that the work was perfectly doable and the workers' lives improved? But then there were so many puff pieces by managers about the wonderful environment of the office, and back to the office they went. Don't worry, when the magical AI is here, they'll change their minds.

Yes, there are "mindless, stupid, inane things" like chores that are unavoidable. There are also other mindless, stupid, inane things that are entirely avoidable but exist anyway because some people base their entire lives around number go up.

[-] lagrangeinterpolator@awful.systems 20 points 1 month ago* (last edited 1 month ago)

I decided to take a look at the bitcoin white paper.

Usually, the introduction of a technical paper is fluff and people quickly move on to the technical parts. However, the casual claims made in the first paragraph of this paper have aged extremely poorly, to say the least. In a better world, Bitcoin would have remained as an obscure academic toy, and this introduction would have remained fluff.

While the system works well enough for most transactions, it still suffers from the inherent weaknesses of the trust based model.

What weaknesses are there in the trust based model? Let's find out!

Completely non-reversible transactions are not really possible, since financial institutions cannot avoid mediating disputes. The cost of mediation increases transaction costs, limiting the minimum practical transaction size and cutting off the possibility for small casual transactions, and there is a broader cost in the loss of ability to make non-reversible payments for non-reversible services. With the possibility of reversal, the need for trust spreads. Merchants must be wary of their customers, hassling them for more information than they would otherwise need.

It seems like this guy really loves non-reversible transactions! But as we've seen with the history of crypto, non-reversible transactions sound really good until you fall victim to a crypto scam and there is no way to appeal to the bank to reverse the charges. Reversibility actually increases trust because you no longer need to be absolutely certain that you're dealing with an honest person.

A certain percentage of fraud is accepted as unavoidable.

Almost like that is a problem of human nature. And it's not like cryptocurrency has a spotless record when dealing with fraud! The problem with fraud is not the third party (the bank), but with the second party (the merchant or customer you're dealing with).

The introduction is not long, and most of the paper concerns the technical details of the construction of Bitcoin. By itself, there really is no way to complain about a pile of definitions. But there are still offhand comments that have aged poorly.

A block header with no transactions would be about 80 bytes. If we suppose blocks are generated every 10 minutes, 80 bytes * 6 * 24 * 365 = 4.2MB per year. With computer systems typically selling with 2GB of RAM as of 2008, and Moore's Law predicting current growth of 1.2GB per year, storage should not be a problem even if the block headers must be kept in memory.

But why would you want a block header with no transactions? If you wanted to, I don't know, replace the world's financial system, you would need to handle millions of transactions every 10 minutes. How big would the blocks be then? And remember that many copies of the same blockchain would need to be stored (certainly, every miner would need to store a copy). How many thousands or millions of times would that multiply things?
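We can extend the paper's own back-of-envelope arithmetic to blocks that actually contain transactions. The ~250 bytes per transaction below is an assumption (a commonly cited average), not a figure from the paper:

```python
# Extending the white paper's arithmetic: 80-byte headers, one block
# every 10 minutes, and an assumed ~250 bytes per transaction.
HEADER_BYTES = 80
TX_BYTES = 250
BLOCKS_PER_YEAR = 6 * 24 * 365  # 52,560

def chain_growth_gb_per_year(tx_per_block: float) -> float:
    return (HEADER_BYTES + tx_per_block * TX_BYTES) * BLOCKS_PER_YEAR / 1e9

print(chain_growth_gb_per_year(0))          # headers only: ~0.0042 GB, the paper's 4.2MB
print(chain_growth_gb_per_year(1_000_000))  # 1M tx per block: ~13,140 GB per year
```

And that ~13 TB per year is per copy, multiplied across every full node that stores the chain.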

Businesses that receive frequent payments will probably still want to run their own nodes for more independent security and quicker verification.

Turns out it was a bold assumption to think that businesses would just run their own bitcoin miners.

The proof of security (Section 11) is extremely sketchy by modern standards. (They're assuming that all attackers would follow a certain format to attack and not try something different. I get it, proper proofs of security in cryptography are very subtle and difficult.) There is also a page of fluff making random calculations with the Poisson distribution. In any case, the security of Bitcoin requires that the collective computational power of the defenders exceeds the power of any attacker (so the defenders can make new blocks faster).
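Those Poisson calculations boil down to one formula: the probability that an attacker controlling a fraction q of the hash power ever catches up from z blocks behind. A direct transcription (my reading of Section 11, so treat it as a sketch):

```python
import math

# Section 11 catch-up probability: attacker has fraction q of the hash
# power, honest chain is z blocks ahead.
def attacker_success(q: float, z: int) -> float:
    p = 1.0 - q
    if q >= p:
        return 1.0          # a majority attacker always catches up
    lam = z * q / p         # expected attacker progress (Poisson mean)
    s = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        s -= poisson * (1 - (q / p) ** (z - k))
    return s

print(attacker_success(0.10, 5))   # small minority: vanishing odds
print(attacker_success(0.45, 5))   # near-majority: quite likely to succeed
```

Note that the whole analysis only covers this one race-to-catch-up strategy, which is exactly the sketchiness complained about above.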

Bitcoin is very strange as a cryptographic system in that the defender must have more resources than any possible attacker. In most cryptographic systems, the system should be secure even if the attacker has vastly more resources than the defender. Your phone's cryptography should be secure even if some government agency dedicated their supercomputers to try and break it. This means that Bitcoin must waste tons of energy, since that is required to maintain security. Any more energy dumped into it will only increase security and not make the actual transactions faster, which makes Bitcoin horrendously inefficient.

As a purely academic idea in cryptography, it is an interesting curiosity, but the arguments for why it's useful are sketchy. There are other such curiosities that are much more interesting, like homomorphic encryption or secure multiparty computation. It would be a nice line on a CV, but not "incredible".

The true significance of Bitcoin was the terrible libertarian economic argument for it, and the chain of events that would transform it into nothing more than a speculative fashion trend. It has nothing to do with the technical details of Bitcoin. The technical and economic arguments for Bitcoin turned out to be so weak that nowadays, the only real support for Bitcoin is that maybe you can sell it for a higher price to a greater fool.

“California is, I believe, the only state to give health insurance to people who come into the country illegally,” Kauffman said nervously. “I think we probably should not be providing that.”

“So you’d rather everyone just be sick, and get everyone else sick?” another reporter asked.

“That’s not what I’m saying,” said Kauffman.

“Isn’t that effectively what happens?” the reporter countered. “They don’t have access to health care and they just have to get sick, right?”

Kauffman contemplated that one for a moment. “Then they have to just get sick,” he said. “I mean, it’s unfortunate, but I think that it’s sort of impossible to have both liberal immigration laws and generous government benefits.”

Do I need to comment on this one?

[-] lagrangeinterpolator@awful.systems 18 points 3 months ago* (last edited 3 months ago)

It is how professors talk to each other in ... debate halls? What the fuck? Yud really doesn't have any clue how universities work.

I am a PhD student right now so I have a far better idea of how professors talk to each other. The way most professors (in math/CS at least) communicate in a spoken setting is through giving talks at conferences. The cool professors use chalkboards, but most people these days use slides. As it turns out, debates are really fucking stupid for scientific research for so many reasons.

  1. Science assumes good faith out of everyone, and debates are needlessly adversarial. This is why everyone just presents and listens to talks.
  2. Debates are actually really bad for the kind of deep analysis and thought needed to understand new research. If you want to seriously consider novel ideas, it's not so easy when you're expected to come up with a response in the next few minutes.
  3. Debates generally favor people who use good rhetoric and can package their ideas more neatly, not the people who really have more interesting ideas.
  4. If you want to justify a scientific claim, you do it with experiments and evidence (or a mathematical proof when applicable). What purpose does a debate serve?

I think Yud's fixation on debates and "winning" reflects what he thinks of intellectualism. For him, it is merely a means to an end. The real goal is to be superior and beat up other people.

Just had a conversation about AI where I sent a link to Eddy Burback's ChatGPT Made Me Delusional video. They clarified that no, it's only smart people who are more productive with AI since they can filter out all the bad outputs, and only dumb people would suffer all the negative effects. I don't know what to fucking say.

[-] lagrangeinterpolator@awful.systems 16 points 5 months ago* (last edited 5 months ago)

More AI bullshit hype in math. I only saw this just now so this is my hot take. So far, I'm trusting this r/math thread the most as there are some opinions from actual mathematicians: https://www.reddit.com/r/math/comments/1o8xz7t/terence_tao_literature_review_is_the_most/

Context: Paul Erdős was a prolific mathematician who had more of a problem-solving style of math (as opposed to a theory-building style). As you would expect, he proposed over a thousand problems for the math community that he couldn't solve himself, and several hundred of them remain unsolved. With the rise of the internet, someone had the idea to compile and maintain the status of all known Erdős problems in a single website (https://www.erdosproblems.com/). This site is still maintained by this one person, which will be an important fact later.

Terence Tao is a present-day prolific mathematician, and in the past few years, he has really tried to take AI with as much good faith as possible. Recently, some people used AI to search up papers with solutions to some problems listed as unsolved on the Erdős problems website, and Tao points this out as one possible use of AI. (I personally think there should be better algorithms for searching literature. I also think conflating this with general LLM claims and the marketing term of AI is bad-faith argumentation.)

You can see what the reasonable explanation is. Math is such a large field now that no one can keep tabs on all the progress happening at once. The single person maintaining the website missed a few problems that got solved (he didn't see the solutions, and/or the authors never bothered to inform him). But of course, the AI hype machine got going real quick. GPT5 managed to solve 10 unsolved problems in mathematics! (https://xcancel.com/Yuchenj_UW/status/1979422127905476778#m, original is now deleted due to public embarrassment) Turns out GPT5 just searched the web/training data for solutions that have already been found by humans. The math community gets a discussion about how to make literature more accessible, and the rest of the world gets a scary story about how AI is going to be smarter than all of us.

There are a few promising signs that this is getting shut down quickly (even Demis Hassabis, CEO of DeepMind, thought that this hype was blatantly obvious). I hope this is a bigger sign for the AI bubble in general.

EDIT: Turns out it was not some rando spreading the hype, but an employee of OpenAI. He has taken his original claim back, but not without trying to defend what he can by saying AI is still great at literature review. At this point, I am skeptical that this even proves AI is great at that. After all, the issue was that a website maintained by a single person had not updated the status of 10 problems inside a list of over 1000 problems. Do we have any control experiments showing that a conventional literature review would have been much worse?

[-] lagrangeinterpolator@awful.systems 16 points 8 months ago* (last edited 8 months ago)

OpenAI claims that their AI can get a gold medal on the International Mathematical Olympiad. The public models still do poorly even after spending hundreds of dollars in computing costs, but we've got a super secret scary internal model! No, you cannot see it, it lives in Canada, but we're gonna release it in a few months, along with GPT5 and Half-Life 3. The solutions are also written in an atrociously unreadable manner, which just shows how advanced and experimental our model is, and definitely isn't there to let a generous grader hand out a high score. (It would be real interesting if OpenAI had a tool that could rewrite something with better grammar, hmmm....) I definitely trust OpenAI's major announcements here, they haven't lied about anything involving math before and certainly wouldn't have every incentive in the world to continue lying!

It does feel a little unfortunate that some critics like Gary Marcus are somewhat taking OpenAI's claims at face value, when in my opinion, the entire problem is that nobody can independently verify any of their claims. If a tobacco company released a study about the effects of smoking on lung cancer and neglected to provide any experimental methodology, my main concern would not be the results of that study.

Edit: A really funny observation that I just thought of: in the OpenAI guy's thread, he talks about how former IMO medalists graded the solutions in message #6 (presumably to show that they were graded impartially), but then in message #11 he is proud to have many past IMO participants working at OpenAI. Hope nobody puts two and two together!

