Unfortunately, our problem right now is not Donna the below-average Democrat but Donald the fascist. And when it comes to fascists I do not ask if they are above or below average.

[-] lagrangeinterpolator@awful.systems 15 points 3 days ago* (last edited 3 days ago)

The fire code thing really is an excellent example of LessWrong Brain. Fire truck drivers insist on needlessly large trucks (no citation) which makes roads 30% wider than they would otherwise be (no citation) which has "probably" "non-trivially" contributed to larger cars (no citation) leading to enough additional road fatalities to cancel out the lives saved by stricter fire codes (no citation).

The LessWrong Brain argument starts with a deliberately contrarian conclusion and proves it with a Rube Goldberg chain of logical syllogisms. Of course, citations are strictly optional, and they are free to misinterpret them as they see fit. The only real standard of each claim is "looks good to me", but you are supposed to be impressed that they managed to string a dozen of them together to reveal some shocking, deep truth of the world that nobody else knows about. The AI 2027 nonsense is an infamous example of this.

He uses the word "fermi" which is cult jargon based on Fermi estimation, a.k.a. guessing shit with back-of-the-envelope calculations. Not exactly what you want if you want to convince people to reform fire codes, especially if you have zero citations for anything.
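For anyone unfamiliar with the jargon, here is everything a "fermi" amounts to: multiply a chain of guesses together and admire the precise-looking number that falls out. Every input below is invented, which is rather the point.

```python
# A "fermi" for the fire code argument: multiply a chain of guesses.
# Every number here is made up, which is rather the point.
roads_widened_pct = 0.30      # guess: roads 30% wider because of fire trucks
effect_on_car_size = 0.1      # guess: fraction of car growth attributable
extra_fatality_rate = 0.05    # guess: fatality increase from bigger cars
annual_road_deaths = 40_000   # roughly the US figure

deaths_blamed_on_fire_trucks = (annual_road_deaths
                                * roads_widened_pct
                                * effect_on_car_size
                                * extra_fatality_rate)
print(deaths_blamed_on_fire_trucks)  # a precise-looking 60.0 from pure guesses
```

Garbage in, three significant figures out.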

I guess people just aren't rational enough, and the only reason the fire codes are so irrational is because people are emotional about fire codes. Firefighters are apparently revered as heroes, when it is the LWers who should be the heroes. After all, firefighters merely save people from fires, while LWers buy multimillion dollar mansions to talk about saving quadrillions of hypothetical people from hypothetical basilisks!

It's fine, spyware is only a risk when it's bad people's spyware. It's totally fine when it's Anthropic™-approved spyware!

As for Mythos catching things, maybe they should have used Mythos on their very own Claude Code, considering that it has hilariously obvious security exploits, such as this one, which inserts an arbitrary string into a shell command. Actually, never mind, I don't see anything wrong here, maybe we should burn another $20k in electricity running Mythos on it to find out.
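To be clear, this is the bug class we're talking about (an illustrative sketch, not the actual Claude Code exploit): interpolate an untrusted string into a shell command string and the input can smuggle in extra commands.

```python
import shlex

# Illustrative sketch of the bug class described (not the actual
# Claude Code exploit): interpolating untrusted input into a shell
# command string lets the input smuggle in extra commands.
filename = "notes.txt; echo pwned"        # attacker-controlled input

unsafe = f"cat {filename}"                # a shell would run the injected echo
safe_argv = ["cat", filename]             # argv list: no shell parsing at all
quoted = f"cat {shlex.quote(filename)}"   # or quote it if a shell is required

print(unsafe)   # cat notes.txt; echo pwned
print(quoted)   # cat 'notes.txt; echo pwned'
```

This is in every intro security course, which is why "hilariously obvious" is the right description.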

[-] lagrangeinterpolator@awful.systems 9 points 6 days ago* (last edited 6 days ago)

In basically every case in history where people decided to kill a bad king, there was a period of chaos and violence that followed it. The killing of Charles I happened during the English Civil War, and the killing of Louis XVI happened during the French Revolution. This has happened many times in Chinese history, with the fall of an imperial dynasty leading to several decades of civil war (most recently in the early 1900s). But I guess if you have a big clever brain with big clever thoughts, you don't need to look at history.

If the only way to get rid of a bad king is to kill him, he will do anything he can to defend his power, including using as much violence as necessary. (People generally do not like being killed.) Even if you successfully get rid of him, good luck establishing a proper government afterwards with all the violence you've caused. And who knows if the new king is gonna be better or worse? A better system would instead have a mechanism that replaces officials on a regular basis, say every few years, and ensure that these replacements are peaceful. Oh wait, that's liberal democracy. If we do something boring like support democracy, how will people ever think of us as special, clever thinkers with bold, contrarian thoughts?

It’s still One Person. A mortal, fleshy person. Their defence is that they’re inoffensive, things are stable, nothing is directly their fault and people are bound by law and oath.

Bro, your system involves giving all the power to one person. You cannot then say they have no responsibility or that they're "inoffensive" when they abuse it.

I've seen this story play out in software engineering: people were very impressed when the AI did unexpectedly well in one out of 50 attempts on an easy task, so they decided to trust it for everything and turned their codebases into disasters. There was no great wave of new high-quality software. Instead, the only real result was that existing software became far more buggy and insecure.

Now we have people using AI in science and math because it was impressive in random demonstrations of solving math problems. I now have friends asking me why I'm not using AI, and also saying that AI will be better than all mathematicians in 30 years or whatever. Do you really think I refuse to use AI out of ignorance? No, I know too much about it! I have seen the same story play out in software engineering, and what makes this any different?

[-] lagrangeinterpolator@awful.systems 18 points 1 month ago* (last edited 1 month ago)

The article's entire premise is Musk saying some random shit. Remember how, 13 years ago, Musk said he would land a man on Mars within 10 years? Honestly, I am incensed that people like Musk and Trump can just say shit and many people will just accept it. I can no longer tolerate it.

Putting aside the very real human ability to screw up such a concept and turn any fair system into an unfair one, ...

He says this after mentioning UBI. He really doesn't want to confront the unfortunate fact that UBI is entirely a political issue. Whatever magical beliefs one may have about how AI can create wealth, the question of how to distribute it is a social arrangement. What exactly stops the wealthy from consolidating all that wealth for themselves? The goodness of their hearts? Or is it political pushback (and violence in the bad old days), as demonstrated in every single example we have in history?

I'd say the problem is even worse now. In previous eras, some wealthy people funded libraries and parks. Nowadays we see them donate to weirdo rationalist nonsense that is completely disconnected from reality.

No getting up early and commuting on public transit. ...

This is followed by four whole paragraphs about how the office sucks and wouldn't it be wonderful if AI got rid of all that. Guess what, we have remote work already! Remember how, during COVID, many software engineering jobs went fully remote, and it turned out that the work was perfectly doable and the workers' lives improved? But then there were so many puff pieces by managers about the wonderful environment of the office, and back to the office they went. Don't worry, when the magical AI is here, they'll change their minds.

Yes, there are "mindless, stupid, inane things" like chores that are unavoidable. There are also other mindless, stupid, inane things that are entirely avoidable but exist anyway because some people base their entire lives around number go up.

[-] lagrangeinterpolator@awful.systems 20 points 1 month ago* (last edited 1 month ago)

I decided to take a look at the bitcoin white paper.

Usually, the introduction of a technical paper is fluff and people quickly move on to the technical parts. However, the casual claims made in the first paragraph of this paper have aged extremely poorly, to say the least. In a better world, Bitcoin would have remained an obscure academic toy, and this introduction would have remained fluff.

While the system works well enough for most transactions, it still suffers from the inherent weaknesses of the trust based model.

What weaknesses are there in the trust based model? Let's find out!

Completely non-reversible transactions are not really possible, since financial institutions cannot avoid mediating disputes. The cost of mediation increases transaction costs, limiting the minimum practical transaction size and cutting off the possibility for small casual transactions, and there is a broader cost in the loss of ability to make non-reversible payments for non-reversible services. With the possibility of reversal, the need for trust spreads. Merchants must be wary of their customers, hassling them for more information than they would otherwise need.

It seems like this guy really loves non-reversible transactions! But as we've seen with the history of crypto, non-reversible transactions sound really good until you fall victim to a crypto scam and there is no way to appeal to the bank to reverse the charges. Reversibility actually lowers the amount of trust required, because you no longer need to be absolutely certain that you're dealing with an honest person.

A certain percentage of fraud is accepted as unavoidable.

Almost like that is a problem of human nature. And it's not like cryptocurrency has a spotless record when dealing with fraud! The problem with fraud is not the third party (the bank), but with the second party (the merchant or customer you're dealing with).

The introduction is not long, and most of the paper concerns the technical details of the construction of Bitcoin. A pile of definitions is hard to complain about by itself. But there are still dumb comments that have aged poorly.

A block header with no transactions would be about 80 bytes. If we suppose blocks are generated every 10 minutes, 80 bytes * 6 * 24 * 365 = 4.2MB per year. With computer systems typically selling with 2GB of RAM as of 2008, and Moore's Law predicting current growth of 1.2GB per year, storage should not be a problem even if the block headers must be kept in memory.

But why would you want a block header with no transactions? If you wanted to, I don't know, replace the world's financial system, you would need to handle millions of transactions every 10 minutes. How big would the blocks be then? And remember that many copies of the same blockchain would need to be stored (certainly, every miner would need to store a copy). How many thousands or millions of times would that multiply things?
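Running the whitepaper's own arithmetic forward with transactions included makes the problem obvious. The per-transaction size and the daily volume below are my assumptions (roughly Visa-scale throughput), not figures from the paper.

```python
# The whitepaper's header-only arithmetic, then the same estimate with
# actual transactions included. TX_BYTES and TX_PER_DAY are assumed
# figures for a global-scale payment system, not numbers from the paper.
HEADER_BYTES = 80                 # block header size, per the whitepaper
BLOCKS_PER_YEAR = 6 * 24 * 365    # one block every 10 minutes
TX_BYTES = 250                    # assumed average transaction size
TX_PER_DAY = 1_000_000_000        # assumed volume (~7M tx per 10 minutes)

headers_only = HEADER_BYTES * BLOCKS_PER_YEAR          # the paper's 4.2 MB/year
full_chain = headers_only + TX_BYTES * TX_PER_DAY * 365

print(f"headers only: {headers_only / 1e6:.1f} MB/year")   # ~4.2 MB
print(f"with transactions: {full_chain / 1e12:.1f} TB/year")  # ~91 TB
```

And that is the storage for a single copy of the chain, before multiplying by every node that has to keep one.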

Businesses that receive frequent payments will probably still want to run their own nodes for more independent security and quicker verification.

Turns out it was a bold assumption to think that businesses would just run their own bitcoin miners.

The proof of security (Section 11) is extremely sketchy by modern standards: it assumes that all attackers follow one particular attack strategy rather than trying anything else. (I get it, proper proofs of security in cryptography are very subtle and difficult.) There is also a page of fluff making random calculations with the Poisson distribution. In any case, the security of Bitcoin requires that the collective computational power of the defenders exceed the power of any attacker, so the defenders can make new blocks faster.
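For reference, that page of Poisson fluff boils down to a few lines: the probability that an attacker controlling fraction q of the hash power ever catches up from z blocks behind, per Section 11 of the whitepaper.

```python
import math

def attacker_success(q, z):
    """Section 11 of the whitepaper: probability that an attacker with
    fraction q of total hash power catches up from z blocks behind.
    Honest progress is modeled as Poisson; the attacker's catch-up is a
    gambler's-ruin random walk with per-step success (q/p)."""
    p = 1.0 - q
    lam = z * (q / p)
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam**k / math.factorial(k)
        prob -= poisson * (1 - (q / p) ** (z - k))
    return prob

# Matches the paper's table: q=0.1, z=5 gives about 0.0009137
print(attacker_success(0.1, 5))
```

Note what the calculation silently bakes in: the attacker only ever races honestly on the longest chain, which is exactly the "all attackers follow one strategy" assumption.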

Bitcoin is very strange as a cryptographic system in that the defenders must collectively have more resources than any possible attacker. In most cryptographic systems, the defender is secure even if the attacker has vastly more resources: your phone's cryptography should hold up even if some government agency dedicated its supercomputers to breaking it. This means Bitcoin must burn tons of energy, because that expenditure is precisely what maintains its security. And the energy buys only security margin, not speed; dumping more in does nothing to make the actual transactions faster, which makes Bitcoin horrendously inefficient.
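A toy sketch of why more energy never buys throughput (this is not Bitcoin's actual retargeting code, and the transactions-per-block figure is an assumption): difficulty scales with hash power to hold the block interval at 10 minutes, so hash power cancels out of the throughput entirely.

```python
# Toy model, not Bitcoin's real retargeting algorithm: difficulty adjusts
# to hold the block interval near 10 minutes, so transaction throughput
# is fixed no matter how much hash power (i.e. energy) is added.
TARGET_INTERVAL_S = 600
MAX_BLOCK_TXS = 3000   # assumed transactions per block

def throughput_tps(network_hashrate):
    # Hashrate cancels out: extra hashes just raise the difficulty.
    difficulty = network_hashrate * TARGET_INTERVAL_S   # expected hashes/block
    expected_interval = difficulty / network_hashrate   # back to ~600 s
    return MAX_BLOCK_TXS / expected_interval

print(throughput_tps(1e18), throughput_tps(1e21))  # same throughput either way
```

A thousandfold increase in hash power (and electricity) leaves the transactions per second exactly where they were.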

As a purely academic idea in cryptography, it is an interesting curiosity, but the arguments for why it's useful are sketchy. There are other such curiosities that are much more interesting, like homomorphic encryption or secure multiparty computation. It would be a nice line on a CV, but not "incredible".

The true significance of Bitcoin was the terrible libertarian economic argument for it, and the chain of events that would transform it into nothing more than a speculative fashion trend. It has nothing to do with the technical details of Bitcoin. The technical and economic arguments for Bitcoin turned out to be so weak that nowadays, the only real support for Bitcoin is that maybe you can sell it for a higher price to a greater fool.

“California is, I believe, the only state to give health insurance to people who come into the country illegally,” Kauffman said nervously. “I think we probably should not be providing that.”

“So you’d rather everyone just be sick, and get everyone else sick?” another reporter asked.

“That’s not what I’m saying,” said Kauffman.

“Isn’t that effectively what happens?” the reporter countered. “They don’t have access to health care and they just have to get sick, right?”

Kauffman contemplated that one for a moment. “Then they have to just get sick,” he said. “I mean, it’s unfortunate, but I think that it’s sort of impossible to have both liberal immigration laws and generous government benefits.”

Do I need to comment on this one?

[-] lagrangeinterpolator@awful.systems 18 points 4 months ago* (last edited 4 months ago)

It is how professors talk to each other in ... debate halls? What the fuck? Yud really doesn't have any clue how universities work.

I am a PhD student right now so I have a far better idea of how professors talk to each other. The way most professors (in math/CS at least) communicate in a spoken setting is through giving talks at conferences. The cool professors use chalkboards, but most people these days use slides. As it turns out, debates are really fucking stupid for scientific research for so many reasons.

  1. Science assumes good faith from everyone, and debates are needlessly adversarial. This is why everyone just presents and listens to talks.
  2. Debates are actually really bad for the kind of deep analysis and thought needed to understand new research. If you want to seriously consider novel ideas, it's not so easy when you're expected to come up with a response in the next few minutes.
  3. Debates generally favor people who use good rhetoric and can package their ideas more neatly, not the people who really have more interesting ideas.
  4. If you want to justify a scientific claim, you do it with experiments and evidence (or a mathematical proof when applicable). What purpose does a debate serve?

I think Yud's fixation on debates and "winning" reflects what he thinks of intellectualism. For him, it is merely a means to an end. The real goal is to be superior and beat up other people.

Just had a conversation about AI where I sent a link to Eddy Burback's ChatGPT Made Me Delusional video. The other person clarified that no, it's only smart people who are more productive with AI since they can filter out all the bad outputs, and only dumb people would suffer all the negative effects. I don't know what to fucking say.

[-] lagrangeinterpolator@awful.systems 16 points 6 months ago* (last edited 6 months ago)

More AI bullshit hype in math. I only saw this just now so this is my hot take. So far, I'm trusting this r/math thread the most as there are some opinions from actual mathematicians: https://www.reddit.com/r/math/comments/1o8xz7t/terence_tao_literature_review_is_the_most/

Context: Paul Erdős was a prolific mathematician who had more of a problem-solving style of math (as opposed to a theory-building style). As you would expect, he proposed over a thousand problems for the math community that he couldn't solve himself, and several hundred of them remain unsolved. With the rise of the internet, someone had the idea to compile and maintain the status of all known Erdős problems in a single website (https://www.erdosproblems.com/). This site is still maintained by this one person, which will be an important fact later.

Terence Tao is a present-day prolific mathematician, and in the past few years, he has really tried to take AI with as much good faith as possible. Recently, some people used AI to search up papers with solutions to some problems listed as unsolved on the Erdős problems website, and Tao points this out as one possible use of AI. (I personally think there should be better algorithms for searching literature. I also think conflating this with general LLM claims and the marketing term of AI is bad-faith argumentation.)

You can see what the reasonable explanation is. Math is such a large field now that no one can keep tabs on all the progress happening at once. The single person maintaining the website missed a few problems that got solved (he didn't see the solutions, and/or the authors never bothered to inform him). But of course, the AI hype machine got going real quick. GPT5 managed to solve 10 unsolved problems in mathematics! (https://xcancel.com/Yuchenj_UW/status/1979422127905476778#m, original is now deleted due to public embarrassment) Turns out GPT5 just searched the web/training data for solutions that have already been found by humans. The math community gets a discussion about how to make literature more accessible, and the rest of the world gets a scary story about how AI is going to be smarter than all of us.

There are a few promising signs that this is getting shut down quickly (even Demis Hassabis, CEO of DeepMind, thought that this hype was blatantly obvious). I hope this is a bigger sign for the AI bubble in general.

EDIT: Turns out it was not some rando spreading the hype, but an employee of OpenAI. He has taken his original claim back, but not without trying to defend what he can by saying AI is still great at literature review. At this point, I am skeptical that this even proves AI is great at that. After all, the issue was that a website maintained by a single person had not updated the status of 10 problems inside a list of over 1000 problems. Do we have any control experiments showing that a conventional literature review would have been much worse?

[-] lagrangeinterpolator@awful.systems 16 points 9 months ago* (last edited 9 months ago)

OpenAI claims that their AI can get a gold medal on the International Mathematical Olympiad. The public models still do poorly even after spending hundreds of dollars in computing costs, but we've got a super secret scary internal model! No, you cannot see it, it lives in Canada, but we're gonna release it in a few months, along with GPT5 and Half-Life 3. The solutions are also written in an atrociously unreadable manner, which just shows how advanced and experimental the model is, and definitely isn't a way to let a generous grader give a high score. (It would be real interesting if OpenAI had a tool that could rewrite something with better grammar, hmmm....) I definitely trust OpenAI's major announcements here; they haven't lied about anything involving math before and certainly wouldn't have every incentive in the world to keep lying!

It does feel a little unfortunate that some critics like Gary Marcus are somewhat taking OpenAI's claims at face value, when in my opinion, the entire problem is that nobody can independently verify any of their claims. If a tobacco company released a study about the effects of smoking on lung cancer and neglected to provide any experimental methodology, my main concern would not be the results of that study.

Edit: A really funny observation that I just thought of: in the OpenAI guy's thread, he talks about how former IMO medalists graded the solutions in message #6 (presumably to show that they were graded impartially), but then in message #11 he is proud to have many past IMO participants working at OpenAI. Hope nobody puts two and two together!
