[-] Architeuthis@awful.systems 19 points 1 month ago

No shot is over two seconds, because AI video can’t keep it together longer than that. Animals and snowmen visibly warp their proportions even over that short time. The trucks’ wheels don’t actually move. You’ll see more wrong with the ad the more you look.

Not to mention the weird AI lighting that makes everything look fake and unnatural even in the ad's dreamlike context, and also that it's the most generic and uninspired shit imaginable.

[-] Architeuthis@awful.systems 21 points 3 months ago* (last edited 3 months ago)

On each step, one part of the model applies reinforcement learning, with the other one (the model outputting stuff) “rewarded” or “punished” based on the perceived correctness of their progress (the steps in its “reasoning”), and altering its strategies when punished. This is different to how other Large Language Models work in the sense that the model is generating outputs then looking back at them, then ignoring or approving “good” steps to get to an answer, rather than just generating one and saying “here ya go.”

Every time I've read how chain-of-thought works in o1 it's been explained completely differently, and I'm still not sure I understand what's supposed to be going on. Apparently you get a strike notice if you try too hard to find out how the chain-of-thought process actually goes, so one might be tempted to assume it's something readily replicable by the competition (and they need to prevent that for as long as they can) rather than any sort of notably important breakthrough.

From the detailed o1 system card pdf linked in the article:

According to these evaluations, o1-preview hallucinates less frequently than GPT-4o, and o1-mini hallucinates less frequently than GPT-4o-mini. However, we have received anecdotal feedback that o1-preview and o1-mini tend to hallucinate more than GPT-4o and GPT-4o-mini. More work is needed to understand hallucinations holistically, particularly in domains not covered by our evaluations (e.g., chemistry). Additionally, red teamers have noted that o1-preview is more convincing in certain domains than GPT-4o given that it generates more detailed answers. This potentially increases the risk of people trusting and relying more on hallucinated generation.

Ballsy to just admit your hallucination benchmarks might be worthless.

The newsletter also mentions that the price for output tokens has quadrupled compared to the previous newest model, but the awesome part is, remember all that behind-the-scenes self-prompting that goes on while it arrives at an answer? Even though you're not allowed to see it, according to Ed Zitron you sure as hell are paying for it (i.e. it's billed as output tokens), which is hilarious if true.
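If that's really how the billing works, the arithmetic is pretty grim. A minimal sketch in Python, with entirely made-up prices and token counts; none of the names or figures below come from OpenAI:

```python
# Back-of-the-envelope sketch: all prices and token counts are made up
# for illustration, not OpenAI's actual figures.

PRICE_PER_OUTPUT_TOKEN = 60 / 1_000_000  # pretend $60 per million output tokens

def response_cost(visible_output_tokens: int, hidden_reasoning_tokens: int) -> float:
    """Hidden chain-of-thought tokens get billed at the output-token rate,
    even though the user never gets to see them."""
    billed_tokens = visible_output_tokens + hidden_reasoning_tokens
    return billed_tokens * PRICE_PER_OUTPUT_TOKEN

# A short visible answer padded by a long invisible "reasoning" trace:
print(response_cost(visible_output_tokens=300, hidden_reasoning_tokens=3_000))
# ~$0.198, of which roughly 90% pays for tokens you're not allowed to read
```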

[-] Architeuthis@awful.systems 20 points 3 months ago

"When asked about buggy AI [code], a common refrain is ‘it is not my code,’ meaning they feel less accountable because they didn’t write it.”

Strong "they cut all my deadlines in half and gave me an OpenAI API key, so fuck it" energy.

He stressed that this is not from want of care on the developer’s part but rather a lack of interest in “copy-editing code” on top of quality control processes being unprepared for the speed of AI adoption.

You don't say.

[-] Architeuthis@awful.systems 21 points 5 months ago* (last edited 5 months ago)

The former Oath Keeper police chief says the best he can do is keep fining them $500 for noise pollution as often as possible; supposedly there's no legal way to force the source of the noise complaint to actually stop, and Texas counties can't pass their own ordinances, only cities can. The article also says someone is exploring whether they can get the installation declared a public nuisance or something along those lines to open up more legal avenues.

I feel that once old people start dying of stress and children are getting sleep-deprivation torture while bleeding from their ears, more drastic options should be on the table down at militia central, but I guess they have other priorities and/or know which side their bread is buttered on.

[-] Architeuthis@awful.systems 19 points 5 months ago

It hasn't worked 'well' for computers since, like, the Pentium. What are you talking about?

The premise was pretty dumb too, as in: if you notice that a (very reductive) technological metric has been rising sort of exponentially, you should probably assume something along the lines of 'we're still at the low-hanging-fruit stage of R&D and it'll stabilize as it matures,' instead of proudly proclaiming that surely it'll approach infinity and break reality.

There's nothing smart or insightful about seeing a line in a graph trending upwards and assuming it's gonna keep doing that no matter what. Not to mention that type of decontextualized wishful thinking is emblematic of the TREACLES mindset mentioned in the community's blurb, which you should check out.

So yeah, he thought up the Singularity, which is little more than a metaphysical excuse to ignore regulations and negative externalities, because with the tech rupture around the corner any catastrophic mess we make getting there won't matter. See also: the whole current AI debacle.

[-] Architeuthis@awful.systems 21 points 6 months ago* (last edited 6 months ago)

Great quote from the article on why prediction markets and scientific racism currently appear to be at one degree of separation:

Daniel HoSang, a professor of American studies at Yale University and a part of the Anti-Eugenics Collective at Yale, said: “The ties between a sector of Silicon Valley investors, effective altruism and a kind of neo-eugenics are subtle but unmistakable. They converge around a belief that nearly everything in society can be reduced to markets and all people can be regarded as bundles of human capital.”

[-] Architeuthis@awful.systems 19 points 6 months ago

Before we accidentally make an AI capable of posing existential risk to human being safety, perhaps we should find out how to build effective safety measures first.

You make his position sound way more measured and responsible than it is.

His 'effective safety measures' are something like A) solve ethics, B) hardcode the result into every AI; i.e. garbage philosophy meets garbage sci-fi.

[-] Architeuthis@awful.systems 19 points 6 months ago* (last edited 6 months ago)

So LLM-based AI is apparently such a dead end, as far as non-spam and non-party-trick use cases are concerned, that they are straight up rolling out anti-features that nobody asked for or wanted, just to convince shareholders that groundbreaking stuff is still going on and to somewhat justify the ocean of money they are diverting that way.

At least it's only supposed to work on PCs that incorporate so-called neural processing units, which, if I understand correctly, is going to be its own thing under a Windows PC branding.

edit: Yud must love that, instead of his very smart and very implementable idea of the government enforcing strict regulations on who gets to own GPUs and bombing non-compliants, we seem to be trending towards having special deep-learning-facilitating hardware (or whatever NPUs actually are) integrated into every new device, starting with iPhones and so-called Windows PCs.

edit edit: the branding appears to be "Copilot+ PCs", not Windows PCs.

[-] Architeuthis@awful.systems 21 points 9 months ago

If I remember correctly, SBF taking the stand was completely against his lawyers' recommendations, and in general he seems to have a really hard time doing what people who know better tell him to: don't DM journalists about your crimes, definitely don't start a Substack detailing how you felt justified in committing them, and trying to 'explain yourself' to prosecution witnesses is witness tampering and will get your bail revoked.

[-] Architeuthis@awful.systems 19 points 10 months ago* (last edited 10 months ago)

Sticking numbers next to things and calling it a day is basically the whole idea behind Bayesian rationalism.

[-] Architeuthis@awful.systems 21 points 1 year ago* (last edited 1 year ago)

On one hand it's encouraging that the comments are mostly pushing back.

On the other hand, a lot of them do so on the basis of a disagreement over the moral calculus of how many chickens a first-trimester fetus should be worth, and whether that makes pushing for abortion bans inefficient compared to efforts to reduce the killing of farm animals for food.

Which, while pants-on-head bizarre in any other context, seems fairly normal by EA standards.

[-] Architeuthis@awful.systems 20 points 1 year ago* (last edited 1 year ago)

This reads very, uh, addled. I guess collapsing the wavefunction means agreeing on stuff? And the uncanny valley is when the vibes are off because people are at each other's throats? Is 'being aligned' like having attained spiritual enlightenment by way of Adderall?

Apparently the context is that he wanted the investment firms under FTX (Alameda and Modulo) to coordinate completely, despite them being run by different ex-girlfriends at the time (most normal EA workplace), which I guess paints Elis' comment about Chinese harem rules of dating in a new light.

edit: I think the 'being aligned' thing is them invoking the 'great minds think alike' adage as absolute truth, i.e. since we both have the High IQ feat you should be agreeing with me; after all, we share the same privileged access to absolute truth. That we aren't agreeing must mean you are unaligned/need to be further cleansed of thetans.
