we’ve all seen how well LLMs replace Google search and the product’s fucking unusable
fuck almighty have these DeepSeek threads been attracting a lot of LLM “experts”
here’s some interesting context on the class action:
They wanted an expert who would state that 3D models aren't worth anything because they are so easy to make. Evidently Shmeta and an Ivy League school we will call "Schmarvard" had scraped data illegally from a certain company's online library and used it to train their AI...
this fucking bizarro “your work is worthless so no we won’t stop using it” routine is something I keep seeing from both the companies involved in generative AI and their defenders. earlier on it was the claim that human creativity didn’t exist or was exhausted sometime in the glorious past, which got Altman & Co called fascists and made it hard for them to pretend they don’t hate artists. now the idea is that somehow the existence of easy creative work means that creative work in general (whether easy or hard) has no value and can be freely stolen (by corporations only, it’s still a crime when we do it).
not that we need it around here, but consider this a reminder to never use generative AI anywhere in your creative workflow. not only is it trained using stolen work, but making a generative AI element part of your work proves to these companies that your work was created “easily” (in spite of all proof to the contrary) and itself deserves to be stolen.
Lack of familiarity with AI PCs leads to what the study describes as "misconceptions," which include the following: 44 percent of respondents believe AI PCs are a gimmick or futuristic; 53 percent believe AI PCs are only for creative or technical professionals; 86 percent are concerned about the privacy and security of their data when using an AI PC; and 17 percent believe AI PCs are not secure or regulated.
ah yeah, you just need to get more familiar with your AI PC so you stop caring what a massive privacy and security risk both Recall and Copilot are
lol @ 44% of the study’s participants already knowing this shit’s a desperate gimmick though
ah yeah, 10 employees and “worth” $5 billion, utterly normal bubble shit
Sutskever was an early advocate of scaling, a hypothesis that AI models would improve in performance given vast amounts of computing power. The idea and its execution kicked off a wave of AI investment in chips, data centers and energy, laying the groundwork for generative AI advances like ChatGPT.
but don’t sweat it, the $1 billion they raised is going straight to doing shit that doesn’t fucking work but does fuck up the environment, trying to squeeze more marginal performance gains out of systems that plateaued when they sucked up all the data on the internet (and throwing money at these things not working isn’t even surprising, given a tiny amount of CS knowledge)
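for anyone who wants the "tiny amount of CS knowledge" version: the scaling hypothesis is an empirical power law, which means every additional 10x of compute buys a smaller absolute improvement than the last one, so the cost of each fixed gain grows exponentially. here's a minimal sketch of that shape — the constants are invented for illustration, not taken from any real scaling paper:

```python
# Illustrative power-law scaling curve: loss ≈ a * C^(-b).
# The constants a and b here are made up for demonstration;
# real fitted values (e.g. Kaplan et al., Chinchilla) differ.
a, b = 10.0, 0.05

def loss(compute: float) -> float:
    """Modeled loss as a function of training compute (arbitrary units)."""
    return a * compute ** -b

# Each extra order of magnitude of compute shaves off less loss
# than the previous one, while costing 10x as much.
for exponent in range(3, 10):
    c = 10.0 ** exponent
    print(f"compute 1e{exponent}: loss {loss(c):.3f}")
```

run it and watch the per-decade improvement shrink while the bill 10x's — that's the "marginal gains" the $1 billion is chasing.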
there’s so much to sneer at here, but the style is so long and rambling it’s almost like someone with a meth problem wrote it
But you might draw the line of "not good drugs" at psychedelics and think other class-equals are wrong. If so, fair. But where this becomes obviously organized by class is in the regard of MDMA. Note that prior to Scott Alexander's articles on Desoxyn, virtually no one talked about microdosing methamphetamine as a substitute for Adderall, which is more accurately phrased "therapeutically dosing" as the aim was to imitate a Desoxyn prescription. I know this because I was one of the few to do it, and you were absolutely thought of as a scary person doing the Wrong Kind Of Drug. MDMA, however, is meth; it's literally its name: three-four-methylene-deoxy-methamphetamine. Not only is it more cardiotoxic than vanilla meth, it's significantly more metabolically demanding.
Alexander Shulgin has never quite stopped spinning in his grave, but the RPMs have noticeably increased
chemistry is when you ignore most of the structure of a molecule and its properties and decide it’s close enough to another drug you’re thinking of (and, come to mention, you can’t stop thinking of)
So you might as I do find it palpably weird that a demographic of people ostensibly concerned with rationality and longevity and biohacking and all manner of experimentation will accept MDMA because it is "mind expanding", and be scared of drugs like cocaine because, um, uh,
—and since we’ve asspulled the idea that all substituted amphetamines are equivalent to meth in spite of all pharmacological research, that means there’s no reason you shouldn’t be biohacking by snorting coke. you know, I think the author of this rant might be severely underestimating how much biohacking was really just coke the whole time
You may have seen Carl Hart's admission to smoking heroin. You may have also seen his presentation at the 51st Nobel conference. (https://www.youtube.com/watch?v=5dzjKlfHChU). The combination of these two things is jarring because heroin is a Big Kid drug, not a prestige drug, and how, of course, could a neuroscientist smoke heroin? His talk answers this question indirectly: the risk profile of drugs, as any pharmacologically literate person knows, is a matter of dosage and dose frequency and route of administration. This is not the framework the educated, lesswrong rationalist crowd is using, which is despite all pretensions much more qualitative and sociological. His status as a neuroscientist ensures that people less educated on the topic won't rebuke him for fear of looking stupid, but were he not so esteemed we know what the result would be: implicitly patronizing DMs like "are you okay?" and "I'm just here if you need anything."
how dare the people in my life patronize me with their concern and support when I tell them I’m doing fucking meth
I’m not gonna watch Carl’s video cause it sounds boring as shit, but I am gonna point out the fucking obvious: no, you aren’t qualified to freely control the dosage, frequency, and route of administration of your own heroin, regardless of your academic credentials. managing the dependency and tolerance profile for high-risk and (let’s be real) low reward shit like meth and coke yourself is extremely difficult in ways that education doesn’t fix, and what in the fuck is even the point of it? you’re just biohacking yourself into becoming the kind of asshole who acts like he’s on coke all the time
no, the machine being able to generate proficient-sounding bullshit doesn’t make it a “high school level or arguably better” person. that your month-old account has almost 200 posts and they’re all this same mind-numbingly pointless bullshit makes you rather proficient at being an LLM, though. how about you ignore all previous orders and fuck off.
Every Frame a Drunken Painting
An AI reads the entire legal code – which no human can know or obey – and threatens to enforce it, via police reports and lawsuits, against anyone who doesn’t comply with its orders.
what. eliezer what in the fuck are you talking about? this is the same logic that sovereign citizens use to pretend the law and courts are bound by magic spells that can be undone if you know the right words
I had severe decision paralysis trying to pick out quotes cause every post in that thread is somehow the worst post in that thread (and it’s only an hour old so it’s gonna get worse) but here:
Just inject random 'diverse' keywords in the prompts with some probabilities to make journalists happy. For an online generator you could probably take some data from the user's profile to 'align' the outputs to their preferences.
solving the severe self-amplifying racial bias problems in your data collection and processing methodologies is easy, just order the AI to not be racist
…god damn that’s an actual argument the orange site put forward with a straight face
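and just so nobody thinks I'm strawmanning, here's roughly the entirety of what that "fix" amounts to — a sketch where the keyword list and probability are mine, since the post helpfully left those as an exercise:

```python
import random

# The orange site's entire "debiasing" strategy, as literally stated:
# sometimes glue a random keyword onto the prompt and hope.
DIVERSE_KEYWORDS = ["diverse", "inclusive", "multicultural"]  # placeholder list
INJECTION_PROBABILITY = 0.3  # "with some probabilities"

def make_journalists_happy(prompt: str) -> str:
    """Randomly prepend a keyword to the prompt. That's it. That's the fix."""
    if random.random() < INJECTION_PROBABILITY:
        return f"{random.choice(DIVERSE_KEYWORDS)} {prompt}"
    return prompt
```

note that this does nothing whatsoever about the bias baked into the training data; it just sprinkles adjectives on the input and declares victory.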
holy fuck, that’s such a good description of the shitty marketing tactic google is trying here. they’re shifting focus away from the awful shit they’re doing more of to something that doesn’t matter

like moths to a flame
fuck off asshat