Somehow I had missed the boat on Donald Boat and now I have so many questions. Absolutely wild read.

It does seem more and more like the most relevant parallel is radicalization, particularly the concerns about algorithmic radicalization and stochastic terrorism we got back in the early 2010s. The machine system feeds the user back what they've put into it, validating that input and pushing the user into more extreme positions. When it happens through a community ("classical" radicalization), the fact that the community needs to persist serves to moderate or at least slow the destructive elements of the spiral. Your Nazi book club/street gang stops meeting if people go to prison, lose their jobs/homes, etc. Online communities reduce this friction and allow the spiral to accelerate to a great degree, but the group can still start eating itself if it accepts the wrong level of unhingedness and toxicity.

Algorithmic/stochastic radicalization, where the user moves through a succession of media environments and (usually online) communities, can allow things to accelerate even more, because the user no longer has to maintain long-term social ties to remain engaged in the spiral. Rather than increasingly destructive ideas echoing around a single social space, the user can chase them across communities, with naive content algorithms providing a solid nudge in the right direction (pun wholly intended). However, the spiral is still dependent on the ability of the relevant media figures and communities to persist, even if individual users no longer need a persistent connection to them. If the market doesn't have space for a creator, then their role in that network drops away. Getting violent or destructive content deplatformed also helps slow down the spiral by adding friction back into the process of jumping to the next level of radicalism. Past a certain point you find yourself back in the world of needing to maintain a community, because the ideology has gotten so rotten that there's no profit in entertaining it. Past that, you end up back with in-person or otherwise high-friction, high-trust groups, because the openness of a low-friction online community compromises internal security in ways that can't be allowed when you're literally doing crimes.

Chatbot-induced radicalization combines the extremely low friction of online interactions with extremely high-value validation and a complete lack of social restrictions. You don't have to retain a baseline connection to reality to maintain a relationship with a chatbot. You don't have to make connections and put in the work to find a chatbot that will validate your worst impulses the way you do to join a militia. Your central cause doesn't have to be something that motivates anyone outside yourself. Your local KKK chapter probably has more on its agenda than hating your ex-wife (not that it doesn't make the list, of course), but your chatbot instance will happily give you an even stronger echo chamber no matter how narrow the focus. And unlike the stigma associated with the kinds of hate groups and cults that would normally fill this role for people, the entire weight of the trillion-dollar tech industry seems to be invested in promoting these chatbots as reliable and trustworthy -- even more so than the experts and institutions that are supposed to provide an anchor against this kind of descent. That's the most dangerous part of our Very Good Friends' projects on the matter. That's how you get relatively normal people to act like they're talking to God and He's telling them everything they don't want to admit they want to hear.

Fixed-position office chairs? What goddamn vibe-brained rat-pilled techbro started convincing people to forgo the single best redeeming factor of office chairs?

Since the advent of ChatGPT in November 2022, the number of monthly submissions to the arXiv preprint repository has risen by more than 50% and the number of articles rejected each month has risen fivefold to more than 2,400 (see ‘Rejection rates climb’).

If I'm interpreting this right, the growth in the number of rejections is wildly outpacing the growth in submissions, which means not only that we're getting a tsunami of slop, but that the bad papers are actively chasing away good ones.
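For a rough sense of scale, here's a back-of-the-envelope sketch; the article only gives the growth figures, so the pre-ChatGPT baseline submission count below is a made-up assumption, not a real number:

```python
# Back-of-the-envelope on the arXiv figures. The article gives two numbers:
# monthly rejections rose fivefold to >2,400 and submissions rose >50%.
# The ~16,000/month pre-ChatGPT submission baseline is a hypothetical
# assumption purely to illustrate the ratio.

baseline_submissions = 16_000            # assumed baseline (hypothetical)
baseline_rejections = 2_400 / 5          # fivefold rise to 2,400 implies ~480

current_submissions = baseline_submissions * 1.5   # ">50%" growth
current_rejections = 2_400

rate_before = baseline_rejections / baseline_submissions
rate_after = current_rejections / current_submissions
print(f"rejection rate: {rate_before:.1%} -> {rate_after:.1%}")
# rejection rate: 3.0% -> 10.0%  (roughly tripled, under these assumptions)
```

Whatever baseline you plug in, a 5x rise in rejections against a 1.5x rise in submissions means the rejection *rate* climbed by a factor of about 3.3.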

diamondoid. None of the derivatives I can come up with sound anywhere near as dumb as the actual word.

In economic terms it's less rent seeking and more rent creation. Like, taking advantage of public sidewalk space may not be a rent in the strictest sense given that the revenue model is still people paying for the service, but the ability to provide that service is absolutely predicated on taking over and monopolizing this public resource to the maximal degree possible.

By historical allegory, harkening back to the original destruction of the Commons, we're looking at Enclosure 2: Frisco Drift.

Let's also not lose sight of the fact that those sidewalks aren't a natural formation, and that it's the city government that ultimately takes on the burden of their construction and maintenance. This neo-enclosure of public resources is then another kind of invisible subsidy.

"even safer" in this case means some combination of two things:

  1. The new organization is more ideologically aligned with the transhumanist doom cult that apparently managed to eat the brains of the people with money to burn.

  2. The new organization, largely as a result of this, is capable of sinking an unending amount of capital into buying compute time and Nvidia chips, but due to its commitments to safety is even less inclined to actually deliver anything.


Microsoft is really putting the "git" in GitHub thanks to Copilot.

I found the comment about models producing very old-fashioned "18th-century style" proofs interesting. Not surprising in retrospect, since older proofs are going to be reproduced more often across the training data than newer ones, but it's still worth noting, and indicative of the reproduction these things are doing.


I would go so far as to try to find a suitably precocious undergrad to run the test: someone capable of guiding and nudging the model the way OpenAI's team did, but not of determining on their own that the conjecture in question is false. OpenAI's results here needed a fair bit of cajoling and guidance, and without that I can only assume the model would give the same kind of non-answer regardless of whether the question is in fact solvable.

The point about heavy artillery is actually pretty salient, though a more thorough examination would also note that "Lethal Autonomous Weapons Systems" is a category that includes goddamn land mines. Of course this would serve to ground the discussion in reality and is thus far less interesting to people who start organizations like the Future of Life Institute.


Apparently we get a shout-out? Sharing this brings me no joy, and I am sorry for inflicting it upon you.


I don't have much to add here, but when she started writing about the specifics of what Democrats worry about when they talk about being targeted for their "political views," my mind immediately jumped to the members of my family who are gender non-conforming or trans. Of course, the more specific you get about any of those concerns, the easier it is to see that crypto doesn't actually solve the problem and in fact makes it much worse.

