[-] self@awful.systems 24 points 2 months ago

you’ve seen it from me as “the most toxic part of the Wayland ecosystem, and that’s saying something”

[-] self@awful.systems 24 points 1 year ago

speaking of the Godot engine, here’s a layered sneer from the Cruelty Squad developer (via Mastodon):

image description: a post from Consumer Softproducts, the studio behind Cruelty Squad:

weve read the room and have now successfully removed AI from cruelty squad. each enemy is now controlled in almost real time by an employee in a low labor cost country

[-] self@awful.systems 24 points 1 year ago

read the fucking room before you come in here and advocate for your favorite plagiarism machine

[-] self@awful.systems 24 points 1 year ago

just a little violation of my trust for the company I pay for privacy and encryption services. as a treat.

[-] self@awful.systems 25 points 1 year ago

presented without comment:

(via @Colophonscrawl)

[-] self@awful.systems 25 points 1 year ago

Other AI companies like Cerebras are much better, running at quite sane voltages. Ironically (or perhaps smartly), the Saudis invested in them.

it’s real bizarre you edited this in after getting upvoted by a few people

[-] self@awful.systems 24 points 2 years ago

thanks to this article specifically, Ludic’s going on a podcast with Robert Evans:

I just agreed to go on a podcast on a whim, then a friend told me it is with the host of Behind the Bastards, and I spat tea everywhere.

My to-do list today said "go for piano class" and "prepare for rental inspection".

I am unprepared for this level of prime time.

[-] self@awful.systems 25 points 2 years ago

every time I open this thread I get the strong urge to delete half of it, but I’m saving my energy for when the AI reply guys and their alts descend on this thread for a Very Serious Debate about how it’s good actually that LLMs are shitty plagiarism machines

[-] self@awful.systems 24 points 2 years ago

oh come the fuck off it, OpenAI’s marketing presents sora as exactly a magic automate entire movie clip button. here’s OpenAI marketing the stupid thing as a world simulator which is fucking laughable if it can’t maintain even basic consistency. here’s an analysis of how disappointing sora actually is

tonight’s promptfans are fucking boring and I’m cranky from openai’s shitty sora page crashing my browser so I guess all you folks doing free marketing for Sam Altman can fuck off now

[-] self@awful.systems 25 points 2 years ago

“Reds should be welcomed there, and people should wear their tribal colors,” said Srinivasan, who compared his color-coded apartheid system to the Bloods vs. Crips gang rivalry. “No Blues should be welcomed there.”

Balaji goes on—and on. The Grays will rename city streets after tech figures and erect public monuments to memorialize the alleged horrors of progressive Democratic governance. Corporate logos and signs will fill the skyline to signify Gray dominance of the city. “Ethnically cleanse,” he said at one point, summing up his idea for a city purged of Blues (this, he says, will prevent Blues from ethnically cleansing the Grays first).

got it, Balaji’s future is a lot like if the classic dystopian game Syndicate was written by a 6 year old. but something tells me this crayola Nazi shit mostly exists to make Tan’s equally monstrous and insane plans look reasonable by comparison, so Tan can continue to pay journalists to call him a moderate Democrat and shift the Overton window as far right as he can.

[-] self@awful.systems 25 points 2 years ago

the supposed life extending properties of a glass of red wine every day are an excellent way to turn a wine mom into a full blown alcoholic

[-] self@awful.systems 25 points 2 years ago

There is a good case that abortion is morally impermissible – or at least there is significant moral uncertainty.

it’s actually kind of rare that one of these loses me in the first sentence (cause TESCREALs don’t know about brevity so usually their point is buried under an avalanche of words) but here we are. the only people who can’t imagine a morally permissible abortion just don’t give a fuck about women

3 points, submitted 2 years ago* (last edited 2 years ago) by self@awful.systems to c/techtakes@awful.systems

the API is called Web Environment Integrity, and it’s a way to kill ad blockers first and a Google ecosystem lock-in mechanism second, with no other practical use case I can find


Winter is coming and Collapse OS aims to soften the blow. It is a Forth (why Forth?) operating system and a collection of tools and documentation with a single purpose: preserve the ability to program microcontrollers through civilizational collapse.

imagine noticing that civilization is collapsing around you and not immediately opening an emacs lisp buffer so you can painstakingly recreate the entire compiler toolchain and runtime environment for the microcontrollers around you as janky code running in your editor. fucking amateurs


Wolfram’s post is fucking interminable and consists of about 20% semi-interesting math and 80% goofy shit like deciding that the creepy (to Wolfram) images in the AI model’s probability space must represent how aliens perceive the world. to my memory, this is about par for the course for Wolfram

the orange site decides that the output isn’t very interesting because the AI isn’t a robot:

What we see from AI is what you get when you remove the "muscle module", and directly apply the representations onto the paper. There's no considering of how to fill in a pixel; there's just a filling of the pixel directly from the latent space.

It's intriguing. Also makes me wonder if we need to add a module in between the representational output and the pixel output. Something that mimics how we actually use a brush.

this lack of muscle memory is, of course, why we have never done digital art once in the history of humanity. all claims to the contrary are paid conspirators in the pocket of Big Dick Blick

Of course, the AIs can't wake up if we use that analogy. They are not capable of anything more than this state right now.

But to me, lucid dreaming is already a step above the total unconsciousness of just dreaming, or just nothing at all. And wakefulness always follows shortly after I lucid dream.

only 10x lucid dreamers wake up after falling asleep

we can progressively increase the numerical values of the weights—eventually in some sense “blowing the mind” of the network (and going a bit “psychedelic” in the process)

I wonder if there's a more exact analog of the action of psychedelics on the brain that could be performed on generative models?

I always find it interesting how a hero dose of LSD gives similar visuals to what these image AI's do to achieve a coherent image.

[more nonsense]

I feel like the more we get AI to act like humans, and the more those engineers and others use LSD, the more convergence we are going to have with curiosity and breakthroughs about how we function.

the next time you’re in an altered state, I want you to close your eyes and just imagine how annoyed you’d be if one of these shitheads was there with you, trying to get you to “form a BCI” or whatever by typing free association words into ChatGPT

1 point, submitted 2 years ago* (last edited 2 years ago) by self@awful.systems to c/servernews@awful.systems

big update, awful.systems is now a federated lemmy instance. let me know if anything looks broken! here's what to expect:

  • to pull up an awful.systems community on another instance, just paste that community's URL into the other instance's search bar
  • federation with other lemmy instances should work, and probably kbin too? there's no way I can find to pull in pre-federation posts on remote instances though, so send your friends here to read the backlogs
  • we can't federate with most of mastodon right now because lemmy doesn't implement authorized_fetch, which is a best practice setting for mastodon instances. if your instance doesn't use it, try entering something like @sneerclub@awful.systems into your mastodon search; lemmy communities are represented to mastodon as users
  • this is pretty much an experimental thing so if we have to turn it off, I'll send out another post
  • reply to this post with ideas for moderation tools and instances you'd like to see blocked (and a reason why) and we'll take action on anything that sounds like a good idea
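
as an aside, the reason searching @sneerclub@awful.systems works from mastodon is WebFinger: before fetching an actor, instances map a fediverse handle to a well-known lookup URL (RFC 7033). here's a rough sketch of that mapping (the function name is mine, not lemmy's):

```python
def webfinger_url(handle: str) -> str:
    """Map a fediverse handle like @user@host to the WebFinger
    lookup URL (RFC 7033) an instance queries to resolve it."""
    user, host = handle.lstrip("@").split("@")
    return (
        f"https://{host}/.well-known/webfinger"
        f"?resource=acct:{user}@{host}"
    )

# lemmy communities are exposed as actors, so the same lookup applies
print(webfinger_url("@sneerclub@awful.systems"))
# → https://awful.systems/.well-known/webfinger?resource=acct:sneerclub@awful.systems
```

with authorized_fetch enabled, the subsequent actor fetch has to be signed, which is the part lemmy doesn't implement yet.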

federation was made possible by

  • lemmy's devs skipping their release process and not telling anyone 0.18.2 was released on friday? so we're on 0.18.2 now
  • updating all of the deployment cluster's flake inputs just in case
  • @dgerard@awful.systems shouting yolo

you know it’s a fucking banger when you try to collapse the top comment in the thread to skip all the folks litigating over the value of an ebike and more than two-thirds of the comments in an 884-comment thread disappear

also featuring many takes from understanders of statistics:

I'm wary about using public roads to test these, but I think the way the data is presented is misleading. I'm not sure how it's misleading, but separating "incidents" into categories (safety, traffic, accident, etc) might be a good start.

For example, I could start coning cruise cars, and cause these numbers to skyrocket. While that's an inconvenience to other drivers, it's not a safety issue at all.

By the way, as a motorcyclist (and thus hyper annoyed at bad driving), I find Uber/Lyft/Food drivers to be both much more dangerous and inconveniencing than these self driving cars.

2 points, submitted 2 years ago* (last edited 2 years ago) by self@awful.systems to c/techtakes@awful.systems

see also the github thread linked in the mastodon post, where the couple of gormless AI hypemen responsible for MDN’s AI features pick a fight with like 30 web developers

from that thread I’ve also found out that most MDN content is written by a collective that exists outside of Mozilla (probably explaining why it took them this long to fuck it up), so my hopes that somebody forks MDN are much higher


there’s a fun drinking game you can play where you take a shot whenever the spec devolves into flowery nonsense

§1. Purpose and Scope

The purpose of DIDComm Messaging is to provide a secure, private communication methodology built atop the decentralized design of DIDs.

It is the second half of this sentence, not the first, that makes DIDComm interesting. “Methodology” implies more than just a mechanism for individual messages, or even for a sequence of them. DIDComm Messaging defines how messages compose into the larger primitive of application-level protocols and workflows, while seamlessly retaining trust. “Built atop … DIDs” emphasizes DIDComm’s connection to the larger decentralized identity movement, with its many attendent virtues.

you shouldn’t have pregamed


I was gonna do this quietly since I was doing it mostly for security fixes, but now I guess I gotta announce that I deployed lemmy 0.18.1 to the awful.systems cluster. changes include

  • sweet christ did this UI get smaller and uglier? whose idea was this.
  • we have more theme options! most of them are terrible. there is a vaporwave theme I kinda like in a geocities way. if you come here and it looks like geocities I switched to that one
  • they fixed like 3 out of the 4 webdev 101 security holes they left in the code
  • there's some small new UI features?
  • sometimes they just make changes for no reason
  • let me know if anything looks broken

today Mozilla published a blog post about the AI Help and AI Explain features it deployed to its famously accurate MDN web documentation reference a few days ago. here’s how it’s going according to that post:

We’re only a handful of days into the journey, but the data so far seems to indicate a sense of skepticism towards AI and LLMs in general, while those who have tried the features to find answers tend to be happy with the results.

got that? cool. now let’s check out the developer response on github soon after the AI features were deployed:

it seems like this feature was conceived, developed, and deployed without even considering that an LLM might generate convincing gibberish, even though that's precisely what they're designed to do.

oh dear

That is demonstrably wrong. There is no demo of that code showing it in action. A developer who uses this code and expects the outcome the AI said to expect would be disappointed (at best).

That was from the very first page I hit that had an accessibility note. Which means I am wary of what genuine user-harming advice this tool will offer on more complex concepts than simple stricken text.

So the "solution" is adding a disclaimer and a survey instead of removing the false information? 🙃 🙃 🙃

This response is clearly wrong in its statement that there is no closing tag, but also incorrect in its statement that all HTML must have a closing tag; while this is correct for XHTML, HTML5 allows for void elements that do not require a closing tag

that doesn’t sound very good! but at least someone vetted the LLM’s answers, right?

MDN core reviewer/maintainer here.

Until @stevefaulkner pinged me about this (thanks, Steve), I myself wasn’t aware that this “AI Explain” thing was added. Nor, as far as I know, were any of the other core reviewers/maintainers aware it’d been added. Nor, as far as I know, did anybody get an OK for this from the MDN Steering Committee (the group of people responsible for governance of MDN) — nor even just inform the Steering Committee about it at all.

The change seems to have landed in the sources two days ago, in e342081 — without any associated issue, instead only a PR at #9188 that includes absolutely not discussion or background info of any kind.

At this point, it looks to me to be something that Mozilla decided to do on their own without giving any heads-up of any kind to any other MDN stakeholders. (I could be wrong; I've been away a bit — a lot of my time over the last month has been spent elsewhere, unfortunately, and that’s prevented me from being able to be doing MDN work I’d have otherwise normally been doing.)

Anyway, this “AI Explain” thing is a monumentally bad idea, clearly — for obvious reasons (but also for the specific reasons that others have taken time to add comments to this issue to help make clear).

(note: the above reply was hidden in the GitHub thread by Mozilla, which is usually only done for off-topic replies)

so this thing was pushed into MDN behind the backs of Mozilla’s experts and given only 15 minutes of review (ie, none)? who could have done such a thing?

…so anyway, some kind of space alien comes in and locks the thread:

Hi there, 👋

Thank you all for taking the time to provide feedback about our AI features, AI Explain and AI Help, and to participate in this discussion, which has probably been the most active one in some time. Congratulations to be a part of it! 👏

congratulations to be a part of it indeed


hopefully this is alright with @dgerard@awful.systems, and I apologize for the clumsy format since we can’t pull posts directly until we’re federated (and even then lemmy doesn’t interact the best with masto posts), but absolutely everyone who hasn’t seen Scott’s emails yet (or like me somehow forgot how fucking bad they were) needs to, including yud playing interference so the rats don’t realize what Scott is


there’s just so much to sneer at in this thread and I’ve got choice paralysis. fuck it, let’s go for this one

everyone thinking Prompt Engineering will go away dont understand how close Prompt Engineering is to management or executive communications. until BCI is perfect, we'll never be done trying to serialize our intent into text for others to consume, whether AI or human.

boy fuck do I hate when my boss wants to know how long a feature will take, so he jacks straight into my cerebral cortex to send me email instead of using zoom like a normal person


it’s a short comment thread so far, but it’s got a few posts that are just condensed orange site

The constant quest for "safety" might actually be making our future much less safe. I've seen many instances of users needing to yell at, abuse, or manipulate ChatGPT to get the desired answers. This trains users to be hateful to / frustrated with AI, and if the data is used, it teaches AI that rewards come from such patterns. Wrote an article about this -- https://hackernoon.com/ai-restrictions-reinforce-abusive-user-behavior

But you think humans (by and large) do know what "facts" are?


self

joined 2 years ago