[-] ebu@awful.systems 12 points 1 month ago

maybe i'm a weirdo but i actually really like this a lot. setting aside the armies of sycophants chanting outside all our collective windows about how AI is the future of gaming... if you look at this "game" as an art object unto itself, i think it is actually really engaging

it reminds me of other "games" like Marian Kleineberg's Wave Function Collapse and Bananaft's Yedoma Globula. there's one other on the tip of my tongue where you uploaded an image and it constantly reprojected the image onto the walls of a first-person walking simulator, but i don't recall the name

[-] ebu@awful.systems 19 points 3 months ago* (last edited 3 months ago)

there were bits and pieces that made me feel like Jon Evans was being a tad too sympathetic to Eliezer and others whose track record really should warrant a somewhat greater degree of scepticism than he shows, but i had to tap out at this paragraph from chapter 6:

Scott Alexander is a Bay Area psychiatrist and a writer capable of absolutely magnificent, incisive, soulwrenching work ... with whom I often strongly disagree. Some of his arguments are truly illuminatory; some betray the intellectual side-stepping of a very smart person engaged in rationalization and/or unwillingness to accept the rest of the world will not adopt their worldview. (Many of his critics, unfortunately, are inferior writers who misunderstand his work, and furthermore suggest it’s written in bad faith, which I think is wholly incorrect.) But in fairness 90+% of humanity engages in such rationalization without even worrying about it. Alexander does, and challenges his own beliefs more than most.

the fact that Jon praises Scott's half-baked, anecdote-riddled, Red/Blue/Gray trichotomy as "incisive" (for playing the hits to his audience), and his appraisal of the meandering transhumanist non-sequitur reading of Allen Ginsberg's Howl as "soulwrenching" really threw me for a loop.

and then the later description of that ultimately rather banal New York Times piece as "long and bad" (a hilariously hypocritical pair of adjectives for a self-proclaimed fan of some of Scott's work to use), and the slamming of Elizabeth Sandifer as an "inferior writer who misunderstands Scott's work" for, uh, correctly analyzing Scott's tendencies to espouse and enable white supremacist and sexist rhetoric... yeah, it pretty much tanks my ability to take what Jon is writing at face value.

i don't get how, after so many words spent being gentle but firm about Eliezer's (lack of) accomplishments, he puts out such a full-throated defense of Scott Alexander (and the subsequent smearing of his """enemies"""). of all people, why him?

[-] ebu@awful.systems 15 points 4 months ago* (last edited 4 months ago)

Would you rather have a dozen back and forth interactions?

these aren't the only two possibilities. i've had some interactions where i got handed one ref sheet and a sentence of description and the recipient was happy with the first sketch. i've had some where i got several reference pieces from different artists alongside paragraphs of description, and it still took several dozen attempts. tossing in ai art just increases the volume, not the quality, of the interaction

Besides, this is something I've heard from other artists, so it's very much a matter opinion.

i have interacted with hundreds of artists, and i have yet to meet one who does not, to at least some degree, hold some kind of negative opinion on ai art, except those for whom image-generation models were their primary (or more commonly, only) tool for making art. so if there is a group of artists out there who would be happy to be presented with ai art and asked to "make it like this", i have yet to find them

Annoying, sure, but not immoral.

annoying me is immoral actually

[-] ebu@awful.systems 17 points 4 months ago

as someone who only draws as a hobbyist, but who has taken commissions before, i think it would be very annoying to have a prospective client go "okay so here's what i want you to draw" and then send over ai-generated stuff. if only because i know said client is setting their expectations for the hyper-processed, over-tuned look of the machine instead of what i actually draw

[-] ebu@awful.systems 13 points 4 months ago

Well over a year ago it was actually useful. [...] And it just doesn't come up with interesting stuff any more.

i have to admit i'm deeply curious what outputs you considered interesting enough for twenty bucks a month

[-] ebu@awful.systems 18 points 5 months ago

i couldn't resist

Reddit post titled "The Anti-AI crowd is so toxic and ridiculous that it's actually pushed me FURTHER into AI art"

at least when this rhetoric popped up around crypto and GameStop stocks, there was a get-rich-quick scheme attached to it. these fuckers are doing it for free

[-] ebu@awful.systems 14 points 6 months ago* (last edited 6 months ago)

You're implicitly accepting that eventually AI will be better than you once it gets "good enough". [...] Only no, that's not how it's likely to go.

wait hold on. hold on for just a moment, and this is important:

Only no, that's not how it's likely to go.

i regret to inform you that thinking there's even a possibility of an LLM being better than people is actively buying into the sci-fi narrative

well, except maybe generating bullshit at breakneck speeds. so as long as we aren't living in a society based on bullshit we should be goo--... oh fuck

[-] ebu@awful.systems 16 points 6 months ago* (last edited 6 months ago)

i can't stop scrolling through this hot garbage, it just keeps getting better

cut-off tweet from the same account saying that AIs are now capable of hypnotizing humans

[-] ebu@awful.systems 12 points 7 months ago

Wait a year and see how kids get on blockchain to sell and buy GPU resources for rendering ‘trans furries’

excuse you, i render my fursona with my own GPU

[-] ebu@awful.systems 17 points 7 months ago* (last edited 7 months ago)

i'll take trolls "pretending" to not understand computational time over fascists "pretending" to gush over other fascists any day

[-] ebu@awful.systems 18 points 7 months ago

it's funny how your first choice of insult is accusing me of not being deep enough into llm garbage. like, uh, yeah, why would i be

but also how dare you -- i'll have you know i only choose the most finely-tuned, artisanally-crafted models for my lawyering and/or furry erotic roleplaying needs

[-] ebu@awful.systems 13 points 8 months ago

i was impressed enough with kagi's by-default deranking/filtering of seo garbage that i got a year's subscription a while back. good to know that this is what that money went to. suppose i'll ride out the subscription (assuming they don't start injecting ai garbage into search before then) and then find some other alternative

switching topics, but i do find it weird how the Brave integration stuff (which i also only found out about after i got the subscription) hasn't... bothered me as much? to be exceptionally clear, fuck Brendan Eich and Brave -- the planet deserves fewer bigots, crypto grifters, and covid conspiracists -- but i can't put my finger on why Kagi paying to consume Brave's search APIs just doesn't cause as much friction with me. honestly it could be the fact that when i pay for Kagi it doesn't feel like i'm bankrolling Eich and his ads-as-a-service grift, whereas the money for my subscription is definitely paying for Vlad to ~~reply-guy into the inboxes of bloggers who are critical of the way Kagi operates~~ correct misunderstandings about Kagi.

