it's a good thing charities don't distribute resources within societies or communal frameworks!
dear gods how does one type that with a straight face and not pass out from sheer intellectual exertion
no worries -- i am in the unfortunate position of very often needing to assume the worst in others, and maybe my reading of you was harsher than it should have been; for that i am sorry. but...
"generative AI" is a bit of a marketing buzzword. the specific technology in play here is LLMs, and they should be forcefully kept out of every online system, especially ones people rely on for information.
LLMs are inherently unfit for every purpose. they might be "useful", in the sense that a rock is useful for driving a nail through a board, but they are not tools in the same way hammers are. the only exception to this is when you need a lot of text in a hurry and don't care about the quality or accuracy of the text -- in other words, spams and scams. in those specific domains i can admit LLMs are the most applicable tool for the job.
so when ostensibly-smart people, especially ones running public information systems, propose using LLMs for things LLMs are unable to do, such as explaining species identification procedures, it means either 1) they've been suckered into believing LLMs are capable of those things, or 2) they're being paid to propose them. sometimes it is a mix of both. either way, it very much indicates those people should not be trusted.
furthermore, the technology industry as a whole has already spent several billion dollars trying to push this technology onto and into every part of our daily lives. LLM-infested slop has made its way onto every online platform, more often than not with direct backing from those platforms. and the technology industry is openly hostile to the idea of "consent", actively trying to undermine it at every turn. that hostility even shows in the supposedly reassuring statement on that forum post about the mystery demo LLMs -- note the use of the phrase "making it opt-out". why not "opt-in"? why not "with consent"?
it's no wonder that people are leaving -- the writing is more or less on the wall.
AI sales start-up claims...
much better :3
random guess, but: "11x" is the name of the company; that's not "eleven times"
That's the opposite of what I'm saying. Deepseek is the one under scrutiny, yet they are the only one to publish source code and training procedures of their model.
this has absolutely fuck all to do with anything i've said in the slightest, but i guess you gotta toss in the talking points somewhere
e: it's also trivially disprovable, but i don't care if it's actually true, i only care about headlines negative about AI
if you put this paragraph
Corporations institute barebones [crappy product] that [works terribly] because they can't be bothered to pay the [production workers] to actually [produce quality products] but when shit goes south they turn around and blame the [workers] for a bad product instead of admitting they cut corners.
and follow it up with "It's China Syndrome"... then it's pretty astonishingly clear it's meant in reference to the perceived dominant production ideology of China specifically, and has nothing to do with nuclear reactors
~~you'd know what "monotonic" means if you'd googled it, which would also have given you the answer to your question~~
edit: this was far too harsh a reply in retrospect, apologies. the question is answered below, but i'll echo it: a "monotonic UUID" is one that numerically increases as new UUIDs are generated. this has an advantage when writing new UUIDs to indexed database columns, since most database index structures (usually B-trees) handle inserts at the end more efficiently than inserts at a random point, which is what non-monotonic UUIDs give you.
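for the curious, here's a rough sketch of the idea in python -- not spec-compliant UUIDv7, just a 48-bit millisecond timestamp in the high bits followed by random bits, so newer IDs compare larger. the function name and bit layout are made up for illustration:

```python
import os
import time
import uuid

def monotonic_uuid() -> uuid.UUID:
    # 48-bit unix timestamp in milliseconds goes in the high bits,
    # so numeric/lexicographic order roughly tracks creation time
    ms = int(time.time() * 1000) & ((1 << 48) - 1)
    # fill the remaining 80 bits with randomness to avoid collisions
    rand = int.from_bytes(os.urandom(10), "big")
    return uuid.UUID(int=(ms << 80) | rand)

ids = [monotonic_uuid() for _ in range(1000)]
# mostly sorted already; ties within the same millisecond aren't ordered
print(ids[:3])
```

a database index keyed on values like these mostly sees appends at the right-hand edge of the tree instead of inserts scattered across the whole structure, which is where the efficiency win comes from.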
best of luck with android bullshit. i'm not familiar with either psychedelics themselves or their evangelists, but yeah, would love to hear thoughts
i mean. definitionally, some did, yeah? whether you bought in at 25, 50, 75, 100, 200, or 400 -- it all ends up at the same number, the only difference being how much you're down by between then and now.
eta: that's not even to mention the fact that since this demand is all synthetic, all the money coming in is from people who are going to be left holding the bag, again. we're just watching it repeat.
at least if it was "vectors in a high-dimensional space" it would be like. at least a little bit accurate to the internals of LLMs. (still an entirely irrelevant implementation detail that adds noise to the conversation, but accurate.)
my pet conspiracy theory is that the two streamers had installed cheats at one point in the past and compromised their systems that way. but i have no evidence to base that on, just seems more plausible to me than "a hacker discovered an RCE in EAC/Apex and used it during a tournament to install game cheats on two people and [appear to] do nothing else"
oh 100%. on the flipside of that, the advantage is that usually it's relatively easy to flip basic constructions into sneers. the combination of getting their arguments picked apart while being mocked usually causes the monocle to fall off the seal reeeal quick.
maybe there should be some kind of scoring system. perhaps a golf-like one: par for three comments before they complain about your tone (sneering in a sneer club, my gods!), four for getting themselves banned. a bonus sticker in the shape of a star if "ad hominem" is typed verbatim.