[-] ebu@awful.systems 21 points 1 month ago

i think you're missing the point that "Deepseek was made for only $6M" has been the trending headline for the past while, with the specific point of comparison being the massive costs of developing ChatGPT, Copilot, Gemini, et al.

to stretch your metaphor, it's like someone rolling up with their car, claiming it only costs $20 (unlike all the other cars that cost $20,000), only to find out that number is just how much it costs to fill the gas tank up once

[-] ebu@awful.systems 25 points 5 months ago

because it encodes semantics.

if it really did so, performance wouldn't swing up or down when you change syntactic or symbolic elements of problems. the only information encoded is language-statistical

[-] ebu@awful.systems 21 points 8 months ago* (last edited 8 months ago)

I don't think emojis should be the place to have a socio-political discussion.

have some entirely non-political emojis:

🗳️: BALLOT BOX WITH BALLOT

🇹🇼: FLAG: TAIWAN

🇵🇸: FLAG: PALESTINIAN TERRITORIES

🗽: STATUE OF LIBERTY

🤡: FACE OF "NON-POLITICAL" PERSON

[-] ebu@awful.systems 21 points 9 months ago* (last edited 9 months ago)

"rat furry" :3

"(it's short for rationalist)" >:(

[-] ebu@awful.systems 20 points 9 months ago* (last edited 9 months ago)

simply ask the word generator machine to generate better words, smh

this is actually the most laughable/annoying thing to me. it betrays such a comprehensive lack of understanding of what LLMs do and what "prompting" even is. you're not giving instructions to an agent, you're feeding a word predictor a list of words for it to continue from

in my personal experiments with offline models, using something like "below is a transcript of a chat log with XYZ" as a prompt instead of "You are XYZ" immediately gives much better results. not good results, but better
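to make the "prompt is just a prefix" point concrete, here's a toy sketch (illustrative only — a real LLM uses learned transformer weights, not bigram counts, but the prompt-as-prefix mechanic is the same: the prompt just seeds the context the predictor continues from):

```python
import random

# toy bigram "word predictor": count which word follows which in a corpus
corpus = "the cat sat on the mat and the cat slept on the mat".split()
bigrams = {}
for a, b in zip(corpus, corpus[1:]):
    bigrams.setdefault(a, []).append(b)

def generate(prompt_words, n=5, seed=0):
    """Continue from prompt_words by repeatedly sampling a next word."""
    rng = random.Random(seed)
    out = list(prompt_words)  # the "prompt" is nothing but the starting context
    for _ in range(n):
        followers = bigrams.get(out[-1])
        if not followers:      # dead end: no observed continuation
            break
        out.append(rng.choice(followers))
    return out

print(" ".join(generate(["the"], n=4)))
```

swapping the prompt words just changes which statistics the continuation gets drawn from — there's no separate "instruction" channel the model obeys.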

[-] ebu@awful.systems 22 points 9 months ago

the upside: we can now watch "disruptive startups" go through the acquire funding -> slapdash development -> catastrophic failure -> postmortem cycle at breakneck speeds

[-] ebu@awful.systems 20 points 10 months ago* (last edited 10 months ago)

i really, really don't get how so many people are making the leaps from "neural nets are effective at text prediction" to "the machine learns like a human does" to "we're going to be intellectually outclassed by Microsoft Clippy in ten years".

like it's multiple modes of failing to even understand the question happening at once. i'm no philosopher; i have no coherent definition of "intelligence", but it's also pretty obvious that all LLMs are doing is statistical extrapolation on language. i'm just baffled at how many so-called enthusiasts and skeptics alike just... completely fail at the first step of asking "so what exactly is the program doing?"

[-] ebu@awful.systems 21 points 10 months ago

syncthing is an extremely valuable piece of software in my eyes, yeah. i've been using a single synced folder as my google drive replacement and it works nearly flawlessly. i have a separate system for off-site backups, but as a first line of defense it's quite good.

[-] ebu@awful.systems 24 points 11 months ago

correlation? between the rise in popularity of tools that exclusively generate bullshit en masse and the huge swelling in volume of bullshit on the Internet? it's more likely than you think

it is a little funny to me that they're talking about using AI to detect AI garbage as a mechanism of preventing the sort of model/data collapse that happens when data sets start to become poisoned with AI content. because it seems reasonable to me that if you start feeding your spam-or-real classification data back into the spam-detection model, you'd wind up with exactly the same degradations of classification, and your model might start calling every article that has a sentence starting with "Certainly," a machine-generated one. maybe they're careful to only use human-curated sets of real and spam content, maybe not

it's also funny how nakedly straightforward the business proposition for SEO spamming is, compared to literally any other use case for "AI". you pay $X to use this tool, you generate Y articles which reach the top of Google results, you generate $(X+P) in click revenue, and you do it again. meanwhile "real" businesses are trying to gauge exactly what single digit percent of bullshit they can afford to get away with putting in their support systems or codebases while trying to avoid situations like being forced to give refunds to customers under a policy your chatbot hallucinated (archive.org link) or having to issue an apology for generating racially diverse Nazis (archive).

[-] ebu@awful.systems 20 points 11 months ago* (last edited 11 months ago)

actually, i don't think possessing the ability to send email entitles you to """debate""" with anyone who publishes material disagreeing with you or the way your company runs, and i'm pretty sure responding with a (polite) "fuck off" is a perfectly reasonable approach to the kinds of people who believe they have an inalienable right to argue with you

[-] ebu@awful.systems 21 points 11 months ago

i absolutely love the "clarification" that an email address is PII only if it's your real, primary, personal email address, and any other email address (that just so happens to be operated and used exclusively by a single person, even to the point of uniquely identifying that person by that address) is not PII

[-] ebu@awful.systems 23 points 11 months ago* (last edited 11 months ago)

Actually, that email exchange isn’t as combative as I expected.

i suppose the CEO completely barreling forward past multiple attempts to refuse conversation while NOT screaming slurs at the person they're attempting to lecture, is, in some sense, strictly better than the alternative

