[-] imadabouzu@awful.systems 9 points 1 year ago

In a sense, to me, it is the same thing. If your business is built on repurposing everyone else's inputs indiscriminately, to your benefit and their detriment, it is too expensive to reveal that simple truth.

[-] imadabouzu@awful.systems 8 points 1 year ago

Moravec's Paradox is actually more interesting than it appears. You don't have to take his reasoning or Pinker's seriously, but the observation is salient. The paradox also gets restated in other ways by other scientists; it's a common theme.

One way I often think about it: in order for you to survive, the intelligence of moving through unknown spaces and managing numerous fuzzy energy systems is far more important to prioritize and master than, say, abstract conceptual spaces, which are both empty of calories and cheaper to externalize anyway.

It's part of why I don't think there is a globally coherent hierarchy of intelligence, or perhaps even general intelligence at all. Just the distances and spaces that a thing occupies, and the competencies that define being in that space.

[-] imadabouzu@awful.systems 7 points 1 year ago

You don't have to agree with someone to recognize that they care.

[-] imadabouzu@awful.systems 8 points 1 year ago

I feel this shouldn't be at all surprising, and it continues to point to Diverse Intelligence as conceptually more fundamental than any sort of General Intelligence. There's a huge difference between what something is in theory or in principle capable of, and the economic story of what that thing attends to naturally, given its energy budget.

Broadly, even simple things are powerful precisely because of what they don't bother trying to do until perturbed.

Ultimately, I hypothesize that the reason VCs like the idea of LLMs doing simple things far more expensively than is already possible is that they literally can't imagine what else to spend their money on. They are vacuous consumers by design.

[-] imadabouzu@awful.systems 8 points 1 year ago

Procreate is an example of what good AI deployment looks like. They do use technology, and even machine learning, but they do it in obviously constructive scopes around where the artist's attention is focused. And they're committed to that because... there's no value for them in being a thin wrapper on an already completely commoditized technology that's on its way to the courtroom to be challenged by landmark rulings, with no more ceiling to grow into, whooooooops.

[-] imadabouzu@awful.systems 8 points 1 year ago

No joke but actually yes?

[-] imadabouzu@awful.systems 8 points 1 year ago

Short story: it's smoke and mirrors.

Longer story: this is how software releases work now, I guess. A lot is riding on OpenAI's anticipated release of GPT-5. They have to keep promising enormous leaps in capability because everyone else has caught up and there's no more training data. So the next trick is that their next batch of models has "solved" various problems that people say you can't solve with LLMs, and they're going to be massively better without needing more data.

But, as someone with insider info, it's all smoke and mirrors.

The model that "solved" structured data is empirically worse at other tasks as a result, and I imagine the solution basically just looks like polling multiple responses until the parser validates on the other end (so basically it's a price optimization, afaik).
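To be clear about what I mean by "polling until the parser validates": a minimal sketch, with a hypothetical `model` callable standing in for the actual API (none of these names are OpenAI's; this is just the retry-until-valid pattern):

```python
import json

def generate_with_retries(model, prompt, max_tries=5):
    """Re-sample the model until its output parses as JSON.

    Hypothetical sketch: `model` is any callable returning a string.
    Each failed parse just burns another (billable) call.
    """
    for _ in range(max_tries):
        raw = model(prompt)
        try:
            return json.loads(raw)  # the parser is the only validator
        except json.JSONDecodeError:
            continue  # discard and re-sample
    raise RuntimeError("no valid structured output after retries")

# Toy model that fails twice before emitting valid JSON.
attempts = iter(['not json', '{oops', '{"name": "ok"}'])
result = generate_with_retries(lambda p: next(attempts), "extract a name")
```

The point of the sketch is that nothing about the model changed; only who pays for the invalid samples did.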

The next large model, launching with the new Q* change tomorrow, is "approaching AGI because it can now reliably count letters." But actually it's still just agents (Q* looks to be just a cost optimization of agents on the backend; that's basically it), because the only way it can count letters is by invoking agents and tool use to write a Python program and feed the text into that. Basically, it's all things that already exist independently, wrapped up together. Interestingly, they're so confident in this model that they don't run the resulting Python themselves. It's still up to you, or one of those LLM-wrapper companies, to execute the occasionally broken code to, um... checks notes, count the number of letters in a sentence.

But, by rearranging what already exists and claiming to have solved the fundamental issues, OpenAI can claim exponential progress, terrify investors into blowing more money into the ecosystem, and make true believers lose their minds.

Expect more of this around GPT-5, which they promise "is so scary they can't release it until after the elections." My guess? It's nothing different, but they have to create a story so that true believers will see it as something different.

[-] imadabouzu@awful.systems 8 points 1 year ago

The weird thing, from my perspective, is that nearly every weird, cringy, niche internet addiction I've ever seen or partaken in myself has produced two kinds of people: those who live through it and have their perspective widen, and those who don't.

Like, I look back at my days of spending two days at a time binge-playing World of Warcraft with a deep sense of cringe, but also a smirk, because I survived, I self-regulated, and honestly, I made a couple of lifetime friends. Whatever our response to anime waifus, I hope we still recognize the humanity in being a thing that wants to be entertained or satisfied.

[-] imadabouzu@awful.systems 9 points 1 year ago* (last edited 1 year ago)

Audacious and Absurd Defender of Humanity

Your honor, I'd rather plea guilty than abide by my audacious counsel.

[-] imadabouzu@awful.systems 9 points 1 year ago

It can't stop the usage, but it can raise the cost of doing so by bringing legal risk to operations operating in a public way. It can create precedent that can be built upon by other parties.

Politics and law move slower than, and behind, the things they attempt to regulate, by design. Which is good; the alternative is a surveillance state! But they can definitely arrange themselves to punish, or raise the risk profile of, doing something in a certain patterned way.

[-] imadabouzu@awful.systems 7 points 1 year ago

Honestly, almost anything can work. Some sort of flash-card system, and some sort of input in the language that you enjoy. I use Anki, and yes it's trash, but I have never found spending any more than the least necessary time on the tech of language learning worth it.

The crucial thing, in my experience, is that language acquisition only works if you're paying attention because you actually care about the material in front of you. I think a lot of people make the mistake of only studying aspirationally, well beyond their current capacity, forgetting how to be a child and be highly curious and explorative. Weird shit, even practically useless shit, works surprisingly better than you'd think.
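For what it's worth, the scheduling behind most flash-card systems (Anki included) is some variant of spaced repetition. A minimal sketch of an SM-2-style review step, heavily simplified; Anki's actual algorithm has more states and modifiers:

```python
def review(interval_days: float, ease: float, grade: int):
    """One simplified SM-2-style review step.

    grade: 0-5 self-rating; >= 3 counts as a pass.
    Returns the next (interval_days, ease).
    """
    if grade < 3:
        # Lapse: the card starts over and its ease drops.
        return 1.0, max(1.3, ease - 0.2)
    # Pass: ease shifts by the standard SM-2 adjustment, interval grows.
    ease = max(1.3, ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
    return interval_days * ease, ease

interval, ease = 1.0, 2.5  # SM-2's conventional starting ease
for g in (4, 5, 3):        # three successful reviews in a row
    interval, ease = review(interval, ease, g)
```

The whole point is that each pass multiplies the gap before you see the card again, which is why spending extra time on fancier tooling rarely beats just doing the reviews.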

[-] imadabouzu@awful.systems 8 points 1 year ago

A certain class of idealists definitely feels this way, and it's why many decentralized efforts are fragile and fall apart. Because they can't meaningfully construct something without centralization or owners, they end up hiding these things under a blanket rather than acknowledging them as design elements that require intentional specification.
