We can protect artists (nightshade.cs.uchicago.edu)

Remember how we were told that genAI learns "just like humans", and how the law can't say anything about fair use, and I guess now all art is owned by big tech companies?

Well, of course it's not true. By exploiting a few of the ways in which genAI *is not* like human learners, artists can filter their digital art in such a way that if a genAI tool consumes it, it actively reduces the quality of the model, undoing generalization and bleeding into neighboring concepts.

Can an AI tool be used to undo this obfuscation? Yes. At scale, however, doing so imposes ever-growing compute costs. This also looks like an improvable method, not a dead end -- adversarial input design is a growing field of machine learning, with more and more techniques becoming widely available. Imagine this as a sort of "cryptography for semantics", in the sense that it imposes asymmetrical work on AI consumers (while leaving the human eye much less affected).
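
For intuition, here's a minimal sketch of a generic adversarial perturbation step (FGSM-style). To be clear, this is not Nightshade's published algorithm; `model` and `decoy_embedding` are hypothetical stand-ins for a feature extractor and the embedding of an unrelated concept:

```python
import torch
import torch.nn.functional as F

def poison_step(image, model, decoy_embedding, epsilon=0.03):
    """One FGSM-style step: nudge pixels so the extracted features
    drift toward an unrelated "decoy" concept while the visible
    change stays tiny. Generic sketch, not Nightshade's method."""
    image = image.clone().requires_grad_(True)
    features = model(image)                       # hypothetical feature extractor
    loss = F.mse_loss(features, decoy_embedding)  # distance to the decoy concept
    loss.backward()
    with torch.no_grad():
        # Step against the gradient: features move toward the decoy,
        # but no pixel changes by more than epsilon.
        poisoned = image - epsilon * image.grad.sign()
    return poisoned.clamp(0.0, 1.0).detach()
```

The asymmetry: applying the perturbation costs one gradient step per image, while reliably detecting and scrubbing it across millions of scraped images costs the consumer far more.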

Now we just need labor laws to catch up.

Wouldn't it be funny if generative AI not only fails to lead to a boring dystopia, but the proliferation and expansion of this and similar techniques to protect human meaning eventually puts a lot of grifters out of business?

We must have faith in the dark times. Share this with your artist friends far and wide!

[-] locallynonlinear@awful.systems 14 points 10 months ago

Scientists terrified to discover that language, the thing they trained into a highly flexible matrix of nearly arbitrary numbers, turns out to exist in multiple forms, including forms unintended by the matrix!

What happens next, the kids lie to their parents so they can go out partying after dark? The fall of humanity!

[-] locallynonlinear@awful.systems 19 points 10 months ago

you forgot the last stage of the evolution,

you'll later find out that people were talking about you, your actions, your words, and that being ghosted was in fact the consequence of your actions. Then you'll have one last opportunity to turn it all around:

  1. do some self-introspection, reconcile what actually happened vs. what you intended to happen, and decide that it is in fact possible to create relationships without trying to meta-discomfort people for your purposes specifically

or

  2. wokeism is the reason, so this time you need to be even MORE obnoxious, to filter out people who would talk behind your back even strongester! (repeat from the top of your flow)

[-] locallynonlinear@awful.systems 13 points 10 months ago

> So far, there has been zero or one[1] lab leak that led to a world-wide pandemic. Before COVID, I doubt anyone was even thinking about the probabilities of a lab leak leading to a worldwide pandemic.

So, actually, many people were thinking about lab leaks and the potential of a worldwide pandemic, despite Scott's suggestion that no one was. For years now, bioengineering has been concerned with accidental lab leaks, because the understanding that the risk existed was widespread.

But the reality is that guessing at the probabilities of this sort of thing still doesn't change anything. It's up to labs to pursue safety protocols, which happens at the economic edge between the opportunity and the material and mental cost of being diligent. Lab leaks may not change the probabilities, but the events of them occurring do cause trauma, which acts not as some Bayesian correction but as an emotional correction, so that people's motivation to at least pay more attention increases for a short while.

Other than that, the greatest rationalist on earth can't do anything with their statistics about lab leaks.

This is the best paradox. Not only is Scott wrong to suggest people shouldn't be concerned about major events (the traumatic update to individuals' memory IS valuable), but he's wrong to suggest that anything he or anyone does after updating their probabilities could possibly help them prepare meaningfully.

He's the most hilarious kind of wrong.

[-] locallynonlinear@awful.systems 18 points 10 months ago* (last edited 10 months ago)

Ah, if only the world weren't so full of "stupid people" updating their bayesians based on things they see on the news, because you should already be worried about, and calculating your distributions for... *inhales deeply* terrorist nuclear attacks, mass shootings, lab leaks, famine, natural disasters, murder, sexual harassment, conmen, the decay of society, copyright, taxes, spitting into the wind, your genealogy results, comets hitting the earth, UFOs, politics of any and every kind, and tripping on your shoelaces.

What... insight did any of this provide? Seriously. Analytical statistics is a mathematically consistent means of being technically not wrong while using a lot of words to disagree about feelings, and yet saying nothing.

Risk management isn't actually a statistical question; it's an economic question about your opportunities. It's why prepping is better seen as a hobby and a coping mechanism, not as a viable means of surviving the apocalypse. It's why, even when an EA uses their superpowers of Bayesian rationality, the answer in the magic eight ball is always just "try to make money, stupid".

[-] locallynonlinear@awful.systems 16 points 11 months ago

It's hilarious to me how unnecessarily complicated invoking Moore's law is as a way of saying anything.

With Moore's Law: "Ok ok ok, so like, imagine that this highly abstract, broad process over a huge time period is actually the same as manufacturing this very specific thing over a small time period. Hmm, it doesn't fit. Ok, let's normalize the timelines with this number. Why? Uhhh, because you know, this metric doubles as well. Ok. Now let's just put these things together into our machine and LOOK, it doesn't match our empirical observations, obviously I've discovered something!"

Without Moore's Law: "When you reduce the dimensions of any system in nature, flattening their interactions, you find exponential processes everywhere. QED."

[-] locallynonlinear@awful.systems 12 points 11 months ago

Helpful reminder to spread the word on Google alternatives this holiday season. Bought Kagi subscriptions as stocking stuffers for my loved ones. Everyone who I have convinced to give it a try has been impressed thus far.

SEO will pillage the commons; it has been doing so for years and years. Community diversity and alternative payment models for search are part of the bulwark.

[-] locallynonlinear@awful.systems 17 points 11 months ago* (last edited 11 months ago)

Rich People: "Competitive markets optimize things, see how much progress capitalism has brought!"

Also Rich People: "But what if everything descends into expensive, unregulated competition between things that aren't rich people oooo nooo!!!"

[-] locallynonlinear@awful.systems 12 points 11 months ago

Question: if the only thing that matters is using AGI, what powers the AGI? Does the AGI produce net-positive energy to power the continued expansion of AGI? Does AGI break the law of conservation of energy because... if it didn't, it wouldn't be AGI?

[-] locallynonlinear@awful.systems 12 points 11 months ago

The irony in all this is that if they just dropped the utilitarianism and were just honest about feelings guiding their decision making, they could be tolerable. "I'm not terribly versed in the details of the gun violence issue, but I did care about malaria enough to donate to some functional causes." Ok, fine, you're now instantly just a normal person.

[-] locallynonlinear@awful.systems 38 points 1 year ago

Takes like this are one of the many things I pull out to show how naive and misguided most x-risk-obsessed people are. And especially Mr. Altman.

Despite wide fears of synthetic gain-of-function attacks, it turns out it's actually really hard to create a new virus meaningfully stronger than the standard endemic ones that already exist. Many countries and labs have legitimately tried; there are lots of papers and research. It's really, really hard to beat nature at the microbiological scale: viruses have to not only be virulent, they have to contend with extremely unpredictable intermediate environments. The current endemic viruses got there through many mutations and adaptations inside environments where they were already at least somewhat successful (and not in vitro). And in the end, what would be the point? Once a virulent virus breaks out, you have very little control. Either it works really well and backfires, or, far more likely, it doesn't do much at all, but it does piss other nations off.

It's not impossible. But honestly, yeah, I don't comprehend x-riskers who obsess over this.

[-] locallynonlinear@awful.systems 18 points 1 year ago

This is the push/pull abusive dynamic: feign sensitivity, deny negative implications as not their intention, but demand positive feedback for dangerous takes. EA believes that not being wrong or held accountable is the most important optimization, so all their positions come from having absolutely no stake in the real-world consequences.

[-] locallynonlinear@awful.systems 12 points 1 year ago

> ah, the NP-complete problem of just fucking pulling the file into memory (there’s no way this clown was burning a rainforest asking ChatGPT for a memory-optimized way to do this),

It's worse than that, because there have been incredibly simple, efficient ways to k-sample a stream, with all sorts of guarantees about the distribution and no buffering required, for decades. It took me all of one minute with a traditional search engine to find all kinds of articles detailing this.
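
For the curious, here's a minimal sketch of the classic technique (Vitter's Algorithm R, i.e. reservoir sampling); the filename is just a placeholder:

```python
import random

def reservoir_sample(stream, k):
    """Reservoir sampling (Algorithm R): draw a uniform random sample
    of k items from a stream of unknown length, in O(k) memory."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)    # fill the reservoir first
        else:
            j = random.randint(0, i)  # random slot in [0, i]
            if j < k:
                reservoir[j] = item   # item survives with probability k/(i+1)
    return reservoir

# Sample 5 lines from a file without ever pulling it into memory.
with open("huge_file.txt") as f:
    print(reservoir_sample(f, 5))
```

Every item in the stream ends up in the sample with probability exactly k/n, and the whole thing never holds more than k items at once.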

If you can't be bothered to learn a thing, it isn't surprising when you end up worshiping the magic of the thing.
