[-] coolin@beehaw.org 57 points 1 year ago

"I use Signal to hide my data from the US government and big tech"

"Wait, you seriously still use Reddit? Everyone switched to the Fediverse!"

"Wow, can't believe you use Apple! Android is so much better."

No one who isn't terminally online understands what these statements mean. If you want people to use something else, don't make it about privacy; choose something with fancy buttons and cool features that looks close enough to what they already have. They do not care about privacy and are literally of the mindset "if I have nothing to hide, I have nothing to fear". They sleep well at night.

[-] coolin@beehaw.org 33 points 1 year ago

Only thing really missing is Wallet and NFC payment support. Other than that, I think GrapheneOS and LineageOS cover it all.

[-] coolin@beehaw.org 9 points 1 year ago

Sam Altman: We are moving our headquarters to Japan

[-] coolin@beehaw.org 20 points 1 year ago* (last edited 1 year ago)

"We Have No Moat, And Neither Does OpenAI" is the leaked document you're talking about.

It's a pretty interesting read. Time will tell if it's right, but given the speed of advancements I'm seeing in the open source community, advancements that can be stacked on top of each other, I think it could be. If open source figures out scalable distributed training, I think it's Joever for AI companies.

[-] coolin@beehaw.org 9 points 1 year ago

I don't know what type of chatbots these companies are using, but I've literally never had a good experience with them, and it doesn't make sense considering how advanced even something like OpenOrca 13B is (roughly GPT-3.5 level), which can run on a single graphics card in a company server room. Most of the ones I've talked to come from some random AI startup and serve cookie-cutter preprogrammed text responses that feel less like LLMs and more like a flow chart with a rudimentary classifier to select an appropriate response. We have LLMs that can handle the more complex human tasks of figuring out problems and suggesting solutions, and that can query a company database to respond correctly, but we don't use them.
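The flow-chart-plus-classifier pattern described above is roughly this (a toy sketch of my own, not any company's actual bot; all keywords and replies are made up):

```python
# Toy version of the "flow chart + rudimentary classifier" support bot:
# a keyword match picks one canned reply, and anything off-script dead-ends.
CANNED = {
    "refund": "To request a refund, visit your Orders page.",
    "shipping": "Orders ship within 3-5 business days.",
    "password": "Use the 'Forgot password' link to reset it.",
}

def flowchart_bot(message: str) -> str:
    # "Classify" by scanning for the first known keyword.
    for keyword, reply in CANNED.items():
        if keyword in message.lower():
            return reply
    # Anything outside the flow chart falls through to a dead end.
    return "Sorry, I didn't understand. Please call support."

print(flowchart_bot("How do I get a refund?"))
print(flowchart_bot("My package arrived damaged, what now?"))
```

An LLM with database access could actually reason about the second question; the flow chart just punts it to a phone line.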

[-] coolin@beehaw.org 9 points 1 year ago

Blocking out the sun with aerosols is only a good idea if you know with high confidence how it will impact the climate system and the environment. That's why they're trying to simulate it on a supercomputer first: so they know whether it fucks stuff up or not.

[-] coolin@beehaw.org 46 points 1 year ago

Cool meme, but Reuters doesn't own AP and Rothschild doesn't own Reuters. It's quite ironic to push back against the very real problem of media disinfo/government propaganda trickling down through AP/Reuters while at the same time spreading misinformation.

[-] coolin@beehaw.org 8 points 1 year ago

The natural next place for people to go once they can't block ads on YouTube's website is services that exploit the API to serve free content (NewPipe, Invidious, youtube-dl, etc.). If that happens at a large scale, YouTube might shut off its API just like Reddit did, and we'll end up in a scenario where creators are forced to move to PeerTube; given how costly hosting is for video streaming, that could be much worse than Reddit->Lemmy+KBin or Twitter->Mastodon. Then again, YouTube has survived enshittification for a long time, so we'll have to wait and see.

[-] coolin@beehaw.org 15 points 1 year ago

I mean, advanced AI aside, there are already browser extensions you can pay for that have humans on the other end solving your CAPTCHAs. It's pretty much impossible to stop, imo.

A long-term solution would probably be something like a public/private key pair, issued by a government or similar authority to verify you're a real person, that you must provide to sign up for a site. We obviously don't have the resources to do that, and people are going to leak theirs starting day 1.
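A toy sketch of what that challenge-response flow could look like, using textbook RSA with tiny primes purely for illustration (real systems would use proper padded signatures like Ed25519; the "government issues the keypair" setup is the hypothetical from above, not anything that exists):

```python
# NOT secure: tiny primes and no padding, for illustration only.
import hashlib
import secrets

# Hypothetical: a government issues this keypair to a verified person.
p, q = 61, 53
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, modular inverse of e

def sign(message: bytes) -> int:
    # Only the holder of the private key d can produce this value.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Anyone can check the signature using only the public key (n, e).
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

# Sign-up: the site sends a random challenge, the person signs it,
# and the site verifies against the registered public key.
challenge = secrets.token_bytes(16)
assert verify(challenge, sign(challenge))
```

The leak problem is visible right in the sketch: whoever copies `d` can sign challenges forever, which is why revocation would matter as much as issuance.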

Honestly, disregarding the dystopian nature of it all, I think Sam Altman's Worldcoin is a good idea, at least for authentication: all you have to do is scan your iris to prove you're a person, and you're in. People could steal your eyes tho 💀 so it's not foolproof. But in general, biometric proof of personhood could be a way forward as well.

[-] coolin@beehaw.org 8 points 1 year ago

There are some in the research community that agree with your take: "The Curse of Recursion: Training on Generated Data Makes Models Forget"

Basically, the long and short of that paper is that LLMs are inherently biased towards likely responses. The more their training set is LLM-generated, and thus contains that bias, the less the LLM will be able to produce unlikely responses, degrading model quality over successive generations.
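A minimal numeric illustration of that mechanism (my own toy, not the paper's experiment): fit a distribution to the data, resample the next "training set" from the fit while truncating rare outcomes the way likelihood-biased generation does, and the spread collapses over generations.

```python
# Toy model collapse: each generation trains on the previous one's output.
import random
import statistics

random.seed(0)
data = [random.gauss(0, 1) for _ in range(10_000)]  # "human" data

spreads = []
for generation in range(5):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    spreads.append(sigma)
    # "Train on generated data": sample the next corpus from the fitted
    # model, dropping unlikely outputs (the bias toward likely responses).
    data = [x for x in (random.gauss(mu, sigma) for _ in range(10_000))
            if abs(x - mu) < 1.5 * sigma]

print(spreads)  # shrinks every generation: the tails are forgotten
```

Each generation keeps only the likely middle of the previous one, so the measured spread decays geometrically; that's the "forgetting" of unlikely responses in miniature.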

However, I tend to think this viewpoint is missing something important. Can you train a new LLM on today's internet? Probably not, at least not without some heavy cleaning. Can you train a multimodal model on video, audio, the chat logs of people talking to it, and even the outputs of other, better LLMs? Yes, and you will get a much higher quality model and likely won't see the model collapse implied by the paper.

This is more or less what OpenAI has done. Conversations with its 100M+ users are saved and used to further train the AI. Their latest GPT-4 is also trained for image and video recognition, and they have been exploring ways to use LLMs to train new models, especially to aid in aligning them.

Another recent example is Orca, a fine-tune of the open source LLaMA model trained with GPT-3.5 and GPT-4 as teachers, which retains ~90% of GPT-3.5's performance despite using roughly 10x fewer parameters.

[-] coolin@beehaw.org 8 points 1 year ago

Lemmygrad is specifically problematic for being predominantly Marxist-Leninist (as the .ml suggests). I think you're probably right that people reject them outright because of "AH, THE COMMUNISTS WANT TO END CAPITALISM" red-scare-type stuff present in Western countries, but where I specifically find Lemmygrad (and other tankies) way too negative to interact with is when they get into defending communist regimes.

If you asked the average Lemmygrad user, they too would be enveloped in propaganda, though this time coming from communist regimes and praxis they've read. They have been deluded into believing Stalin and Mao were good leaders, that authoritarianism is okay if it advances their favorite political agenda (though for some reason also claim that these countries aren't authoritarian), and that these regimes should be implemented everywhere.

The worst of it all is their constant genocide denial. Yes, the USA and other Western countries have done a similar amount (maybe even more?) of really bad stuff in this area (e.g. Native peoples, apartheid, the Roma, etc. 💀), but I think a well-educated Western citizen, especially a leftist one, should broadly be able to understand and admit that what their country did was wrong and should never be done again. A Lemmygrad user instead defends things like the Uyghur genocide and the Holodomor, saying both that they don't exist and are "western propaganda" while at the same time entertaining the counterfactual and saying that if they did happen, they were justified because the West did it too and was being very mean to communism 😡.

When you get to that level of malevolent stupidity, you start to look less like a leftist and more like a fascist who supports genocide and absolute state power and who uses strategic ambiguity to express toxic beliefs. I don't think anyone suggests we stay federated with a fascist instance because fascists are misunderstood after "years of propaganda pushed by Western countries" to discredit Hitler and Mussolini, but here you are doing the moral equivalent.

