196
Community Rules
- You must post before you leave.
- Be nice. Assume others have good intent (within reason).
- Block or ignore posts, comments, and users that irritate you rather than engaging. Report them if they are actually breaking community rules.
- Use content warnings and/or mark posts as NSFW when appropriate. Most posts that need a content warning likely also need to be marked NSFW.
- Most 196 posts are memes, shitposts, cute images, or just recent things that happened. There is no real theme, but try to avoid posts that are very inflammatory, offensive, very low quality, or very "off topic".
- Bigotry is not allowed. This includes (but is not limited to) homophobia, transphobia, racism, sexism, ableism, and classism, as well as discrimination based on things like ethnicity, nationality, language, or religion.
- Avoid shilling for corporations, posting advertisements, or promoting the exploitation of workers.
- Proselytization, support, or defense of authoritarianism is not welcome. This includes, but is not limited to: imperialism, nationalism, genocide denial, ethnic or racial supremacy, fascism, Nazism, Marxism-Leninism, Maoism, etc.
- Avoid AI-generated content.
- Avoid misinformation.
- Avoid incomprehensible posts.
- No threats or personal attacks.
- No spam.
Moderator Guidelines
- Don’t be mean to users. Be gentle or neutral.
- Most moderator actions which have a modlog message should include your username.
- When in doubt about whether or not a user is problematic, send them a DM.
- Don’t waste time debating/arguing with problematic users.
- Assume the best, but don’t tolerate sealioning/just asking questions/concern trolling.
- Ask another mod to take over cases you struggle with, if you get tired, or when things get personal.
- Ask the other mods for advice when things get complicated.
- Share everything you do in the mod matrix, both so that several mods aren't unknowingly handling the same issue and so that you can receive feedback on what you intend to do.
- Don't rush mod actions. If a case doesn't need to be handled right away, consider taking a short break before getting to it. This is to say, cool down and make room for feedback.
- Don't perform too much moderation in the comments, except when you want a verdict to be public or need to ask people to dial a conversation down or stop. Single-comment warnings are okay.
- Send users concise DMs about verdicts that concern them, such as bans, except in cases where it is clear we don't want them at all, such as obvious transphobes. There is no need to notify someone that they haven't been banned, of course.
- Explain to a user why their behavior is problematic and how it is distressing others, rather than engaging with whatever they are saying. Ask them to avoid this in the future, and send them packing if they do not comply.
- First warn users, then temp ban them, then finally perma ban them when they break the rules or act inappropriately. Skip steps if necessary.
- Use neutral statements like “this statement can be considered transphobic” rather than “you are being transphobic”.
- No large decisions or actions without community input (polls or meta posts, for example).
- Large internal decisions (such as ousting a mod) might require a vote, needing more than 50% of the votes to pass. Also consider asking the community for feedback.
- Remember you are a voluntary moderator. You don’t get paid. Take a break when you need one. Perhaps ask another moderator to step in if necessary.
What we see here is the real user base of LLMs. And 97% of them are free users.
It's hardly a mystery why no AI company is remotely close to making a profit, ever.
AI. The Boat of big tech.
A giant pit you throw money into and set on fire. I guess it's a worthwhile investment for Thiel and his gaggle of fascist technocrats, so they can use it to control everyone.
Have we still learned nothing about enshittification? The implication of this graph is that there's an entire generation of people being raised right now who won't be able to do jack shit without depending on AI. These companies don't need to be profitable right now, because once they're completely entrenched in the workflows and thought processes of millions of people, they can charge whatever they want. Accuracy and usefulness are secondary to addiction and dependency. If you can afford to amass power and ubiquity first, all the profit you can imagine will come later.
I mean, I failed/dropped out of high school, and I'm absolutely fine. I'm not worried for the kids, tbh. Kids will always take the path of least resistance when it comes to schoolwork; it's just that the path is now getting homework done by an AI instead of guessing or skipping the assignment. I'm genuinely more worried for all the older generations who don't realize that, because of AI, honor roll has zero meaning now.
I mean, we have had solid proof that giving children homework is counterproductive for at least two decades now. Maybe this will be the final straw that actually makes us listen to the experts and just stop giving children homework.
The benefits of homework depend on how old the kid is and how much homework they're getting. Too much homework too early is either a wash or an overall negative, but homework as a concept does have benefits.
Yup, I'm surprised the bottom hasn't dropped out from under these companies yet. It will be like the dot-com crash in the early 2000s, I'm guessing. And they'll act so surprised...
They're being artificially propped up by billionaires to use as a bludgeon against labor. Profit is less important to them than destroying upward mobility and punishing anyone who thinks about unionizing.
Wish more people caught on to this! The AI wave is not an economic boom, and it is not motivated by any sort of consumer demand; it is very much a concerted push by industry to further impoverish the working class on several fronts (monetary, mental, organizational, etc.). That's why it has continued flying in the face of all economic logic for the past several years.
Yes, like the other commenter replied, the Thiels and Musks of the world want to go back to feudalism. That's also why, after years of making a big deal that interfering would tarnish the Washington Post, Bezos now seems not to care what happens to its reputation: he knows that there isn't an alternative anymore.
Fuck no, it'll be much worse than the dotcom bubble. If you want to be terrified, look how much of the stock market is NVIDIA and the big tech companies.
The value isn't the product; that's garbage.
It's the politics of the product. It's the labor discipline, the ability to say "computer said so". Why bomb hospital? Computer said so. Yes, I wanted to bomb hospital, but what I wanted didn't factor in! I did it because computer (trained to say bomb hospital) said so!
bottom can't drop out if they never had a bottom
Sorry to break it to you, but AI does have uses; it's just that they are all evil.
Imagine things like distinguishing enemies from civilians (with low accuracy) in a war zone.
What is discussed in this thread are LLMs, which are a subset of AI. What you are referring to is image recognition, and there are plenty of examples where its use is not evil, e.g. in the medical field.
Yeah exactly, ML is very powerful and can be very useful in niche areas.
LLMs have tainted good AI progress because they made line go up.
I have a few friends who work in the field who have started calling it "data analysis with computers", because "AI" and "Machine Learning" sound like "prompt engineer" nowadays.
AI as a concept has many uses that are beneficial!
The beneficial uses are the non-profit-seeking ones, which are not the ones being jammed into everything. Pattern matching is extremely helpful for science, engineering, music, and a ton of other specific purposes in controlled settings.
LLMs/chat bots implemented by profit seeking companies and vomited upon the masses have only evil purposes though.
This is why they report "annualized revenue" where they take their best month and multiply it by 12.
It doesn’t even have to be a calendar month, it can be the best 30 days in a row times 12. I’d love to be able to report my yearly income that way when applying to apartments, lmao.
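To make the trick concrete, here's a quick sketch with made-up numbers: one unusually good 30-day stretch in an otherwise flat year, "annualized" by taking the best window times 12.

```python
# Made-up numbers: one unusually good 30-day stretch in an otherwise flat year.
daily_revenue = [10_000] * 335 + [50_000] * 30

# "Annualized revenue": best 30 consecutive days, times 12.
best_window = max(
    sum(daily_revenue[i:i + 30]) for i in range(len(daily_revenue) - 29)
)
annualized = best_window * 12
actual = sum(daily_revenue)

print(f"best 30-day window:     ${best_window:,}")   # $1,500,000
print(f"'annualized' revenue:   ${annualized:,}")    # $18,000,000
print(f"actual 365-day revenue: ${actual:,}")        # $4,850,000
```

Same company, same year, and the "annualized" figure is almost four times what actually came in.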
I also read Ed Zitron's newsletter!
Gonna be honest, I just listen to the podcast!
Really? Damn, I learned something today. I guess I will report my "annualised" income to the tax office from now on.
Dear tax office, if you're reading this, 1) hi, 2) this is a joke :)
Reminder that the bullshit Cursor pulled is entirely legal...
Simple evil solution.
Make it part of tuition.
Devil blushes
Not that I have sources to link, but last I read, the big two providers are making enough money to profit on inference cost alone; it's the obscene spending on human talent and on compute for new model training that keeps them from turning a profit. If they stopped developing new models, they would likely be making money.
And they are fleecing investors for billions, so big profit in that way lol
Midjourney reported a profit in 2022, and then never reported anything new.
Cursor recently made one month of mad profit: first they hiked the price of their product, holding users basically hostage, and then they stopped offering their most successful product because they couldn't afford to sell it at that price. They annualized that month, and now they "make a profit".
Basically, Cursor let everyone drive a Ferrari for a hundred bucks a month. Then they said "sorry, it costs 500 a month". And then they said "actually, instead of a Ferrari, here's a Honda". Then they subtracted the cost of the Honda from the price of the Ferrari and called it a record profit.
This is legal somehow
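The arithmetic behind the analogy, with purely illustrative numbers (not Cursor's actual books):

```python
# Purely illustrative numbers, not Cursor's actual books.
ferrari_price = 500  # monthly price after the hike (what users pay)
honda_cost = 150     # cost of the cheaper product actually delivered

# Charge for a Ferrari, ship a Honda, book the difference as margin...
monthly_margin = ferrari_price - honda_cost
print(f"one good month: ${monthly_margin}/user")
# ...then annualize that one month.
print(f"'annualized profit': ${monthly_margin * 12}/user")
```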
The companies whose revenue was reasonable compared to their costs (e.g. Cursor) were the ones buying inference from OpenAI and Anthropic at enterprise rates and selling it to users at retail rates. But OpenAI and Anthropic raised their rates, that cost was passed on to consumers, consumers stopped paying for Cursor etc., and now those companies are haemorrhaging money.
The problem is that you do need to keep training models for this to make sense.
And you always need at least some human curation of what models learn; otherwise the model will just say whatever, learn from itself, and degrade over time. This cannot be done by other AIs, so for now you still need humans to make sure the AI models are actually getting useful information.
The problem with this, which many have already pointed out, is that it makes AIs just as unreliable as any traditional media. But if you don't oversee their datasets at all and just allow them to learn from everything, then they're even more useless, basically just replicating social media bullshit, which nowadays is like at least 60% AI generated anyway.
So yeah, the current model is, not surprisingly, completely unsustainable.
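The "learn from itself and degrade" failure mode is easy to show in miniature. A toy sketch, with a Gaussian standing in for the model: each generation is "retrained" only on a small sample of the previous generation's output, so sampling noise compounds, the mean drifts, and the variance tends to shrink.

```python
import random
import statistics

# Toy model-collapse demo: a Gaussian "model" retrained each generation
# on a small sample of its own output. Over generations the mean drifts
# and the variance tends to shrink as sampling noise compounds.
random.seed(1)
mu, sigma = 0.0, 1.0  # generation 0: the real data distribution

for gen in range(1, 21):
    outputs = [random.gauss(mu, sigma) for _ in range(10)]  # the model's output
    mu = statistics.mean(outputs)    # "retrain" on that output alone
    sigma = statistics.stdev(outputs)
    if gen % 5 == 0:
        print(f"gen {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
```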
The technology itself is great, though. Imagine having an AI that you can easily train at home on hundreds of different academic papers, and then run specific analyses or find patterns that would be too big for humans to see at first. Also imagine the impact on the medical field: early cancer detection, virus-spread patterns, or DNA analysis for certain diseases. (A tiny taste of what that kind of at-home analysis could look like is sketched below.)
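As a hedged sketch of that idea (hypothetical paper names, and plain TF-IDF similarity standing in for an actual trained model), here's how you might surface unexpectedly related papers in a local pile, no cloud API needed:

```python
# Hypothetical sketch: find unexpectedly related papers in a local pile
# using plain TF-IDF similarity (no cloud API, no giant model).
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up stand-ins for abstracts you'd load from files on disk.
abstracts = {
    "paper_a": "Protein folding dynamics under thermal stress in yeast.",
    "paper_b": "Viral spread patterns across dense urban transit networks.",
    "paper_c": "Thermal stress and misfolded protein response in yeast cells.",
}

names = list(abstracts)
matrix = TfidfVectorizer(stop_words="english").fit_transform(abstracts.values())
similarity = cosine_similarity(matrix)

# Report pairwise similarity; the a/c pair should score highest here.
for i, j in combinations(range(len(names)), 2):
    print(f"{names[i]} <-> {names[j]}: {similarity[i, j]:.2f}")
```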
It's also super good when used for creative purposes (not just generating pictures or music). For example, AI makes it possible for you to sing a song, then sing the melody for every member of a choir, and fine-tune all the voices to make them unique. You can be your own choir, making a lot of cool production techniques more accessible.
I believe once the initial hype dies down, we stop seeing AI used as a cheap marketing tactic, and the bubble bursts, the real benefits of AI will become apparent, and hopefully we will learn to live with it without destroying each other lol.
Imagine is the key word. I've actually tried to use LLMs to perform literature analyses in my field, and they're total crap. They produce something that sounds true to someone not familiar with a field. But if you actually have some expert knowledge in a field, the LLM just completely falls apart. Imagine is all you can do, because LLMs cannot perform basic literature review and project planning, let alone find patterns in papers that human scientists can't. The emperor has no clothes.
But I don't think that's necessarily a problem that can't be solved. LLMs and the like are ultimately just statistical analysis, and if you refine and train them enough, they can absolutely summarise at least a single paper right now; Google's Notebook LM is already capable of that. I just don't think they can quite pull off many papers at once yet. But the current state of LLMs is not that far off.
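For what it's worth, single-paper summarisation today is mostly plumbing: split the text to fit the context window, summarise each chunk, then summarise the summaries. A rough sketch using the OpenAI Python client; the model name, prompts, and chunk size here are arbitrary placeholder choices, not recommendations:

```python
# Rough map-reduce summarisation sketch. Assumes OPENAI_API_KEY is set;
# model name, prompts, and chunk size are placeholder choices.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def summarize_paper(text: str, chunk_chars: int = 8000) -> str:
    # Map: summarise each chunk independently.
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partials = [ask(f"Summarize this section of a paper:\n\n{c}") for c in chunks]
    # Reduce: merge the partial summaries into one abstract.
    return ask("Merge these section summaries into one short abstract:\n\n"
               + "\n\n".join(partials))
```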
I agree that AIs are way overhyped, and I also have a general dislike for them due to the way they're being used, the people who gush over them, and the surrounding culture. But I don't think that means we should simply ignore reality altogether. The LLMs from two or even one year ago are not even comparable to the ones today, and that trend will probably continue for a while. The main issue lies with the ethics of training, copyright, and, of course, the replacement of labor in exchange for what amounts to simply a cool tool.
Hell, OpenAI didn't turn a profit on users paying $200 a month.