[-] hokage@lemmy.world 237 points 1 year ago

What a silly article. $700,000 per day is ~$256 million a year. That's peanuts compared to the $10 billion they got from MS. With no new funding they could run for about a decade, and this is one of the most promising new technologies in years. MS would never let the company fail due to lack of funding; it's basically MS's LLM play at this point.
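Rough back-of-envelope on those figures, taking the article's $700,000/day and the $10B Microsoft investment at face value (this covers ChatGPT's serving cost only; the "about a decade" presumably assumes total company spend well above that):

```python
# Back-of-envelope runway math using the figures cited in this thread.
daily_cost = 700_000            # reported ChatGPT serving cost per day (USD)
annual_cost = daily_cost * 365  # ~255.5 million per year
ms_funding = 10_000_000_000     # Microsoft's reported $10B investment

print(f"Annual cost: ~${annual_cost / 1e6:.0f}M")                          # ~$256M
print(f"Runway on the $10B alone: ~{ms_funding / annual_cost:.0f} years")  # ~39 years
```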

[-] p03locke@lemmy.dbzer0.com 110 points 1 year ago

When you get articles like this, the first thing you should ask is "Who the fuck is Firstpost?"

[-] altima_neo@lemmy.zip 34 points 1 year ago

Yeah, where the hell do these posters find these articles anyway? It's always from blogs that repost stuff from somewhere else.

[-] Wats0ns@sh.itjust.works 42 points 1 year ago

OpenAI's biggest spending is infrastructure, which is rented from... Microsoft. Even if the company folds, they will have given back to Microsoft most of the money invested.

[-] fidodo@lemm.ee 25 points 1 year ago

MS is basically getting a ton of equity in exchange for cloud credits. That's a ridiculously good deal for MS.

[-] monobot@lemmy.ml 17 points 1 year ago

While the title is clickbait, they do say right at the beginning:

*Right now, it is pulling through only because of Microsoft's $10 billion funding*

Pretty hard to miss, and then they go on to explain their point, which might be wrong, but still stands. $700k is only one model; there are others, plus making new ones and running the company. It is easily over $1B a year without making a profit. Still not significant, since people will pour money into it even after those $10B.

[-] simple@lemm.ee 141 points 1 year ago

There's no way Microsoft is going to let it go bankrupt.

[-] jmcs@discuss.tchncs.de 66 points 1 year ago

If there's no path to make it profitable, they will buy all the useful assets and let the rest go bankrupt.

[-] Tigbitties@kbin.social 23 points 1 year ago

That's $260 million. There are 360 million paid seats of Microsoft 365. So they'd have to raise their prices $0.73 per year to cover the cost.
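Quick check on that per-seat figure, using the ~$260M/year and 360 million paid seats cited above:

```python
# Spreading the reported annual ChatGPT cost across paid Microsoft 365 seats.
annual_cost = 260_000_000  # ~$260M per year, per the parent comment
paid_seats = 360_000_000   # paid Microsoft 365 seats cited in the comment
print(f"~${annual_cost / paid_seats:.2f} per seat per year")  # roughly 72 cents
```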

[-] SinningStromgald@lemmy.world 25 points 1 year ago

So they'll raise the cost by $100/yr.

[-] Elderos@lemmings.world 95 points 1 year ago

That would explain why ChatGPT started regurgitating cookie-cutter garbage responses more often than usual a few months after launch. It has really started feeling more like a chatbot lately; it almost felt like talking to a human 6 months ago.

[-] glockenspiel@lemmy.world 63 points 1 year ago

I don't think it does. I doubt it is purely a cost issue. Microsoft is going to throw billions at OpenAI, no problem.

What has happened, based on the info we get from the company, is that they keep tweaking their algorithms in response to how people use them. ChatGPT was amazing at first. But it would also easily tell you how to murder someone and get away with it, create a plausible sounding weapon of mass destruction, coerce you into weird relationships, and basically anything else it wasn't supposed to do.

I've noticed it has become worse at rubber ducking non-trivial coding prompts. I've noticed that my juniors have a hell of a time functioning without access to it, and they'd rather ask questions of seniors than try to find information or solutions themselves, essentially replacing chatbots with Sr. devs.

A good tool for getting people on-ramped if they've never coded before, and maybe for rubber ducking, in my experience. But far too volatile for consistent work, especially with a black box of a company constantly hampering its outputs.

[-] Windex007@lemmy.world 67 points 1 year ago

As a Sr. Dev, I'm always floored by stories of people trying to integrate chatGPT into their development workflow.

It's not a truth machine. It has no conception of correctness. It's designed to make responses that look correct.

Would you hire a dev with no comprehension of the task, who cannot reliably communicate what their code does, cannot be tasked with finding and fixing their own bugs, is incapable of accountability, cannot be reliably coached, is often wrong and refuses to accept or admit it, cannot comprehend PR feedback, and who requires significantly greater scrutiny of their work because it is by explicit design created to look correct?

ChatGPT is by pretty much every metric the exact opposite of what I want from a dev in an enterprise development setting.

[-] JackbyDev@programming.dev 34 points 1 year ago

Search engines aren't truth machines either. StackOverflow reputation is not a truth machine either. These are all tools to use. Blind trust in any of them is incorrect. I get your point, I really do, but it's just as foolish as believing everyone using StackOverflow just copies and pastes the top rated answer into their code and commits it without testing then calls it a day. Part of mentoring junior devs is enabling them to be good problem solvers, not just solving their problems. Showing them how to properly use these tools and how to validate things is what you should be doing, not just giving them a solution.

[-] SupraMario@lemmy.world 13 points 1 year ago

Don't underestimate C-levels who read a Bloomberg article about AI and try to run their entire company off of it... then wonder why everything is on fire.

[-] bmovement@lemmy.world 13 points 1 year ago* (last edited 1 year ago)

Copilot is pretty amazing for day to day coding, although I wonder if a junior dev might get led astray with some of its bad ideas, or too dependent on it in general.

Edit: shit, maybe I’m too dependent on it.

[-] Gsus4@feddit.nl 18 points 1 year ago* (last edited 1 year ago)

But what did they expect would happen, that more people would subscribe to pro? In the beginning I thought they just wanted to survey-farm usage to figure out what the most popular use cases were and then sell that information or repackage use-cases as an individual added-value service.

[-] merthyr1831@lemmy.world 81 points 1 year ago

I mean, apart from the fact it's not sourced or whatever, it's standard practice for these tech companies to run a massive loss for years while basically giving their product away for free (which is why you can use OpenAI with minimal, if any, cost, even at scale).

Once everyone's using your product over competitors who couldn't afford to outlast your own venture capitalists, you can turn the price up and rake in cash since you're the biggest player in the market.

It's just Uber's business model.

[-] some_guy@lemmy.sdf.org 26 points 1 year ago

The difference is that the VC bubble has mostly ended. There isn't "free money" to keep throwing at a problem post-pandemic. That's why there's an increased focus on Uber (and others) making a profit.

[-] flumph@programming.dev 20 points 1 year ago

In this case, Microsoft owns 49% of OpenAI, so they're the ones subsidizing it. They can also offer at-cost hosting and in-roads into enterprise sales. Probably a better deal at this point than VC cash.

[-] yiliu@informis.land 15 points 1 year ago

This is what caused spez at Reddit and Musk at Twitter to go into desperation mode and start flipping tables over. Their investors are starting to want results now, not sometime in the distant future.

[-] Billy_Gnosis@lemmy.world 57 points 1 year ago

If AI was so great, it would find a solution to operate at fraction of the cost it does now

[-] Death_Equity@lemmy.world 70 points 1 year ago

Wait, has anybody bothered to ask AI how to fix itself? How much Avocado testing does it do? Can AI pull itself up by its own boot partition, or does it expect the administrator to just give it everything?

[-] vrighter@discuss.tchncs.de 15 points 1 year ago

If we don't know, it doesn't know.

If we know, but there's no public text about it, it doesn't know either.

It is trained off of stuff that has already been written, and trained to emulate the statistical properties of those words. It cannot and will not tell us anything new.

[-] FaceDeer@kbin.social 15 points 1 year ago

That's not true. These models aren't just regurgitating text that they were trained on. They learn the patterns and concepts in that text, and they're able to use those to infer things that weren't explicitly present in the training data.

I read recently about some researchers who were experimenting with ChatGPT's ability to do basic arithmetic. It's not great at it, but it's definitely figured out some techniques that allow it to answer math problems that were not in its training set. It gets them wrong sometimes, but it's like a human doing math in its head rather than a calculator using rigorous algorithms, so that's to be expected.

[-] wizardbeard@lemmy.dbzer0.com 14 points 1 year ago

Really says something that none of your responses yet seem to have caught that this was a joke.

[-] whispering_depths@lemmy.world 45 points 1 year ago

Huh, so with the $10bn from Microsoft they should be good for... just over 30 years!

[-] pachrist@lemmy.world 27 points 1 year ago

ChatGPT has the potential to make Bing relevant and unseat Google. No way Microsoft pulls funding. Sure, they might screw it up, but they'll absolutely keep throwing cash at it.

[-] danielbln@lemmy.world 42 points 1 year ago* (last edited 1 year ago)

This article has been flagged on HN for being clickbait garbage.

[-] Zeth0s@lemmy.world 13 points 1 year ago* (last edited 1 year ago)

It is clearly nonsense. But it satisfies the irrational need of the masses to hate on AI.

Tbf I have no idea why. Why do people hate an extremely clever family of mathematical methods, one which highlights the brilliance of human minds? But here we are, casually shitting on one of the highest peaks humanity has ever reached.

[-] TimeMuncher@lemmy.world 41 points 1 year ago

Indian newspapers publish anything without any sort of verification, from Reddit videos to WhatsApp forwards. More than news, they are like an old Chinese whispers game run on an infinite loop. So take this with a huge grain of salt.

[-] figaro@lemdro.id 36 points 1 year ago

Pretty sure Microsoft will be happy to come save the day and just buy out the company.

[-] pexavc@lemmy.world 15 points 1 year ago

It feels like that was the plan all along.

[-] li10@feddit.uk 34 points 1 year ago

I don’t understand Lemmy’s hate boner over AI.

Yeah, it’s probably not going to take over like companies/investors want, but you’d think it’s absolutely useless based on the comments on any AI post.

Meanwhile, people are actively making use of ChatGPT and finding it to be a very useful tool. But because sometimes it gives an incorrect response that people screenshot and post to Twitter, it’s apparently absolute trash…

[-] Zeth0s@lemmy.world 19 points 1 year ago* (last edited 1 year ago)

AI is literally one of the most incredible creations of humanity, and people shit on it as if they know better. It's genuinely an astonishing historical and cultural achievement, a peak of human ingenuity.

No idea why such hate...

One can hate the Disney CEO for misusing AI, but why shit on AI itself?

[-] wizardbeard@lemmy.dbzer0.com 15 points 1 year ago

It's shit on because it is not actually AI as the general public tends to use the term. This isn't Data from Star Trek, or anything even approaching Asimov's three laws.

The immediate defense against this statement is people going into mental gymnastics and hand waving about "well we don't have a formal definition for intelligence so you can't say they aren't" which is just... nonsense rhetorically because the inverse would be true as well. Can't label something as intelligent if we have no formal definition either. Or they point at various arbitrary tests that ChatGPT has passed and claim that clearly something without intelligence could never have passed the bar exam, in complete and utter ignorance of how LLMs are suited to those types of problem domains.

Also, I find that anyone bringing up the limitations and dangers is immediately lumped into this "AI haters" group, like belief in AI is some sort of black and white religion or requires some sort of ideological purity. Like having honest conversations about these systems' problems intrinsically means you want them to fail. That's BS.


Machine Learning and Large Language Models are amazing, they're game changing, but they aren't magical panaceas and they aren't even an approximation of intelligence despite appearances. LLMs are especially dangerous because of how intelligent they appear to a layperson, which is why we see everyone rushing to apply them to entirely non-fitting use cases as a race to be the first to make the appearance of success and suck down those juicy VC bux.

Anyone trying to say different isn't familiar with the field or is trying to sell you something. It's the classic case of the difference between tech developers/workers and tech news outlets/enthusiasts.

The frustrating part is that people caught up in the hype train of AI will say the same thing: "You just don't understand!" But then they'll start citing the unproven potential future that is being bandied around by people who want to keep you reading their publication or who want to sell you something, not any technical details of how these (amazing) tools function.


At least in my opinion that's where the negativity comes from.

[-] Lemmylefty@lemmy.world 27 points 1 year ago

Does it feel like these "game changing" techs have life cycles that are accelerating? Like there's the dot-com bubble of a decade or so, the NFT craze that lasted a few years, and now AI, which hasn't even been around a year.

The Internet is concentrating and getting worse because of it, inundated with ads and bots and bots who make ads and ads for bots, and being existentially threatened by Google’s DRM scheme. NFTs have become a joke, and the vast majority of crypto is not far behind. How long can we play with this new toy? Its lead paint is already peeling.

[-] Zuberi@lemmy.world 25 points 1 year ago

This article is dumb as shit

[-] BetaDoggo_@lemmy.world 14 points 1 year ago

No sources, and even given their numbers they could continue running ChatGPT for another 30 years. I doubt they're anywhere near a net profit, but they're far from bankruptcy.

[-] theneverfox@pawb.social 22 points 1 year ago

This is alarming...

One of the things companies have started doing lately is signaling "we could go bankrupt", then jumping ahead a stage on enshittification.

[-] FaceDeer@kbin.social 18 points 1 year ago

I don't think OpenAI needs any excuses to enshittify, they've been speedrunning ever since they decided they liked profit instead of nonprofit.

[-] Cheesus@lemmy.world 19 points 1 year ago

A company that just raised $10b from Microsoft is struggling with $260m a year? That's almost 40 years of runway.

[-] Browning@lemmy.world 15 points 1 year ago

They are choosing to spend that much. That doesn't suggest that they expect financial problems.

this post was submitted on 13 Aug 2023
598 points (100.0% liked)
