143
submitted 1 year ago by yogthos@lemmy.ml to c/technology@lemmy.ml
[-] YurkshireLad@lemmy.ca 66 points 1 year ago

350,000 servers? Jesus, what a waste of resources.

[-] yogthos@lemmy.ml 53 points 1 year ago

just capitalist markets allocating resources efficiently where they're needed

[-] AlexWIWA@lemmy.ml 19 points 1 year ago

Sounds like we're going to get some killer deals on used hardware in a year or so

[-] queermunist@lemmy.ml 58 points 1 year ago

Totally not a bubble though.

[-] MajorHavoc@programming.dev 24 points 1 year ago* (last edited 1 year ago)

Yeah. It's a legitimate business, where the funders at the top of the pyramid are paid by those that join at the bottom!

[-] riskable@programming.dev 34 points 1 year ago

Now's the time to start saving for a discount GPU in approximately 12 months.

[-] FaceDeer@fedia.io 17 points 1 year ago

They don't use GPUs, they use more specialized devices like the H100.

[-] tyler@programming.dev 9 points 1 year ago

Everyone that doesn’t have access to those is using GPUs though.

[-] FaceDeer@fedia.io 7 points 1 year ago

We are talking specifically about OpenAI, though.

[-] porous_grey_matter@lemmy.ml 7 points 1 year ago

People who were previously at the high end of the GPU market can now afford used H100s -> they sell their GPUs -> we can maybe afford them

[-] Swedneck@discuss.tchncs.de 2 points 11 months ago

the hermit crab gambit, everyone line up in order of size!

[-] Aabbcc@lemm.ee 2 points 1 year ago

Can I use an H100 to run Helldivers 2?

[-] Travelator@thelemmy.club 22 points 1 year ago

Good. It's fake crap tech that no one needs.

[-] Ephera@lemmy.ml 20 points 1 year ago

I do expect them to receive more funding, but I also expect that to be tied to price increases. And I feel like that could break their neck.

In my team, we're doing lots of GenAI use-cases and far too often, it's a matter of slapping a chatbot interface onto a normal SQL database query, just so we can tell our customers and their bosses that we did something with GenAI, because that's what they're receiving funding for. Apart from these user interfaces, we're hardly solving problems with GenAI.

If the operation costs go up and management starts asking what the pricing for a non-GenAI solution would be like, I expect the answer to be rather devastating for most use-cases.

Like, there's maybe still a decent niche in that developing a chatbot interface is likely cheaper than a traditional interface, so maybe new projects might start out with a chatbot interface and later get a regular GUI to reduce operation costs. And of course, there is the niche of actual language processing, for which LLMs are genuinely a good tool. But yeah, going to be interesting how many real-world use-cases remain once the hype dies down.
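
For anyone wondering what that "chatbot interface on a normal SQL query" pattern actually amounts to, here's a minimal sketch; the endpoint, model name, schema and prompt are all made up, and it assumes an OpenAI-compatible API in front of whatever model you run:

```python
import sqlite3
from openai import OpenAI  # works against any OpenAI-compatible endpoint

# Placeholder endpoint/model -- point these at whatever you actually run.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
MODEL = "local-model"

SCHEMA = "orders(id INTEGER, product TEXT, quantity INTEGER, shipped_at TEXT)"

def ask(question: str, conn: sqlite3.Connection):
    # Step 1: the "GenAI" part -- turn the user's question into one SQL query.
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": f"Translate the question into a single read-only SQLite "
                        f"query against this schema: {SCHEMA}. Reply with SQL only."},
            {"role": "user", "content": question},
        ],
    )
    sql = resp.choices[0].message.content.strip()

    # Step 2: the part that does the actual work -- a plain database query.
    return conn.execute(sql).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, product TEXT, quantity INTEGER, shipped_at TEXT)")
    print(ask("How many orders shipped this month?", conn))
```

In a real deployment you'd also have to validate or whitelist the generated SQL before executing it, and that kind of guardrail work is part of why the "just add a chatbot" version isn't actually cheap to run.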

[-] yogthos@lemmy.ml 5 points 1 year ago

It's also worth noting that smaller models work fine for these types of use cases, so it might just make sense to run a local model at that point.

[-] chemicalwonka@discuss.tchncs.de 17 points 1 year ago
[-] PanArab@lemmy.ml 17 points 1 year ago

I hope so! I am so sick and tired of AI this and AI that at work.

[-] delirious_owl@discuss.online 12 points 1 year ago

Bubble. Meet pop.

[-] flambonkscious@sh.itjust.works 12 points 1 year ago

The start(-up?)[sic] generates up to $2 billion annually from ChatGPT and an additional $ 1 billion from LLM access fees, translating to an approximate total revenue of between $3.5 billion and $4.5 billion annually.

I hope their reporting is better than their math...

[-] Hector_McG@programming.dev 10 points 1 year ago

Probably used ChatGPT….

[-] twei@discuss.tchncs.de 8 points 1 year ago

Maybe they also added 500M for stuff like Dall-E?

[-] flambonkscious@sh.itjust.works 3 points 1 year ago

Good point - I guess it could have easily fallen out while being edited, too

[-] NigelFrobisher@aussie.zone 8 points 1 year ago
[-] ryan213@lemmy.ca 12 points 1 year ago
[-] geneva_convenience@lemmy.ml 7 points 1 year ago

AI stands for artificial income.

[-] Aurenkin@sh.itjust.works 7 points 1 year ago

Last time a batch of these popped up it was saying they'd be bankrupt in 2024 so I guess they've made it to 2025 now. I wonder if we'll see similar articles again next year.

[-] strawberry@kbin.run 7 points 1 year ago
[-] Tangentism@lemmy.ml 7 points 1 year ago

For anyone doing a serious project, it's much more cost effective to rent a node and run your own models on it. You can spin them up and down as needed, cache often-used queries, etc.
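
A minimal sketch of the "cache often-used queries" part, assuming an OpenAI-compatible server (vLLM, llama.cpp server, etc.) running on the rented node; the endpoint and model name are placeholders:

```python
from functools import lru_cache
from openai import OpenAI

# Placeholder endpoint/model -- whatever you spin up on the rented node.
client = OpenAI(base_url="http://my-rented-node:8000/v1", api_key="not-needed")
MODEL = "my-model"

@lru_cache(maxsize=1024)
def cached_completion(prompt: str) -> str:
    # Identical prompts hit the in-process cache instead of the GPU node,
    # so repeated queries cost nothing after the first call.
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output, otherwise caching makes less sense
    )
    return resp.choices[0].message.content
```

Spinning the node up and down is just your provider's normal start/stop controls, so you only pay for the GPU hours you actually use.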

[-] yogthos@lemmy.ml 7 points 1 year ago

For sure, and in a lot of use cases you don't even need a really big model. There are a few niche scenarios where you require a large context that's not practical to run on your own infrastructure, but in most cases I agree.

[-] driving_crooner@lemmy.eco.br 4 points 1 year ago

I hope not, I use it a lot for quick programming answers and prototypes, and for theory in my actuarial science MBA.

[-] yogthos@lemmy.ml 12 points 1 year ago

I find you can just run local models for that. For example, I've been using gpt4all with the Phind model and it works reasonably well
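
For anyone who wants to try the same setup, the gpt4all Python bindings are about as simple as it gets; the model filename below is just a placeholder, use whichever Phind/code build the GPT4All downloader actually lists:

```python
from gpt4all import GPT4All  # pip install gpt4all

# Placeholder filename -- use whatever GGUF build you actually downloaded.
model = GPT4All("phind-codellama-34b-v2.Q4_0.gguf")

with model.chat_session():
    reply = model.generate(
        "Write a Python function that merges two sorted lists.",
        max_tokens=300,
    )
    print(reply)
```

For reference, a 34B model at 4-bit quantization wants somewhere around 20 GB of RAM, so on older hardware a 7B build is the more realistic choice.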

[-] driving_crooner@lemmy.eco.br 7 points 1 year ago

How much computing power do they need? My PC is pretty old :/

[-] SoJB@lemmy.ml 2 points 1 year ago

Is 1) the fact that an LLM can be indistinguishable from your original thought and 2) an MBA (lmfao) supposed to be impressive?

[-] chicken@lemmy.dbzer0.com 15 points 1 year ago

I don't think that person is bragging, just saying why it's useful to them

[-] arran4@aussie.zone 4 points 1 year ago

This sounds like FUD to me. If it were true, they'd get acquired pretty quickly.

[-] jackyalcine@lemmy.ml 8 points 1 year ago

They're wholly owned by Microsoft so it'd probably be mothballed at worst.

[-] arran4@aussie.zone 4 points 1 year ago

For another conversation I need some evidence of that, where did you find it?

[-] FaceDeer@fedia.io 4 points 1 year ago

OpenAI is no longer the cutting edge of AI these days, IMO. It'll be fine if they close down. They blazed the trail, set the AI revolution in motion, but now lots of other companies have picked it up and are doing better at it than them.

[-] pizza_the_hutt@sh.itjust.works 31 points 1 year ago

There is no AI Revolution. There never was. Generative AI was sold as an automation solution to companies looking to decrease labor costs, but it's not actually good at doing that. Moreover, there's not enough good, accurate training material to make generative AI that much smarter or more useful than it already is.

Generative AI is a dead end, and big companies are just now starting to realize that, especially after the Goldman Sachs report on AI. Sam Altman is just a snake oil salesman, another failing-upwards executive who told a bunch of other executives what they wanted to hear. It's just now becoming clear that the emperor has no clothes.

[-] SkyNTP@lemmy.ml 6 points 1 year ago

Generative AI is not smart to begin with. LLMs are basically just compressed versions of the internet that predict statistically what a sentence needs to be to look "right". There's a big difference between appearing right and being right. Without a critical approach to information, independent reasoning, or individual sensing, these AIs are incapable of any meaningful intelligence.
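
To make the "predict statistically what a sentence needs to be to look right" point concrete, here's a toy bigram version of the same mechanism (real LLMs do this with transformers over huge contexts, but the objective is still next-token prediction):

```python
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which word tends to follow which -- a tiny bigram "language model".
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

# Generate text that statistically "looks right" with zero understanding
# of cats, dogs or mats -- the point being that scaling this idea up
# doesn't by itself add reasoning.
word, sentence = "the", ["the"]
for _ in range(8):
    counts = following[word]
    if not counts:  # hit the last word of the corpus; nothing ever followed it
        break
    word = random.choices(list(counts), weights=list(counts.values()))[0]
    sentence.append(word)

print(" ".join(sentence))
```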

In my experience, the emperor and most people around them still have not figured this out yet.

[-] yogthos@lemmy.ml 7 points 1 year ago

this was my fav take on it https://archive.ph/lkpuA

[-] anachronist@midwest.social 5 points 1 year ago

Generative AI is just classification engines run in reverse. Classification engines are useful but they've been around and making incremental improvements for at least a decade. Also, just like self-driving cars they've been writing checks they can't honor. For instance, legal coding and radiology were supposed to be automated by classification engines a long time ago.

[-] bizarroland@fedia.io 3 points 1 year ago* (last edited 1 year ago)

It's sort of like how you can create a pretty good text message on your phone using voice-to-text, but no courtroom is allowing AI transcription.

There's still too much risk that it will capitalize the wrong word, or swap in a word that merely sounds close to what was said, or do something else nobody even conceived of, to trust it with our legal process.

If they could guarantee a 100% accurate transcription of spoken word to text it would put the entire field of Court stenographers out of business and generate tens of millions of dollars worth of digital contracts for the company who can figure it out.

Not going to do it because even today a phone can't tell the difference between the word holy and the word holy. (Wholly)

[-] mozz@mbin.grits.dev 2 points 1 year ago* (last edited 1 year ago)

If they closed down, and the people still aligned with safety had to take up the mantle, that would be fine.

If they got desperate for money and started looking for people they could sell their soul to (more than they have already) in exchange for keeping the doors open, that could potentially be pretty fuckin bad.

[-] FaceDeer@fedia.io 4 points 1 year ago

Well, my point is that it's already largely irrelevant what they do. Many of their talented engineers have moved on to other companies, some new startups and some already-established ones. The interesting new models and products are not being produced by OpenAI so much any more.

I wouldn't be surprised if "safety alignment" is one of the reasons, too. There are a lot of folks in tech who really just want to build neat things and it feels oppressive to be in a company that's likely to lock away the things they build if they turn out to be too neat.
