top 38 comments
[-] admin@lemmy.my-box.dev 95 points 1 month ago

Technically correct (tm)

Before you get your hopes up: Anyone can download it, but very few will be able to actually run it.

[-] chiisana@lemmy.chiisana.net 23 points 1 month ago

What are the resource requirements for the 405B model? I did some digging but couldn’t find any documentation during my cursory search.

[-] modeler@lemmy.world 38 points 1 month ago* (last edited 1 month ago)

Typically you need about 1GB graphics RAM for each billion parameters (i.e. one byte per parameter). This is a 405B parameter model. Ouch.

Edit: you can try quantizing it. This reduces the amount of memory required per parameter to 4 bits, 2 bits or even 1 bit. As you reduce the size, the performance of the model can suffer. So in the extreme case you might be able to run this in under 64GB of graphics RAM.
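
As a rough back-of-the-envelope sketch of that rule of thumb (plain Python; it ignores the KV cache and other runtime overhead, so treat the numbers as ballpark only):

```python
# Rough VRAM estimate for a 405B-parameter model at different quantization
# levels: parameters (in billions) * bits per parameter / 8 ~= gigabytes.
# Real-world usage is higher once the KV cache and activations are added.
PARAMS_BILLION = 405

for bits in (16, 8, 4, 2, 1):
    gb = PARAMS_BILLION * bits / 8
    print(f"{bits:>2}-bit: ~{gb:,.0f} GB")

# Prints roughly: 16-bit ~810 GB, 8-bit ~405 GB, 4-bit ~203 GB,
# 2-bit ~101 GB, 1-bit ~51 GB, which is where the "under 64GB" figure comes from.
```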

[-] cheddar@programming.dev 21 points 1 month ago

Typically you need about 1GB graphics RAM for each billion parameters (i.e. one byte per parameter). This is a 405B parameter model.

[-] Deceptichum@quokk.au 12 points 1 month ago

Or you could run it via CPU and system RAM at a much slower rate.

[-] errer@lemmy.world 12 points 1 month ago

Yeah, uh, let me just put in my 512GB RAM stick…

[-] Deceptichum@quokk.au 7 points 1 month ago

Samsung does make them.

Good luck finding 512GB of VRAM.

[-] bruhduh@lemmy.world 1 points 1 month ago

https://www.ebay.com/p/116332559 (LGA2011 motherboards, quite cheap). Insert 2 Xeon 2696 v4 CPUs (44 threads each, totalling 88 threads) and 8 DDR4 32GB sticks; it comes out quite cheap, actually. You can also install Nvidia P40s with 24GB each. You can max out this build for AI for under $2000.

[-] chiisana@lemmy.chiisana.net 2 points 1 month ago

Finally! My dumb-dumb 1TB RAM server (4x E5-4640 + 32x 32GB DDR3 ECC) can shine.

[-] Siegfried@lemmy.world 8 points 1 month ago* (last edited 1 month ago)

At work we have a small cluster totalling around 4TB of RAM.

It has 4 cooling units, a cubic meter of PSUs, and it must take up something like 30 m² of space.

[-] TipRing@lemmy.world 4 points 1 month ago

When the 8-bit quants hit, you could probably lease a 128GB system on RunPod.

[-] obbeel@lemmy.eco.br 2 points 1 month ago

According to Hugging Face, you can run a 34B model using 22.4GB of VRAM at most. That's an RTX 3090 Ti.

[-] arefx@lemmy.ml 1 points 1 month ago

You mean my 4090 isn't good enough? 🤣😂

[-] Longpork3@lemmy.nz 1 points 1 month ago* (last edited 1 month ago)

Hmm, I probably have that much distributed across my network... maybe I should look into some way of distributing it across multiple GPUs.

Frak, I just counted and I only have 270GB installed. Approximately 40GB more if I install some of the deprecated cards in any spare PCIe slots I can find.

[-] Blaster_M@lemmy.world 6 points 1 month ago

As a general rule of thumb, you need about 1 GB per 1B parameters, so you're looking at about 405 GB for the full size of the model.

Quantization can compress it down to 1/2 or 1/4 that, but "makes it stupider" as a result.

[-] coffee_with_cream@sh.itjust.works 12 points 1 month ago* (last edited 1 month ago)

This would probably run on an A6000, right?

Edit: nope I think I'm off by an order of magnitude

[-] 5redie8@sh.itjust.works 2 points 1 month ago

"an order of magnitude" still feels like an understatement LOL

My 35B models come out at like Morse code speed on my 7800 XT, but at least it does work?

[-] LavenderDay3544@lemmy.world 8 points 1 month ago* (last edited 1 month ago)

When the RTX 9090 Ti comes, anyone who can afford it will be able to run it.

[-] Contravariant@lemmy.world 3 points 1 month ago

That doesn't sound like much of a change from the situation right now.

[-] bitfucker@programming.dev 4 points 1 month ago

It's the same with OSM data. Everyone can download the whole earth, but serving it and providing routing/path planning at scale takes a whole other set of skills and resources. It's a good thing that they're willing to open-source their model in the first place.

[-] abcdqfr@lemmy.world 25 points 1 month ago

Wake me up when it works offline.

"The Llama 3.1 models are available for download through Meta's own website and on Hugging Face. They both require providing contact information and agreeing to a license and an acceptable use policy, which means that Meta can technically legally pull the rug out from under your use of Llama 3.1 or its outputs at any time."

[-] admin@lemmy.my-box.dev 33 points 1 month ago* (last edited 1 month ago)

WAKE UP!

It works offline. When you use it with ollama, you don't have to register or agree to anything.

Once you have downloaded it, it will keep on working; Meta can't shut it down.
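
As a minimal sketch of what that looks like in practice (assumes ollama is already running on its default port 11434 and a Llama 3.1 model has been pulled; the prompt is just a placeholder):

```python
import requests

# Query a locally pulled Llama 3.1 model through ollama's local HTTP API.
# Everything stays on localhost; no account, license click-through, or
# connection to Meta is involved at inference time.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",   # e.g. the 8B variant
        "prompt": "Explain quantization in one sentence.",
        "stream": False,       # return one JSON object instead of a stream
    },
    timeout=300,
)
print(resp.json()["response"])
```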

[-] MonkderVierte@lemmy.ml 3 points 1 month ago* (last edited 1 month ago)

Well, yes and no. See the other comment, 64 GB VRAM at the lowest setting.

[-] admin@lemmy.my-box.dev 9 points 1 month ago

Oh, sure. For the 405B model it's absolutely infeasible to host it yourself. But for the smaller models (70B and 8B), it can work.

I was mostly replying to the part where they claimed Meta can take it away from you at any point, which is simply not true.

[-] RandomLegend@lemmy.dbzer0.com 13 points 1 month ago* (last edited 1 month ago)

It's available through ollama already. I am running the 8B model on my little server with its 3070 as of right now.

It's really impressive for an 8B model.

[-] abcdqfr@lemmy.world 1 points 1 month ago

Intriguing. Is that an 8GB card? Might have to try this after all.

[-] RandomLegend@lemmy.dbzer0.com 1 points 1 month ago

Yup, 8GB card

It's my old one from the gaming PC, left over after switching to AMD.

It now serves as my little AI hub and Whisper server for Home Assistant.

[-] abcdqfr@lemmy.world 1 points 1 month ago

What the heck is Whisper? I've been fooling around with HASS for ages and haven't heard of it, even after at least two minutes of searching. Is it OpenAI-affiliated hardware?

[-] RandomLegend@lemmy.dbzer0.com 4 points 1 month ago

Whisper is an STT (speech-to-text) application that stems from OpenAI afaik, but it's open source at this point.

I wrote a little guide on how to install it on a server with an Nvidia GPU and hardware acceleration, and then integrate it into your Home Assistant: https://a.lemmy.dbzer0.com/lemmy.dbzer0.com/comment/5330316

It's super fast with a GPU available, and I use those little M5 ATOM Echo microphones for this.
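
For anyone who just wants to see what Whisper does before following the full guide, here is a minimal standalone sketch using the open-source openai-whisper Python package (not the exact Home Assistant setup from the guide; the audio file name is a placeholder):

```python
import whisper  # pip install openai-whisper

# Load an open-source Whisper checkpoint and transcribe a local audio clip.
# Uses the GPU automatically if PyTorch can see one, otherwise falls back to CPU.
model = whisper.load_model("base")
result = model.transcribe("voice_command.wav")  # placeholder file name
print(result["text"])
```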

[-] Kuvwert@lemm.ee 11 points 1 month ago* (last edited 1 month ago)

I'm running 3.1 8B as we speak via ollama, totally offline, and gave my info to nobody.

https://ollama.com/library/llama3.1

[-] hperrin@lemmy.world 16 points 1 month ago

Yo, this is big. Both in that it is momentous, and in that holy shit, that's a lot of parameters. How many GB is this model?? I'd be able to run it if I had a few extra $10k bills lying around to buy the required hardware.

[-] bruhduh@lemmy.world 3 points 1 month ago

That's some thick model

[-] 2001zhaozhao@sh.itjust.works 1 points 1 month ago

Time to buy a Threadripper and 800GB of RAM so that I can run this model at 1 token per hour.

[-] i_am_a_cardboard_box@lemmy.world 11 points 1 month ago

Kind of petty from Zuck not to roll it out in Europe due to the Digital Services Act. But also kind of weird since it's open source? What's stopping anyone from downloading the model and creating a web UI for European users?

[-] obbeel@lemmy.eco.br 1 points 1 month ago

That looks good on paper, but while I find ChatGPT good for encouraging critical thinking, I've found Meta's products (Facebook and Instagram) to be sources of disinformation. That makes me have reservations about Meta's intentions with LLMs. As the article says, the model comes pre-trained, so it's mostly made up of information gathered by Meta.

[-] BreadstickNinja@lemmy.world 2 points 1 month ago

Neither Meta nor anyone else is hand-curating their dataset. The fact that Facebook is full of grandparents sharing disinformation doesn't impact what's in their model.

But all LLMs are going to have accuracy issues because they're 1) trained on text written by humans who themselves are inaccurate and 2) designed to choose tokens based on probability rather than any internal logic as to whether an answer is factual.

All LLMs are full of shit. That doesn't mean they're not fun or even useful in some applications, but you shouldn't trust anything they write.
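
A toy sketch of what "choose tokens based on probability" means in practice (made-up numbers, not any real model's output):

```python
import math
import random

# The model scores candidate next tokens (logits), softmax turns the scores
# into probabilities, and one token is sampled. Nothing in this loop checks
# whether the chosen token makes the answer factually correct.
logits = {"Paris": 5.2, "Lyon": 3.1, "Berlin": 1.4}  # made-up scores

temperature = 0.8
exp_scores = {tok: math.exp(score / temperature) for tok, score in logits.items()}
total = sum(exp_scores.values())
probs = {tok: v / total for tok, v in exp_scores.items()}

next_token = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
print(probs, "->", next_token)
```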

this post was submitted on 24 Jul 2024
208 points (100.0% liked)
