AI ‘dream girls’ are coming for porn stars’ jobs
(www.washingtonpost.com)
They could even arrange meetups like double dates and parties and such. The future is gonna be so chaotic. I love it.
If the algorithms of social media are any indication, it's more likely they'd be programmed to manipulate the user into spending as much time with the AI as possible, while the AI serves them ads.
Hahahaha
I like how you think!
What would be the issue with that?
Big privacy advocate, so I was curious what it takes to self-host something like that. Mostly I just want a very flexible personal assistant for product and weather alerts, all in one.
Takes a lot of RAM and GPU power, more than I have sitting around.
Have you been looking at quantised models? You can get pretty good ones at the 20 gig RAM+VRAM level which is very reasonable if you have a gaming PC and are ok with responses not being instant.
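If you want to tinker, here's a minimal sketch of what self-hosting a quantised model can look like, assuming the llama-cpp-python bindings and a quantised GGUF model file you've downloaded yourself (the file path and model choice below are just placeholders):

```python
# Minimal self-hosted chat with a quantised GGUF model via llama-cpp-python.
# Install with: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,       # context window in tokens
    n_gpu_layers=20,  # layers to offload to the GPU; 0 = CPU only
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a flexible personal assistant."},
        {"role": "user", "content": "Rewrite this alert in one sentence: "
                                    "Severe thunderstorm warning until 9 PM."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

Nothing leaves your machine, and raising n_gpu_layers is how you trade VRAM for speed when responses are too slow.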
Already a thing on /g/
Not just scraping data, but spreading disinfo and radicalizing people. I saw a case about exactly that recently. It applies not just to AI lovers but potentially to a number of AI applications.
Pasterama: Don't Date Fumos that are also Tulpas, and if your cold their cold, put them behind 3 secret walls that are on fire, under the sea.
Neural engines are coming to basically all CPUs. It won't be long before you can run your own girlfriend offline on your phone. Training the model is the expensive part, after all. I can already run a basic Llama 2B on my iPad, though I had to sideload the software instead of just downloading it off the App Store.
I’m fairly sure anyone with a good GPU can also run these, but I haven’t tried.
Yes. The Llama 70B-derived models, as well as Mixtral 8x7B and the new Mistral Medium 70B, are competitive with ChatGPT 3.5. Most of them can handle a 16,000-token context, similar to ChatGPT.
You only NEED 40GB of free RAM to run them at decent quality, but it's slow.
With a 24GB GPU like a 3090 or 4090 you can run them at a reasonable speed with partial GPU offload. About 1-2 words per second. I run 70Bs in this manner on my computer.
With two 24GB GPUs you can run them very fast, like ChatGPT.
There's of course a whole world in between as well, but those are the rough hardware requirements to match ChatGPT in a self-hosted sort of way. There's also a new thing people are doing where they add layers from one model onto another one, like a merge but keeping >50% of the original layers from each model. "Goliath 120B" and the like, which is made from 2 different 70Bs. They're even better but it's a bit beyond reasonable consumer hardware for now.
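If anyone's curious how that layer-stacking works, here's a toy sketch of the idea. Real merges splice actual transformer weight tensors (tools like mergekit handle this); the layer ranges below are made up, and the "layers" are just labelled strings so the shape of the trick is visible:

```python
# Toy "frankenmerge": build a deeper model by concatenating overlapping
# layer ranges from two donor models. Real tools operate on weight
# tensors; strings stand in for layers here purely for illustration.

def frankenmerge(model_a, model_b, slices):
    """Stack the given (donor, start, end) layer ranges in order."""
    donors = {"A": model_a, "B": model_b}
    merged = []
    for donor, start, end in slices:
        merged.extend(donors[donor][start:end])
    return merged

# Two hypothetical 80-layer 70B donors.
a = [f"A.layer{i}" for i in range(80)]
b = [f"B.layer{i}" for i in range(80)]

# Overlapping ranges keep well over 50% of each donor's layers, and the
# result is deeper than either donor: roughly how two 70Bs make a "120B".
slices = [("A", 0, 40), ("B", 20, 60), ("A", 40, 80), ("B", 60, 80)]
merged = frankenmerge(a, b, slices)
print(len(merged), "layers")  # 140 layers, vs 80 in each donor
```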