Without a display? OK. But Wolf can stream XFCE inside Podman.
You need a virtual display, which means you don't have to connect the video card to a TV. Maybe the best choice is Wolf? No DE, no display. Just the GPU and Podman.
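For reference, a rough sketch of how that could look. The image tag, mounted paths, and device list here are assumptions based on the games-on-whales quickstart, not a tested setup; adjust the devices for your GPU (e.g. add `--device /dev/kfd` for ROCm compute, or the NVIDIA devices for that stack):

```shell
# Sketch (untested): run Wolf headless under rootful podman.
# Wolf spawns app containers itself, so it needs a container socket;
# here the podman socket is mapped to the docker.sock path it expects.
podman run -d --name wolf \
  --network host \
  -e XDG_RUNTIME_DIR=/tmp/sockets \
  -v /tmp/sockets:/tmp/sockets:rw \
  -v /etc/wolf:/etc/wolf:rw \
  -v /run/podman/podman.sock:/var/run/docker.sock:rw \
  --device /dev/dri \
  --device /dev/uinput \
  ghcr.io/games-on-whales/wolf:stable
```

Then you connect from any Moonlight client; no monitor ever has to be plugged in.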
Yes, it is. But I have llama-swap and Open WebUI. If you spend some time on the llama-swap configuration, you have a good chance of running a model on 2 cards through llama.cpp. The speedup, of course, won't be 2x and falls off non-linearly with the number of cards. You also need a motherboard with enough PCI-E lanes (2x PCI-E x16 or more). But it's still cheaper than one large card. Example:
```
HIP_VISIBLE_DEVICES=0,1 \
/opt/llama.cpp/build/bin/llama-server \
  --host 127.0.0.1 \
  --port 8082 \
  --model /storage/models/model.gguf \
  --n-gpu-layers all \
  --split-mode layer \
  --tensor-split 1,1 \
  --ctx-size 32768 \
  --batch-size 512 \
  --ubatch-size 512 \
  --flash-attn on \
  --parallel 1
```
There is a less stable but faster option: `--split-mode row`
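For context, here is roughly how that command plugs into llama-swap. This is a sketch from memory, so treat the field names as assumptions; the model name and `ttl` value are placeholders, and `${PORT}` is llama-swap's macro that it substitutes with the port it proxies to:

```yaml
# Hypothetical llama-swap config.yaml entry wrapping the command above.
models:
  "my-model":
    env:
      - "HIP_VISIBLE_DEVICES=0,1"   # pin the server to both AMD cards
    cmd: |
      /opt/llama.cpp/build/bin/llama-server
      --host 127.0.0.1 --port ${PORT}
      --model /storage/models/model.gguf
      --n-gpu-layers all --split-mode layer --tensor-split 1,1
      --ctx-size 32768 --batch-size 512 --ubatch-size 512
      --flash-attn on --parallel 1
    ttl: 300  # unload the model after 5 minutes idle
```

With that in place, Open WebUI just talks to llama-swap's single endpoint and the right llama-server gets spawned on demand.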
P.S. By the way, one RX9070XT on my instance translates posts and comments. You can test it if you want. =)
Not a very popular opinion, but if you want an inexpensive, really inexpensive option, get the AMD RX 9070 XT. AMD cards are not the most popular for AI, but they are not bad with ROCm, and for the price of one 5090 you can put in 5 cards (80 GB of VRAM).
I think one change would fix this: user migration via Lemmy federation. If you could migrate from instance to instance with your old profile (comments, posts and everything else), then which instance you choose right now wouldn't matter.