Have you found any cool uses/life hacks for AI?
(reddthat.com)
Before it was hot, I used ESRGAN and some other stuff for restoring old TV. There was a niche community that finetuned models just to, say, restore classic SpongeBob or DBZ or whatever they were into.
These days, I am less into media, but keep Qwen3 32B loaded on my desktop… pretty much all the time? For brainstorming, basic questions, making scripts, an agent to search the internet for me, a ‘dumb’ writing editor, whatever. It’s a part of my “degoogling” effort, and I find myself using it way more often since it’s A: totally free/unlimited, B: private and offline on an open source stack, and C: doesn’t support Big Tech at all. It’s kinda amazing how “logical” a 14GB file can be these days, and I can bounce really personal/sensitive ideas off it that I would hardly trust anyone with.
…I’ve pondered getting back into video restoration, with all the shiny locally runnable tools we have now.
Do you run this on NVIDIA or AMD hardware?
Nvidia.
Back then I had a 980 Ti. Right now I’m lucky enough to have snagged a 3090 before prices shot up.
I would buy a 7900, or a 395 APU, if they were even reasonably affordable for the VRAM, but AMD is not pricing their stuff well…
But FYI you can fit Qwen 32B on a 16GB card with the right backend/settings.
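A quick back-of-the-envelope check of why that works (the bits-per-weight figures are approximate averages for common llama.cpp GGUF quants, not exact, and real usage adds KV-cache/overhead on top):

```python
# Rule of thumb: quantized model size ≈ n_params * bits_per_weight / 8.
# A 32B model at ~3.3 bpw lands around 13 GB, leaving headroom for
# KV cache on a 16 GB card (assuming most/all layers are offloaded).
def est_size_gb(n_params_b: float, bits_per_weight: float) -> float:
    """Estimate quantized model weight size in GB for a model with
    n_params_b billion parameters at the given bits per weight."""
    return n_params_b * bits_per_weight / 8

# Approximate bpw values for some llama.cpp quant types:
for quant, bpw in [("Q8_0", 8.5), ("Q4_K_M", 4.8), ("Q3_K_M", 3.9), ("IQ3_XS", 3.3)]:
    print(f"{quant}: ~{est_size_gb(32, bpw):.1f} GB")
```

So a ~3-bit quant of a 32B model squeezes under 16 GB, at some quality cost; higher quants need partial CPU offload.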
Do you have any recommendations for a local Free Software tool to fix VHS artifacts (bad tracking etc., not just blurriness) in old videos?
One that works well out of the box? Honestly, I’m not sure.
Back in the day, I’d turn to VapourSynth (or AviSynth+) filters and a lot of hand editing: basically go through the trouble sections one by one and see which combination of VHS-specific correction and regeneration looks best.
These days, we have far more powerful tools. I’d probably start by training a LoRA for Wan 2B or something, then use it to straight up regenerate damaged sections with video-to-video. Then I’d write a script to detect those sections, and mix in some “traditional” VapourSynth filters.
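The “detect the damaged sections” step could be sketched roughly like this. This is just a toy outlier detector on inter-frame difference (tracking glitches tend to show up as sudden spikes); the function name and the z-score threshold are my own illustration, not an established tool:

```python
import numpy as np

def flag_damaged_frames(frames, z_thresh=3.0):
    """Flag frame indices whose mean absolute difference from the
    previous frame is a statistical outlier -- a crude proxy for
    VHS tracking errors, dropouts, and similar sudden artifacts."""
    diffs = np.array([
        np.mean(np.abs(frames[i].astype(np.int16) - frames[i - 1].astype(np.int16)))
        for i in range(1, len(frames))
    ])
    mu, sigma = diffs.mean(), diffs.std()
    if sigma == 0:
        return []  # perfectly uniform video, nothing to flag
    # diffs[j] compares frames[j+1] to frames[j], so flag index j+1
    return [j + 1 for j, d in enumerate(diffs) if (d - mu) / sigma > z_thresh]
```

A real pipeline would work on luma only and merge adjacent flagged frames into ranges before handing them to the video-to-video step, but the idea is the same.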
…But this is all very manual, like Python-dev level with some media/ML knowledge, unfortunately. I am much less familiar with, like, a GUI that could accomplish this. Paid services out there likely offer it, but who knows how well they work.