[-] KingRandomGuy@lemmy.world 17 points 12 hours ago* (last edited 12 hours ago)

Every time I see a headline like this I’m reminded of the time I heard someone describe the modern state of AI research as equivalent to the practice of alchemy.

Not sure if you're referencing the same thing, but this actually came from a presentation at NeurIPS 2017 (the largest and most prestigious machine learning/AI conference) for the "Test of Time Award." The presentation is available here for anyone interested. It's a good watch. The presenter/awardee, Ali Rahimi, talks about how, over time, rigor and fundamental knowledge in the field of machine learning have taken a backseat to empirical work that we continue to build upon yet don't fully understand.

Some of that sentiment is definitely still true today, and unfortunately, understanding the fundamentals is only going to get harder as empirical methods grow more complex. It's much easier to iterate on empirical work by throwing more compute at a problem than it is to analyze something mathematically.

[-] KingRandomGuy@lemmy.world 21 points 2 months ago

Mildly useful tip: when you take a card or battery out of your camera, leave the door open until you put it back in. That way you'll know if you forgot to put one of them back into the camera. I do this and it's saved me a few times.

[-] KingRandomGuy@lemmy.world 11 points 3 months ago

Yeah, 1.0 has been quite stable for me. I especially recommend the weekly releases, which include features planned for 1.1 like better sketch projection tools and snapping.

[-] KingRandomGuy@lemmy.world 22 points 9 months ago

In fairness, if you really needed to, you could rent this kind of compute through a service like vast.ai; it'd probably still be cheaper than paying a ransom.

[-] KingRandomGuy@lemmy.world 23 points 9 months ago

Useless is a strong term. I do a fair amount of research on a single 4090. Lots of problems can fit in <32 GB of VRAM. Even my 3060 is good enough to run small scale tests locally.

I'm in CV, and even with enterprise-grade hardware, most folks I know are limited to 48 GB (A40 and L40S, which are substantially cheaper and more accessible than the A100/H100/H200). My advisor always said you should try to set up a problem so you can iterate within a few days on a single GPU, and lots of problems are still approachable that way. Of course you're not going to make the next SOTA VLM on a 5090, but not every problem is that big.

[-] KingRandomGuy@lemmy.world 14 points 11 months ago

He's apparently said he was born in 1988. In another thread, others noted that would make him 21 when he started his PhD, which checks out.

[-] KingRandomGuy@lemmy.world 15 points 11 months ago

You can do this on Linux using gphoto2, ffmpeg, and v4l2loopback. You probably won't get full resolution, but the quality will still be good enough for video conferencing. See here for a guide.
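For a rough idea of what that setup looks like, here's a minimal sketch. It assumes your camera supports live view over USB via gphoto2, and the device number (`/dev/video2`) is arbitrary; adjust it for your system.

```shell
# Create a virtual webcam device at /dev/video2.
# exclusive_caps=1 helps browsers recognize it as a capture device.
sudo modprobe v4l2loopback video_nr=2 card_label="Camera Webcam" exclusive_caps=1

# Stream the camera's live view through ffmpeg into the virtual device.
gphoto2 --stdout --capture-movie |
  ffmpeg -i - -vcodec rawvideo -pix_fmt yuv420p -f v4l2 /dev/video2
```

After that, `/dev/video2` should show up as a selectable camera in most video conferencing apps.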

[-] KingRandomGuy@lemmy.world 17 points 2 years ago

IMO the bigger problem with FreeCAD is the topological naming problem. It's very easy to get frustrated when your model breaks because of a change you made to an earlier feature.

The UI isn't amazing though, and that unfortunately happens quite a bit with open source software. Hopefully it'll go the way of Blender and KiCAD with an eventual major release that overhauls the UI.

[-] KingRandomGuy@lemmy.world 12 points 2 years ago

Ondsel has a nicer user interface, but I personally use and recommend realthunder's LinkStable branch of FreeCAD. Mainline FreeCAD (and by extension, Ondsel) suffers from the topological naming problem, which can be especially jarring to users coming from proprietary CAD software. realthunder put a lot of work into a solution that handles the problem pretty well, so I'm using his fork until toponaming gets mainlined.

[-] KingRandomGuy@lemmy.world 12 points 2 years ago

As of right now VLC also doesn't properly support Wayland, but MPV does. It's a great piece of software!

Agree with the sentiment about VLC though: having an open source project demonstrate what's possible and stand the test of time definitely paves the way for future work and improvements.

[-] KingRandomGuy@lemmy.world 11 points 2 years ago* (last edited 2 years ago)

I'm a researcher in ML, and that's not the definition I've heard. The way I've usually seen AI defined is: any computational method with the ability to complete tasks that are thought to require intelligence.

This definition admittedly sucks. It's very vague, and it comes with the problem that the bar for "requiring intelligence" shifts every time the field solves something new. We effectively say, "well, given that these relatively simple methods could solve it, I guess it couldn't have really required intelligence."

The definition you listed is generally more in line with AGI, which is what people likely think of when they hear the term AI.

[-] KingRandomGuy@lemmy.world 13 points 2 years ago

I believe this is the referenced article:

https://arxiv.org/abs/2311.03348


Equipment details:

  • Mount: OpenAstroMount by OpenAstroTech
  • Lens: Sony 200-600 @ 600mm f/7.1
  • Camera: Sony A7R III
  • Guidescope: OpenAstroGuider (50mm, fl=163) by OpenAstroTech
  • Guide Camera: SVBONY SV305m Pro
  • Imaging Computer: ROCKPro64 running INDIGO server

Acquisition & Processing:

  • Imaged and Guided/Dithered in Ain Imager
  • 420x30s lights, 40 darks, 100 flats, 100 biases, 100 dark-flats over two nights
  • Prepared data and stacked in SiriLic
  • Background extraction, photometric color calibration, generalized hyperbolic stretch transform, and StarNet++ in Siril
  • Adjusted curves, enhanced saturation of the nebula and recombined with star mask in GIMP, desaturated and denoised background

This is my first time doing a multi-night image, and my first time using SiriLic to configure a Siril script, so any tips there would be appreciated. Suggestions for improvement or any other form of constructive criticism are welcome!

submitted 2 years ago* (last edited 2 years ago) by KingRandomGuy@lemmy.world to c/astrophotography@lemmy.world

Equipment details:

  • Mount: OpenAstroMount by OpenAstroTech
  • Lens: Sony 200-600 @ 600mm f/7.1
  • Camera: Sony A7R III
  • Guidescope: OpenAstroGuider (50mm, fl=153) by OpenAstroTech
  • Guide Camera: SVBONY SV305m Pro
  • Imaging Computer: ROCKPro64 running INDIGO server

Acquisition & Processing:

  • Imaged and Guided/Dithered in Ain Imager
  • 360x30s lights, 30 darks, 30 flats, 30 biases
  • Stacked in Siril, background extraction, photometric color calibration, generalized hyperbolic stretch transform, and StarNet++
  • Enhanced saturation of the galaxy and recombined with star mask in GIMP, desaturated and denoised background

Suggestions for improvement or any other form of constructive criticism welcome!

