submitted 6 days ago by Twongo@lemmy.ml to c/memes@lemmy.ml
[-] Scrath@lemmy.dbzer0.com 1 points 5 days ago

A while back I trained a small LSTM-based neural net to classify the power phases of a device I work on, based on its current consumption over time.
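The comment doesn't include any code, but sequence classification like this usually starts by slicing the current trace into fixed-length, labeled windows before feeding them to an LSTM. A minimal sketch of that preprocessing step (the window/step sizes and majority-label scheme are my assumptions, not from the post):

```python
import numpy as np

def make_windows(current, labels, window=50, step=25):
    """Slice a 1-D current trace into overlapping windows for a sequence model.

    current: 1-D array of current samples over time
    labels:  per-sample phase labels (ints)
    Each window gets the majority phase label of the samples inside it.
    """
    X, y = [], []
    for start in range(0, len(current) - window + 1, step):
        X.append(current[start:start + window])
        # majority label in the window
        vals, counts = np.unique(labels[start:start + window], return_counts=True)
        y.append(vals[np.argmax(counts)])
    return np.asarray(X), np.asarray(y)

# toy trace: 100 samples of "idle" (phase 0) then 100 of "active" (phase 1)
trace = np.concatenate([np.full(100, 0.1), np.full(100, 1.5)])
phases = np.concatenate([np.zeros(100, int), np.ones(100, int)])
X, y = make_windows(trace, phases)
print(X.shape)  # (7, 50)
```

Each row of `X` would then be one training sequence for the LSTM; whether the real pipeline used overlapping windows or per-sample labels is not stated in the post.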

The model seemed to work great, and it took me a while to notice that it did not catch every phase perfectly.

Yesterday, on my coworkers' recommendation, I built a larger and more complex CNN-based model, which I trained overnight since I had to use my work laptop. When I applied it to my real data, I ran out of RAM. After fixing that and getting it to run, it misclassified far too many samples.

I spent the rest of the day building an algorithmic solution that has yet to mislabel a single sample.
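The post doesn't say what the algorithmic solution looks like. Purely as a hypothetical sketch, assuming the phases are separated by distinct current levels, it could be as simple as a threshold-based segmenter — which also makes every classification directly explainable:

```python
def detect_phases(current, thresholds):
    """Assign each sample a phase index by comparing it against ascending
    current-level thresholds (hypothetical; the actual rules aren't given).

    thresholds: ascending current levels separating phase 0, 1, 2, ...
    Returns a list of (phase, start_index, end_index) segments.
    """
    def phase_of(sample):
        # count how many thresholds the sample exceeds
        return sum(sample >= t for t in thresholds)

    segments = []
    start, cur = 0, phase_of(current[0])
    for i, sample in enumerate(current[1:], 1):
        p = phase_of(sample)
        if p != cur:
            segments.append((cur, start, i - 1))
            cur, start = p, i
    segments.append((cur, start, len(current) - 1))
    return segments

# toy trace: sleep (~0.1 A), idle (~0.5 A), transmit (~1.2 A)
trace = [0.1] * 5 + [0.5] * 5 + [1.2] * 5
print(detect_phases(trace, thresholds=[0.3, 1.0]))
# → [(0, 0, 4), (1, 5, 9), (2, 10, 14)]
```

A real version would likely need smoothing or hysteresis to handle noisy current readings, but the output remains inspectable in a way a neural net's is not.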

This isn't really all that relevant to the post, I guess, but I found it a nice reminder to actually think about a problem instead of throwing brute force at it and hoping that solves it. As a side benefit, I can now actually explain why my data is classified the way it is instead of pointing at a black box. There are definitely use cases for AI, but you should know enough to recognize when an algorithmic approach is better suited.

this post was submitted on 15 Apr 2026