Well, that's easy to remember!
Hello @theit8514
You are actually spot on ^^
I did look in my exports file, which looked like this: /mnt/DiskArray 192.168.0.16(rw) 192.168.0.65(rw)
I added a localhost line in case: /mnt/DiskArray 127.0.0.1(rw) 192.168.0.16(rw) 192.168.0.65(rw)
It didn't solve the problem. I went to investigate with the mount command:
- Will mount on 192.168.0.65: mount -t nfs 192.168.0.55:/mnt/DiskArray/mystuff/ /tmp/test
- Will NOT mount on 192.168.0.55 (NAS): mount -t nfs 192.168.0.55:/mnt/DiskArray/mystuff/ /tmp/test
- Will mount on 192.168.0.55 (NAS): mount -t nfs 127.0.0.1:/mnt/DiskArray/mystuff/ /tmp/test
The mount -t nfs 192.168.0.55 variant is the one the cluster actually runs.
So I either need to find a way for it to use 127.0.0.1 on the NAS machine, or use a hostname that might resolve better.
EDIT:
It was actually WAY simpler.
I just added 192.168.0.55 to my /etc/exports file. It works fine now ^^
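For reference, the relevant line in my /etc/exports ends up looking roughly like this (keep whatever options you already use), and the exports need to be reloaded afterwards:
/mnt/DiskArray 127.0.0.1(rw) 192.168.0.16(rw) 192.168.0.55(rw) 192.168.0.65(rw)
exportfs -ra   # re-reads /etc/exports without restarting the NFS server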
Thanks a lot for your help, @theit8514@lemmy.world!
Hello! You might find Sylius suitable. It's an open-source e-commerce framework based on Symfony.
I'm pretty sure it covers all your requirements. The thing is that it's a headless framework, so a frontend needs to be built on top of it if you want some custom features.
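If you want to try it out quickly, the standard edition can be bootstrapped with Composer, something like this (the project directory name is just an example):
composer create-project sylius/sylius-standard acme-shop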
Hope that helps!
Haha sorry indeed, it’s Kubernetes related and not Windows WeDontSayItsName related 😄
Hello! First question would be: why buy an external drive if you are buying a NAS in the first place?
Just in case: there are 2 drive bays available in the NAS you linked, meaning you could buy 2 internal drives for its storage.
On the hosting part, Jellyfin might be able to run, judging by the NAS's specifications. However, you have to take into account whether the NAS operating system can run it (maybe there is an app store for it, like on Synology), and media transcoding might also be limited (for easily streaming 4K content around your house, for example).
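If the NAS OS supports Docker, a quick way to test Jellyfin is something along these lines (the host paths are placeholders for your own config and media folders):
docker run -d --name jellyfin -p 8096:8096 -v /path/to/config:/config -v /path/to/media:/media jellyfin/jellyfin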
Still from The IT Crowd, it's when Reynholm gets sued by his ex-wife. The quote isn't from that episode though.
I think you are right indeed, I had the idea to maybe use the GC for AI stuff and play with it. I would probably go with kube and add the NAS to Longhorn (which I already set up).
Would have been cool to add yet another machine to the cluster, especially if I could use the NAS for the kube VolumeClaims. 🤔
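For reference, a Longhorn-backed claim is just a regular PersistentVolumeClaim pointing at the longhorn storage class, roughly like this (name and size are placeholders):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nas-backed-data        # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # default StorageClass created by Longhorn
  resources:
    requests:
      storage: 50Gi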
This is the way.
It's actually how people build their images, and some of them include sensitive data (when they definitely should not). It's the same problem as exposed S3 buckets really: nothing wrong with Docker in itself.
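The classic way it happens is a Dockerfile along these lines (hypothetical app, purely for illustration): anything sitting in the build context, a .env file for instance, ends up baked into an image layer, and anyone who can pull the image can extract it (docker save dumps every layer as a tarball):
FROM node:20
WORKDIR /app
# copies EVERYTHING in the build context, including .env or credential files not excluded by .dockerignore
COPY . .
RUN npm ci --omit=dev
CMD ["node", "server.js"]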
Dave the Diver
I’d say Daft Punk too 😄