Proxmox 9 released (www.proxmox.com)
submitted 3 weeks ago* (last edited 3 weeks ago) by beerclue@lemmy.world to c/selfhosted@lemmy.world

Proxmox 9 was released, based on Debian 13 (Trixie), with some interesting new features.

Here are the highlights: https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_9.0

Upgrade from 8 to 9 readme: https://pve.proxmox.com/wiki/Upgrade_from_8_to_9

Known issues & breaking changes: https://pve.proxmox.com/wiki/Roadmap#9.0-known-issues

[-] kebab@endlesstalk.org 52 points 3 weeks ago

The new mobile interface is lit 🔥. Finally usable

[-] billygoat@catata.fish 13 points 3 weeks ago

Fuck, I just left for a month away, and I hate doing major upgrades while remote.

[-] ikidd@lemmy.world 35 points 3 weeks ago

Probably for the best. Upgrades on the first release haven't had a stellar record

[-] Moonrise2473@feddit.it 4 points 3 weeks ago

Exactly. For example, I missed the note that updating TrueNAS to the latest version disables and hides all the virtual machines (theoretically they can be migrated to the new engine, but it gave me some weird error; luckily TrueNAS can be downgraded easily).

Now, three months after the first release of the update, those virtual machines are no longer disabled and hidden.

[-] lka1988@sh.itjust.works 2 points 3 weeks ago

Stick with 8 then, until we know it's stable.

[-] littleomid@feddit.org 37 points 3 weeks ago

For beginners here: do not run apt upgrade!! Read the documentation on how to upgrade properly.

[-] beerclue@lemmy.world 24 points 3 weeks ago* (last edited 3 weeks ago)

It's always good to read the docs, but I often skip them myself :)

They have this nifty tool called pve8to9 that you could run before upgrading, to check if everything is healthy.

I have a 3 node cluster, so I usually migrate my VMs to a different node and do my maintenance then, with minimal risks.

[-] drkt@scribe.disroot.org 15 points 3 weeks ago

pve8to9 --full
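
For reference, the documented path looks roughly like this (a sketch assuming a healthy standalone node already on the latest 8.4; the upgrade wiki is authoritative and covers cluster order, Ceph, etc.):

```sh
# 1. get fully up to date on 8.x first
apt update && apt dist-upgrade

# 2. run the checker and resolve every FAIL (ideally every WARN too)
pve8to9 --full

# 3. point APT at trixie, then do the actual upgrade
sed -i 's/bookworm/trixie/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
apt update
apt dist-upgrade   # NOT plain 'apt upgrade'
```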

[-] wreckedcarzz@lemmy.world 21 points 3 weeks ago* (last edited 3 weeks ago)

Yay, it only took 2 hours and the help of an LLM, since the upgrade corrupted my LVM metadata! A little bit of post-upgrade cleanup and verifying everything works. Now I can go to sleep (it's 5am).

Wasn't that bad, but not exactly relaxing. And when my VMs threw a useless error ('can't start need manual fix') I might have slightly panicked...

[-] Appoxo@lemmy.dbzer0.com 7 points 3 weeks ago

Not something that sounds production ready lol

[-] potpotato@lemmy.world 6 points 3 weeks ago

Started a system upgrade at 3am…you ok?

[-] wreckedcarzz@lemmy.world 2 points 3 weeks ago

I'm always up late (it's 5:19a), though a good bit more than usual lately. But I did the upgrade because I was anxious, had nothing to do, and there were no users utilizing the machine.

[-] nevetsg@aussie.zone 5 points 3 weeks ago

Thanks for posting this and reminding me to never go back to Proxmox. My Proxmox server killed itself and all VMs twice before I moved on to Hyper-V.

[-] wreckedcarzz@lemmy.world 2 points 3 weeks ago* (last edited 3 weeks ago)

Oof. I have my VMs getting backed up to another machine so theoretically (untested) I should be able to recover with less than a day of data loss (very minimal for this box). The annoying part would be getting it hooked up to a monitor and keyboard, since it's under an end-table in the living room.
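
If it ever comes to that, restoring from a vzdump backup is roughly this (a sketch; the storage name, archive path, and VMID are placeholders for my setup):

```sh
# list the backup archives on the backup storage
pvesm list backup-store        # 'backup-store' is a placeholder storage ID

# restore a VM from a chosen archive to a VMID
qmrestore /mnt/pve/backup-store/dump/vzdump-qemu-100-latest.vma.zst 100 --storage local-lvm
```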

This is the first issue in like... 15 months? Hopefully it stays rather uneventful.

[-] Damage@feddit.it 16 points 3 weeks ago

ZFS now supports adding new devices to existing RAIDZ pools with minimal downtime.

Yes!!

[-] non_burglar@lemmy.world 3 points 3 weeks ago* (last edited 3 weeks ago)

Edit2: the following is no longer true, so ignore it.

Why do you want this? There are very few valid use cases for it.

Edit: this is a serious question. Adding a member to a vdev does not automatically move any of the parity or data distribution off the old vdev. You'll not only have old data distributed on old vdev layout until you copy it back, but you'll also now have a mix of io requests for old and new vdev layout, which will kill performance.

Not to mention that the metadata is now stored for new layout, which means reads from the old layout will cause rw on both layouts. It's not actually something anyone should want, unless they are really, really stuck for expansion.

And we're talking about a hypervisor here, so performance is likely a factor.

Jim Salter did a couple writeups on this.

[-] Saik0Shinigami@lemmy.saik0.com 4 points 3 weeks ago* (last edited 3 weeks ago)

Adding a member to a vdev does not automatically move any of the parity or data distribution off the old vdev.

Yes it does. ZFS does a full resilver after the addition. Jim Salter's write-ups are from 4 years ago. Shit changes.

Edit: and even if it didn't... It's trivial to write a script that rewrites all the data to move it into the new structure. To say there's no valid cases when even in 2021 there was an answer to the problem is a bit crazy.
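
Such a script can be as simple as rewriting every file in place: ZFS is copy-on-write, so the rewritten blocks land on the expanded layout. A hypothetical sketch (it doubles write traffic, and snapshots keep referencing the old blocks):

```shell
#!/bin/sh
# naive in-place rewrite: copy each file, then atomically move the copy back
rebalance() {
    find "$1" -type f | while IFS= read -r f; do
        cp -p "$f" "$f.rebalance.tmp" && mv "$f.rebalance.tmp" "$f"
    done
}

# demo on a throwaway directory (stand-in for a dataset mountpoint)
mkdir -p /tmp/rebalance-demo
printf 'hello\n' > /tmp/rebalance-demo/a.txt
rebalance /tmp/rebalance-demo
cat /tmp/rebalance-demo/a.txt   # prints: hello (contents unchanged, blocks rewritten)
```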

[-] non_burglar@lemmy.world 4 points 3 weeks ago

Whoah, I see this has indeed changed. Thanks.

[-] adavis@lemmy.world 2 points 3 weeks ago

Wait till you hear about ZFS AnyRaid, an upcoming feature to make ZFS more flexible with mixed-size drives.

[-] etchinghillside@reddthat.com 16 points 3 weeks ago

Not sure I want to check how far behind I am. How rough are these upgrades? I've got most things under Terraform and Ansible, but am still procrastinating under the fear of losing a weekend rejiggering things.

[-] phanto@lemmy.ca 13 points 3 weeks ago

I just did three nodes this evening from 8.4.1 to 9, no issues other than a bit of farting around with my sources.list files.
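
For anyone hitting the same thing: the Debian 13 era moves repos to the deb822 format, so the entries end up in `/etc/apt/sources.list.d/*.sources` files looking something like this (the no-subscription repo shown; check the upgrade wiki for the exact paths):

```
Types: deb
URIs: http://download.proxmox.com/debian/pve
Suites: trixie
Components: pve-no-subscription
Signed-By: /usr/share/keyrings/proxmox-archive-keyring.gpg
```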

Not noticing anything significant, but I haven't tried the mobile interface yet.

[-] CmdrShepard49@sh.itjust.works 7 points 3 weeks ago

I'd also like to know.

I built a new machine several months back with PVE and got the hang of it, but it's been "set it and forget it" since then due to everything running smoothly. Now I don't remember half the things I learned and don't want to get in over my head running into issues during a major upgrade. I definitely do want the ability to expand my ZFS pool, so I will need to bite the bullet eventually.

[-] possiblylinux127@lemmy.zip 6 points 3 weeks ago

It will vary but for me it was smooth

[-] SheeEttin@lemmy.zip 5 points 3 weeks ago

I just did one of my two nodes. Easy upgrade, looks good so far.

[-] sandwichsaregood@lemmy.world 4 points 3 weeks ago

Previous 3 major release upgrades I've done were smooth, ymmv

[-] mio@lemmy.mio19.uk 8 points 3 weeks ago

I am telling myself that updating remotely is not a good idea

[-] beerclue@lemmy.world 10 points 3 weeks ago

My "servers" are headless, in the basement, so even if I'm home, it's still remote :D

[-] HiTekRedNek@lemmy.world 7 points 3 weeks ago

IPMI + BMC are wonderful things.

[-] ipkpjersi@lemmy.ml 3 points 3 weeks ago* (last edited 3 weeks ago)

I tell myself that every time, but I mean, I still end up doing it every time anyway lmao

edit: Just did it, it went well.

[-] bigkahuna1986@lemmy.ml 2 points 3 weeks ago

My work computer runs Debian and I'm so looking forward to the upgrade. Just gotta contain myself for a few weeks until a 0.1-type update is released.

[-] ssdfsdf3488sd@lemmy.world 1 points 3 weeks ago

There's no need, I think. I did all 12 nodes of my cluster at home, plus all the work Proxmox hosts, with no issues.

[-] ipkpjersi@lemmy.ml 1 points 3 weeks ago

It might be safer to wait. One of my IRL friends ran into an issue, and I saw some others post about it on the Proxmox forums: TASK ERROR: activating LV 'pve/data' failed: Check of pool pve/data failed (status:64). Manual repair required!

I think I didn't run into that error because I flattened my LVM kinda, but if I hadn't customized my setup maybe I would have run into that too.
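
For anyone who does hit it, the usual manual fix for a failed thin pool check is roughly this (a sketch; back up first, the repair needs free space in the VG, and your pool name may differ):

```sh
# repair the thin pool metadata, then retry activation
lvconvert --repair pve/data
lvchange -ay pve/data
```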

[-] TheUnicornOfPerfidy@feddit.uk 5 points 3 weeks ago

As a person who just installed Proxmox for the first time a couple of weeks ago, does this allow me to fix some of my mistakes and convert VMs to LXCs?

[-] CmdrShepard49@sh.itjust.works 8 points 3 weeks ago

You could just start over if you dont have much invested into your current setup.

[-] SidewaysHighways@lemmy.world 6 points 3 weeks ago

I don't think so

[-] JPAKx4 4 points 3 weeks ago

As someone who also started with Proxmox fairly recently, I found that the community has these really cool helper scripts that you can use to get started. Obviously you're running bash scripts on your main node for some of them, so there are risks involved, but in my experience it's been great.

This is awesome, I am going to immediately get a test cluster set up when I get to work. Snapshots with FC support was the only major thing (apart from Veeam support) holding us back from switching to Proxmox. The HA improvements also sound nice!

[-] slazer2au@lemmy.world 2 points 3 weeks ago

Testing in production? Brave move mate. :)

[-] mio@lemmy.mio19.uk 4 points 3 weeks ago

I am telling myself that updating remotely is not a good idea

[-] Oisteink@feddit.nl 6 points 3 weeks ago

Keep on telling yourself that, but most of us aren't on a physical console anyway

[-] mio@lemmy.mio19.uk 3 points 3 weeks ago

My duplicate comments were caused by my slow home server. I really should upgrade my hardware

[-] Sunny@slrpnk.net 3 points 3 weeks ago

Anyone got screenshots of the new mobile UI?

[-] BlueEther@no.lastname.nz 3 points 3 weeks ago

A job for the weekend, I guess. Just done all the prerequisites, and I only have a warning for DKMS.

this post was submitted on 06 Aug 2025
321 points (100.0% liked)
