
Which filetype to minimize wear?

I have an NVMe installed that I was planning to use for all of my VMs and containers. Which file system type should I use to minimize wear and maximize life?

What about filesystems for HDDs that will be general storage? (e.g., media, backups, playing around)


5 minutes ago, TechNoob9 said:

I have an NVMe installed that I was planning to use for all of my VMs and containers. Which file system type should I use to minimize wear and maximize life?

What about filesystems for HDDs that will be general storage? (e.g., media, backups, playing around)

Is this even a thing? Not sure how a filesystem can minimize wear, as the acts of reading and writing are the primary wear factors.

"Do what makes the experience better" - in regards to PCs and Life itself.

 

Onyx AMD Ryzen 7 7800x3d / MSI 6900xt Gaming X Trio / Gigabyte B650 AORUS Pro AX / G. Skill Flare X5 6000CL36 32GB / Samsung 980 1TB x3 / Super Flower Leadex V Platinum Pro 850 / EK-AIO 360 Basic / Fractal Design North XL (black mesh) / AOC AGON 35" 3440x1440 100Hz / Mackie CR5BT / Corsair Virtuoso SE / Cherry MX Board 3.0 / Logitech G502

 

7800X3D - PBO -30 all cores, 4.90GHz all core, 5.05GHz single core, 18286 C23 multi, 1779 C23 single

 

Emma : i9 9900K @5.1Ghz - Gigabyte AORUS 1080Ti - Gigabyte AORUS Z370 Gaming 5 - G. Skill Ripjaws V 32GB 3200CL16 - 750 EVO 512GB + 2x 860 EVO 1TB (RAID0) - EVGA SuperNova 650 P2 - Thermaltake Water 3.0 Ultimate 360mm - Fractal Design Define R6 - TP-Link AC1900 PCIe Wifi

 

Raven: AMD Ryzen 5 5600x3d - ASRock B550M Pro4 - G. Skill Ripjaws V 16GB 3200Mhz - XFX Radeon RX6650XT - Samsung 980 1TB + Crucial MX500 1TB - TP-Link AC600 USB Wifi - Gigabyte GP-P450B PSU -  Cooler Master MasterBox Q300L -  Samsung 27" 1080p

 

Plex : AMD Ryzen 5 5600 - Gigabyte B550M AORUS Elite AX - G. Skill Ripjaws V 16GB 2400Mhz - MSI 1050Ti 4GB - Crucial P3 Plus 500GB + WD Red NAS 4TBx2 - TP-Link AC1200 PCIe Wifi - EVGA SuperNova 650 P2 - ASUS Prime AP201 - Spectre 24" 1080p

 

Steam Deck 512GB OLED

 

OnePlus: 

OnePlus 11 5G - 16GB RAM, 256GB NAND, Eternal Green

OnePlus Buds Pro 2 - Eternal Green

 

Other Tech:

- 2021 Volvo S60 Recharge T8 Polestar Engineered - 415hp/495tq 2.0L 4cyl. turbocharged, supercharged and electrified.

Lenovo 720S Touch 15.6" - i7 7700HQ, 16GB RAM 2400MHz, 512GB NVMe SSD, 1050Ti, 4K touchscreen

MSI GF62 15.6" - i7 7700HQ, 16GB RAM 2400 MHz, 256GB NVMe SSD + 1TB 7200rpm HDD, 1050Ti

- Ubiquiti Amplifi HD mesh wifi

 


9 minutes ago, Dedayog said:

Is this even a thing?

No it isn't; this is some mega-nerd-tier min-maxing that people do on /g/, either ironically or as actual schizo strats. XFS is technically better for drive health, but only at massive multi-drive scales: SGI was using it for servers in the late '90s and it's been part of Linux for ages. At home scale the benefit is barely testable and barely provable; it's "technically" better only in the grandest sense.


What OS is this? Filesystem choice is pretty OS-specific.

 

The filesystem shouldn't affect SSD writes that much, and you generally don't need to worry about writes in a home server setup.


56 minutes ago, Dedayog said:

Is this even a thing? Not sure how a filesystem can minimize wear.

44 minutes ago, jaslion said:

Doesn't matter; the read and write quantity is what wears.

43 minutes ago, 8tg said:

No it isn't... XFS is technically better for drive health, but only at massive multi-drive scales.

42 minutes ago, Electronics Wizardy said:

What OS is this? The filesystem shouldn't affect SSD writes that much, and you generally don't need to worry about writes in a home server setup.

Interesting. I've read on the Proxmox forums and Reddit that Proxmox wear on SSDs is a thing. Probably people trying to optimize the last 5% of a drive's life, I guess.

So I am guessing ZFS single disk for everything? My backup is cloud/external HDD. I can't see any drawbacks, but I am not that experienced.


3 hours ago, TechNoob9 said:

So I am guessing ZFS single disk for everything? My backup is cloud/external HDD. I can't see any drawbacks, but I am not that experienced.

ZFS will work fine.

 

I did some testing once on writes due to the Proxmox OS, and got about 10TB a year (you will probably see my video if you look up Proxmox videos on YouTube). I wouldn't worry about that number, but you can reduce it if you turn off some features like the cluster services (not needed on a single-node setup).
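For reference, a rough sketch of how people usually disable those cluster/HA services on a standalone node (service names assume a stock Proxmox VE install; verify with systemctl before disabling anything):

```shell
# The HA services constantly write cluster state via pmxcfs;
# on a single standalone node they do nothing useful.
systemctl disable --now pve-ha-lrm pve-ha-crm

# Optional: watch accumulated per-process writes to see what
# else is hitting the disk (pmxcfs and rrdcached are common).
iotop -aoP
```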

 

 


8 hours ago, TechNoob9 said:

So I am guessing ZFS single disk for everything? My backup is cloud/external HDD. I can't see any drawbacks, but I am not that experienced.

FWIW, I am running Proxmox on an NVMe, and after one year it showed 10% life used. I forget the actual TB-written figure, but it seemed within reason.
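If you want to check that figure yourself, smartctl can read it straight from the drive (the device path below is an example; adjust for your system):

```shell
# Query the NVMe SMART/health log. The interesting fields are:
#   Percentage Used    - the vendor's estimate of life consumed
#   Data Units Written - each unit is 512,000 bytes of host writes
smartctl -a /dev/nvme0
```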



15 hours ago, Electronics Wizardy said:

ZFS will work fine. ... I wouldn't worry about that number, but you can reduce it if you turn off some features like the cluster services.

Yeah, I decided to use ZFS... but trying to mount the ZFS drive in fstab is becoming its own challenge lol.


6 minutes ago, TechNoob9 said:

Yeah, I decided to use ZFS... but trying to mount the ZFS drive in fstab is becoming its own challenge lol.

 

You don't mount ZFS in fstab, generally. Zpools should mount automatically at boot.
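The usual ZFS flow looks roughly like this (the pool name "tank" is an example): the pool itself, not fstab, owns the mountpoints.

```shell
# See which pools are imported and where datasets mount
zpool list
zfs list

# If a pool wasn't auto-imported (e.g. the disk was moved from
# another machine), import everything found on attached disks
zpool import -a

# The mountpoint is a property of the dataset, not an fstab line
zfs set mountpoint=/mnt/tank tank
```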


1 hour ago, Electronics Wizardy said:

 

You don't mount ZFS in fstab, generally. Zpools should mount automatically at boot.

Interesting. But I am not pooling two HDDs together; I just have one HDD with a ZFS filesystem.


2 hours ago, TechNoob9 said:

Interesting. But I am not pooling two HDDs together; I just have one HDD with a ZFS filesystem.

There is always a zpool, even with one disk.

 

The zpool should auto mount on boot. Does it do that here?


4 minutes ago, Electronics Wizardy said:

There is always a Zpool, even with one disk. 

 

The zpool should auto mount on boot. Does it do that here?

So I passed the drive through to my Ubuntu Server VM via /sbin/qm set 100 -scsi1 /dev/disk/by-id/ata-ST14000NM0121_ZKL2R2DG on the Proxmox host:

 

[screenshot]

But this is as far as I have gotten.

 

On the host, when I run zfs list and zpool list:

[screenshot]

 


1 minute ago, TechNoob9 said:

So I passed the drive through to my Ubuntu Server VM... But this is as far as I have gotten.

It looks like the disk is passed to a VM and mounted via ZFS at the same time. That's a big no-no: you want the disk to be used by a filesystem on the host xor passed to a VM, not both. Use zpool export if you don't want the host to mount/use the zpool and you want it to be used by the VM.
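A minimal sketch of releasing the pool from the host (the pool name "tank" is an example; do this before booting the VM that gets the raw disk):

```shell
# Unmount the pool's datasets and mark it exported so the host
# won't auto-import it at the next boot
zpool export tank

# Verify it is gone from the host's view
zpool list
```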


1 minute ago, Electronics Wizardy said:

It looks like the disk is passed to a VM and mounted via ZFS at the same time. That's a big no-no: you want the disk to be used by a filesystem on the host xor passed to a VM, not both.

The command-line screenshot is of my host, not my VM.

The only thing I did in Proxmox was make the HDD a single-disk ZFS pool.

When I run fdisk -l in the VM, it does show up:

[screenshot]

 

Is there anything else I need to do?

 


Just now, TechNoob9 said:

The only thing I did in Proxmox was make the HDD a single-disk ZFS pool... Is there anything else I need to do?

 

Are you passing the disk to a VM at the same time as having a zpool mounted on the host? Don't do that; it will lead to lots of problems. Either mount the HDD on the host, or pass it to a VM, not both.


2 minutes ago, Electronics Wizardy said:

Are you passing the disk to a VM at the same time as having a zpool mounted on the host? Don't do that; it will lead to lots of problems. Either mount the HDD on the host, or pass it to a VM, not both.

Oh, I see. I guess I am. When I installed the new HDD, I was under the impression that you had to pick an initial filesystem:

 

[screenshot]

 

You're saying just "initialize disk with gpt" and then pass it through to the VM?


56 minutes ago, TechNoob9 said:

You're saying just "initialize disk with GPT" and then pass it through to the VM?

If you want to pass the whole disk to a VM, don't initialize it at all. Just pass the whole block device to the VM.

But I'd generally recommend using something like ZFS and virtual disks for the VMs. It makes things like snapshots and backups much easier.


29 minutes ago, Electronics Wizardy said:

If you want to pass the whole disk to a VM, don't initialize it at all... But I'd generally recommend using something like ZFS and virtual disks for the VMs.

But in order to make it ZFS, wouldn't I run into the same issue as I have right now? Can I still share virtual disks? Dumb question... How do I make a virtual disk?


24 minutes ago, TechNoob9 said:

But in order to make it ZFS, wouldn't I run into the same issue as I have right now? Can I still share virtual disks? Dumb question... How do I make a virtual disk?

Set up the drive as ZFS and as a storage in Proxmox.

Then add a disk to the VM to use. Don't pass through the whole disk; select the Add > Hard Disk option in the Hardware section for the VM.
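The same setup can be sketched from the CLI; the pool/storage name "hddpool", the disk-id placeholder, and the 500 GiB size are examples (the GUI does the equivalent of this):

```shell
# 1. Create a single-disk ZFS pool on the HDD
zpool create hddpool /dev/disk/by-id/<disk-id>

# 2. Register the pool as a Proxmox storage that can hold VM disks
pvesm add zfspool hddpool --pool hddpool --content images,rootdir

# 3. Give VM 100 a 500 GiB virtual disk carved from that storage
#    (the "storage:size" form allocates a new zvol-backed disk)
qm set 100 --scsi1 hddpool:500
```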

 

 


1 minute ago, Electronics Wizardy said:

Then add a disk to the VM to use. Don't pass through the whole disk; select the Add > Hard Disk option in the Hardware section for the VM.

Yeah, that's what I did, but via CLI on the Proxmox host:

/sbin/qm set 100 -scsi1 /dev/disk/by-id/ata-ST14000NM0121_ZKL2R2DG

 

Is there anything else I need to do? I assume just go into the VM and mount it?

 

 


Just now, TechNoob9 said:

Yeah, that's what I did, but via CLI on the Proxmox host... Is there anything else I need to do? I assume just go into the VM and mount it?

No, that command passes the whole disk through. You don't normally want to do that. 

 

You want to create a virtual hard disk for the VM, using the ZFS storage of the HDD.

 

I'd do this through the GUI here if you're new to Proxmox.


7 minutes ago, Electronics Wizardy said:

You want to create a virtual hard disk for the VM, using the ZFS storage of the HDD.

Oh, I see. Just pick an initial size first, then if I need more, assign more via the GUI?

 

After I do this, do I need to do anything else in the VM to set it up?


3 minutes ago, TechNoob9 said:

Oh, I see. Just pick an initial size first, then if I need more, assign more via the GUI?

 

After I do this, do I need to do anything else in the VM to set it up?

Yup, you can expand virtual disks as needed. 

 

It should just show up as a disk you can use in the VM.
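Inside the VM, the steps look roughly like this, and growing the disk later is a one-liner on the host (the device name /dev/sdb, mount path, and sizes are examples; check lsblk for the real device):

```shell
# --- inside the VM: find, format, and mount the new virtual disk ---
lsblk                          # identify the new disk, e.g. /dev/sdb
mkfs.ext4 /dev/sdb             # put a filesystem on it
mkdir -p /mnt/data
mount /dev/sdb /mnt/data
echo '/dev/sdb /mnt/data ext4 defaults 0 2' >> /etc/fstab

# --- on the Proxmox host: grow the virtual disk later if needed ---
qm resize 100 scsi1 +100G
# then grow the filesystem inside the VM (the guest may need a
# rescan or reboot to see the new size first):
resize2fs /dev/sdb
```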

