
Install Proxmox with ZFS on one drive.

Hi there.

I got myself an inexpensive machine to use as a Proxmox server.

It is the almighty Dell OptiPlex 3060, with the i5-8700T, a 256GB NVMe and 8GB of memory.

I am looking to upgrade the memory to 32GB if I can set it up the way I want to.

It can fit an NVMe and a SATA drive.

 

I installed proxmox 8, using ext4.

Presently I am testing it out.

I am confused, as I wish to install the OS using ZFS. It says it needs two disks for that, which is a problem.

This is so I can get the benefit of ZFS snapshots.

 

The 256GB NVMe will be going elsewhere, and in its place I will put:

1TB SATA SSD

16GB Optane.

I want the OS and virtual machines on the SATA SSD, using ZFS.

The 16GB Optane I will add as a persistent L2ARC, a type of read cache in ZFS.

 

Those Optane drives can actually work well as a cache drive for an OS, but I suggest staying well clear of the locked-down Intel caching method.

 

The random 4K read speeds of those little drives are much better than any NAND-based SSD I am aware of, and random 4K reads are typical of most OS I/O as I understand it.

 

That setup, as I see it, would be optimal for an OS and virtual machines.

ZFS cache hit rates are impressive: 95%+.

 

If it works well, I may get a 32GB or 64GB Optane.

 

So how do I install the Proxmox OS on ZFS with a single drive?

I do not know how.

 

Useful info appreciated.

 

 

Main Machine: CPU: 5800X3D  RAM: 32GB  GPU: RTX 3080  M/B: ASUS B550-E Storage: 2 x 256GB NVME boot, 1/2 TB NVME OS: Windows 10, Ubuntu 22.04

Server1:  M92p micro  CPU: i5-3470T  RAM: 8GB OS: Proxmox  Virtual Machines: Opnsense router, LXC containers: netboot server, download manager

Server2: CPU: 3600X  RAM: 64GB M/B MSI B450 Tomahawk  OS: Proxmox  Virtual machines: Windows 10, 3 x Ubuntu Linux, Truenas scale (16TB logical storage)


25 minutes ago, ianm_ozzy said:

I am confused, as I wish to install the OS using ZFS. It says it needs two disks for that, which is a problem.

If you select "ZFS RAID0", you should be able to install with only one disk selected.

 

(yes, I just installed Proxmox inside of Proxmox to prove that to myself...)

 

9 minutes ago, ianm_ozzy said:

The 16GB Optane I will add as a persistent L2ARC, a type of read cache in ZFS.

This is almost laughably pointless. You're much better off adding 16GB more RAM and letting the actual ARC do its thing. A SATA SSD by itself will be more than enough for several VMs' worth of use (assuming you're not hammering it with writes or something weird like that).

Main System (Byarlant): Ryzen 7 5800X | Asus B550-Creator ProArt | EK 240mm Basic AIO | 16GB G.Skill DDR4 3200MT/s CAS-14 | XFX Speedster SWFT 210 RX 6600 | Samsung 990 PRO 2TB / Samsung 960 PRO 512GB / 4× Crucial MX500 2TB (RAID-0) | Corsair RM750X | a 10G NIC (pending) | Inateck USB 3.0 Card | Hyte Y60 Case | Dell U3415W Monitor | Keychron K4 Brown (white backlight)

 

Laptop (Narrative): Lenovo Flex 5 81X20005US | Ryzen 5 4500U | 16GB RAM (soldered) | Vega 6 Graphics | SKHynix P31 1TB NVMe SSD | Intel AX200 Wifi (all-around awesome machine)

 

Proxmox Server (Veda): Ryzen 7 3800XT | AsRock Rack X470D4U | Corsair H80i v2 | 64GB Micron DDR4 ECC 3200MT/s | 4× WD 10TB / 4× Seagate 14TB Exos / 8× WD 12TB (custom external SAS enclosure) / 2× Samsung PM963a 960GB SSD | Seasonic Prime Fanless 500W | Intel X550-T2 10G NIC | LSI 9300-8i HBA | Adaptec 82885T SAS Expander | Fractal Design Node 804 Case (side panels swapped to show off drives) | VMs: TrueNAS Scale; Ubuntu Server (PiHole/PiVPN/NGINX?); Windows 10 Pro; Ubuntu Server (Apache/MySQL)

 

Proxmox Server (La Vie en Rose)GMKtec Mini PC | Ryzen 7 5700U | 32GB RAM (SO-DIMM) | Vega 8 Graphics | Lexar 1TB 610 Pro SSD | Dual Realtek 8125 2.5G NICs | VMs: Ubuntu Server (PiHole)


Media Center/Video Capture (Jesta Cannon): Ryzen 5 1600X | ASRock B450M Pro4 R2.0 | Noctua NH-L12S | 16GB Crucial DDR4 3200MT/s CAS-22 | EVGA GTX750Ti SC | UMIS NVMe SSD 256GB / TEAMGROUP MS30 1TB | Corsair CX450M | Viewcast Osprey 260e Video Capture | Mellanox ConnectX-2 10G NIC | LG UH12NS30 BD-ROM | Silverstone Sugo SG-11 Case | Sony XR65A80K

 

Camera: Sony ɑ7II w/ Meike Grip | Sony SEL24240 | Samyang 35mm ƒ/2.8 | Sony SEL50F18F | Sony SEL2870 (kit lens) | PNY Elite Perfomance 512GB SDXC card

 

Network:

Spoiler
                           ┌─────────────── Office/Rack ─────────────────────────────────────────────────────────────────────┐
Google Fiber Webpass ────── UniFi Security Gateway ─── UniFi Switch 8-60W ─┬─ UniFi Flex XG ═╦═ Veda (Intel X550.1)
(500Mbps↑/500Mbps↓)                             UniFi CloudKey Gen2 (PoE) ─┴─ Veda (IPMI)    ╠═ Veda-NAS (Intel X550.2)
╔════════════════════════════════════════════════════════════════════════════════════════════╩═ Narrative (Asus USB 2½G NIC)
║ ┌── Closet ───┐    ┌─────────────── Bedroom ─────────────────────────────────────────────┐
╚═ UniFi Flex XG ═╦╤═ UniFi Flex XG ═╦═ Byarlant
   (PoE)          ║│                 ╠═ Narrative (Cable Matters 2½G NIC w/ USB-PD)
   Kitchen Jack ══╝│                 ╚═ Jesta Cannon*
   (Testing)       │        ┌──────── Media Center ────────────────────────────────────────┐
                   └──────── UniFi Switch 8 ─────────┬─ UniFi Access Point nanoHD (PoE)
Notes:                                               ├─ Sony Playstation 4 
─── is Gigabit / ═══ is Multi-Gigabit                ├─ Pioneer VSX-S520
* = cable passed to Bedroom from Media Center        ├─ Sony XR65A80K (Google TV)
** = cable passed from Media Center to Bedroom       └─ Work Laptop** (Startech USB-PD Dock)

 


42 minutes ago, ianm_ozzy said:

This is so I can get the benefit of zfs snapshots.

 

Just saying: for VMs you can snapshot with qcow2 or LVM; you don't need ZFS for snapshots of VMs in Proxmox.
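For reference, Proxmox exposes those VM snapshots through its `qm` tool regardless of whether the disks sit on qcow2, LVM-thin or ZFS, as long as the storage is snapshot-capable. A rough sketch (the VM ID `100` and the snapshot name are made-up examples):

```shell
# Take a named snapshot of VM 100
qm snapshot 100 pre-upgrade --description "before kernel upgrade"

# List snapshots, then roll back to one
qm listsnapshot 100
qm rollback 100 pre-upgrade
```

These commands need to run on the Proxmox host itself (or via the web UI's Snapshots tab, which drives the same machinery).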

 

43 minutes ago, ianm_ozzy said:

I am confused, as I wish to install the OS using ZFS. It says it needs two disks for that, which is a problem.

 

Set it to RAID0 and it will install with one drive. You can also add a mirror later if you want.
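To expand on the mirror point: ZFS lets you attach a second disk to a single-disk vdev later, converting it into a mirror in place. A sketch with placeholder device paths (on a Proxmox root pool the existing member is usually a partition, so check `zpool status rpool` for the real name first):

```shell
# Attach a new disk to the existing single-disk vdev, turning it into a mirror
zpool attach rpool /dev/sda3 /dev/sdb3

# Watch the resilver progress until the new disk is fully synced
zpool status rpool
```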

 

43 minutes ago, ianm_ozzy said:

 

The 256GB NVMe will be going elsewhere, and in its place I will put:

1TB SATA SSD

16GB Optane.

I want the OS and virtual machines on the SATA SSD, using ZFS.

The 16GB Optane I will add as a persistent L2ARC, a type of read cache in ZFS.

 

I'd keep the 256GB SSD if possible; it will be better for most uses here. The L2ARC doesn't help that much when it's tiny.

 

 


1 hour ago, AbydosOne said:

If you select "ZFS RAID0", you should be able to install with only one disk selected.

 

(yes, I just installed Proxmox inside of Proxmox to prove that to myself...)

 

This is almost laughably pointless. You're much better off adding 16GB more RAM and letting the actual ARC do its thing. A SATA SSD by itself will be more than enough for several VMs' worth of use (assuming you're not hammering it with writes or something weird like that).

 

 

How exactly do I add 16GB more RAM when it can only take 32GB? Perhaps you could do a little more research before replying in the future.

I am unsure whether even 32GB will be enough.

 

Even if I could add more, those Optane drives are super cheap and I have a few already.

 

I have had problems with Proxmox freezing with ZFS on another machine, as I am using the non-production updates. It is just for a home setup, so I am not spending cash on it.

 

I had an inkling it was memory related. I reduced the maximum ZFS ARC memory, and the problem went away.

I think Proxmox uses up to half the memory for ARC by default.
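For what it's worth, on OpenZFS the ARC ceiling can also be changed at runtime, not only via modprobe options; the parameter takes a value in bytes. A sketch, using a 4 GiB cap as an example:

```shell
# Cap the ARC at 4 GiB immediately (does not persist across reboots
# unless also set in /etc/modprobe.d/zfs.conf)
echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

# Confirm the new limit
cat /sys/module/zfs/parameters/zfs_arc_max
```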

 

I expect it will be hammered with lots of reads. As for writes, I am looking at a quality SATA drive with DRAM cache to help with that.

 

 

With the Dell machine, maybe reduce ARC to 4GB and use the Optane drive as well.

 

 

Thanks for the info on the ZFS RAID0 option.

 

 

 

 

 

 

 


1 hour ago, Electronics Wizardy said:

Just saying: for VMs you can snapshot with qcow2 or LVM; you don't need ZFS for snapshots of VMs in Proxmox.

 

Set it to RAID0 and it will install with one drive. You can also add a mirror later if you want.

 

I'd keep the 256GB SSD if possible; it will be better for most uses here. The L2ARC doesn't help that much when it's tiny.

 

 

 

I was not aware of the RAID 0 option, or that you could just use one drive.

As for using the 'tiny' Optane drive, what matters is the cache hit rate.

If it is not good enough, then maybe I'll get a bigger Optane as indicated.

I expect it will be.

On my present Proxmox server, with ARC limited to 12GB for stability reasons, the cache hit rate is presently 91%.

It is OK but not great.

With the same OS and virtual machines on the Dell OptiPlex (except for TrueNAS), I will have the 16GB Optane with maybe 4GB of memory for ARC.

If the overall hit rate goes above 95%, I will be happy.

If not, then I'll get a 32GB Optane.
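The hit rates being quoted here can be read straight out of the kernel's ZFS statistics; a quick sketch (the kstat path is the standard OpenZFS location on Linux):

```shell
# Overall ARC hit rate since boot, from the hits/misses counters
awk '/^hits / { h = $3 } /^misses / { m = $3 }
     END { printf "ARC hit rate: %.1f%%\n", 100 * h / (h + m) }' \
    /proc/spl/kstat/zfs/arcstats
```

The `arcstat` tool shipped with OpenZFS prints the same counters as a rolling per-second view.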

 

 

 


1 minute ago, ianm_ozzy said:

As for using the 'tiny' Optane drive, what matters is the cache hit rate.

Why not use the bigger 256GB NAND drive or another bigger SSD?

 

The big feature of Optane is fast sync writes, and that won't be used in an L2ARC. Optane really makes much more sense as a SLOG drive. A 256GB drive will give you a much better hit ratio.

 

But with an SSD already, don't bother with an L2ARC; you're probably just wasting RAM, as the SSD being cached is plenty fast already.

 

 


17 minutes ago, ianm_ozzy said:

How exactly do I add 16GB more RAM when it can only take 32GB? Perhaps you could do a little more research before replying in the future.

 

The 32GB DDR4 SODIMMs likely came out after the system and the specs weren't updated. I'd guess they will work and give you 64GB of RAM.


2 minutes ago, Electronics Wizardy said:

Why not use the bigger 256GB NAND drive or another bigger SSD?

 

The big feature of Optane is fast sync writes, and that won't be used in an L2ARC. Optane really makes much more sense as a SLOG drive. A 256GB drive will give you a much better hit ratio.

 

But with an SSD already, don't bother with an L2ARC; you're probably just wasting RAM, as the SSD being cached is plenty fast already.

 

 

I already stated that I need the 256GB SSD elsewhere.

It is not new and probably past warranty. I am not trusting it in any server.

 

A quality SATA SSD seems the way to go, for TBW values and DRAM. Possibly the 1TB MX500.

An NVMe at a similar price will typically have less endurance and no DRAM.

 

If the Optane (which I have already) helps, I will be using it.

 


6 minutes ago, ianm_ozzy said:

If the Optane (which I have already) helps, I will be using it.

 

Why not use the Optane as a boot drive, and keep the data on a separate SSD? The caching really won't help here.

 

 


6 minutes ago, Electronics Wizardy said:

The 32GB DDR4 SODIMMs likely came out after the system and the specs weren't updated. I'd guess they will work and give you 64GB of RAM.

 

Are you seriously suggesting I buy 64GB when the specs clearly state it takes only 32GB?

 

It is for a home server, so I am not spending much anyway.

Also, the little Optane drive will improve performance.

By how much, I am yet to find out.

The measurable part is the cache hit rate.

How responsive it feels is the important part.

 

 

 

 

 

 

 

 


2 minutes ago, ianm_ozzy said:

Are you seriously suggesting I buy 64GB when the specs clearly state it takes only 32GB?

 

Well, you said 32GB might not be enough RAM, and this would likely be much cheaper than buying a new system.

 

It is very common for systems to support more RAM than rated when larger DIMMs come out after the product is released.

 

3 minutes ago, ianm_ozzy said:

It is for a home server, so I am not spending much anyway.

Also, the little Optane drive will improve performance.

By how much, I am yet to find out.

The measurable part is the cache hit rate.

How responsive it feels is the important part.

 

I think the Optane will be much better used as a boot drive here. The cache hit ratio doesn't tell you that much, as the SSD below it is still pretty fast; it's not like you're caching an HDD that's much slower.


1 minute ago, Electronics Wizardy said:

Why not use the Optane as a boot drive, and keep the data on a separate SSD? The caching really won't help here.

 

 

 

How exactly do you know that?

 

Have you even done testing?

 

I am going to use an old 256GB SATA drive in the meantime to test it out with low-spec VMs/containers.

 

 

 

 

 

 


Just now, ianm_ozzy said:

 

How exactly do you know that?

 

Have you even done testing?

 

I am going to use an old 256GB SATA drive in the meantime to test it out with low-spec VMs/containers.

 

 

 

 

 

 

I've done a lot of testing and use of Proxmox/ZFS and Optane, and L2ARC drives typically don't help much with SSDs as the base storage, and the tiny size won't store much.

 

It's pretty convenient to have a separate boot drive in my view, so that's what I'd use that slot for.


1 minute ago, Electronics Wizardy said:

Well, you said 32GB might not be enough RAM, and this would likely be much cheaper than buying a new system.

 

It is very common for systems to support more RAM than rated when larger DIMMs come out after the product is released.

 

I think the Optane will be much better used as a boot drive here. The cache hit ratio doesn't tell you that much, as the SSD below it is still pretty fast; it's not like you're caching an HDD that's much slower.

 

It is what you think, with no data to back it up.

 

I could easily use the Optane as a boot drive if I wanted to.

 

I expect performance to improve when using it as a cache; that is exactly what it is designed for.

 

 


10 minutes ago, ianm_ozzy said:

 

It is what you think, with no data to back it up.

 

I could easily use the Optane as a boot drive if I wanted to.

 

I expect performance to improve when using it as a cache; that is exactly what it is designed for.

 

 

are you here to ask for advice, or are you here to argue that you know better than the people who've spent time running setups like this, with nothing to base that on for yourself either?

 

caching isn't some magic bullet; if it was, SSHDs would have had an actual install base outside of prebuilts.

 

you *might* see *some* real-world benefit if you are regularly accessing the same data, that data is less than 16GB, and you have enough other stuff going on that the main storage is actually quite busy.


1 hour ago, manikyath said:

are you here to ask for advice, or are you here to argue that you know better than the people who've spent time running setups like this, with nothing to base that on for yourself either?

 

caching isn't some magic bullet; if it was, SSHDs would have had an actual install base outside of prebuilts.

 

you *might* see *some* real-world benefit if you are regularly accessing the same data, that data is less than 16GB, and you have enough other stuff going on that the main storage is actually quite busy.

I received the advice I needed.

There was some interesting 'advice' suggesting I purchase more memory than the machine can use.

 

Now it is set up. I will probably get 32GB, not 64GB, of memory to upgrade it.

 

It is set up and installed: a 240GB SATA SSD and the Optane drive as L2ARC. I have these already.

 

The only elusive thing now is making the L2ARC persistent. If you know how to do that, I would appreciate it.

I used the command: zpool add rpool cache nvme0n1

It added it just fine.

 

 

When it is set up and running with some virtual machines, I will determine what storage I need to buy, and whether to use the Optane as cache or not.

Maybe after a week or so. It will depend on the cache hit rate.

 

 

 


1 hour ago, ianm_ozzy said:

I received the advice I needed.

There was some interesting 'advice' suggesting I purchase more memory than the machine can use.

 

Now it is set up. I will probably get 32GB, not 64GB, of memory to upgrade it.

 

It is set up and installed: a 240GB SATA SSD and the Optane drive as L2ARC. I have these already.

 

The only elusive thing now is making the L2ARC persistent. If you know how to do that, I would appreciate it.

I used the command: zpool add rpool cache nvme0n1

It added it just fine.

 

 

When it is set up and running with some virtual machines, I will determine what storage I need to buy, and whether to use the Optane as cache or not.

Maybe after a week or so. It will depend on the cache hit rate.

 

 

 

As others said… don't run L2ARC in this setup. It won't help anything, and if anything it will likely hurt. L2ARC requires more RAM to be used, which takes away from actual RAM-based ARC, and for this use case you don't need to cache anything.
 

Actually…. What is the use case? Why do you think you need L2arc? 
 

I run a ZFS array as my NAS, I have 10x4TB drives, standard RAM ARC, no L2arc, and I can read and write (with 0% hits on ARC, I can even outright disable arc), and I do 5gigabit/sec… that’s about 420 MB/s, that’s practically the speed of a sata SSD, and it’s almost 4x as fast as gigabit networking anyways. And that’s on mechanical drives, with no ARC at all. Yes, this array has horrible latency compared to an SSD, and its IOPS are also pretty horrible, but the point here is, ZFS is impressive, and for a home server use case, you don’t need L2arc, and you especially don’t need it if the VDEV is already flash based. 
 

Something to remember, unless you’re running faster than gigabit LAN, your networking will be your bottleneck anyways.
 

Get a SATA or nvme SSD for Proxmox boot and for VM’s to run off of. No need for L2arc to cache boot media.

Rig: i7 13700k - - Asus Z790-P Wifi - - RTX 4080 - - 4x16GB 6000MHz - - Samsung 990 Pro 2TB NVMe Boot + Main Programs - - Assorted SATA SSD's for Photo Work - - Corsair RM850x - - Sound BlasterX EA-5 - - Corsair XC8 JTC Edition - - Corsair GPU Full Cover GPU Block - - XT45 X-Flow 420 + UT60 280 rads - - EK XRES RGB PWM - - Fractal Define S2 - - Acer Predator X34 -- Logitech G502 - - Logitech G710+ - - Logitech Z5500 - - LTT Deskpad

 

Headphones/amp/dac: Schiit Lyr 3 - - Fostex TR-X00 - - Sennheiser HD 6xx

 

Homelab/ Media Server: Proxmox VE host - - 512 NVMe Samsung 980 RAID Z1 for VM's/Proxmox boot - - Xeon e5 2660 V4- - Supermicro X10SRF-i - - 128 GB ECC 2133 - - 10x4 TB WD Red RAID Z2 - - Corsair 750D - - Corsair RM650i - - Dell H310 6Gbps SAS HBA - - Intel RES2SC240 SAS Expander - - TreuNAS + many other VM’s

 

iPhone 14 Pro - 2018 MacBook Air


47 minutes ago, LIGISTX said:

As others said… don't run L2ARC in this setup. It won't help anything, and if anything it will likely hurt. L2ARC requires more RAM to be used, which takes away from actual RAM-based ARC, and for this use case you don't need to cache anything.
 

Actually…. What is the use case? Why do you think you need L2arc? 
 

I run a ZFS array as my NAS, I have 10x4TB drives, standard RAM ARC, no L2arc, and I can read and write (with 0% hits on ARC, I can even outright disable arc), and I do 5gigabit/sec… that’s about 420 MB/s, that’s practically the speed of a sata SSD, and it’s almost 4x as fast as gigabit networking anyways. And that’s on mechanical drives, with no ARC at all. Yes, this array has horrible latency compared to an SSD, and its IOPS are also pretty horrible, but the point here is, ZFS is impressive, and for a home server use case, you don’t need L2arc, and you especially don’t need it if the VDEV is already flash based. 
 

Something to remember, unless you’re running faster than gigabit LAN, your networking will be your bottleneck anyways.
 

Get a SATA or nvme SSD for Proxmox boot and for VM’s to run off of. No need for L2arc to cache boot media.

 

The initial query has been answered.

It just goes on and on and on and on.

Your use is quite different from mine.

 

The l2arc will be for the OS  & virtual machines, not for a NAS.

I have truenas scale as a virtual machine on another computer.  No l2arc there as it is not needed for the NAS.

 

Lots of random 4K reads are needed for OSes, I understand. Guess what: the Optane drive is really good at that for the price!

I could remove the WiFi card and maybe replace it with a 2.5Gb card, it seems.

At this point it seems I may not need it.

 

 

I am buying some memory anyway, and testing it out.

It all hinges on the cache hit rate with the virtual machines I transfer there.

Presently it is 91% on my present Proxmox machine, for virtual machines, with memory caching limited to 12GB.

If it is better with the Optane drive, I will be using it: 16GB Optane L2ARC plus around 4GB of ARC RAM, I think.

 

Once tested with the storage I already have, I will decide what storage to buy and use.

 

 

bye

 

 


6 hours ago, ianm_ozzy said:

 

The initial query has been answered.

It just goes on and on and on and on.

Your use is quite different from mine.

 

The l2arc will be for the OS  & virtual machines, not for a NAS.

I have truenas scale as a virtual machine on another computer.  No l2arc there as it is not needed for the NAS.

 

Lots of random 4K reads are needed for OSes, I understand. Guess what: the Optane drive is really good at that for the price!

I could remove the WiFi card and maybe replace it with a 2.5Gb card, it seems.

At this point it seems I may not need it.

 

 

I am buying some memory anyway, and testing it out.

It all hinges on the cache hit rate with the virtual machines I transfer there.

Presently it is 91% on my present Proxmox machine, for virtual machines, with memory caching limited to 12GB.

If it is better with the Optane drive, I will be using it: 16GB Optane L2ARC plus around 4GB of ARC RAM, I think.

 

Once tested with the storage I already have, I will decide what storage to buy and use.

 

 

bye

 

 

But what are your VMs doing? Why do you think you need to cache your boot media?

 

I have 12 VM’s and at least as many docker containers running on my Proxmox host, boot drive is NVMe SSD, there is no need for a cache. 
 

We are just trying to provide you insight on the benefits and drawbacks of ZFS. The number 1 thing with ZFS is don’t over build your machine simply because you have the hardware and think it will help. Less complexity is better… especially when more complexity will not help. 
 

But, ¯\_(ツ)_/¯. 
 

We are not trying to be rude, or trying to tell you what to do. Just providing feedback from having worked with and been around ZFS for a long time. What you don't realize is @Electronics Wizardy has A LOT of ZFS experience… and he's done A LOT of testing on his own. So when he provided advice, and you simply said:

10 hours ago, ianm_ozzy said:

It is what you think, with no data to back it up.

You didn’t realize you were the naive one talking to an expert. 
 

Again, you can do whatever you want, but trust me, simple is better. Don't use hardware you think helps without thoroughly understanding why it helps. That's how you get into trouble with ZFS. Less complexity is good, is my stance on the subject, and caching flash with flash doesn't make much sense. If you truly believe you need that kind of OS performance, just run a 128GB or 256GB Optane boot drive.


I have actually utilized three Optane drives in my TrueNAS setup. To clarify: one Optane is used as the boot drive, installed into the slot on the motherboard, while the other two, installed via riser cards, are assigned to a regular striped (RAID 0) pool and do something special like caching for Docker apps.

 

[attached screenshots of the TrueNAS setup]

 

So here are my conclusions:

  1. TrueNAS can indeed be installed on a single drive, including Optane, and your issue may be due to at least one spare drive being required to create a pool. Nothing special; most pre-built solutions would also require at least one drive to be installed before initialization.
  2. TrueNAS on Optane behaves identically to TrueNAS on regular NAND SSDs (a Lenovo SL700 128GB in this case).
  3. I attempted to utilize the second Optane as a SLOG attached to the main pool, but its improvement on performance was negligible, and nearly no throughput activity showed up in the Reports section. This may indicate that with just one user, neither L2ARC nor SLOG would make any difference in performance. They are actually intended for boosting performance of concurrent reads and writes, respectively. This documentation provides more information about ZFS as well as these caching technologies.

The following shows the activity of an Optane configured as SLOG over 5 days, during which I was copying terabytes of data through Samba shares.

 

[attached screenshot]

 

The following shows the activity of the same Optane in the most recent 7 days, after re-assignment to a regular pool and serving Docker apps.

 

[attached screenshot]


Oh, I did not realize this thread is about a Proxmox installation... Now, IMO Proxmox is just like any other specialized Linux system derived from Debian (e.g. TrueNAS Scale) and shares the same OpenZFS package as TrueNAS, so these experiences should also apply to Proxmox. In any case, you may try all of these possibilities, spend some bucks, learn some lessons, then consider which solution fits you better.☺️

 

On 8/11/2024 at 12:05 PM, ianm_ozzy said:

Are you seriously suggesting I buy 64GB when the specs clearly state it takes only 32GB?

Now that unbuffered DDR4 modules with 32GB per module are readily available, and the processor supports up to 128GB, this platform should support up to a 64GB kit.

 

On 8/11/2024 at 11:57 AM, ianm_ozzy said:

An NVMe at a similar price will typically have less endurance and no DRAM.

If you are worried about this, one thing you should consider is backing up data regularly. No SSD can guarantee durability and/or data integrity, even the pricier ones with DRAM caches. The most recent example was the late defects in Samsung NAND chips manufactured in 2020~2021, leading to increasing readings of the SMART parameter 0E (Integrity Checksum Error) and errors when reading from random chunks. My SSD, an ADATA SX8200 Pro (2020 ver.), happened to utilize Samsung chips, and problems with 0E arose last year. Hardware can be replaced over and over again, but data cannot be retrieved once lost.


So anyway.

All the Proxmox VMs/containers are backed up.

Presently I have an M92p micro with proxmox and some containers/VMs.  It will probably be promoted to a proxmox backup machine.

 

 

I went and bought  32GB RAM + 1TB sata SSD. It is now in the machine and apparently working well.

 

As for the RAM, 32GB is the stated limit I found for the machine. It will be adequate. Even if it could 'use' more, installing an out-of-spec RAM size seems to be asking for trouble.

 

The SSD is the Crucial MX500 1TB. It has a DRAM cache, unlike most NVMe drives at a similar price. It also has decent endurance and an SLC cache of around 36GB.

It will be at most 50% full.

I expect little trouble from it.

 

 

So I installed Proxmox with ZFS (RAID 0, single disk) on the 1TB SSD.

 

With the 16GB Optane I already have, I will be trying it out as an L2ARC cache.

 

So after some looking around, I made a few additions.

 

In the Proxmox shell:

 

zpool add rpool cache nvme0n1

 

nano /etc/modprobe.d/zfs.conf

 

The contents are:

options zfs zfs_arc_max=4294967296
options zfs zfs_arc_min=1073741824
options zfs l2arc_write_boost=8388608
options zfs l2arc_write_max=50331648
options zfs l2arc_rebuild_enabled=1
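As a sanity check, those byte values decode like this:

```shell
# decode the zfs.conf values into human-readable sizes
echo "zfs_arc_max       = $((4 * 1024 * 1024 * 1024)) bytes = 4 GiB"
echo "zfs_arc_min       = $((1 * 1024 * 1024 * 1024)) bytes = 1 GiB"
echo "l2arc_write_boost = $((8 * 1024 * 1024)) bytes = 8 MiB"
echo "l2arc_write_max   = $((48 * 1024 * 1024)) bytes = 48 MiB"
```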

 

I saved it and used commands:

 

update-initramfs -u -k all

 

reboot
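After the reboot, it is worth confirming the options actually took effect; a guarded sketch (the sysfs paths exist on any host with the zfs module loaded):

```shell
# read back the live module parameters set in /etc/modprobe.d/zfs.conf
for p in zfs_arc_max zfs_arc_min l2arc_write_max l2arc_rebuild_enabled; do
  f="/sys/module/zfs/parameters/$p"
  if [ -r "$f" ]; then
    echo "$p = $(cat "$f")"
  else
    echo "$p: zfs module not loaded on this host"
  fi
done
```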

 

 

 

 

I will see how stable it is and how the cache hit rate works out.

 

The tunable l2arc_write_max has a tiny default of 8 megabytes, which caps how much data can be written from the ARC to the L2ARC device per feed interval; the default interval is 1 second.

I upped it to a conservative 48MB. The 4K random write speed of the Optane is about 80MB/s.

 

I guess that tiny 8MB default is why not many people have had much use for a ZFS L2ARC.

 

So I am not spending more time digging into the guts of ZFS, and will see how it goes.

 

On there will go various containers/VMs - except TrueNAS, lancache and the router (OPNsense).

 

Someone mentioned a 256GB Optane drive - a combined 16GB Optane and 256GB NVMe, I think.

 

That actually could be useful: the 16GB as the cache for the 1TB drive, and the 256GB as lancache storage.

It will be hit hard with I/O. If it fails, it will not be a great loss.

 

Combined with a 2.5Gb NIC, I could put all VMs/containers on it, and use my main server just as a NAS (TrueNAS Scale).

 

There you go

bye


1 hour ago, ianm_ozzy said:

So installed proxmox with zfs RAID 0 on the 1TB SSD.

With the 16GB optane I have already, will be trying it out as a  cache. [...]

I’d love to hear how you plan to test performance from having the L2ARC vs not having it. 
 

But again, ¯\_(ツ)_/¯.

Rig: i7 13700k - - Asus Z790-P Wifi - - RTX 4080 - - 4x16GB 6000MHz - - Samsung 990 Pro 2TB NVMe Boot + Main Programs - - Assorted SATA SSD's for Photo Work - - Corsair RM850x - - Sound BlasterX EA-5 - - Corsair XC8 JTC Edition - - Corsair GPU Full Cover GPU Block - - XT45 X-Flow 420 + UT60 280 rads - - EK XRES RGB PWM - - Fractal Define S2 - - Acer Predator X34 -- Logitech G502 - - Logitech G710+ - - Logitech Z5500 - - LTT Deskpad

 

Headphones/amp/dac: Schiit Lyr 3 - - Fostex TR-X00 - - Sennheiser HD 6xx

 

Homelab/ Media Server: Proxmox VE host - - 512 NVMe Samsung 980 RAID Z1 for VM's/Proxmox boot - - Xeon e5 2660 V4- - Supermicro X10SRF-i - - 128 GB ECC 2133 - - 10x4 TB WD Red RAID Z2 - - Corsair 750D - - Corsair RM650i - - Dell H310 6Gbps SAS HBA - - Intel RES2SC240 SAS Expander - - TreuNAS + many other VM’s

 

iPhone 14 Pro - 2018 MacBook Air


2 hours ago, LIGISTX said:

I’d love to hear how you plan to test performance from having the L2ARC vs not having it. 
 

But again, ¯\_(ツ)_/¯.

I am just using it to see if it is useful, as I have the 16GB Optane already.

 

Presently:

 

ARC total accesses:                                                 2.9M
        Total hits:                                    87.3 %

 

and

 

L2ARC breakdown:                                                  361.9k
        Hit ratio:                                     40.6 %     146.9k
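These figures come from arc_summary; to pull just the hit-rate lines out periodically, something like this works (guarded, since arc_summary only exists on ZFS hosts):

```shell
# grep the hit/access lines out of the full arc_summary report
if command -v arc_summary >/dev/null 2>&1; then
  arc_summary | grep -iE 'hit|accesses'
else
  echo "arc_summary not available on this host"
fi
```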

 

I expect it to get a bit worse, then maybe get better.

 

I just  uploaded a bunch of ISO files &  virtual machines from the backup.

 

I expect the results are skewed for now.

 

It seems it is all about the interplay between the MFU (most frequently used) and MRU (most recently used) cache data.

 

Right now I am primarily concerned about the stability of the machine.

In a few weeks I will check the results again.



9 hours ago, ianm_ozzy said:

I am just using it to see if it useful, as I have the 16Gb optane already.

L2ARC breakdown:                                                  361.9k
        Hit ratio:                                     40.6 %     146.9k

[...]

Checking hit rate doesn’t tell you anything about improved or degraded performance… there is a penalty both in terms of RAM use and in terms of latency. You’d have to do actual benchmarking to tell if you are gaining any performance. Remember… you are not caching spinning rust with Optane, you’re caching an SSD with Optane.
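On the RAM cost specifically: each L2ARC record keeps a header in ARC, roughly 70-90 bytes per record depending on OpenZFS version (80 bytes is an assumption here). A quick estimate for a 16GB device:

```shell
# rough ARC RAM overhead for L2ARC headers (80 bytes/record is an assumption)
l2_bytes=$((16 * 1024 * 1024 * 1024))   # 16 GiB L2ARC device
recsize=$((128 * 1024))                  # 128K default recordsize
records=$((l2_bytes / recsize))
echo "records: $records, header RAM: $((records * 80)) bytes"
```

With the 128K default that is only ~10 MiB, but VM zvols with small volblocksize (e.g. 4K-16K) multiply the record count, and the header cost, by 8-32x.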

 

Typically when you add a cache, you want the cache media to be at least an order of magnitude more performant. Optane is basically an SSD on steroids, but it’s not 10x the performance. I suggest doing some actual testing, not just looking at ARC hit rate. Think about it this way: in theory you could use a hard drive as an L2ARC device, and the hit rate could be very high, but without doing any testing we all know this would make things slower in every situation. What we, and you, don’t know without testing is whether that Optane is actually helping anything in your use case.
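For actual numbers, something like fio run against a dataset on the pool, repeated so the second pass can hit cache, would show whether the L2ARC helps. A sketch (fio must be installed; the directory is an assumption, point it at a dataset on your pool):

```shell
# 4K random reads against a test dataset; compare a cold run vs a repeat run
testdir=/rpool/fio-test     # assumption: adjust to a dataset on your pool
if command -v fio >/dev/null 2>&1 && [ -d "$testdir" ]; then
  fio --name=randread --directory="$testdir" --rw=randread --bs=4k \
      --size=2g --iodepth=16 --runtime=60 --time_based --group_reporting
else
  echo "fio not installed or $testdir missing - apt install fio and create the dataset first"
fi
```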

