
Guide to choosing the best drive for Virtualization

I've seen many posts on the forums asking what type of drive is best for virtualization. If the wrong drive is used, virtual machines can perform so poorly that they are unusable. Believe it or not, certain kinds of drives can significantly improve performance in virtual machines. The protocol, such as AHCI, SCSI, or NVMe, also greatly affects performance. The AHCI protocol used by SATA drives can only handle so many commands at a time, while SAS and NVMe devices are designed for demanding workloads that access the drive frequently. Any hard disk drive slower than 7200 RPM should be avoided at all costs. The following list shows which drive should be used depending on the number of virtual machines running at the same time. I wouldn't recommend a 10000 RPM or 15000 RPM hard drive unless you already have a server that uses them; these drives run quite hot and are costly.

 

7200 RPM SATA HDD (AHCI): About 200 MB/s, 1 to 2 Virtual Machines at a time

7200 RPM SAS HDD (SCSI): About 230 MB/s, 3 to 4 Virtual Machines at a time

10000 RPM SAS HDD (SCSI): About 270 MB/s, up to 5 Virtual Machines at a time

15000 RPM SAS HDD (SCSI): About 350 MB/s, 6 to 7 Virtual Machines at a time

Consumer-Grade SATA SSD (AHCI): About 550 MB/s, 4 to 6 Virtual Machines at a time

SAS SSD (SCSI): About 1.1 GB/s, 8 to 16 Virtual Machines at a time

PCIe x4 SSD (NVMe): About 2.5 GB/s, 16 to 32 Virtual Machines at a time

PCIe x8 SSD (NVMe): About 5.0 GB/s, 32+ Virtual Machines at a time
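For quick reference, the table above can be expressed as a small lookup, using the rough numbers from this guide (ballpark figures, not benchmarks):

```python
# Rough throughput and concurrent-VM guidance transcribed from the table
# above; ballpark figures from this guide, not measured benchmarks.
DRIVE_GUIDE = {
    "7200rpm_sata_hdd":  {"protocol": "AHCI", "mb_s": 200,  "max_vms": 2},
    "7200rpm_sas_hdd":   {"protocol": "SCSI", "mb_s": 230,  "max_vms": 4},
    "10k_sas_hdd":       {"protocol": "SCSI", "mb_s": 270,  "max_vms": 5},
    "15k_sas_hdd":       {"protocol": "SCSI", "mb_s": 350,  "max_vms": 7},
    "consumer_sata_ssd": {"protocol": "AHCI", "mb_s": 550,  "max_vms": 6},
    "sas_ssd":           {"protocol": "SCSI", "mb_s": 1100, "max_vms": 16},
    "pcie_x4_nvme_ssd":  {"protocol": "NVMe", "mb_s": 2500, "max_vms": 32},
    # listed as "32+" above, so treat the ceiling as unbounded
    "pcie_x8_nvme_ssd":  {"protocol": "NVMe", "mb_s": 5000, "max_vms": float("inf")},
}

def drives_for(vm_count):
    """Drive classes whose suggested VM ceiling covers vm_count."""
    return [name for name, d in DRIVE_GUIDE.items() if d["max_vms"] >= vm_count]
```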

 


I'd like to point out RAID makes a bit of a difference here too. A server running multiple VMs will probably be using a lot more than 1 spindle. 

Gaming build:

CPU: i7-7700k (5.0ghz, 1.312v)

GPU(s): Asus Strix 1080ti OC (~2063mhz)

Memory: 32GB (4x8) DDR4 G.Skill TridentZ RGB 3000mhz

Motherboard: Asus Prime z270-AR

PSU: Seasonic Prime Titanium 850W

Cooler: Custom water loop (420mm rad + 360mm rad)

Case: Be quiet! Dark base pro 900 (silver)
Primary storage: Samsung 960 evo m.2 SSD (500gb)

Secondary storage: Samsung 850 evo SSD (250gb)

 

Server build:

OS: Ubuntu server 16.04 LTS (though will probably upgrade to 17.04 for better ryzen support)

CPU: Ryzen R7 1700x

Memory: Ballistix Sport LT 16GB

Motherboard: Asrock B350 m4 pro

PSU: Corsair CX550M

Cooler: Cooler master hyper 212 evo

Storage: 2TB WD Red x1, 128gb OCZ SSD for OS

Case: HAF 932 adv

 


You can simply write off any new 10K or 15K RPM SATA or SAS disks; those should never be purchased over an SSD, as they cost the same or more and perform worse.

 

You should also be able to run more than 6 VMs on a SATA SSD, but that does come down to how good it is, i.e. a junker vs. an 850 EVO vs. an 850 PRO.

 

Size aside, a SAS SSD will be able to run 30+ non-I/O-demanding VMs, and I mean real demand, not something you'd get in a home lab. Most hyper-converged (HCI) platforms use 1 or 2 Mixed Use or Write Intensive SATA SSDs per host to cache/hot-tier the HDDs.

 

I'd adjust all your stats to IOPS rather than throughput, by the way, since IOPS, not throughput, determines how many VMs you can run. Use sustained IOPS, not the peak figures that are all consumer SSDs list; you'll often see enterprise SSDs with lower numbers than consumer drives, but those figures don't tell the whole story.
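A rough sketch of sizing by sustained IOPS instead of throughput (the per-VM demand and per-device figures are illustrative assumptions, not measurements):

```python
def vm_capacity(drive_sustained_iops, per_vm_iops):
    """How many VMs fit if each one averages per_vm_iops of random I/O."""
    return drive_sustained_iops // per_vm_iops

# Illustrative sustained-IOPS figures (assumptions; check vendor datasheets):
# a 7200 RPM HDD manages very roughly 100 random IOPS, while even a modest
# SATA SSD sustains tens of thousands.
print(vm_capacity(100, 50))      # HDD: room for about 2 VMs at 50 IOPS each
print(vm_capacity(20_000, 50))   # SATA SSD: about 400 VMs by IOPS alone
```

This is why the MB/s column above can mislead: two devices with similar sequential throughput can differ by orders of magnitude in random IOPS.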


33 minutes ago, reniat said:

I'd like to point out RAID makes a bit of a difference here too. A server running multiple VMs will probably be using a lot more than 1 spindle. 

Good point.


22 minutes ago, leadeater said:

You can simply write off any new 10K or 15K RPM SATA or SAS disks; those should never be purchased over an SSD, as they cost the same or more and perform worse.

 

You should also be able to run more than 6 VMs on a SATA SSD, but that does come down to how good it is, i.e. a junker vs. an 850 EVO vs. an 850 PRO.

 

Size aside, a SAS SSD will be able to run 30+ non-I/O-demanding VMs, and I mean real demand, not something you'd get in a home lab. Most hyper-converged (HCI) platforms use 1 or 2 Mixed Use or Write Intensive SATA SSDs per host to cache/hot-tier the HDDs.

 

I'd adjust all your stats to IOPS rather than throughput, by the way, since IOPS, not throughput, determines how many VMs you can run. Use sustained IOPS, not the peak figures that consumer SSDs list; you'll often see enterprise SSDs with lower numbers than consumer drives, but those figures don't tell the whole story.

Even though a SATA SSD appears fast enough to run more than 6 VMs, the AHCI protocol has its limitations. SATA (AHCI) is simply not designed for a server environment with lots of commands in flight. SATA has only one data channel, which can either read or write; SAS and PCIe have multiple data channels that can read and write at the same time.
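The "one channel" point is really about command queuing: AHCI gives the OS a single queue of 32 commands, while NVMe allows up to 65,535 I/O queues of 65,536 commands each. A rough comparison (the SAS entry is a typical per-device queue depth, not a hard spec limit):

```python
# Command-queue limits from the AHCI and NVMe specifications; the SAS
# figure is a typical per-device queue depth rather than a spec maximum.
QUEUES = {
    "AHCI (SATA)": {"queues": 1,      "depth": 32},
    "SAS (SCSI)":  {"queues": 1,      "depth": 254},
    "NVMe":        {"queues": 65_535, "depth": 65_536},
}

for name, q in QUEUES.items():
    outstanding = q["queues"] * q["depth"]
    print(f"{name}: up to {outstanding:,} outstanding commands")
```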


3 minutes ago, TheCherryKing said:

Good point but don't use RAID on SSDs.

There's nothing wrong with SSD RAID; a lot of enterprise servers use it. The reason there's a stigma is that you lose Windows' TRIM garbage-collection functionality, which works in AHCI mode only, but modern SSDs are much better about this on their own than they used to be, so the effect is pretty negligible.


 


1 minute ago, reniat said:

There's nothing wrong with SSD RAID; a lot of enterprise servers use it. The reason there's a stigma is that you lose Windows' TRIM garbage-collection functionality, which works in AHCI mode only, but modern SSDs are much better about this on their own than they used to be, so the effect is pretty negligible.

TRIM is supported in SCSI mode too. I was unaware that SSDs TRIM themselves.


15 minutes ago, TheCherryKing said:

I was unaware that SSDs TRIM themselves.

I don't think all do, but some consumer SSDs are starting to do GC, so it's not just an enterprise thing. I know at least the 960 EVO does.


 


15 minutes ago, TheCherryKing said:

Even though a SATA SSD appears fast enough to run more than 6 VMs, the AHCI protocol has its limitations. SATA (AHCI) is simply not designed for a server environment with lots of commands in flight. SATA has only one data channel, which can either read or write; SAS and PCIe have multiple data channels that can read and write at the same time.

That has far less impact than you think; most enterprise SSDs use SATA. Have a look at how few SAS SSDs there actually are, and what the SSD catalog from HPE is mostly made up of: SATA.

 

Our development environment runs about 260 VMs on 14 Nutanix ESXi hosts, each with 1 Intel DC S3610 SATA SSD and 5 HDDs. That's about 18 VMs per host on average, and the VMs range from web servers, SharePoint, and MSSQL to application servers; you name it, it's on there.


21 minutes ago, TheCherryKing said:

TRIM is supported in SCSI mode too. I was unaware that SSDs TRIM themselves.

 

6 minutes ago, reniat said:

I don't think all do, but some consumer SSDs are starting to do GC, so it's not just an enterprise thing. I know at least the 960 EVO does.

TRIM doesn't function in RAID arrays, the only exception being Intel onboard RAID. Enterprise SSDs have much more internal NAND over-provisioning to counter the lack of TRIM, and slightly better-optimized GC.
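As a rough sketch of how over-provisioning works (the capacities below are illustrative, not from a specific product):

```python
def over_provisioning(raw_gb, usable_gb):
    """Spare NAND as a fraction of the user-visible capacity."""
    return (raw_gb - usable_gb) / usable_gb

# Illustrative capacities: the same 512 GB of raw NAND sold as a 500 GB
# consumer drive versus a 400 GB enterprise drive. The extra spare area
# gives the controller room to garbage-collect even without TRIM.
print(f"consumer:   {over_provisioning(512, 500):.1%}")
print(f"enterprise: {over_provisioning(512, 400):.1%}")
```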


49 minutes ago, leadeater said:

That has far less impact than you think; most enterprise SSDs use SATA. Have a look at how few SAS SSDs there actually are, and what the SSD catalog from HPE is mostly made up of: SATA.

 

Our development environment runs about 260 VMs on 14 Nutanix ESXi hosts, each with 1 Intel DC S3610 SATA SSD and 5 HDDs. That's about 18 VMs per host on average, and the VMs range from web servers, SharePoint, and MSSQL to application servers; you name it, it's on there.

Maybe for HPE, but at SanDisk and Micron there are far fewer SATA enterprise products.


2 minutes ago, TheCherryKing said:

Maybe for HPE, but at SanDisk and Micron there are far fewer SATA enterprise products.

HPE doesn't make SSDs; they are all OEM'd from Intel, Micron, Samsung, etc. SATA is widely used in the enterprise world. Even our multi-million-dollar NetApps use SATA SSDs.

 

Not to say we don't have some servers with NVMe in them, but that's 6 out of roughly 150.


Just now, leadeater said:

HPE doesn't make SSDs; they are all OEM'd from Intel, Micron, Samsung, etc. SATA is widely used in the enterprise world. Even our multi-million-dollar NetApps use SATA SSDs.

 

Not to say we don't have some servers with NVMe in them, but that's 6 out of roughly 150.

There is more than one catalog in the world! Besides, NVMe is a new technology that is not quite mainstream yet. SAS SSDs will likely be replaced by NVMe SSDs within the next decade; in the future, SAS will probably only be used by hard disk drives. SATA will still be widely used in the consumer market until NVMe is more affordable. There are enterprise-grade SATA SSDs, but in many cases businesses need more performance.


@TheCherryKing

I'm not trying to shoot down your guide or anything; it's mostly on point. The biggest thing that needs looking at is changing from throughput figures to IOPS, since throughput has almost nothing to do with how many VMs a device or storage array can run.

 

SATA SSD performance also varies greatly; there can be as much as a 100x difference between a cheap desktop drive and an enterprise write-intensive one.


Just now, leadeater said:

@TheCherryKing

I'm not trying to shoot down your guide or anything; it's mostly on point. The biggest thing that needs looking at is changing from throughput figures to IOPS, since throughput has almost nothing to do with how many VMs a device or storage array can run.

 

SATA SSD performance also varies greatly; there can be as much as a 100x difference between a cheap desktop drive and an enterprise write-intensive one.

I should change MB/s to IOPS, because VMs don't utilize the maximum sequential read or write speeds.


8 minutes ago, TheCherryKing said:

There is more than one catalog in the world! Besides, NVMe is a new technology that is not quite mainstream yet. SAS SSDs will likely be replaced by NVMe SSDs within the next decade; in the future, SAS will probably only be used by hard disk drives. SATA will still be widely used in the consumer market until NVMe is more affordable. There are enterprise-grade SATA SSDs, but in many cases businesses need more performance.

I gave you three different companies as references for SATA SSDs being used in the enterprise. I'm only trying to help; I've been in the IT industry a very long time and I'm a Systems Engineer, so this really is my background.

 

We're one of the biggest networks in my country and the largest NetApp customer too.

 

Anyway, I'm just saying SATA SSDs are likely much more used than you think; that catalog pretty accurately reflects the ratio of SATA:SAS:NVMe being used in servers around the world. HPE is also the world's largest server supplier, representing about 24% of the market.


2 minutes ago, leadeater said:

I gave you three different companies as references for SATA SSDs being used in the enterprise. I'm only trying to help; I've been in the IT industry a very long time and I'm a Systems Engineer, so this really is my background.

 

We're one of the biggest networks in my country and the largest NetApp customer too.

 

Anyway, I'm just saying SATA SSDs are likely much more used than you think; that catalog pretty accurately reflects the ratio of SATA:SAS:NVMe being used in servers around the world. HPE is also the world's largest server supplier, representing about 24% of the market.

I can see that SATA is used in the enterprise more than I thought. But why? These enterprise SATA SSDs cost almost as much as SAS SSDs, which are twice as fast.


24 minutes ago, TheCherryKing said:

I can see that SATA is used in the enterprise more than I thought. But why? These enterprise SATA SSDs cost almost as much as SAS SSDs, which are twice as fast.

Depends on what kind of SATA SSD you're looking at. There are three different endurance/performance classifications they come under: Read Intensive (RI), Mixed Use (MU), and Write Intensive (WI).

 

Read Intensive drives have a lower wear rating and slightly lower performance; these mostly get used for OS drives or web servers where the majority of I/O is reads.

 

Mixed Use is the most common and gets used to host VMs, databases, etc. These are generally the go-to SSD if you need any in a server.

 

Write Intensive drives are for workloads with high amounts of writes that require the best performance, commonly high-performance database servers or hosts running many high-performance VMs.

 

Read Intensive SSDs are generally pretty cheap, not much more than a good consumer SSD. Write Intensive, on the other hand, have a very big jump in price. It's also important to note that SAS SSDs are almost always Write Intensive, and it's this that makes them so much more expensive, rather than the SAS interface itself. SAS does add cost, but not nearly as much as the extra NAND in WI SSDs that is set aside for wear and GC to keep performance at its utmost.

 

Also a SAS RI SSD will typically have a higher DWPD rating than a SATA RI SSD.

 

If you want to know why an SSD costs what it does, look at its DWPD rating.
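DWPD translates directly into total write endurance, which is a sketch of why it tracks price so closely (the 5-year warranty period and the drive sizes below are assumptions for illustration; check the vendor spec sheet):

```python
def tbw(dwpd, capacity_tb, warranty_years=5):
    """Total terabytes written implied by a DWPD rating over the warranty."""
    return dwpd * capacity_tb * 365 * warranty_years

# A 1.6 TB drive at 1 DWPD versus 10 DWPD over an assumed 5-year warranty.
print(tbw(1, 1.6))    # roughly 2,920 TB written
print(tbw(10, 1.6))   # roughly 29,200 TB written
```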

 

These are list prices, not what we pay, but a pretty good guide to the differences in price.

 

804613-B21 HPE 200GB SATA MU SFF SC SSD $615.00 $615.00
804639-B21 HPE 200GB SATA WI SFF SC SSD $738.00 $738.00
872853-B21 HPE 240GB SATA 6G RI SFF SC SSD $656.00 $656.00
804665-B21 HPE 400GB SATA WI SFF SC SSD $1,232.00 $1,232.00
869378-B21 HPE 480GB SATA 6G RI SFF SC DS SSD $1,088.00 $1,088.00
872855-B21 HPE 480GB SATA 6G RI SFF SC SSD $1,252.00 $1,252.00
832414-B21 HPE 480GB SATA MU SFF SC SSD $1,437.00 $1,437.00
804599-B21 HPE 800GB SATA RI SFF SC SSD $1,561.00 $1,561.00
804625-B21 HPE 800GB SATA MU SFF SC SSD $1,972.00 $1,972.00
804671-B21 HPE 800GB SATA WI SFF SC SSD $2,383.00 $2,383.00
869384-B21 HPE 960GB SATA 6G RI SFF SC DS SSD $1,931.00 $1,931.00
872348-B21 HPE 960GB SATA 6G MU SFF SC DS SSD $3,165.00 $3,165.00
804677-B21 HPE 1.2TB SATA WI SFF SC SSD $4,863.00 $4,863.00
869386-B21 HPE 1.6TB SATA 6G RI SFF SC DS SSD $2,363.00 $2,363.00
804631-B21 HPE 1.6TB SATA MU SFF SC SSD $3,720.00 $3,720.00
872363-B21 HPE 1.6TB SATA 6G WI SFF SC DS SSD $6,064.00 $6,064.00
871770-B21 HPE 1.92TB SATA RI SFF SC DS SSD $4,542.00 $4,542.00
872352-B21 HPE 1.92TB SATA 6G MU SFF SC DS SSD $6,167.00 $6,167.00
868830-B21 HPE 3.84TB SATA 6G RI SFF SC DS SSD $8,552.00 $8,552.00

 

 

     
779164-B21 HPE 200GB SAS WI SFF SC SSD $3,041.00 $3,041.00
822555-B21 HPE 400GB SAS MU SFF SC SSD $2,691.00 $2,691.00
779168-B21 HPE 400GB SAS WI SFF SC SSD $5,776.00 $5,776.00
816562-B21 HPE 480GB SAS RI SFF SC SSD $2,835.00 $2,835.00
762261-B21 HPE 800GB SAS RI SFF SC SSD $7,767.00 $7,767.00
822559-B21 HPE 800GB SAS MU SFF SC SSD $6,146.00 $6,146.00
846430-B21 HPE 800GB SAS WI SFF SC SSD $7,051.00 $7,051.00
816568-B21 HPE 960GB SAS RI SFF SC SSD $5,878.00 $5,878.00
822563-B21 HPE 1.6TB SAS MU SFF SC SSD $11,677.00 $11,677.00
779176-B21 HPE 1.6TB SAS WI SFF SC SSD $20,869.00 $20,869.00
816572-B21 HPE 1.92TB SAS RI SFF SC SSD $11,122.00 $11,122.00
822567-B21 HPE 3.2TB SAS MU SFF SC SSD $18,463.00 $18,463.00
816576-B21 HPE 3.84TB SAS RI SFF SC SSD $17,620.00 $17,620.00

 

764904-B21 HP 400GB NVMe PCIe RI SFF SC2 SSD $2,877.00 $2,877.00
765034-B21 HP 400GB NVMe PCIe MU SFF SC2 SSD $2,280.00 $2,280.00
736936-B21 HP 400GB NVMe PCIe WI SFF SC2 SSD $4,152.00 $4,152.00
765036-B21 HP 800GB NVMe PCIe MU SFF SC2 SSD $4,337.00 $4,337.00
736939-B21 HP 800GB NVMe PCIe WI SFF SC2 SSD $7,894.00 $7,894.00
764906-B21 HP 1.2TB NVMe PCIe RI SFF SC2 SSD $7,791.00 $7,791.00
765038-B21 HP 1.6TB NVMe PCIe MU SFF SC2 SSD $8,236.00 $8,236.00
764892-B21 HP 1.6TB NVMe PCIe WI SFF SC2 SSD $14,989.00 $14,989.00
764908-B21 HP 2TB NVMe PCIe RI SFF SC2 SSD $12,254.00 $12,254.00
765044-B21 HP 2TB NVMe PCIe MU SFF SC2 SSD $9,716.00 $9,716.00
764894-B21 HP 2TB NVMe PCIe WI SFF SC2 SSD $34,544.00 $34,544.00

 


Can also vouch for SATA being widely used. There are specific use cases for SAS drives; the best storage solution is going to be a mixture of drives. Virtual desktops in a VDI environment don't really need much, especially if you're using Horizon 7 + Instant Clone. However, a very busy central SQL server is going to be taxing in every way imaginable.

 

I think saying how many VMs per disk type is vague, but hopefully it gives people a rough idea. I can squeeze a web server, DNS server, domain server, print server, and maybe one other type onto a single regular SATA consumer drive for a small office. Sure, boot times will be a little rough if they're all Windows-based, but once they're booted and services/daemons are running in RAM, good to go.


16 hours ago, leadeater said:

Depends on what kind of SATA SSD you're looking at. There are three different endurance/performance classifications they come under: Read Intensive (RI), Mixed Use (MU), and Write Intensive (WI).

 

Read Intensive drives have a lower wear rating and slightly lower performance; these mostly get used for OS drives or web servers where the majority of I/O is reads.

 

Mixed Use is the most common and gets used to host VMs, databases, etc. These are generally the go-to SSD if you need any in a server.

 

Write Intensive drives are for workloads with high amounts of writes that require the best performance, commonly high-performance database servers or hosts running many high-performance VMs.

 

Read Intensive SSDs are generally pretty cheap, not much more than a good consumer SSD. Write Intensive, on the other hand, have a very big jump in price. It's also important to note that SAS SSDs are almost always Write Intensive, and it's this that makes them so much more expensive, rather than the SAS interface itself. SAS does add cost, but not nearly as much as the extra NAND in WI SSDs that is set aside for wear and GC to keep performance at its utmost.

 

Also a SAS RI SSD will typically have a higher DWPD rating than a SATA RI SSD.

 

If you want to know why an SSD costs what it does, look at its DWPD rating.

 

I can't believe how much these drives cost. I definitely got a great deal on eBay. I should have been more specific when writing this guide: I intended it for virtualizing various desktop operating systems. I have some changes to make to this guide.


  • 10 months later...

I know this is an old topic, but it's still quite valid. I've been watching the enterprise SSD space for some time, and the cost/GB is slowly getting to the point where using 10/15K SAS makes no sense.

The latest Intel S4500-class SATA drives are rated at 1 DWPD, which for most environments is still a lot. I still see comparable SAS-interfaced drives at about double the cost of the SATA equivalent, so I would struggle to justify the cost of SAS (let alone NVMe) SSDs yet; only specific workloads in my environment would justify them. Even production clusters running 30+ VMs of mixed load could happily sit on a RAID set of SATA SSDs.

Interested to know what others have seen in the year since the last posts?


On 6/14/2018 at 12:08 PM, Ben Loveday said:

I know this is an old topic, but it's still quite valid. I've been watching the enterprise SSD space for some time, and the cost/GB is slowly getting to the point where using 10/15K SAS makes no sense.

The latest Intel S4500-class SATA drives are rated at 1 DWPD, which for most environments is still a lot. I still see comparable SAS-interfaced drives at about double the cost of the SATA equivalent, so I would struggle to justify the cost of SAS (let alone NVMe) SSDs yet; only specific workloads in my environment would justify them. Even production clusters running 30+ VMs of mixed load could happily sit on a RAID set of SATA SSDs.

Interested to know what others have seen in the year since the last posts?

Last year we stopped buying 10k SAS for OS disks and went with SATA RI SSDs. We don't have many physical non-virtualized servers left, though, so servers with OS disks are rare in themselves now. Virtual hosts install the hypervisor onto mirrored SD cards now, and we use Nutanix + ESXi hosts with 2 SATA MU SSDs and 4 7.2k HDDs per node.

 

For our main network storage, NetApp, we'll only buy 7.2k HDDs or SSDs.

 

Unless HDD manufacturers pull some serious magic and 10x their capacity, they are eventually going to go the way of floppy disks and Zip drives: distant memories.

