
Viking Technology ships 50TB SSD for the datacenter

YongKang

Well, we can expect bigger SSDs in the future, especially when you consider the biggest consumer drive is Samsung's 4TB. I'm sure Linus is gonna get his hands on some of these bad boys.

Source: http://www.tweaktown.com/news/58378/viking-technology-ships-50tb-ssd-datacenter/index.html

 

Quote

The new drives are built with energy efficiency in mind, with idle power consumption of less than 10W and active power consumption of only 16W... not too damn bad for 50TB of SSD. Viking Technology adds that by increasing the overall storage capacity per rack while decreasing the power required per TB, Viking's new UHC-Silo SSD will have datacenter customers saving in power, space, and cooling by up to 80% per terabyte.

 



9 minutes ago, ShadySocks said:

Must be in response to VR Porn

It's possible, and companies are actually making a profit from it haha


37 minutes ago, ShadySocks said:

Welcome to the future

 

This is kinda good since it's gonna reduce the spread of STDs haha.


Haven't there already been 60, 80, and 100TB SSDs sold in the enterprise space? Is there something special about this one compared to those?


6 hours ago, ChineseChef said:

Haven't there already been 60, 80, and 100TB SSDs sold in the enterprise space? Is there something special about this one compared to those?

Nope, these are a world first ;)


10 hours ago, ChineseChef said:

Haven't there already been 60, 80, and 100TB SSDs sold in the enterprise space? Is there something special about this one compared to those?

Toshiba talked about a 100TB drive, but it has yet to actually surface, and Seagate is supposed to be releasing their 60TB SAS SSD this year, but again I haven't heard any news about it since the press release.

 

Also can I have 8 of these please?


So this is interesting: 50TB of storage, but its IOPS and read/write speeds are actually pretty slow. This is not a drive you'd want to use for mission-critical workloads, but it'll be too expensive to be a cold storage drive. How on earth will they market it?


36 minutes ago, Terodius said:

So this is interesting: 50TB of storage, but its IOPS and read/write speeds are actually pretty slow. This is not a drive you'd want to use for mission-critical workloads, but it'll be too expensive to be a cold storage drive. How on earth will they market it?

It is faster backup storage, and it reduces space compared to an HDD solution.


It writes at about 300MB/s, so it takes more than a day to fill the drive. There must be a market for this, but are the space and energy savings really worth the high price of flash?
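Just to sanity check that fill time, here's a quick back-of-the-envelope sketch in Python, assuming a flat 300 MB/s sustained write (the figure above) and decimal units; treat the numbers as rough:

# Rough fill-time estimate for a 50TB drive at an assumed flat 300 MB/s sustained write.
capacity_mb = 50 * 1_000_000          # 50 TB in decimal megabytes
write_mb_per_s = 300                  # figure quoted above, not an official spec

hours = capacity_mb / write_mb_per_s / 3600
print(f"~{hours:.0f} hours (~{hours / 24:.1f} days) to fill the drive")  # ~46 hours, ~1.9 days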


11 hours ago, The Benjamins said:

It is faster backup storage, and it reduces space compared to an HDD solution.

No, that's just super dumb. Did you even bother doing some math before making that comment? Most large backups are already done to cold-storage servers, which are highly optimized storage servers with low performance and massive HDD arrays for a reason.

 

A 10TB enterprise drive is like 400 USD, and a 1TB enterprise SSD is 500+ USD; with perfect scaling that would mean this 50TB drive would cost at least 25,000 USD. But let's give them the benefit of the doubt and say 20,000 USD, assuming they can manufacture it cheaply. If you wanted to store a petabyte with good redundancy, you'd need about 120 10TB drives.

 

Each of those storage servers is about 5,000-10,000 USD, but let's assume it's a high-end 10k USD server with 60 HDD bays. That's 20k in servers and 120 x 400 = 48,000 USD in drives, plus let's say 2k USD for installation and setup. So you can get a petabyte of HDD storage for about 70,000 dollars.

 

Now if you were to do the same with 50TB SSDs, you'd need just one cheap server with 30 bays and 25 SSDs. That would be a 5,000 USD server + 25 x 20,000 USD SSDs + 2,000 USD installation. So 1 petabyte of SSDs would run you a cool half a million dollars, compared with about 70,000 for hard drives. Only mission-critical data would justify those prices, and these SSDs definitely do not have the performance for that application.
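Here's that comparison as a quick sketch; the drive prices, server prices, and drive counts are the assumptions above, not real quotes, so take the totals as ballpark figures:

# Ballpark 1PB build cost: HDD vs 50TB SSD, using the assumed prices above.
def build_cost(servers, server_price, drives, drive_price, setup=2_000):
    return servers * server_price + drives * drive_price + setup

hdd_total = build_cost(servers=2, server_price=10_000, drives=120, drive_price=400)    # 120 x 10TB HDD
ssd_total = build_cost(servers=1, server_price=5_000, drives=25, drive_price=20_000)   # 25 x 50TB SSD (guessed price)

print(f"HDD build: ${hdd_total:,}")                       # $70,000
print(f"SSD build: ${ssd_total:,}")                       # $507,000
print(f"SSD premium: ~{ssd_total / hdd_total:.0f}x")      # ~7x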

 

So yeah, I don't know who on earth they'll sell these to.


Curious how it compares against hard drives in a RAID array. Are two of these in RAID 1 faster than a 6x10TB RAID 5?
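A very rough way to frame that, using the ~300 MB/s write figure mentioned later in the thread and an assumed ~200 MB/s sequential rate per 10TB HDD; this ignores IOPS, caching, rebuild behaviour, and controller overhead, so it's only a sketch:

# Naive sequential-throughput comparison: 2x 50TB SSD in RAID 1 vs 6x 10TB HDD in RAID 5.
# Per-drive figures are assumptions for illustration, not measured numbers.
ssd_mb_s = 300                        # per-SSD write figure quoted in this thread
hdd_mb_s = 200                        # assumed sequential rate for a 7.2K 10TB HDD

raid1_write = ssd_mb_s                # RAID 1 writes the same data to both mirrors
raid1_read = 2 * ssd_mb_s             # best case: reads serviced by both mirrors
raid5_seq = (6 - 1) * hdd_mb_s        # roughly 5 data spindles' worth of sequential throughput

print(f"2x SSD RAID 1: ~{raid1_write} MB/s write, up to ~{raid1_read} MB/s read")
print(f"6x HDD RAID 5: ~{raid5_seq} MB/s sequential, best case")
# For random I/O the SSDs would likely still come out well ahead despite their modest IOPS.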


Slap some RGB and a Gaming logo on it, and you've got the next big thing in computing.


On 11/07/2017 at 7:01 PM, ShadySocks said:

Must be in response to VR Porn

VR porn is overhyped; the resolution of the headsets and video sources is too low.

 

I know because reasons......


Let me just ask, because this will eventually become affordable however many years down the road:

What block size would you need to address these things in a Windows share (NTFS, or maybe fatX??)? With an array that size you're already up to 8K, I think? And that's one drive; with some 20-drive array of these, your block size would be bigger than most files.

I don't even want to imagine trying this with ZFS or btrfs; clearly NTFS isn't even built to handle this kind of nonsense.

looks like M$ is abandoning ReFS

What kind of file system would you even run with this kind of drive?
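On the NTFS part: a volume can address roughly 2^32 clusters, so the cluster size has to grow with the volume. A small sketch of how that scales, using NTFS's commonly documented defaults; the exact cut-offs here are assumptions on my part:

# NTFS addresses at most ~2^32 clusters per volume, so bigger volumes need bigger clusters.
MAX_CLUSTERS = 2**32

for cluster_kb in (4, 8, 16, 32, 64):
    max_volume_tb = cluster_kb * 1024 * MAX_CLUSTERS / 1000**4   # bytes -> decimal TB
    print(f"{cluster_kb:>2} KB clusters -> ~{max_volume_tb:,.1f} TB max volume")

# 4 KB clusters top out around 17.6 TB (16 TiB), so a single 50TB drive already needs
# 16 KB clusters, and a ~1PB array of these is past what 64 KB clusters can address.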

 

 

 


2 minutes ago, biotoxin said:

looks like M$ is abandoning ReFS

What makes you say that? Storage Spaces Direct only works on ReFS, and Veeam uses it now instead of deduplication for its awesome metadata capabilities.


10 hours ago, Terodius said:

No, that's just super dumb. Did you even bother doing some math before making that comment? Most large backups are already done to cold-storage servers, which are highly optimized storage servers with low performance and massive HDD arrays for a reason.

A 10TB enterprise drive is like 400 USD, and a 1TB enterprise SSD is 500+ USD; with perfect scaling that would mean this 50TB drive would cost at least 25,000 USD. But let's give them the benefit of the doubt and say 20,000 USD, assuming they can manufacture it cheaply. If you wanted to store a petabyte with good redundancy, you'd need about 120 10TB drives.

[...]

10TB enterprise disks designed to go in systems with that many disks can cost quite a bit more than that. Depending on interface type, sector size, and SED/FIPS, it can range from that 400 USD up to 1,200 USD. HPE disks also tend to just be expensive even though they are Seagate disks, but that cost increase is the warranty/support. I would probably use $600 USD in that calculation, so 72,000 + 20,000 + 2,000 = 94,000.

 

There are other factors too that might start making SSDs make sense for backup storage; not yet, of course, but SSD development is now focused more on capacity-to-performance than capacity-to-price.

 

For our backup storage arrays we need a system that can sustain 20,000 IOPS and 2,000 MB/s. For large-capacity HDDs designed for cold storage we use 80 IOPS and 100 MB/s figures for sustained performance. Right now we use a 4-node NetApp 3220 C-Mode cluster for backup storage; it has 288 disks with 864TiB raw capacity and 592TiB after provisioning (double parity + dual hot spare per shelf). We have two of these, each in a different city.

 

So the question is: can we replace this storage array using more cost-effective 10TB and 12TB disks?

 

864 / 9 = 96 disks. So is 96 disks even correct? That depends on the disk shelves and the protection I want. I could use 24, 48, or 60-bay LFF shelves. We currently use four 48-bay LFF shelves and four 24-bay LFF shelves.

 

If I went with either 48 or 60-bay shelves that is only two shelves, so I may have dual parity per shelf with 2 hot spares, but I have no shelf redundancy, so if one goes down access to the data is lost. That could still be fine; it is backup data after all, and I'd have 2 of these storage clusters as well. So 88 * 9 = 792TiB: perfect, well above what I need. Wait, what about IOPS? 92 * 80 = 7,360; damn, that's about a third of what I need.

 

Now let's adjust for the overspend, because I don't need 792TiB. The adjusted number of disks per shelf would be 37, with 33 usable: 70 * 80 = 5,600 IOPS. Only about a quarter of what I need.

 

What if I went with 24-bay shelves, 4 in total? So far this is without shelf redundancy: 80 * 9 = 720TiB. But is that enough for shelf redundancy? 720 - (720 / 4) = 540TiB; nope, too small. 5 shelves with 22 disks each would allow me to create a configuration with 1-shelf redundancy and 90 usable disks: 100 * 80 = 8,000 IOPS.

 

Welp, to keep an even longer math story short: for my usage I can't use 10TB disks to replace my backup storage without significantly over-buying capacity; I must use smaller disks and more of them. I could have cut the story way shorter with 20,000 / 80 = 250, but that doesn't quite demonstrate how complex storage array sizing can be, or show that 250 may not actually be the real number. 4TB disks are what I would need to buy to suit my performance needs; to lower the footprint I would have to lower my performance requirement or move to SSD. The lowest-cost configuration is five 60-disk shelves with 52 disks in each, for a total of 260 disks, 260 * 300 = 78,000 USD excluding array controllers (servers). And I can tell you right now NetApp disks are not as cheap as I have just shown, but this is more about general costs using cheaper servers and no licenses etc.
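As a rough illustration of that sizing logic, here's the capacity-versus-IOPS trade-off as a sketch; the planning figures are the ones stated above, and parity, spares, and shelf layout are all simplified away:

# Simplified sizing: you need whichever disk count is larger, capacity-driven or IOPS-driven.
import math

raw_target_tib = 864        # raw capacity to replace
iops_target = 20_000        # sustained IOPS requirement
iops_per_disk = 80          # assumed sustained IOPS for a large 7.2K disk

for label, raw_tib_per_disk in (("10TB", 9.0), ("4TB", 3.6)):
    for_capacity = math.ceil(raw_target_tib / raw_tib_per_disk)
    for_iops = math.ceil(iops_target / iops_per_disk)
    needed = max(for_capacity, for_iops)
    overbuy = needed * raw_tib_per_disk - raw_target_tib
    print(f"{label} disks: {for_capacity} for capacity, {for_iops} for IOPS "
          f"-> buy {needed} ({overbuy:.0f}TiB over the raw target)")

# 10TB disks: hitting 20,000 IOPS takes 250 spindles, ~1,386TiB you didn't ask for.
# 4TB disks: the same 250 spindles land only ~36TiB over target, which is why they fit better.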

 

SSDs, on the other hand, take most of that complexity and throw it out the window, unless you have a serious I/O demand requirement. However, to even start thinking about using SSD for backup storage the cost would have to drop to about $105/TB, and it's currently at around $405/TB on the consumer side (using an 850 EVO 4TB as reference).


That's some SSD capacity there. I know it's far from the consumer market, though hopefully we start seeing more and more price drops for SSDs with solid capacity too, like 1TB becoming more affordable.


13 hours ago, leadeater said:

[...]

SSDs, on the other hand, take most of that complexity and throw it out the window, unless you have a serious I/O demand requirement. However, to even start thinking about using SSD for backup storage the cost would have to drop to about $105/TB, and it's currently at around $405/TB on the consumer side (using an 850 EVO 4TB as reference).

Or you could, you know, just have 1 SSD drive cache per rack, like any sane person would do in your situation. 


3 hours ago, Terodius said:

Or you could, you know, just have 1 SSD drive cache per rack, like any sane person would do in your situation. 

SSD cache wouldn't work; the amount required for the data rate would cost too much. The NetApp 3220 also only supports 1TB of cache per controller pair, so 2TB total, and if you do that you severely limit the supported sizing, as it is much less when doing that.

 

We do that on our production storage, NetApp 8040 6-node clusters, for the SAS shelves serving ESXi NFS volumes (11 SSDs), but it doesn't make a huge difference. Our all-flash shelves have such a performance difference it's not even worth comparing, and that's against 15K RPM SAS cached with SSD, not 7.2K RPM SATA cached with SSD.

 

If we wanted to use Flash Pool for backup storage and keep using NetApp we would have to use 8060 nodes, since the 8040's maximum possible size with Flash Pool is 576TB, 86PB without it.

http://www.netapp.com/us/products/storage-systems/hybrid-flash-array/fas8000.aspx

 

Then there is the issue that SSD caching doesn't do anything for reads, which is a big problem. We use Commvault for our backup application, and that does synthetic fulls, which entails reading the deduplicated data off the backup storage to create the next full backup rather than reading it from the source. Then we also copy around 90TB daily to the other backup storage array in a different city, which is another read operation, while other backup operations are running and synthetic fulls are being created as well.

 

The actual network traffic will be less due to deduplication; we generally write around 9-11TB to disk every day, which is that 90TB before deduplication.

 

In front of those storage arrays are 6 HPE DL380 Gen9 servers, each with 4 write-intensive NVMe SSDs and 2 mixed-use SATA SSDs, 3 servers in each city. All backup traffic flows through those servers and gets deduplicated before being stored on the backend storage. Only unique data is then stored on the backend storage, and the performance requirements I talked about are for that data traffic, which is way less than what hits the backup servers.

 

The deduplication allows us to store around 5.6PB on 450TB of used storage.

 

Since both production storage and backup storage are NetApp, we can also use NetApp SnapMirror and SnapVault technologies, either natively or through the Commvault IntelliSnap engine.

 

Cloud storage is dumb and slow and isn't great as the first stage of backup storage; you can send copies of backups to the cloud, but if you need performance to achieve your backup window, the cloud isn't it. There are options, but they are expensive and not always available in your location.

 

Edit:

Oh, and we also write data to tape for long-term storage. We have 5 LTO-5 tape drives, each of which needs a sustained 140MB/s to avoid tape scrubbing, 700MB/s in total, and when writing to tape you have to rehydrate/un-dedup the data. You can store deduplicated data on tape, but it's best not to.
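Putting a couple of those numbers together as a quick sanity check, using only the figures from this post:

# Effective dedup ratio and aggregate tape feed rate, from the figures above.
logical_pb = 5.6            # data protected
physical_tb = 450           # disk actually consumed
daily_logical_tb = 90       # backed up per day before dedup
daily_written_tb = 10       # written to disk per day (middle of the 9-11TB range)

print(f"Dedup ratio: ~{logical_pb * 1000 / physical_tb:.1f}:1")          # ~12.4:1
print(f"Daily reduction: ~{daily_logical_tb / daily_written_tb:.0f}x")   # ~9x

tape_drives = 5
mb_s_per_drive = 140        # sustained rate needed to keep an LTO-5 drive streaming
print(f"Tape feed needed: {tape_drives * mb_s_per_drive} MB/s of rehydrated data")  # 700 MB/s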


On 12/07/2017 at 10:14 AM, leadeater said:

Toshiba talked about a 100TB drive, but it has yet to actually surface, and Seagate is supposed to be releasing their 60TB SAS SSD this year, but again I haven't heard any news about it since the press release.

 

Also can I have 8 of these please?

I tried to understand what you said in your other post, but I fail to see/understand the use case for this drive?


3 minutes ago, NLD1st said:

I tried to understand what you said in your other post, but I fail to see/understand the use case for this drive?

No idea, but I'll happily have any of them for free :).

 

These will most likely go to the AWSes, Azures, and Googles of the world, who need huge amounts of very fast storage. Netflix might find them useful as well.


28 minutes ago, leadeater said:

No idea, but I'll happily have any of them for free :).

 

These will most likely go to the AWSes, Azures, and Googles of the world, who need huge amounts of very fast storage. Netflix might find them useful as well.

Who doesn't like free goodies haha. I would happily replace my WD Blue with it haha (the most annoying part, sound-wise, in my build).

Perhaps in a couple of years we can pick one up from eBay after some tech place gets rid of them.

