20TB SSD RAID with Kingston KC310 Business SSD

I would be concerned about the TBW on those drives if they're moving around a lot of 4K video; to be honest, the TBW on even Intel and Samsung enterprise drives is not that great. Are you planning to do push/pull, or work directly over the network? With RAID you get burned extra on writes for parity, so the problem becomes even more compounded: the parity write cost on that array is around +60%, and you will find yourself replacing drives quite often. Does anyone know what Kingston drives do when they cross their TBW rating: go read-only, die, or keep going? Most enterprise drives will go read-only, or the MTTF/MTBF alarm will trip.
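To put rough numbers on that parity penalty, here's a back-of-envelope sketch. The TBW rating and daily write volume below are made-up examples, not Kingston's spec sheet; the 60% overhead is the parity-cost estimate quoted above.

```python
# Rough endurance math: RAID parity write amplification eats into a drive's rated TBW.
# The 60% overhead is the parity-cost estimate from the post above; the TBW figure
# and workload are hypothetical examples.

def days_until_tbw(rated_tbw_tb, daily_writes_tb, parity_overhead=0.60):
    """Days until the drive's rated TBW is consumed, given extra parity writes."""
    effective_daily_tb = daily_writes_tb * (1 + parity_overhead)
    return rated_tbw_tb / effective_daily_tb

# A hypothetical 600 TBW drive absorbing 1 TB/day of video:
print(days_until_tbw(600, 1.0))  # 375.0 -- just over a year, versus 600 days without parity
```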

 

 

Awesome! But why RAID instead of ZFS?

 

 

The cache on the RAID cards, the battery backup he really needs, multi-point reads, drive monitoring (MTTF), etc. RAID is going to offer better performance and less chance of a major failure than ZFS here.


I am also wondering why Linus has chosen to run Windows on this machine... from what I've heard from just about every industry expert, running a Windows server is a surefire way to run into issues. Clearly there is no inability to run a server-grade OS, as one of the recent server videos mentioned that the giant backup server was going to run a BSD-based solution, so why is this server not running a truly server-grade OS?


I am also wondering why Linus has chosen to run Windows on this machine... from what I've heard from just about every industry expert, running a Windows server is a surefire way to run into issues. Clearly there is no inability to run a server-grade OS, as one of the recent server videos mentioned that the giant backup server was going to run a BSD-based solution, so why is this server not running a truly server-grade OS?

I could be wrong, but I think he uses Windows for some basic testing to see if it works at all.

Unless he is running a heavily re-skinned version of Windows Server :P


Nothing wrong with Windows Server, but a Windows client OS is a problem. A 2012 R2 server really could be the best option, with Windows 8 or 10 workstations: both support SMB 3, letting them use the bonded links like one 20-gig link instead of two 10-gigs, and Server 2012 dedupe competes with a lot of SAN/NAS options.


This video kinda reminds me of this:

 

 

A bit outdated, but still.....

Want to help researchers improve the lives of millions of people with just your computer? Then join World Community Grid distributed computing and start helping the world solve its most difficult problems!

 


If he puts the cards in HBA mode, he could then load FreeNAS on there and try out the benchmarks again.

Can Anybody Link A Virtual Machine while I go download some RAM?

 


If he puts the cards in HBA mode, he could then load FreeNAS on there and try out the benchmarks again.

 

Yep! I'd be very curious about this bench!


Ok, that hardware is impressive.

But what about the software? Are you still using FreeNAS?

I think Linus did it all wrong. FreeNAS should not use RAID.


I think Linus did it all wrong. FreeNAS should not use RAID.

FreeNAS can do many things... But what are you saying FreeNAS should use?


 


As a few previous posters have said, he should put the card into HBA mode and use ZFS.


LOL

I've seen pretty overkill, but this is another level of overkill. OMG! Da freaking performance benefits!


Why use three independent RAID controllers when a single 9361-8i + an Intel RES2CV360 would have worked just fine?

 

He explained in the video WHY he used 3 of them.


 


This server grade stuff is awesome. I love stuff like this.

Gaming PCs:
Intel i7 4790k, EVGA GTX 980ti, NZXT H440
Intel i5 7600k, Asus GTX 970 DC Mini, Silverstone SG13B
HTPC: AMD Phenom II X6 1045t, EVGA GTX 770 FTW, Fractal Node 604

Hey Linus,

 

I am running a 10 Gbit connection for my home server. Samba/SMB is only single-threaded, and the speeds you showed are raw read/write speeds. In my test using 4 GB RAM-disk-to-RAM-disk transfers (4K random), it wasn't even able to saturate a 1 Gbit connection. Can you post a follow-up video where you show real-world network transfers using Samba? I really think SMB is somewhat limited, and you may have spent more on the setup than the CPU can actually make use of.

 

Looking forward.


NVMe would be nice, but I don't think it will help with Samba/SMB throughput, as I mentioned earlier.

 

Linus, it would be really nice if you could perform the test with Windows Server running as a virtual machine, because that's where the industry is moving and it replicates how most enterprise users have their servers set up (including myself). We are currently running a single Dell VRTX with 4 VM instances and an upgraded 10 Gbps NIC.

 

 

I really hope you address the concern with SMB/Samba and its single-threaded limitation as well. I am looking forward to the real-world network speed test very much. And please don't forget to test with small 4K files for the rest of us (I have a team of programmers who write a lot of small files, most of them less than 512K, to a Linux box running in a VM).


@sprintmap

1. His storage array is not your test bench. He has no use for running a Windows VM on this thing. o.0

2. Something you should know about storage: file transfer size is set at the OS level via kernel buffers. On Linux this buffer size is tracked per device; I don't know how granular the tracking is in Windows. Case in point...

 

Yesterday a co-worker was trying to stress a 5-disk RAID 0 setup and was only managing ~1100 IOPS while moving around 30MB/s of data. I told him to use 2MB IOs instead of 4K IOs and showed him how to change the max IO size in Linux; he went down to ~90 IOPS, but his throughput shot up to 700MB/s. Windows and Linux will both buffer multiple small writes into a single large write, even if they go to different files, in order to make more efficient use of the storage. Specifically, if you are writing to the device 'slowly' (slowly = at a rate such that the max buffer size won't be reached within the time quantum), the OS will buffer all device writes for a time quantum (50 microseconds to 1 millisecond) and then issue that as a single write to the device.
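For reference, the knob the story above refers to (changing the max IO size on Linux) is the per-device `max_sectors_kb` file in sysfs. A minimal sketch, with the sysfs root parameterized so it can be tried against a fake tree; touching the real `/sys` needs root, and the function names here are my own, not a kernel API:

```python
# Sketch of bumping a block device's max IO size on Linux via sysfs.
# /sys/block/<dev>/queue/max_sectors_kb caps the largest IO the kernel will
# issue to the device. The sysfs root is a parameter so the helpers can be
# exercised safely; writing the real /sys requires root.
from pathlib import Path

def set_max_io_kb(device: str, kb: int, sysfs: str = "/sys") -> None:
    """Set the largest IO size (in KB) the kernel will issue to <device>."""
    Path(f"{sysfs}/block/{device}/queue/max_sectors_kb").write_text(str(kb))

def get_max_io_kb(device: str, sysfs: str = "/sys") -> int:
    """Read back the current max IO size (in KB) for <device>."""
    return int(Path(f"{sysfs}/block/{device}/queue/max_sectors_kb").read_text())

# Usage (as root): set_max_io_kb("sda", 2048)  # allow 2MB IOs to sda
```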


I find it amusing how random kids use the word quantum to make it sound scientific.

None of what you said makes any sense. IOPS is a direct measure of MB/sec. Lower IOPS and Higher MB/s cannot happen together.

Secondly, I was using RAM Disk and transfering at block level (so your assumption about kernel remains incorrect in my specific testing).

The problem is the SMB/Samba protocol. This has nothing to do with how the OS handles small writes. No OS I have ever used will refuse to save a 4K notepad file (evident by saving the file, unplugging your computer and plugging it back in: the data persists).

 

My suggestion still stands: Linus should definitely test this in more detail, and running Windows in a VM puts more stress on the CPU and reduces SMB/Samba performance depending on your configuration and CPU type (support for nested paging?). Don't believe me? Then set up a Windows Server (or any OS that supports Samba) on a hypervisor of your choice and report back with screenshots.


 

I find it amusing how random kids use the word quantum to make it sound scientific.

...I'm about to drop some knowledge on you.

 

http://www.mondofacto.com/facts/dictionary?time+quantum

 

IOPS is a direct measure of MB/sec. Lower IOPS and Higher MB/s cannot happen together.

What does IOPS mean? I/O Operations Per Second. So...if an I/O Operation is 4K, and you can do 300 of them per second, then you can move a total of 300 * 4096 = 1.23 MB/s. "Hah! My hard drive can do more than 1MB/s!" Not in a random read/write workload it can't. The only reason it manages the 25MB/s or so it does during a benchmark is because of on-disk benefits like NCQ making the 'random' IO patterns less random. Seriously, go look up how Native Command Queuing (NCQ) works.

 

Now, what if we have a drive that can do 2MB I/O operations, but can only manage 100 of them per second? 100 * 2,097,152 = 209,715,200. Oh, gee, that's 210 MB/s. Darn, looks like I was right: lower IOPS with a higher write size *does* matter! IOPS by itself is a useless unit. It's like saying a rocket engine produces 10,000 Watts. 10,000 Watts... of what? Heat? Sound? Thrust (why the fuck would anyone express thrust in watts!)? Moreover, 10,000 Watts over what timeframe? Per second/minute/hour/day/lifetime?
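The arithmetic in the two examples above, spelled out as a one-liner (throughput is just IOPS times IO size):

```python
# Throughput is IOPS multiplied by the IO size those operations were issued at.
def throughput_bytes(iops: int, io_size_bytes: int) -> int:
    """Bytes per second moved at a given IOPS and IO size."""
    return iops * io_size_bytes

print(throughput_bytes(300, 4096))         # 1228800 bytes/s, ~1.23 MB/s at 4K IOs
print(throughput_bytes(100, 2 * 1024**2))  # 209715200 bytes/s, ~210 MB/s at 2MB IOs
```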

 

IOPS is one half of an equation, and every person who labels a chart with IOPS without including the IO size those IOPS occurred at should have gotten a 0 in all their science courses, because they were that kid who never wrote down units on the exams.

 

 

Secondly, I was using RAM Disk and transfering at block level (so your assumption about kernel remains incorrect in my specific testing).

I could go for the cheapshot here and say "transferring*", but I won't. Instead I will simply show you a picture of the Linux IO stack: https://www.thomas-krenn.com/de/wikiDE/images/4/48/Linux-io-stack-diagram_v1.0.svg and let you see that ramfs sits in the same VFS stack location as normal filesystems (ext2/3/4, etc.). This means that *every* step those file systems must go through in order to do work (kernel buffers, IO channels, etc.) ramfs must go through as well.

 

 

The problem is SMB/Samba protocol. This has nothing to do with how OS handles small writes. None of the OS I have ever used will refuse to save a 4K notepad file (evident by saving the file, unplugging your computer and replugging it back in, the data persists).

Right, you seem to be confused about what I was saying about IO size and buffering in my original post. That time quantum I mentioned is very small and OS-specific, generally between 50 microseconds and 1 millisecond. If your max_io_size is set to, say, 32KB, and you save a 4K file (and that's the only I/O that happens for 2 milliseconds), then that file first goes to the buffer, sits in that buffer until the time quantum passes, and is *then* flushed to disk. Why you don't lose data on a power loss is twofold:

  1. The write occurs to the buffer and the file system's journal (I assume you're using a journaling filesystem...because almost all normal file systems use journaling) gets marked with that file as 'being updated'. If a power loss occurs before the write completes and the journal is marked as 'transaction complete' the file on disk reverts to its pre-change version and your changes are lost.
  2. The time window for a disruptive power loss is very small, it is exactly the width of the time quantum (50 microseconds - 1 millisecond).
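The buffering being described can be sketched as a toy model. Everything here is illustrative (the class, the 32KB limit, and the explicit `flush()` standing in for the quantum expiring); it is not any real kernel's API:

```python
# Toy model of OS write buffering: small writes accumulate and are issued to the
# device as one large IO, either when the buffer hits max_io_size or when the
# time quantum expires (modeled here as an explicit flush() call).

class WriteBuffer:
    def __init__(self, max_io_size: int = 32 * 1024):
        self.max_io_size = max_io_size
        self.pending = b""
        self.issued = []              # each entry = one IO actually sent to the device

    def write(self, data: bytes) -> None:
        self.pending += data
        if len(self.pending) >= self.max_io_size:
            self.flush()              # buffer full: issue immediately

    def flush(self) -> None:          # in a real OS, fired when the quantum expires
        if self.pending:
            self.issued.append(self.pending)
            self.pending = b""

buf = WriteBuffer()
for _ in range(3):
    buf.write(b"x" * 4096)            # three 4K "saves" within one quantum
buf.flush()                           # quantum passes
print(len(buf.issued))                # 1 -- a single 12K IO, not three separate 4K IOs
```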
 

My suggestion still stands: Linus should definitely test this in more detail, and running Windows in a VM puts more stress on the CPU and reduces SMB/Samba performance depending on your configuration and CPU type (support for nested paging?). Don't believe me? Then set up a Windows Server (or any OS that supports Samba) on a hypervisor of your choice and report back with screenshots.

 

So let me see if I have this straight. You want him to install KVM/ESXi/some other hypervisor onto his storage server, load Windows as a VM on top of that hypervisor, then re-run his tests? Of course that would be slower than the bare-metal Windows install he's using now! I don't need a screenshot for proof; anyone knows that. Why you want him to get rid of a perfectly good bare-metal install is what I've never understood.

 

So...which one of us is the kid trying to sound smart again? :) Enjoy your knowledge.


Equally valid: why shouldn't Linus test it in a VM?

 

Linus's videos reflect what people in general want, whether it makes sense to you or not. And if he is not testing it in VMware ESXi or some type of hypervisor, then he is not catering to the "Enterprise" audience when showing enterprise products. The audience will then remain the random kids from the desktop market who think they know better about the server side. The person they might be speaking to on the forum could hold a post-doc position in Computer Science and Engineering for all you know (hint), so you may want to watch the knowledge droplets before I send my 300-page peer-reviewed thesis your way rather than random websites. Certain problems are specifically useful to solve in a VM environment, such as Fourier transforms (specifically the Fast Fourier Transform), due to cache coherency.

 

P.S.: A real professional doesn't need to try so hard. In addition, just look at your lingo: surrounding words in **stars** does not make you sound smart. Just write normally and you will do just fine.


Equally valid: why shouldn't Linus test it in a VM?

 

Linus's videos reflect what people in general want, whether it makes sense to you or not. And if he is not testing it in VMware ESXi or some type of hypervisor, then he is not catering to the "Enterprise" audience when showing enterprise products. The audience will then remain the random kids from the desktop market who think they know better about the server side. The person they might be speaking to on the forum could hold a post-doc position in Computer Science and Engineering for all you know (hint), so you may want to watch the knowledge droplets before I send my 300-page peer-reviewed thesis your way rather than random websites. Certain problems are specifically useful to solve in a VM environment, such as Fourier transforms (specifically the Fast Fourier Transform), due to cache coherency.

 

P.S.: A real professional doesn't need to try so hard. In addition, just look at your lingo: surrounding words in **stars** does not make you sound smart. Just write normally and you will do just fine.

Hah. I get the feeling a real professional doesn't need to do whatever I do, and does need to do whatever you do. Great argument there. "Since you are not me, you are not professional!" Whatever.

 

Linus caters to desktop people. Gamers and individuals are his target audience. I...really can't imagine anything above a mom-and-pop shop looking at Linus's videos and making purchases based on them. The reason he releases these videos is to answer the 'What if...?' style questions some people have about whether a Xeon is really better for gaming than a desktop CPU, or if SSDs in raid are super-duper fast, or do they hit some practical limit after 2-3 SSDs? He makes these videos not for enterprise customers, but because it's fun to answer things most people can only do as thought experiments. I am honored you created an account on LTT just to pick a fight with me.

 

As for FFTs, I'm wondering how you think context switching an entire VM in and out of memory is less of a performance impact than context switching a thread or process. FFTs aren't storage bound anyway, so I fail to understand their relevance to this discussion. Most of an FFT's work is done in the cache and ALU/FPUs. They're great for turning processors into space heaters...and not really at stressing much else on a system. If you meant to sound smart, well, I've worked on parallel computing problems myself and the tricks you need to use to hide latencies are indeed comical. If VMs solve that latency problem for you, great. For us, we just had to tweak our algorithms until we could hide 90% or more of the latencies behind buffers and go. I realize you can't do that with every problem...but I also fail to understand how using a VM would make that issue better. I suspect the VMs you are using are for isolating workers from each other and normalizing hardware contention, not for any actual performance reason.

 

You're welcome to post your thesis here. For all you know I'm actually qualified to read and understand it. Remember, you picked this fight, not me. Sometimes to win at measuring male anatomy you actually have to have the biggest one.


First off, they aren't V300s

Second, I have a V300, and it boots Windows 7 in under 10 seconds. They are great drives.

 I have one in my fiance's PC.  Everyone told me not to buy it.  Runs perfectly fine for a cheap SSD.  :)


  • 2 weeks later...

Just watched this video; it made me laugh at how much things have changed in two years. Start at 5:45.

 


  • 6 months later...

@LinusTech did you use reverse SAS cables or forward?


 

