RAID Storage Solution - Video Editing & Caching

Hi all,  I've searched for an answer to a few questions, and haven't gotten a clear answer.

 

I currently have a 10GbE client-to-client setup, where my server is direct-connected to my workstation (both run Windows 10).  I should preface this by saying my 10GbE link is fully capable of reaching 1.1GB/s via RAM drive to RAM drive transfers (tested with a 9GB file).  I have an LSI 9266-8i RAID card in my server with both a RAID 6 and a RAID 0 array: 4x 6TB WD Black drives in RAID 6 and 4x 2TB WD Black drives in RAID 0.

 

I use the RAID 0 as my working drive, and the RAID 6 as the archive and mirror of my working projects.  CrystalDiskMark shows 500-650MB/s transfer speeds for each, and I get that in real-life tests as well.

 

Here's my problem.  I'm working with 1080p RAW files and I get good results with one stream of video on a timeline.  If I add an additional stream I run into hiccups.  I also run into hiccups when doing compositing.  I'm using the Adobe CC suite with Premiere and After Effects.  If I have two clips stacked on top of each other I get laggy playback when scrubbing or even just playing at normal speed.  I also get this issue if I create lossless proxies of the RAW files.  I suspect I'm getting this issue due to poor seek times from the hardware limitations of the 7200RPM hard drives.  I have watched some YouTube videos of Linus implementing SSD caching with multiple SSDs for his editors, and I was wondering if that would solve my issue.

 

Here's the rub though.  If I work with those same RAW files in DaVinci Resolve I get flawless playback.  Is this a software issue?

 

A little insight into both the software side of things and the SSD caching would be great.

 

I'm wondering what the hardware configuration looks like for using LSI's CacheCade.  Do I lose drives if I add SSDs to cache with?  Essentially I have 8 SATA ports from my LSI 9266-8i card, and I use all 8.  Would I lose 2 of them if I added 2 SSDs for caching?

 

Thanks for any help.


On ‎6‎/‎10‎/‎2016 at 10:24 AM, Strikermed said:

-snip-

Ah, sorry, I meant to reply a few days ago but I got caught up in Physics HW.

 

Dang, really nice that you have a 10Gb/s network that works. I plan to get some 10Gb/s NICs with time.

 

First question, what system specs do you have for your main PC? It's possible that you do have the storage bandwidth, but the software is either not using the GPU for playback or your PC is limiting it. Also, usually for video editing you have a lot of RAM, and the project files you're working on stay in RAM (or at least parts of them). Also, you might check the format you're editing in. I read that some editors convert to a working format which supports GPU acceleration (which plays back way faster than CPU), and then carry the changes over to the RAW file. It may be a software issue. You can test by putting a small project on a RAM disk on the server (or work locally off an SSD on the main PC)...if it still lags even with that, then it's not the storage system.
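If you want to put a rough number on whether the share itself is the limit, here's a quick Python sketch (just an illustration; the paths are placeholders, so point them at a large clip that already exists on each volume, ideally bigger than your RAM so Windows file caching doesn't skew a second run):

import time

def read_throughput(path, block_size=8 * 1024 * 1024):
    # Read the whole file in 8MB chunks and return MB/s.
    total = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    return (total / (1024 * 1024)) / (time.perf_counter() - start)

# Placeholder locations -- substitute your own share / local SSD / RAM disk paths.
for label, path in [("RAID 0 share", r"\\server\working\testclip.mov"),
                    ("local SSD", r"D:\scratch\testclip.mov"),
                    ("RAM disk", r"R:\testclip.mov")]:
    print(label, round(read_throughput(path)), "MB/s")

If the share and the local SSD come back in the same ballpark but Premiere still stutters on the share, the storage probably isn't the culprit.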

 

If it is a hard drive bottleneck, then yeah, SSD caching will help. Sadly CacheCade isn't exactly cheap...

Yes, you lose drive ports to SSD caching unless you buy a SAS expander to expand your RAID card's ports.

Do note that the max size for the SSD cache array is four drives (If I remember correctly), so don't buy a lot of SSDs.

 

Yes, you would lose two ports to the SSD cache if you don't buy a SAS expander.

The good thing about SAS is that you are able to split SAS out to more ports via a SAS expander (Since you probably aren't remotely pushing your LSI controller with only eight hard drives).


Thanks for the info.  Getting 10GbE working was challenging; it took three reinstalls of Windows Server 2012 R2 before I caved and went with Windows 10 Pro, which almost instantly gave me the network performance I wanted without all the tweaking Windows Server needed.  For my situation it works, since I don't have a ton of clients and I don't need anything other than a file share/network-mappable drive.

 

Essentially, my workstation and my server have the exact same hardware minus the RAID card and the respective hard drives that go with it.

 

I'll try the RAM drive idea to test whether it's the file format or the file server not delivering enough throughput.  I'll also test the file format theory.  Do you happen to know a good file format that I can work with, and the workflow that goes along with converting back to the RAW files for the final export?  I'm assuming it's similar to a proxy workflow, but I'm not entirely sure how to do that in Premiere.  When I work with proxies, it's usually a round trip with DaVinci Resolve.

 

For clarity on my workstation PC:

Intel i7-3930K (overclocked to 4.6GHz)

Asus Sabertooth X79 motherboard

32GB Corsair Vengeance RAM

GTX 580

120GB SSD for Adobe cache (thinking about upgrading this; not entirely sure, but I think it's SATA II)

500GB Samsung 840 SSD (for OS)

Intel X540-T1

Windows 10

 

Server:

Intel i7-3930K (not overclocked)

Asus PX79 motherboard

32GB Corsair Vengeance RAM

GTX 670

1TB SanDisk Pro SSD (for OS)

Intel X540-T1

LSI MegaRAID 9266-8i (RAID 6: 4x 6TB WD Black; RAID 0: 4x 2TB WD Black)

Windows 10

 

I'll do some tests to verify whether I need CacheCade... The $300 price tag doesn't bother me too much (if that's the correct price range for CacheCade).  I know the SSDs will push my budget further, but if it solves my problem, then I won't have any issues scaling in the future.

 

I'll give those tests a go, and I'll report back with my findings.  

 

If you know of any file formats that will use the GPU, and the corresponding proxy workflow, let me know in the meantime.

 

Thanks for the reply!


7 hours ago, Strikermed said:

-snip-

Hmm, I hope I don't run into the same troubles when I try to get 10Gb/s on my server (It runs 2012 R2). Good to hear you got it working though.

 

Yeah, it is a proxy workflow; I think Linus used Cineform for his workflow. Sorry, I'm not too knowledgeable about video editing (other than the basics of Premiere Pro). You'd have to play with it and monitor the GPU load to see if it's getting used or not.

 

I'm doubtful it's hardware related, as both machines have pretty capable CPUs and GPUs, so it doesn't appear to be a hardware bottleneck (aside from maybe storage...but that's up in the air at the moment).

 

You'll also need to buy a SAS expander to use CacheCade if you want to keep all eight existing drives and have a caching array (additional cost).

 

I wish you luck and hope to hear the results.


Thanks man!  I've been in the process of running Cat6a throughout my house this weekend and week, so it will be some time before I get this test done (hopefully I'll be able to test this weekend).

 

I can tell you what problems I ran into on Server 2012.  First, you need to disable encryption.  That was the first hurdle that took a while to get over.  The other issues were related to the firewall, which I'm not as familiar with (hence why I went with Windows 10); I couldn't connect very easily using network drives.

 

In reference to the Intel NIC configuration, just make sure you set the RSS queues and anything that says "RSS" to the number of physical cores you have.  Then max out jumbo packets, and ensure that your receive buffers are maxed, receive side scaling is on, and your transmit buffers are maxed.

 

Also check out Cinevate's blog, "Confession of a 10GbE Newbie."  I actually discovered many of these things from the 6-part blog, and through comments the author and I went back and forth on.  He actually added the encryption section to his blog after our troubleshooting discussion.

 

Good luck, and like I said, I'll return with my findings about using a different codec when I get the results.


Ok, I have another question for you.  I'm trying to gather all the information I can on SAS expanders, and I'm more confused than when I started.

 

I have an LSI 9266-8i, which has 2 mini-SAS SFF-8087 connectors that can each break out to 4 SATA drives (or SAS if you choose).  My question is how a SAS expander comes into play, and what kind of expander I can use.  I've seen some expanders that run in a PCIe slot, and I've seen some that are powered by a Molex connector that you can screw into your case.  Like this one: http://www.newegg.com/Product/Product.aspx?Item=9SIA24G2HM6717&cm_re=LSI_sas_expander-_-16-117-306-_-Product

Another thing I'm confused about is the capability of my RAID card.  It says 6Gb/s per port; does that mean I have 6Gb/s on each of the 8 SATA ports, or on the SFF-8087 ports?  This also makes me think about an expander... Since my card can support multiple RAID levels (hence RAID 0 and RAID 6 running on the same card) and up to 128 drives, would it just be wiser and cheaper for me to get an expander to reach my 12-drive finished product, and also add 2-4 SSDs for CacheCade if needed?  This would eliminate the need to buy an additional RAID card, which I was originally planning to do.

 

What I'm trying to figure out is whether I can use an expander, which one I should be looking at, and whether I will run into any bottlenecks if I do use one.  Ideally I'd like to future-proof the system in case I want to drop a ton of SSDs in there to maximize throughput.

 

Thanks in advance


20 hours ago, Strikermed said:

-snip-

Ah, just for future reference, please quote me in your reply so I know you replied...I almost didn't see this.

 

Think of a SAS expander as a network switch. On that Intel SAS expander you linked, you have nine SAS ports. Any of them can be inputs, any of them can be outputs. Usually you feed the two SAS cables from your RAID card to the SAS expander to get the full bandwidth of the RAID card. So out of the nine ports on the expander, two will be inputs in your case. The other seven are outputs for you, and you can break each of them out into four drives (28 total drives via SAS-to-4x-SATA breakout cables). The beauty of SAS is that you can expand it just like a network, and talk to SATA if you need to.

 

Yeah, there are two types of expanders, the PCIe one or the separate card. I personally would get the separate card and just Velcro it in the case somewhere, because I don't like the PCIe one taking up a PCIe slot for no reason (it doesn't actually talk to the PC at all...it just uses the slot for power).

 

Ah, let me clear that up for you. SAS has four lanes in each connector (notice the two-connector LSI RAID cards are "8i" and the one-connector cards are "4i"). Each lane is 6Gb/s (600MB/s max). So in total, the two SAS ports have eight lanes, giving you eight 6Gb/s lanes or 48Gb/s total (4,800MB/s). However, I believe the maximum limit of the RAID card itself is 2.4 or 2.6GB/s (I read it in an LSI document somewhere; I can't remember exactly, I just remember it was slightly above 2GB/s). That entire bandwidth can be broken out to SATA (and shared between the drives), expanded via SAS expanders, or fed into SAS drives.
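If it helps, here's that arithmetic written out as a tiny script (just a sketch; the controller ceiling at the end is the roughly 2.4-2.6GB/s figure from the LSI document mentioned above, not something the math derives):

# Back-of-the-envelope SAS bandwidth for an "8i" card like the 9266-8i.
lanes_per_connector = 4   # each SFF-8087 connector carries 4 SAS lanes
connectors = 2            # "8i" = two internal connectors = 8 lanes total
mb_per_lane = 600         # 6Gb/s per lane is roughly 600MB/s after encoding overhead

link_bandwidth = lanes_per_connector * connectors * mb_per_lane
print(link_bandwidth, "MB/s of raw link bandwidth")     # 4800 MB/s
print("practical controller ceiling: ~2400-2600 MB/s")  # approximate figure per the LSI doc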

 

You might well be reaching the limits of the RAID controller (it depends on which RAID level and how many drives).

 

Also, you can feed just one input from the RAID card to the SAS expander and have the other RAID card port break out directly into four SSDs, if you want to do it that way.

However, a single SAS link to the expander would bottleneck the hard drives as you add more of them to the mix. I would feed everything through the expander into the RAID card using both ports.


On 6/10/2016 at 8:24 AM, Strikermed said:

-snip-

What CPU/GPU, and how much RAM? Seems like a CPU/RAM/GPU problem, not a storage issue. RAW video takes lots of CPU to decode. Also consider converting the video to a lower-quality format (50Mbps ProRes/DNxHD/CineForm) and editing in that, then use the RAW when you do color. There's no point using the RAW video while editing.


On 6/17/2016 at 2:30 PM, Electronics Wizardy said:

-snip-

Thanks for the feedback; I'm being led to believe that as well.  I have found that I'm not peaking above 60% CPU usage on an i7-3930K (six cores) with 32GB of RAM.  I'm also not seeing high percentages on my RAID array (max 40% when initially reading, then it drops).


20 minutes ago, Strikermed said:

-snip-

Try lowering the playback resolution. RAW video is very CPU-heavy.


On 6/17/2016 at 1:46 PM, scottyseng said:

-snip-

Sorry about that, I'll hit quote from now on.

 

Thanks for the info.  That definitely gives me a better understanding of how SAS expanders work.  I'm still trying to find my bottleneck with Premiere.  It very well might be the codec/file type I'm using. 

 

I've done some searches, and it seems I can't find much literature on GPU-accelerated codecs.  I'm trying to find the best codec to edit in.  Ideally it would be something with 10-bit 4:4:4 color, but 4:2:2 could work.  I work mainly in the Windows environment, with the occasional MacBook Pro for small edits.

 

Anyone have any YouTube videos or websites describing the best codecs to transcode footage to for good performance on the Adobe Premiere timeline?


1 minute ago, Strikermed said:

-snip-

Yeah, SAS is very fun, I like it more than SATA (Sadly SAS is so expensive..). The other fun thing is that with real SAS drives, you can have more than one RAID card accessing the same drive at the same time (If the RAID card supports it).

 

It does sound like a Premiere issue; it doesn't surprise me, because GPU acceleration in Adobe software is still kind of flaky (Adobe, where is my true GPU acceleration for Lightroom?).

As for file formats, sorry, I'm not completely knowledgeable in the world of video editing, so I can't help you much there. You just kind of have to try different video formats and see what actually uses the GPU (you should see the GPU usage increase).

 

I'm not sure if it helps too much, but here's Linus' video on their workflow discovery:

 


2 weeks later...

On 6/20/2016 at 1:56 PM, scottyseng said:

-snip-

So it does appear that the file format I am using is a bottleneck.  I can see that certain formats tend to lean on the hard drive speed over the CPU, and others use the CPU and like 1% hard drive.  However, I am curious what kind of performance gains I could get in After Effects using SSDs.  I do get a lot of performance gains in Lightroom, and could quite possibly see the same in After Effects (especially with many layers).  However, I need to do more multi-threaded testing with more HDD-intensive formats before I make the jump.  It's all exciting, and I would love to play with it all, but I think I need to make decisions based on need this go around :(.  Thanks for the help.

 

If you know of any expanders that you would suggest, and the best way to install them, I would greatly appreciate it!  Thanks!


16 hours ago, Strikermed said:

-snip-

You should also monitor the GPU usage to see which formats use it the most (Ideally you want the format to use the GPU and CPU the most).

 

You could toy around with a single SSD just to see how that helps your testing.

 

For me personally with Lightroom, version 6.2 to 6.3 (or newer now) had a massive boost in previewing speed. It was actually using more of my Xeon's CPU...it used to take like 3-5 seconds to preview an image (without preview rendering); now it's near instant. It also exports using all threads now.

 

Yeah, it's a lot of testing and you definitely don't want to spend money where you don't really have a bottleneck.

 

http://www.newegg.com/Product/Product.aspx?Item=9SIA24G2HM6717&cm_re=intel_sas_expander-_-16-117-306-_-Product

 

This is a 9-port expander (though two are used for input from your RAID card, leaving 7 for output; you can break each of those seven out into four SATA drives). You can also look up the 24-lane version (5 ports) or the newer 12Gb/s SAS version (it costs quite a bit more though).


On 6/17/2016 at 1:46 PM, scottyseng said:

-snip-

So I understand this much better now that I've read a lot of documentation.  The one thing I haven't been able to get a good answer on is migrating an existing RAID to a SAS expander.  For example, I have 2 RAID sets: a RAID 0 comprised of 4 drives and a RAID 6 comprised of 4 drives.  How do I ensure that these RAID sets stay intact when I take the SAS connectors from the RAID card, connect them to the SAS expander, and then connect the SAS expander to the individual drives?  Normally you would have to make sure each drive is connected to the appropriate numbered cable to ensure that you will be able to recover your RAID.

 

I haven't done any testing with this, as my data is almost fully backed up on a NAS, but I'd rather not have to copy everything back from the 1GbE NAS.  It would be ideal to be able to plug and play, with every drive and RAID group accessible.

 

Any advice on adding the SAS expander without losing my RAID sets or damaging them?

 

Thanks


8 hours ago, Strikermed said:

-snip-

A SAS expander is much like a network switch. It is completely transparent; all it does is route SAS traffic to the RAID controller or HBA card. It just appears as if the RAID card has more SAS ports than it actually has.

 

You do not need to worry about your RAID arrays going bad; they won't. The RAID card will pick up all the drives regardless of port. You do not need to worry about labeling which SAS cable goes to which drive; it doesn't matter. The RAID card just cares that all of the drives are plugged into it. However, it's better for tracking purposes to have the drive sleds labelled. If you have a proper server though, you don't need to label at all; your RAID card can light up the location of the drive you wish to find.

 

I myself have shuffled the drives around the 24 bays in my Supermicro server, and the RAID card does not care as long as the drives are there (I have a six-drive RAID 6 and an eight-drive RAID 6).

 

If you had motherboard RAID, then you would need to worry about SATA labels / getting the correct ports (you would not want one RAID drive plugged into a third-party SATA controller on the motherboard instead of the Intel one you're usually using).

 

SAS is different though; it can have multiple devices on a single SAS port, in any order. That's the beauty of hardware RAID and of SAS-based HBA cards for software RAID like FreeNAS.


5 hours ago, scottyseng said:

-snip-

Ok cool!  That's great to hear... Now, I know we had discussed possible issues with my drives dropping out, but I finally got the guts to continue when it drops out, and I discovered that the MegaRAID Storage Manager log says:

 

Controller ID: 0 PD Missing SasAddr=0x0, ArrayRef=1, RowIndex=0x3, EndPD=0xfc, Slot=6

 

I went back through my logs and found that the same slot is the issue every time I have a problem at startup.  Could this be a disk issue, or is it possibly a backplane issue?  I suspect the backplane.  Unfortunately, I couldn't physically tell which disk it was, because my case only has a blue activity light and no indicator for the bad drive :(...  Is there any way I can tell which SATA connection it is, given that the cables are numbered (by the manufacturer), as long as I know which SFF-8087 connector those 4 SATA cables break out from?  Does the above info give any indication of this?

 

I mean, I suppose I could shut it down and record the serial number of the bad drive to see which drive it is, and then use a free slot in my case to see whether it's a bad backplane or the drive that's bad...


23 minutes ago, Strikermed said:

-snip-

Your array was working fine before, correct? Does your array show up now, or is it degraded? It does sound like either a backplane failure or a drive died on you. In rare cases, the SAS expander might be bad as well (I've seen it happen before). I would start by plugging the array straight into the RAID card (no SAS expander) to double-check that the drives are okay. If you still get the error, one of your drives is acting up. Also, MegaRAID Storage Manager lists the serial numbers of the hard drives in the properties pane if you want to do it that way as well.

 

If you have a backplane, you can right-click the drive you wish to trace, and you should have two LEDs per drive tray: one is drive activity, and the other is the drive location LED. On my Supermicro chassis, the drive activity LED is blue and the location LED is red.



@Strikermed Just a note before you actually look at buying CacheCade: do a lot of research into benchmarks first and get a feel for how it works and what to expect. Results aren't always that good, depending on workload type etc.

 

I actually have an LSI 9361-8i with CacheCade 2.0 and FastPath, features which I no longer use. I had compatibility issues with both Samsung 840 Pros and 850 Pros where the firmware on the SSD gets all screwed up, performance tanks to 30MB/s, and I had to take them out of the server, plug them into my desktop, and run a diskpart clean on them to revive them (multiple times, until I just gave up). Use a SandForce-based SSD like an Intel 535, since SandForce is an LSI controller and should play much nicer with an LSI RAID card.

 

Storage Spaces is a better option for SSD caching, and cheaper too since it's built into Windows. You can leave your disks behind the RAID card and create a tiered simple storage space using one or more SSDs plus the RAID array. You would have to back up your data to do this. You could also use Storage Spaces exclusively and configure the LSI card to pass the disks through to the OS, but there are some nice things that hardware RAID does over Storage Spaces. I currently use Storage Spaces as well as a separate hardware RAID.

 

It may be better to stick with the current advice of finding a more optimal workflow and file format rather than complicating things with SSD caching etc.; it's extra cost you may not need and more potential for issues.

 

Also, I use the same Intel X540-T1 cards and directly connect my desktop to my server as well :). Oh, and a ton of internet points to @scottyseng for his excellent advice/help.


Thanks for the reply. I unfortunately purchased two 850 Pros to test CacheCade Pro (wish I had seen leadeater's post prior to the purchase).  I looked at the compatible drive list, and they are listed (not the exact model number, due to the generation changes they make).  I have a 30-day free trial of CacheCade, and I expect to test this heavily in that time period.  I may consider exchanging these if I can find a similarly priced SandForce-based drive, per your recommendation.

 

As for @scottyseng's feedback: I am currently testing stability in another backplane position.  The drive isn't very old (less than 6 months, and without continuous usage).  I am doing up to 5 shutdowns and startups per night this week to mimic the conditions where I get the disk drop-out issue.  I found that when I continue by pressing any key (I get an annoying alarm), I'm able to simply mark the drive offline and then back online in the Storage Manager (before doing this it says "rebuild" next to the drive).  When I do this, everything acts normally.  I have a feeling the backplane causes an issue at startup, as power is always applied to the drive.  Also, I have a Rosewill 4U 12-bay case, which only has a single blue light per drive indicating power and activity... no red light to help with identification.  I do have numbered cables inside to help with identifying drives (I may eventually write the numbers on the slots with a silver marker).  I'll report back with my findings later this week.

 

Thanks for the Help guys!  New to this server stuff, but it's fun!


1 hour ago, Strikermed said:

-snip-

You could be fine; it may have just been my use case and my LSI model that were contributing to the issue. I was using it in my IBM x3500 M4 running ESXi 5.5 and 6.0. It may have been my combination of hardware and software that caused the issue, but my advice to use SandForce-based SSDs still stands if you're purchasing new.

 

Most benchmarks and reviews I've seen online are for direct Windows installs, not for use as a cache on an ESXi datastore with many VMs running on it.

 

The Samsung SSDs have an interesting support history for LSI CacheCade. They were originally on the HCL but got removed, probably due to issues like mine or similar, then added back on a year or more later.


8 hours ago, leadeater said:

-snip-

Interesting.  I did notice that drive support expanded across the documentation for the different versions of CacheCade Pro; 2.0 is the version I'm referring to that has support for the Samsung drives.  I can say that I did see the Samsung drives pop on and off the list, and I was happy to find them on the latest version I found.  If this doesn't work, I'll use the drives elsewhere and try Kingston drives, since those are what Linus uses in the same way in his video system.  Although, I'm trying to do this on a budget!


On 6/10/2016 at 11:24 AM, Strikermed said:

-snip-

 

On 6/20/2016 at 2:12 PM, Strikermed said:

-snip-

If you are running with hyper-threading turned on, your CPU percentage might only hit 60% as reported by Task Manager, but the underlying physical cores can be maxed out (with 6 cores/12 threads, six fully loaded cores can report as only ~50-60% overall). I found this out when playing Cities: Skylines, trying to diagnose why my simulation wouldn't run much faster than 1x speed. Try turning off hyper-threading, run your editing workflow again, and see if you hit 100%.
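If you want to see that masking without rebooting into the BIOS, a quick per-logical-core sample makes it obvious (just a sketch using the psutil Python package; run it while you scrub the timeline):

import psutil  # pip install psutil

print("physical cores:", psutil.cpu_count(logical=False))
print("logical cores:", psutil.cpu_count(logical=True))

# Print per-logical-core load once per second; saturated physical cores show up
# as busy logical cores even when the overall Task Manager number reads ~50-60%.
for _ in range(10):
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    print(" ".join(f"{p:5.1f}" for p in per_core))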

 

It's entirely possible that Adobe's file access patterns are different when you have multiple clips in the same project. In this case, it is the software that is the problem. If you're editing on hard drives, you will definitely feel the pain, because the cost of changing the read location is very high. An SSD would help mitigate this, but given the files you're working with and your budget, that may or may not be feasible.

 

You mention that you have an LSI 9266; do you have an onboard cache module for the RAID controller? They're sold separately, and they dramatically improve performance for non-sequential access patterns.



2 hours ago, wpirobotbuilder said:

 

-snip-

Hey thanks for the info.

 

My RAID controller has 1GB of onboard cache.  What does LSI call those add-on cache devices? Mine doesn't have swappable RAM (I wish it did, as I would put the biggest module I could in there).

 

Turning off hyper-threading is interesting, as I haven't tried that, and I've heard of some people finding success with it in certain applications.  I can try that as well for playback performance.  I'll give that a go prior to attempting CacheCade to see what kind of difference it makes.

 

Right now I have a pretty good SSD workflow, but project size gets much larger on particular projects, where I get into the TB range.  I have a timelapse project I'm working on that is in excess of 2TB, and I recently had a RAW-workflow project that took me into the 1TB range with the creation of proxies and whatnot.  Ideally I would like to edit in the native formats without proxies in the future, all from one drive for redundancy and ease of use.  I'd also like to see better playback performance on my timeline, and I think much of that problem is file type.  But there is also the issue you just mentioned with multiple layers playing at the same time, like when I work in After Effects, where an SSD can come in handy.

 

I'm certainly going to do some testing with the two 512GB Samsung 850 Pro drives that I have, and if I don't like the results I'll put them to use in some laptops or create a RAID 0 to work off of.  It's a slow process, as I work my day job and do side jobs while working on this system, and I have to keep creating local and cloud backups (which are slow on my current connection's 2.5MB/s).  So it takes time to make potentially damaging changes like adding an expander or changing RAIDs around (which I might do in the far future, like when Google Fiber finally gets to my area *crossing my fingers for under a year*), and anything with hardware upgrades, since my budget isn't huge, but over time I can slowly buy parts.


Findings with 4K footage from the Inspire 1:

The native out-of-camera format plays well on my system, but uses 40-50% CPU on the workstation to decode the files.  The server storage (RAID 0 with 4 disks) sees very little use, maybe 10% load at a max of 100MB/s according to Task Manager.

 

When I transcode to 4K DNxHR 444, I get very stuttery playback with dropped frames, etc.  Even at half resolution I get this problem.  Task Manager shows 100% usage on the RAID 0, although I'm only hitting 180 to 200MB/s, which isn't my max read speed according to CrystalDiskMark and file transfers.  Mind you, this is playing one clip after waiting for the I/O to settle to 0%.

 

When I transcode to GoPro CineForm, I get similar but slightly better results than with DNxHR 444.  I'm using the 4K CineForm 444 version of the codec, and I found that it would also drop frames at 1/2, 1/4, etc. playback resolution on the timeline.  I was seeing similar I/O activity on the RAID.

 

Any thoughts on why I'm seeing 100% utilization but only 200MB/s of activity, and getting stuttery playback with proven formats?

 

Ideally I want a format that doesn't lose much (if any) quality when transcoding, and that will play without any issues on the timeline.

 

These results are pre-CacheCade.

 

I am considering trying different stripe sizes and various other adjustments on a few different RAID0 configurations to test those results. 

 

Any suggestions, @leadeater or @wpirobotbuilder?  The RAID is primarily used for video editing and timelapse editing (so a lot of camera RAW files).  I read early on that a larger stripe size would benefit video editing, so I originally set it up that way and had intended to do some testing; I'm planning to do that testing now, before implementing CacheCade.

 

Thanks again for the feedback!
