EngineerRosco

Member
  • Posts: 5
  • Joined
  • Last visited

EngineerRosco's Achievements

  1. In your experience, what's the performance like with software RAID vs hardware RAID?
  2. Ah... Just had a look at the m'board, and I stand corrected. They are all Gen2... Yay.
  3. Yep. Luckily only the x16 is 1.1. The rest are Gen2, albeit x8...
  4. Is that even if the x16 is (frustratingly) PCIe Gen 1.1? It would also mean replacing the graphics card. But the short answer is yes, rendering is already a bottleneck in post-processing, though not yet pronounced enough to cause issues.
  5. We are currently using an older HP ProLiant ML350 G6 to run engineering-type analyses here at work, and we're finding our read/write speeds to be a massive bottleneck. We recently added a Samsung 970 Pro 1TB NVMe M.2 drive on a PCIe adapter card and have already seen improvements in overall analysis times of up to 20%. As that was a relatively cheap upgrade for a significant performance improvement, we're now exploring 2x 970 Pros in RAID 0.

     Now, before everyone tells me RAID is too risky, or asks why we'd need such speed when we wouldn't notice it day to day, let me explain a typical use case. An analysis starts from a data file of approximately 10GB and is likely to need at least 16 substeps, each with at least 5 iterations. A results file (approx. 20GB each) is written for each substep, so that's 320GB of data written; and if the analysis needs more than the 96GB of RAM, each iteration also writes files to the drive (in the order of 5-10GB each). Each file is continuous, i.e. a single file. On a large job, we could see a write total of ~700GB per analysis on a bad day (a quick sketch of that arithmetic follows after this post). This can seriously slow things down, especially considering we were previously writing to and from 3x 7200rpm HDDs in RAID 5 (eeep...). As you can see, long/medium-term storage will not happen here (we have multiple layered RAIDs elsewhere and offsite duplicates for that, updated daily); essentially what we need is a massive sandbox for "live" files.

     The issue is that we want to experiment with the 2x NVMes in RAID 0, and while we have a number of PCIe Gen2 x8 slots (which could in theory each take 2x NVMes), we can't seem to find a controller to handle a hardware RAID 0. Our options are:

     - Software RAID. We're running Win 7 Pro. (A crude way to benchmark this against a single drive is sketched below.)
     - A different NVMe in each PCIe slot and a separate controller in another PCIe slot (though this screams unbalanced/bottleneck).
     - A combination PCIe NVMe M.2 adapter with an incorporated RAID controller - see the ASUS Hyper cards, though they only do 4x NVMe on a PCIe x16 slot, and we're limited to x8 (the x16 slot has the graphics card in it...).

     Anyone have any novel solutions? Or know of a suitable combi-controller?
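A quick sanity check on the ~700GB "bad day" figure in post 5, using only the numbers quoted there. The 5GB per-iteration file size is an assumption at the low end of the quoted 5-10GB range; everything else comes straight from the post.

```python
# Write-volume arithmetic for the use case described in post 5.
substeps = 16          # at least 16 substeps per analysis
iters_per_substep = 5  # at least 5 iterations per substep
results_gb = 20        # one ~20GB results file per substep
scratch_gb = 5         # per-iteration file when the job exceeds 96GB RAM
                       # (assumed: low end of the quoted 5-10GB range)

results_total = substeps * results_gb                      # 16 * 20 = 320 GB
scratch_total = substeps * iters_per_substep * scratch_gb  # 16 * 5 * 5 = 400 GB

print(f"results files: {results_total} GB")                   # 320 GB
print(f"scratch files: {scratch_total} GB")                   # 400 GB
print(f"total written: {results_total + scratch_total} GB")   # 720 GB, i.e. the ~700GB bad day
```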
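And for the software-RAID option (and the software vs hardware question in post 1), a crude sketch of a sequential-write benchmark that could be run once against the single 970 Pro and once against the striped volume. The drive letters and file sizes are placeholders, and buffered writes plus a final fsync only roughly approximate the large continuous files described above; a dedicated benchmark tool would give more trustworthy numbers.

```python
import os
import time

def sequential_write_mbps(path, total_mb=4096, chunk_mb=64):
    """Write total_mb of data sequentially and return throughput in MB/s.

    Crude by design: one buffer reused for every chunk, with an fsync at
    the end so the OS write cache doesn't flatter the result.
    """
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    n_chunks = total_mb // chunk_mb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n_chunks):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)  # clean up the test file
    return (n_chunks * chunk_mb) / elapsed

# Hypothetical drive letters: D: the single NVMe, E: the RAID 0 volume.
for label, path in [("single NVMe", r"D:\bench.tmp"), ("RAID 0", r"E:\bench.tmp")]:
    print(f"{label}: {sequential_write_mbps(path):.0f} MB/s")
```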