Marvin_Nor

Member
  • Posts: 8

Marvin_Nor's Achievements

  1. Haha, pretty much. There's a lot of it coming from Ignite, and some of the firms we work with also state that 2019 is better. Here's a link that summarizes some of the 2019 improvements: https://blogs.technet.microsoft.com/filecab/2018/06/27/windows-server-summit-recap/ Of these five, I am personally most interested in point three. Here are some good Ignite sessions too: https://myignite.techcommunity.microsoft.com/sessions/65880
  2. This also introduces resiliency tiering (there's a rough sketch below, after the post list); parity is intended for archive storage in Storage Spaces / S2D. It can cause some latency, since parity in Storage Spaces is greatly impacted by CPU speed. Resiliency tiering works on 2016, but it's not all that good for big workloads. I've heard it's actually working wonders in 2019, but I have no personal experience with it. Yeah, I think the only reason to pin files is in VDI environments.
  3. In that case the Journal would be write and read cache for the HDDs, but write-only cache for the two SSDs in the fast tier. It would also shuffle data between the slow and fast tier when necessary, while caching the data. The benefit would be more capacity, and data would be retained on the SSDs for a longer period thanks to the fast tier, while being moved onto the SSDs faster thanks to the Journal (which caches writes while data moves from the slow to the fast tier). So it could be a gain if file usage isn't consistent but more sporadic. It really depends on usage, I would say.
  4. Yeah, we considered a lot of solutions, including Nutanix, vSAN and StarWind, but ended up on S2D due to familiarity, simplicity, performance, and already having it licensed through our licensing model. SMB Multichannel is great! We're using SMB Direct (RDMA), and it's working wonders to be honest. Yeah, Windows has come a long way with 2016 and 2019 regarding storage, to be fair. And it's pushing some great performance numbers, especially with 2019 and the use of persistent memory as cache. With that said, it just came to mind: Linus could force the fast disk tier to behave like cache (as it would in S2D) instead of a fast tier, if desired. You'd lose out on the capacity, but it could improve overall performance, and it also shuffles data in and out of the fast disks quicker. After creating the pool, you could run this little command to force the SSDs into Journal mode (cache mode): Get-PhysicalDisk -FriendlyName "*part of SSD name*" | Set-PhysicalDisk -Usage Journal Then create a virtual disk with a volume: New-Volume -StoragePoolName *Pool* -FriendlyName "Volume01" -Size 250GB -ResiliencySettingName "Mirror" -FileSystem NTFS -AccessPath "E:" -ProvisioningType Fixed You should also set the column count back to the default (he had 6 HDDs if I am not mistaken, so 1 column per disk pair, i.e. 3 columns) before creating the volume; there's a fuller sketch below, after the post list. Think this would work.
  5. Ah, right, I come from a primarily S2D environment with SSD + HDD or NVMe + SSD + HDD, where it won't ever give you the option to define a fast tier unless it's a three-tiered solution, or you define custom tiers and create a vDisk based on your own tiers. So it fully commits all fast disks as cache. Yeah, the cache you're seeing is just a dedicated space on the fast tier, as opposed to S2D where all the fast drives will be cache, again, unless you define it otherwise. Personally I've had bad experiences with standard SS: reaching 500k IOPS is no issue, but when I've put workloads on it, the latency really starts growing (used it for backup). So I switched over to S2D for everything, 1-2M IOPS and still below 1ms latency, happy with that.
  6. Is that a Storage Spaces or Storage Spaces Direct setup? The GUI will also always show how much of the cache is dedicated to the virtual disk, as if it was capacity.
  7. The default is to make the faster disks "cache", i.e. they don't count towards capacity (this applies to both 2016 and 2019). You can make it a true two-tier setup through PowerShell, which will make it reserve some capacity on the fast tier for "cache" (1GB or whatever you set; there's a sketch below, after the post list). Yeah, most fancy features are ReFS only, like resiliency tiering and so on.
  8. Hey Linus, I work with Storage Spaces and Storage Spaces Direct at my workplace. If you'd like some help with setting up a Storage Spaces or S2D (Storage Spaces Direct) environment, let me know. There are a lot of documented best practices, but there are also some field best practices regarding how to set up Storage Spaces for good performance. You can run into huge bottlenecks when it's not configured as it should be, or when using the wrong hardware. There are also some hardware requirements you should meet, especially for disks, so that you don't run into throttling issues. This comes down to disk firmware in most cases (there have been cases where people saw their performance quadruple after a firmware upgrade on their disks). Disk sizing can also be an issue: if you under-budget the cache tier, you'll run into performance problems. The rule of thumb is that cache should be 15% of the cold data in size (before mirroring/parity). Also, ReFS vs NTFS is something to look into; ReFS uses Block Cache (an in-memory cache) on top of the Cache Tier (SSD or NVMe). If you're going to invest in new hardware, a recommendation would be 2 x S2D servers (as a minimum) set up in a 2-way mirror.
     Also, to shed some light on the way Storage Spaces works with cache: All writes are done to the fastest tier first; NVMe > SSD > HDD. NVMe will work as write cache for SSD and HDD. NVMe will only work as read cache for HDD; it's not a read cache for SSD. SSD will work as both write and read cache for HDD. Data is cached based on a usage algorithm, so it will only shuffle files down from cache when they're rarely used, or when a file on the slow tier is consistently used more than a file in the cache tier. This also means all newly written files will be cached for a while. By default the cache will also shuffle out files once it's 70% full, so that it has free room for new writes.
     Storage Spaces / S2D doesn't really tier data either, unless you have a three-tier setup: NVMe, SSD and HDD. With only two "tiers", one will be cache and the other will be storage. In a three-tier solution, NVMe will be cache, SSD will be the fast tier and HDD will be the cold tier (there's a setup sketch below).
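
A rough sketch of the resiliency tiering mentioned in post 2, reading it as mirror-accelerated parity on ReFS. It assumes an existing pool called Pool01; the tier names and sizes are placeholders, not a tested recipe:

# One mirror tier for hot writes and one parity tier for archive data (names/sizes are examples)
New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "MirrorTier" -MediaType SSD -ResiliencySettingName Mirror
New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "ParityTier" -MediaType HDD -ResiliencySettingName Parity

# Mirror-accelerated parity volume on ReFS; writes land in the mirror tier and rotate into parity later
New-Volume -StoragePoolFriendlyName "Pool01" -FriendlyName "Archive01" -FileSystem ReFS -StorageTierFriendlyNames "MirrorTier","ParityTier" -StorageTierSizes 200GB,2TB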
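The full sequence from post 4 in one place, with the column count included. The disk name filter, pool name and size are placeholders, and 3 columns is my reading of "1 column per disk pair" for 6 HDDs in a 2-way mirror, passed straight to New-Volume here:

# Flag the SSDs as Journal (cache) so they no longer count towards capacity
Get-PhysicalDisk | Where-Object FriendlyName -Like "*part of SSD name*" | Set-PhysicalDisk -Usage Journal

# Mirrored volume on the HDDs, with the column count set explicitly at creation
New-Volume -StoragePoolFriendlyName "Pool01" -FriendlyName "Volume01" -Size 250GB -ResiliencySettingName "Mirror" -NumberOfColumns 3 -FileSystem NTFS -AccessPath "E:" -ProvisioningType Fixed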
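For the true two-tier setup from post 7, a sketch where both tiers count as capacity and only a small slice of the fast tier is reserved as write-back cache (the "1GB or whatever you set"). Pool name, tier names and sizes are examples:

# Define an SSD tier and an HDD tier in the existing pool
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool01" -FriendlyName "HDDTier" -MediaType HDD

# Tiered virtual disk: SSD capacity is usable, with just 1GB reserved as write-back cache
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "TieredDisk01" -StorageTiers $ssdTier,$hddTier -StorageTierSizes 100GB,900GB -ResiliencySettingName Mirror -WriteCacheSize 1GB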
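Finally, for post 8, a sketch of what a three-tier S2D volume could look like once the cluster exists, plus the 15% rule as plain arithmetic. Tier names follow the Performance/Capacity defaults S2D usually creates; the pool wildcard and sizes are examples only:

# Enable S2D on the cluster; with NVMe + SSD + HDD present, the NVMe devices are claimed as cache automatically
Enable-ClusterStorageSpacesDirect

# Volume spanning the SSD (Performance) and HDD (Capacity) tiers; the NVMe cache sits outside the volume
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" -FileSystem CSVFS_ReFS -StorageTierFriendlyNames "Performance","Capacity" -StorageTierSizes 1TB,9TB

# Rule of thumb from the post: cache should be about 15% of the cold data, before mirroring/parity
$coldDataTB = 20
$cacheTB = $coldDataTB * 0.15   # 3 TB of raw cache devices for 20 TB of cold data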