
lil_killa

Member
  • Posts: 11
  • Joined
  • Last visited

Reputation Activity

  1. Agree
    lil_killa reacted to leadeater in These Servers are TOO EXPENSIVE   
    It does do that; you must either use the Windows Server GUI to configure actual Storage Spaces tiering, or use PowerShell if you're not on Windows Server (and the Desktop edition supports tiering). However, PowerShell is always the preferred configuration method for Storage Spaces.
     
    Just throwing SSDs and HDDs into a Storage Spaces Pool and creating a virtual disk is not enough; that isn't tiering. Neither is putting SSDs into a Pool and setting them to Journal mode; that isn't tiering either.
     
    This is how you configure tiering using the newer ReFS true dynamic tiering, with no scheduled task that moves hot and cold blocks around:
    #Setting storage pool to Power Protected
    Set-StoragePool -FriendlyName StoragePool1 -IsPowerProtected $true
     
    #Creating Storage Tiers
    $PerformanceTier = New-StorageTier -FriendlyName ProjectData_SSD_Mirror -MediaType SSD -StoragePoolFriendlyName StoragePool1 -ResiliencySettingName Mirror -NumberOfDataCopies 2 -NumberOfColumns 2
    $CapacityTier = New-StorageTier -FriendlyName ProjectData_HDD_Parity -MediaType HDD -StoragePoolFriendlyName StoragePool1 -ResiliencySettingName Parity -NumberOfDataCopies 1 -NumberOfColumns 3
     
    #Creating Virtual Disk
    New-VirtualDisk -StoragePoolFriendlyName StoragePool1 -FriendlyName Test -StorageTiers $PerformanceTier, $CapacityTier -StorageTierSizes 1GB, 10GB
     
    #Create Volume, use ReFS, get disk number for new virtual disk
    New-Volume -DiskNumber [DiskNumber] -FriendlyName ProjectData -FileSystem ReFS -AccessPath P:
     
    Note: The above is done this way because there is a bug where tier settings are not honored when the tiers are created using other methods.
     
    Even the older way using NTFS is still tiering, but it's not dynamic and relies on the scheduled optimization task to move the data between the tiers. Any sort of manual file pinning is not required; that is a completely optional feature that in general is not recommended, as it consumes hot tier capacity needlessly (see the sketch below).
     
    I'd also like to point out that Storage Spaces tiering is not file level, it's block/chunk level, so it does not move files between tiers; it moves data.
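     
    A minimal sketch of that older NTFS-style flow, assuming the same StoragePool1 and the tier names created above (the volume name, drive letter, and sizes here are illustrative, not from the post):
    #Create an NTFS tiered volume on the same pool; data movement is handled by the scheduled optimization task
    New-Volume -StoragePoolFriendlyName StoragePool1 -FriendlyName ProjectDataNTFS -FileSystem NTFS -DriveLetter N -StorageTierFriendlyNames ProjectData_SSD_Mirror, ProjectData_HDD_Parity -StorageTierSizes 1GB, 10GB
     
    #Run the tier optimization on demand instead of waiting for the nightly run
    Optimize-Volume -DriveLetter N -TierOptimize
     
    #Or trigger the built-in scheduled task that normally does this overnight
    Start-ScheduledTask -TaskPath "\Microsoft\Windows\Storage Tiers Management\" -TaskName "Storage Tiers Optimization"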
  2. Funny
    lil_killa reacted to leadeater in These Servers are TOO EXPENSIVE   
    Don't we all wish we were 'limited' to 16GB/s storage system performance though
  3. Agree
    lil_killa reacted to leadeater in These Servers are TOO EXPENSIVE   
    It's all good, I enjoyed watching it fail more, as a viewer.
     
    That's still my pick for you guys anyway, as much as I like tiering in principle I prefer the guarantee of all SSD performance. 
  4. Like
    lil_killa got a reaction from leadeater in These Servers are TOO EXPENSIVE   
    Yeah, there's a huge leap between Enterprise setups and consumer ones. I would say most SAN solutions can get pretty expensive once you start throwing Enterprise storage into the mix, but @leadeater did say he found some good pricing. However, the big problem is still the lack of knowledge required to properly set up the storage. There are tons of settings and configurations to get an optimally performing array. I work with Storage Spaces all day long and there are still little tiny things that make it upset.
     
    One solution I've done for someone is just to set up a Storage Spaces Direct cluster with enough cache to handle the workloads. Once you have enough HDD storage to handle at least 25% of the "Cold" workload, users tend not to notice because not everyone is pulling from "Cold" storage all at once.

    Also, there are commands to pin files/folders to Hot storage. It shouldn't be hard to write some sort of automation to automatically pin critical files and unpin ones that are no longer needed (see the sketch below). Furthermore, I think he should have taken a better look at the setup to see if there are any bottlenecks on the networking side. Two Optane 900Ps can do about 5GB/s max, but his networking maxes out around 1GB/s, which might have resulted in higher latency.
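     
    A rough sketch of that pin/unpin automation, assuming a tiered NTFS volume on D:, an SSD tier named ProjectData_SSD_Mirror, and a "critical" folder path you maintain yourself (all paths and names here are placeholders, not from the post):
    #Pin everything under the critical path to the SSD tier
    Get-ChildItem -Path 'D:\Projects\Active' -Recurse -File |
        ForEach-Object { Set-FileStorageTier -FilePath $_.FullName -DesiredStorageTierFriendlyName 'ProjectData_SSD_Mirror' }
     
    #Unpin anything currently pinned that no longer lives under the critical path
    Get-FileStorageTier -VolumeDriveLetter D |
        Where-Object { $_.FilePath -notlike 'D:\Projects\Active\*' } |
        ForEach-Object { Clear-FileStorageTier -FilePath $_.FilePath }
     
    #Pinned placement only takes effect after the tier optimization runs
    Optimize-Volume -DriveLetter D -TierOptimize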
  5. Informative
    lil_killa got a reaction from vanished in YAY, 100G TIME!   
    Well, it's time to go to 100G! Let's see if I can saturate these ConnectX-4 cards.
     

  6. Like
    lil_killa got a reaction from leadeater in YAY, 100G TIME!   
    Well, it's time to go to 100G! Let's see if I can saturate these ConnectX-4 cards.
     

  7. Like
    lil_killa got a reaction from PCGuy_5960 in YAY, 100G TIME!   
    Well, it's time to go to 100G! Let's see if I can saturate these ConnectX-4 cards.
     

  8. Like
    lil_killa got a reaction from leadeater in YAY, 100G TIME!   
    Threw them in some servers and didn't get a chance to optimize/configure any parameters. Here are the results of a 50GB transfer. When I get a chance I'll mess with the settings and see how close we can get =D.
     
    CPU Usage was at a whopping 5%.
     
     

  9. Like
    lil_killa got a reaction from Lurick in YAY, 100G TIME!   
    Threw them in some servers and didn't get a chance to optimize/configure any parameters. Here are the results of a 50GB transfer. When I get a chance I'll mess with the settings and see how close we can get =D.
     
    CPU Usage was at a whopping 5%.
     
     

  10. Like
    lil_killa reacted to leadeater in YAY, 100G TIME!   
    Single server Storage Spaces or Storage Spaces Direct?
     
    Going to be getting 6 servers at work with 4 HPE NVMe 800GB Write Intensive SSDs each, and I'm going to throw them into an S2D cluster just for the fun of it. It's not the actual use for them, but I just wanna try it lol. Pure math on a straightforward setup puts it slightly over 60GB/s steady state.
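     
    As a rough sanity check on that figure, assuming roughly 2.5-2.8 GB/s of sequential throughput per drive (a typical value for write-intensive NVMe of that generation, not stated in the post): 6 servers × 4 SSDs = 24 drives, and 24 × ~2.6 GB/s ≈ 62 GB/s aggregate, which lines up with "slightly over 60GB/s".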
  11. Like
    lil_killa got a reaction from leadeater in Windows Server 2012r2 Cluster Question   
    Yeah, not sure how long it's been, but since September a 2-node minimum has been a thing. You need 4 nodes to be able to use multi-tier volumes and erasure coding though.
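     
    A minimal sketch of standing up such a 2-node Storage Spaces Direct cluster in PowerShell, assuming two nodes named Node1 and Node2 and that validation passes (names are placeholders, not from the post):
    #Validate the nodes for Storage Spaces Direct before building the cluster
    Test-Cluster -Node Node1, Node2 -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"
     
    #Create the cluster without claiming any clustered storage yet
    New-Cluster -Name S2DCluster -Node Node1, Node2 -NoStorage
     
    #Enable Storage Spaces Direct; this claims the eligible local drives and builds the pool
    #(a 2-node cluster also needs a quorum witness, not shown here)
    Enable-ClusterStorageSpacesDirect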