
lil_killa

Member
  • Posts

    11
  • Joined

  • Last visited

Awards

This user doesn't have any awards

1 Follower

Profile Information

  • Gender
    Male
  • Location
    United States
  • Occupation
    Technologist

System

  • CPU
    i7-6850K
  • RAM
    32GB
  • GPU
    1080 Ti
  • Storage
    1TB Samsung SSD - 6.4TB SanDisk SX350

Recent Profile Visitors

396 profile views

lil_killa's Achievements

  1. Yeah for sure, but you can find Mellanox 56G switches on eBay for around $2,000 and the 56G ConnectX-3 cards for around $200-ish.
  2. Agreed with @leadeater, all-flash is still the best option. But be aware that the all-NVMe Supermicro chassis you have oversubscribes the PCIe lanes: one PCIe 3.0 x16 uplink to the front 24 drives (backplane) and another PCIe 3.0 x16 to the mid 24 drives (mid-plane). You're only going to see a max of about 16 GB/s read or write (best-case scenario; it will probably be less) for each set of 24 drives. So if you want to utilize all of that performance one day, it's going to be time for clustering. (A quick back-of-the-envelope version of that math is sketched below the post list.)
  3. Yeah, there's a huge leap between enterprise setups and consumer ones. I would say most SAN solutions can get pretty expensive once you start throwing enterprise storage into the mix, but @leadeater did say he found some good pricing. However, the big problem is still the knowledge required to properly set up the storage; there are tons of settings and configurations to get an optimally performing array. I work with Storage Spaces all day long and there are still little tiny things that make it upset. One solution I've done for someone is just to set up a Storage Spaces Direct cluster with enough cache to handle the workloads. Once you have enough HDD storage to handle at least 25% of the "Cold" workload, users tend not to notice, because not everyone is pulling from "Cold" storage all at once. Also, there are commands to pin files/folders to "Hot" storage; it shouldn't be hard to write some sort of automation to automatically pin critical files and unpin ones that are no longer needed (a rough sketch of that is below the post list). Furthermore, I think he should have taken a better look at the setup to see if there are any bottlenecks on the networking side. Two Optane 900Ps can do about 5 GB/s max, but his networking maxes out around 1 GB/s, which might have resulted in higher latency.
  4. Yeah, I have 2 identical Storage Spaces servers and was just testing by transferring between the two on Server 2016 Datacenter. Haha, that sounds fun, tell me how that goes.
  5. Threw it in some servers and didn't get a chance to optimize/configure any parameters. Here are the results of a 50GB transfer. When I get a chance I'll mess with the settings and see how close we can get =D. CPU usage was at a whopping 5%.
  6. Gonna throw these in an all-NVMe Storage Spaces server and post the performance, let's see what happens.
  7. Well, it's time to go to 100G! Let's see if I can saturate these ConnectX-4 cards.
  8. Yeah, not sure exactly how long it's been, but since September a 2-node minimum has been a thing. You need 4 nodes to be able to use multi-tier volumes and erasure coding, though.
  9. Yeah, things are going to get wild; according to this article the Fury Xs are doing pretty well against the Titan X and 980s. Source: http://www.techpowerup.com/213528/radeon-fury-x-outperforms-geforce-gtx-titan-x-fury-to-gtx-980-ti-3dmark-bench.html
  10. Damn, I added 2 to my cart, was logging in, then clicked checkout and they were out of stock. So sad, lol.
  11. Here's one on eBay that's $33: http://www.ebay.com/itm/NEW-NetGear-ProSafe-GS108-8-Ports-External-Switch-Gigabit-Desktop-Switch-/331531885118?pt=LH_DefaultDomain_0&hash=item4d30d71a3e and another for $24 + $8 shipping ($32 total): http://www.ebay.com/itm/NEW-NETGEAR-GS108-ProSafe-8-Port-Gigabit-1000-Ethernet-Fanless-Switch-EEE-802-1P/271827892210?_trksid=p2047675.c100005.m1851&_trkparms=aid%3D222007%26algo%3DSIC.MBE%26ao%3D1%26asc%3D29906%26meid%3D8b69bd8d093c447ea9c108ccfb2fa2a1%26pid%3D100005%26rk%3D2%26rkt%3D6%26sd%3D331531885118&rt=nc
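
For the PCIe oversubscription point in post 2, here is a minimal back-of-the-envelope sketch of the math. The per-lane and per-drive figures (roughly 0.985 GB/s of usable bandwidth per PCIe 3.0 lane and about 3.2 GB/s sequential per NVMe drive) are illustrative assumptions, not measurements from that chassis.

```python
# Back-of-the-envelope check of the PCIe oversubscription described in post 2.
# All figures are assumptions for illustration: ~0.985 GB/s usable per PCIe 3.0
# lane (8 GT/s with 128b/130b encoding) and ~3.2 GB/s sequential per NVMe
# drive; real drives and switch silicon will differ.

PCIE3_LANE_GBPS = 0.985        # usable GB/s per PCIe 3.0 lane (approx.)
UPLINK_LANES = 16              # one x16 uplink per 24-drive plane
DRIVES_PER_PLANE = 24
DRIVE_SEQ_GBPS = 3.2           # assumed per-drive sequential throughput

uplink_gbps = PCIE3_LANE_GBPS * UPLINK_LANES      # ~15.8 GB/s ceiling
drives_gbps = DRIVE_SEQ_GBPS * DRIVES_PER_PLANE   # ~76.8 GB/s raw capability
ratio = drives_gbps / uplink_gbps

print(f"x16 uplink ceiling : {uplink_gbps:5.1f} GB/s per 24-drive plane")
print(f"24 drives combined : {drives_gbps:5.1f} GB/s")
print(f"oversubscription   : {ratio:.1f}:1 -> the uplink is the bottleneck")
```

With those assumptions each x16 uplink tops out just under 16 GB/s while 24 NVMe drives could collectively push several times that, which is why each set of 24 drives is uplink-limited regardless of how fast the individual drives are.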
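
On the tier-pinning automation mentioned in post 3, here is a minimal sketch of how that could look, driving the Storage Spaces tiering cmdlets from Python via PowerShell. The cmdlet and parameter names (Set-FileStorageTier, Clear-FileStorageTier, -FilePath, -DesiredStorageTierFriendlyName), the "Performance" tier name, and the file paths are assumptions to verify against the actual server; this is not a drop-in script.

```python
# Rough sketch of the pin/unpin automation from post 3: Python calling the
# Storage Spaces tiering cmdlets through PowerShell. Cmdlet/parameter names
# and the "Performance" tier name are assumptions; check Get-Help on the
# target server before relying on them.
import subprocess

HOT_TIER = "Performance"   # assumed friendly name of the SSD/NVMe tier


def _powershell(command: str) -> None:
    """Run a single PowerShell command and raise if it fails."""
    subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        check=True,
    )


def pin_to_hot(path: str) -> None:
    """Pin a critical file so it stays on the hot tier."""
    _powershell(
        f"Set-FileStorageTier -FilePath '{path}' "
        f"-DesiredStorageTierFriendlyName '{HOT_TIER}'"
    )


def unpin(path: str) -> None:
    """Remove the pin so the file is tiered by the normal heat map again."""
    _powershell(f"Clear-FileStorageTier -FilePath '{path}'")


if __name__ == "__main__":
    # Hypothetical paths: keep an active project pinned, release an old one.
    pin_to_hot(r"D:\Projects\Active\renders.vhdx")
    unpin(r"D:\Projects\Archive\old-renders.vhdx")
```

Note that pinning generally only marks the files; the actual data movement happens the next time the volume's tier optimization job runs.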