
Xhypno402

Member
  • Posts

    13
  • Joined

  • Last visited

Awards

This user doesn't have any awards

About Xhypno402

  • Birthday Feb 02, 1983

Profile Information

  • Gender
    Male
  • Location
    Davie, FL
  • Interests
    Programming
    Computers
    Photography
  • Occupation
    Senior Software Engineer, IBM Cloud

System

  • CPU
    Intel i7-5930K
  • Motherboard
    Asus X99-Deluxe USB 3.1
  • RAM
    8 x 4GB Crucial Ballistix DDR4-3000
  • GPU
    2 x Asus Strix GTX 980 4GB
  • Case
    Custom painted Corsair 750D
  • Storage
    2 x Samsung 950 Pro 512GB NVMe
  • PSU
    EVGA 1200 P2
  • Display(s)
    3 x Asus ROG Swift
  • Cooling
    Custom hardline water loop with 360x45 and 280x60 Alphacool radiators and EKWB CPU and GPU blocks.
  • Keyboard
    Royal Kludge 938 RGB Topre clone with Vortex PBT keycaps
  • Mouse
    Cold Steel WoW gaming
  • Sound
    Onboard
  • Operating System
    Windows 10 Pro
  1. "Better safe than sorry" would matter if he had an array in the 24-48 TB range and enough data usage to warrant it. Spending 50%+ more on RAM is a poor use of money if you won't see an actual benefit from it.
  2. Actually, you are very wrong in all of your statements. 1. ECC RAM does not increase server stability; it increases data consistency for applications. ECC has never, now or in the past, stopped Windows, Linux, BSD, or any other OS from crashing. 2. RAID cards use ECC because ECC memory supports low-voltage battery operation while non-ECC does not. This lets high-end RAID cards run a battery backup that holds data in memory until system power is restored, preventing the loss of in-flight writes. 3. The ECC vs. non-ECC argument has been a heated one within the FreeNAS community. They have pushed for a long time the idea that using non-ECC memory can kill a pool. This is a misconception. ZFS was designed by Sun initially for Solaris on lower-end SPARC servers that did not use ECC, since Sun wanted you to use their disk arrays for primary data storage on their high-end servers, with ECC RAM there for application integrity. I worked for many years with large-scale deployments of these at Affinity Internet as the backing for their high-performance clustered shared hosting environments. The reason ECC memory was used in the app servers is that with long-standing data in memory there is a chance a flipped bit causes data decay. That can be a major issue if the data being read is an in-memory database containing things like transactional data records or a static routing table for 100k websites. But ECC does not correct the common cause of serious in-memory data corruption, which is multi-bit or word-level errors (see the SECDED sketch below, after post 13). That is the type of error that will bork a pool or a stripe, not a single flipped bit. Yes, with ZFS's read/write throughput the chance of memory corruption is higher, but as many tests have shown, memory usage has to be multiple orders of magnitude higher than a common NAS will ever see.
  3. For a NAS, save the money and don't worry about ECC and a motherboard that supports it. Google "ECC vs non-ECC memory for NAS applications": a few major companies did large-scale studies of the advantages for data storage. ECC protects against invalid data blocks in active RAM, and the benefits only show up at extreme usage points. The most recent of the studies, from SolidFire, shows that until you move up to a system pushing consistent data access of 12-15 Gbps from storage, you will not see a benefit from ECC. And trust me when I say that is a ton of data access (the unit arithmetic is sketched below, after post 13). Working previously in the big data field, we would only see a cluster reach that level when crunching a large data set in the 20-25 TB range across 100 nodes, in a 100% commodity setup where we didn't want to pay extra for SANs.
  4. Try managing more than 10 to 15 hypervisors with OpenNebula or Eucalyptus. CloudStack has its pluses, but for stability, ease of maintenance, and administration it is a far cry from OpenStack, which is why OpenStack is dominating the market for non-vSphere private cloud setups. I have managed quite a few deployments of all of the above, and by far I prefer OpenStack. It powers a 60k-hypervisor public cloud at Rackspace for these exact reasons.
  5. Wow. You have no clue what you're talking about. First, it is called OpenStack, not OpenStacks. Second, it by no means runs Amazon Web Services. Third, it is designed as a Platform-as-a-Service tool for building a dynamic platform for services, whether those are virtual machines, containers, or data storage. Fourth, DevStack works perfectly fine in a VM; the base instructions suggest testing it in one. All of OpenStack's testing runs purely in VMs, including launching VMs from the Nova API as part of running Tempest, the OpenStack integration testing tool (an example of what such an API call looks like is sketched below, after post 13). Don't believe me? Go to http://status.openstack.org/zuul/ and scroll to the bottom to the nodes graph. It shows VMs being built and used across multiple cloud providers in order to run DevStack, and then run Tempest against code changes for the OpenStack code bases.
  6. It has been a bit due to delays with the painting of my case, but I have done my POST tests as I am getting ready to put my cards under water.
  7. Ty for the suggestion. I went out yesterday and bought 2 of the ROG Swifts.
  8. I am currently putting together 2 nice X99 systems for myself and my wife. Both will be watercooled and overclocked with the following hardware:
     i7-5930K
     32GB DDR4-2400
     XP941 M.2 x4 256GB (OS)
     HyperX SATA3 256GB (Games)
     2 x WD Black 1TB (Storage)
     Win 8.1 Pro
     The only difference is the video cards. Mine: 2 x Asus GTX 980 Strix 4GB. Wife: 1 x MSI GTX 980 Gaming 4G. My wife's current monitor is a Samsung 24" 1080p 60Hz, while I am using a pair of Acer S231HL 23" 1080p 60Hz (one connected to my MacBook Pro and one on my gaming system). My wife plays RTS and MMOs, while I play RTS, MMOs, and FPS games. Right now I am torn as to which monitors to get as an upgrade that will last a long while (note: we are not fans of AMD cards and will be staying with Nvidia). The 2 monitors I am looking at which are currently available are the Acer XB280HK 28" 4K 60Hz and the Asus ROG Swift 27" 1440p 144Hz. I like the reviews of the Acer XB270HU 27" 1440p 144Hz IPS, but I don't want to wait for it to be in stock. I know 1080p -> 4K is a huge jump in pixel density, allowing for greater graphics quality, but the same is true of 1080p -> 1440p if you are used to 1080p (the PPI arithmetic is worked out below, after post 13). What I am curious about is whether 144Hz is that much of a benefit if we have not gamed past 60Hz to this point and are used to it. Any suggestions and help would be appreciated.
  9. It will be soon. My Corsair 750D is in the shop getting custom painted, and all the parts are at home while I am at Disney World for my son's 5th birthday.
  10. I wouldn't put 2 680s as an upgrade over 2 590s. Four GF110 GPUs, I think, would be a bit ahead of two GK104 GPUs. Not by much, but I think they would be.
  11. Long-time SLI'er here. Started with EVGA GTX 280 x2, then Galaxy GTX 460 SC x2, then EVGA GTX 580 SSC x2 (one blew and EVGA replaced it; I put both in 2 different computers and have been running a single EVGA 770 SC in my current system). In about 2 weeks I will be running SLI again. I am currently building a new system which will have a build log soon (already up on jayztwocents.com). This will be a new X99 build with Asus GTX 980 Strix SC 4GB cards. It will also be a first for me: I have had my CPU under water a few times before, but this whole system will be under water, with 2 EK full-cover blocks ready to go on this week as I put everything together.
  12. Performance issues in GPU rendering on this system won't come from the GPUs, the dual Titan Xs, but from the RAM: 16GB is 8GB below the minimum Nvidia recommends for a single Titan X, i.e. at least 24GB.
  13. There is one catch with the Xeon chips you have to watch out for. Most P87 boards don't support the E3 processors, which forces you to the H87 server boards, and most of those require ECC RAM, though not all. So make sure you get a non-ECC board, or you have just wasted the savings from the cheaper processor and would have been better off going with the i7. Also note: if it is gaming you are doing and 4 threads is enough, look at an i5 to save money instead of a Xeon.
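Re: post 2 above. A minimal sketch, assuming a toy 4-bit word, of the SECDED (single-error-correct, double-error-detect) scheme ECC memory is built on: a lone flipped bit is corrected, while a double flip is detected but cannot be fixed, which is the multi-bit failure mode the post describes. Real ECC DIMMs apply this per 64-bit word in hardware; the word size and helper names here are mine, for illustration only.

    # Hamming(7,4) plus an overall parity bit = SECDED on a 4-bit word (toy).
    def encode(nibble):
        """Encode 4 data bits [d1, d2, d3, d4] into an 8-bit SECDED codeword."""
        d1, d2, d3, d4 = nibble
        p1 = d1 ^ d2 ^ d4                 # parity over codeword positions 1,3,5,7
        p2 = d1 ^ d3 ^ d4                 # parity over positions 2,3,6,7
        p3 = d2 ^ d3 ^ d4                 # parity over positions 4,5,6,7
        code = [p1, p2, d1, p3, d2, d3, d4]
        return code + [sum(code) % 2]     # overall parity enables double detection

    def decode(codeword):
        """Return (data, status): status is 'ok', 'corrected', or 'uncorrectable'."""
        c, overall = list(codeword[:7]), codeword[7]
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
        syndrome = s1 + 2 * s2 + 4 * s3   # 1-based error position, 0 = none
        parity_ok = sum(c) % 2 == overall
        if syndrome == 0:                 # clean, or the parity bit itself flipped
            return [c[2], c[4], c[5], c[6]], 'ok' if parity_ok else 'corrected'
        if not parity_ok:                 # odd flip count: single-bit error, fix it
            c[syndrome - 1] ^= 1
            return [c[2], c[4], c[5], c[6]], 'corrected'
        return None, 'uncorrectable'      # even flip count: detected, not fixable

    enc = encode([1, 0, 1, 1])
    one = list(enc); one[2] ^= 1                 # single bit flip
    two = list(enc); two[2] ^= 1; two[5] ^= 1    # double bit flip
    print(decode(one))                    # ([1, 0, 1, 1], 'corrected')
    print(decode(two))                    # (None, 'uncorrectable')

The double-flip case is the "multi-bit or word data error" post 2 refers to: the hardware knows the word is bad but cannot reconstruct it.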
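Re: post 3 above. The 12-15 Gbps threshold is easier to appreciate after a unit conversion. A small sketch of the arithmetic; the single-gigabit-link ceiling for a typical home NAS is my assumption, not a figure from the cited study.

    GBIT = 1e9 / 8                        # bytes per second in one gigabit

    lo, hi = 12 * GBIT, 15 * GBIT         # SolidFire threshold quoted in the post
    nas_ceiling = 1 * GBIT                # assumed: home NAS capped by one 1 GbE link

    print(f"threshold: {lo / 1e9:.2f}-{hi / 1e9:.2f} GB/s sustained")
    print(f"1 GbE NAS ceiling: {nas_ceiling / 1e9:.3f} GB/s")
    print(f"gap: {lo / nas_ceiling:.0f}x-{hi / nas_ceiling:.0f}x")
    # threshold: 1.50-1.88 GB/s sustained
    # 1 GbE NAS ceiling: 0.125 GB/s
    # gap: 12x-15x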
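Re: post 5 above. For anyone unsure what "launching VMs from the Nova API" means in practice, here is a hedged sketch using the openstacksdk Python library against a local DevStack. The cloud name 'devstack' and the image, flavor, and network names are assumptions matching a stock DevStack install, not details from the post.

    import openstack

    # Credentials come from a clouds.yaml entry named 'devstack' (assumed).
    conn = openstack.connect(cloud='devstack')

    # Look up a boot image, flavor, and network; these names are typical
    # DevStack defaults and may differ on your install.
    image = conn.compute.find_image('cirros-0.3.4-x86_64-uec')
    flavor = conn.compute.find_flavor('m1.tiny')
    network = conn.network.find_network('private')

    # This is the kind of Nova API call Tempest makes when booting test servers.
    server = conn.compute.create_server(
        name='tempest-style-boot',
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{'uuid': network.id}],
    )
    server = conn.compute.wait_for_server(server)  # poll until ACTIVE or error
    print(server.status)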
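Re: post 8 above. The pixel-density comparison can be made concrete with the standard PPI formula, sqrt(w^2 + h^2) / diagonal. The specs are the ones listed in the post; nothing else is assumed.

    from math import hypot

    def ppi(width_px, height_px, diagonal_inches):
        """Pixels per inch from native resolution and panel diagonal."""
        return hypot(width_px, height_px) / diagonal_inches

    panels = {
        'Acer S231HL 23" 1080p':  (1920, 1080, 23),
        'ROG Swift 27" 1440p':    (2560, 1440, 27),
        'Acer XB280HK 28" 4K':    (3840, 2160, 28),
    }
    for name, spec in panels.items():
        print(f'{name}: {ppi(*spec):.0f} PPI')
    # ~96 PPI -> ~109 PPI (1440p) or ~157 PPI (4K):
    # both are real steps up from 1080p, with 4K by far the bigger one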