
vrod
Member · 97 posts
Everything posted by vrod

  1. Here are the pics. For whatever reason I confused the inlet/outlet, so I have to swap those two. I already had the liquid in, so I just decided to do the leak test for the "other".
  2. Hi all, doing my first hard tubing loop and I decided to run pure distilled water for a leak test. The leak test ran fine and now I want to get the distilled water out so I can put in the real coolant. The issue is, though, that I can't seem to drain the loop. The case is a Phanteks Evolv X and I am using a Kinetic FLT 120mm res/pump combo which sits in the bottom front. The only water that came out was from the res itself; from the rest of the loop, nothing is coming out. I looked at the manual but there are no notes about draining. I used to run a separate res and pump before, and draining was not an issue there. Would anyone have an idea, perhaps I overlooked something? Image attached. Thanks! Chris
  3. Dell might very well not have the option in their BIOS. If you are lucky they might have a tool where you can export the BIOS configuration to an XML file, change the setting and import it again.
  4. Just don't forget that HP servers are known to speed up the fans if non-HP-approved PCIe devices are plugged into the system. Regardless, that system will also use some juice; wouldn't it be better to look for a 2600K or 3770K system? Those should be pretty cheap these days...
  5. pfSense would be able to do what you need, both load balancing and failover.
  6. In regards to going 12Gbps, just make sure that you actually get SAS 12Gbps SSDs... It doesn't make much sense to spend the money on a 12Gbps backplane and controller if you will be using 6Gbps end devices. Intel's datacenter SSDs are among the most robust and popular, but afaik they only make SATA-based SSDs, not SAS, so those would run at 6Gbps anyway. And we all know that HDDs can't reach anywhere near even 3Gbps... If you go the HDD route, make sure to get a controller with a decent battery-backed flash cache, otherwise your 4K IOPS will be pretty bad. If you go for SSDs, then yes, go for 12Gbps if you go the SAS way, or 6Gbps if you go the SATA way. Nevertheless, PCIe controllers with NVMe backplanes are just around the corner, so SATA/SAS will probably be phased out "soon". And yes, use RAID6 or RAID10 (rough capacity and rebuild numbers are sketched after this list), but just know that you aren't protected from bitrot issues (these are rare, but they occur and can corrupt your entire RAID) the way you are with ZFS or btrfs (since they use CoW and are self-healing). You also have no compression, deduplication or datastore replication options when running the disks locally. What is popular today is to run a VM appliance that hosts the storage on something like ZFS or btrfs; this is what most hyperconverged solutions do. Proxmox (another hypervisor) supports this kind of distributed storage through Ceph (it also supports ZFS root), but since you are using VMware and just one host, this probably doesn't apply to you.
  7. Unless there is a specific and valid use-case for the setup, I would recommend running the games locally as well.
  8. Look at the title of this topic: the guy is asking how this is possible, and I am giving an answer to that question. I've never experienced performance tanking to 0 through iSCSI... it mostly comes down to the configuration and hardware. In addition, many games don't even support being run over SMB. You circumvent this with iSCSI because Windows sees it as a logical device. One use case I had for this was hosting the games for my home on 3 PCs. We had 1TB of games, but instead of using 1TB on each PC we cloned the zvols of the original gaming partition and presented the clones to the other 2 machines (there's a rough sketch of the snapshot/clone steps after this list); worked like a charm. A friend of mine runs a netcafe with 300+ machines and he has been using iSCSI with no issues for a year now... so your problems must have been isolated.
  9. Are you using active or passive mode? With passive mode you need to have an extra range of dynamic ports forwarded as well (see the small ftplib example after this list).
  10. WSUS is a bit of work for such a small network. It would probably be easier to just get an HTTP proxy like Squid, which could cache the content.
  11. Why would you add extra overhead like SMB? iSCSI has much lower overhead and much better compatibility between Unix and Windows. And even for large games, the iSCSI method worked flawlessly for me. I played Far Cry, GTA V and Cities: Skylines mostly. Maybe the games are big, but the data loaded per second is not that much. In my case it was small enough to reside in the ARC (the RAM-based read cache in ZFS) almost the entire time, so basically not much was ever read from the actual HDDs... only the first time, loading it into the ARC (there's a small ARC hit-ratio check sketched after this list).
  12. With FreeNAS (and ZFS) you could technically make it work by striping a 1TB mirror and a 320GB mirror. Scaling would be pretty bad though. If I were you, I would try to get an extra 2TB drive and get rid of all the other drives, then make a pool out of a single mirror with the 2TB drives, create a zvol and present that as a logical device to your PC (rough commands are sketched after this list). Make sure to use wired networking with a transfer rate of at least 1Gbps!
  13. If it's a 660, it's not DDR4. I've run games over the network before and it was pretty damn fast! You just gotta use the right setup, then it will work pretty well. I used FreeNAS (still am, for other stuff) and it has an iSCSI target service as well. What size drives do you have?
  14. Since you have Active Directory, I would recommend Kerio (if money is a problem) or Exchange Online through Office 365. It doesn't really matter what you use, though; many employees are careless enough to stick a label with their password on the PC somewhere. Whatever you choose, I would strongly suggest implementing 2FA.
  15. You only need TCP 32400 forwarded (there's a quick reachability check sketched after this list). What Plex server version do you run?
  16. I was just about to recommend getting an HPE ML350 Gen8 or Dell T620... you get dual-socket boards and lots of PCIe lanes for a low price, and often lots of memory with them. Supermicro is great as well!
  17. Are you sure it wouldn't be better to rent a dedicated server? What internet connection do you have?
  18. Do not use RAID5; no server vendor today would recommend it... Having only those 4 drives will give poor performance anyway. I would at least recommend going the RAID6 way, or better, use mirroring if you don't need the space. RAID5 runs super slow while rebuilding, and since you have such big drives the rebuild will also take a very long time (see the rough rebuild-time estimate after this list), greatly increasing the chance that a second drive fails before the first one is even rebuilt. I've seen it happen...
  19. There is an OEM Dell card made for this purpose... 0P31H2 is the part number afaik, and it should give you 4x PCIe 3.0 lanes for your U.2 SSDs. Since you use a Dell server, I would highly recommend this card over the RocketRAID one. I don't know if this card supports RAID, though I would be surprised if it didn't. If it does, you should either see the logical device directly in ESXi or have to install a driver (VIB) for it. You use the Dell OEM ESXi image, right?
  20. As far as I know, in the US you can do almost anything with a person's SSN (I guess most people remember the Equifax hack). In Germany you only use the tax number for employment or to file taxes. I don't actually know if there is a citizen number, as I immigrated from Denmark, where you have a digital two-factor citizen ID. Nevertheless, it sucks for everyone involved... imagine how many people might have seen this information.
  21. I guess that makes sense in some way. The only info we need here in Germany is the tax number, which you can't really do much with.
  22. I can't really understand why something like an SSN would be needed to purchase a PC. Nevertheless, I hope the government takes this seriously. I'm not a customer myself, but I know that people here in Germany would throw almost anyone involved with this in jail.
  23. This brings me back to the Athlon 64 vs Pentium 4 times... frankly, I'm hoping Intel will lose a lot of their HEDT customers on this one. Intel's 14/16/18-core processors were definitely not part of their plan for this generation before AMD's announcement, which shows how bad it is when there's just one 'guy' on the market. Though I do not believe they will be very relevant, as the AMD chips will be significantly cheaper (rumored to be $850). I could give up 2 cores if I pay $850 instead of $2000. Furthermore, there's Intel's move of only supporting their own NVMe SSDs in RAID. Wtf? And the even more ridiculous part of it is that you need to buy a KEY for RAID1/10, and pay even more if you want RAID5. This has been done on server boards for a long time, but there you were never limited to manufacturer-specific devices and you always got RAID0/1/10 for free, paying only for stuff like 5/6/50/60. This move by Intel is just plain absurd, to be honest. Then there's the interoperability between the memory slots, all the different CPUs and PCIe lanes: what a chaos. They are more or less cannibalizing their own consumer segment with this new lineup. I have a 5960X; in that generation there were also the 5930K and 5820K (Haswell-E). Plain and simple. So was Broadwell-E: 6800K, 6850K, 6900K, 6950X. Now it's just chaos, especially with the fact that you have 2 generations being presented at once... there's now even a 4-core part with fewer PCIe lanes than the previous generation. Anyway, I'm counting on Intel failing completely with this, getting their heads straight and getting back to work. This is company coma all over again.
  24. Good choice, you can get more space for your money with a SATA SSD instead of NVMe. NVMe is generally targeted at server or enthusiast workloads such as heavy video editing or several VMs needing lots of I/O. I run a 950 Pro with my ROG R5E10, but I sometimes regret not getting a SATA SSD.
  25. It does have an effect. The HDD controller needs to find places to write the data. It will often write the zeros first, but if they are scattered in fragments around the various platters you get data fragmentation. You'd much rather have the data written contiguously on the same platter than scattered around the different platters, taking up more IOPS.
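
A few rough sketches referenced in the posts above. First, the RAID posts: the capacity and rebuild-time claims come down to simple arithmetic, so here is a minimal back-of-the-envelope helper. The drive counts, the 8 TB size and the 150 MB/s resilver rate are illustrative assumptions, not measurements; real rebuilds under load are usually much slower.

    # Back-of-the-envelope only; drive sizes/counts and the 150 MB/s
    # resilver rate are assumptions for illustration.

    def usable_tb(level: str, drives: int, size_tb: float) -> float:
        """Usable capacity for a few common RAID levels."""
        if level == "raid5":
            return (drives - 1) * size_tb   # one drive's worth of parity
        if level == "raid6":
            return (drives - 2) * size_tb   # two drives' worth of parity
        if level == "raid10":
            return drives / 2 * size_tb     # everything mirrored once
        raise ValueError(f"unknown level: {level}")

    def rebuild_hours(size_tb: float, mb_per_s: float = 150.0) -> float:
        """Best-case time to rebuild one failed drive at a flat sequential rate."""
        return size_tb * 1_000_000 / mb_per_s / 3600

    if __name__ == "__main__":
        print(usable_tb("raid6", 4, 8.0))    # 16.0 TB usable from 4x 8 TB
        print(usable_tb("raid10", 4, 8.0))   # also 16.0 TB, but faster rebuilds
        print(round(rebuild_hours(8.0), 1))  # ~14.8 h best case per 8 TB drive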
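For the iSCSI game-hosting posts: a minimal sketch of the ZFS side, assuming a pool called tank, a 900G zvol and FreeBSD-style device names (all made up, adjust for your own disks). It only wraps the standard zpool/zfs commands; the resulting zvols would then be exported as iSCSI extents, for example through the FreeNAS sharing UI. Run on the storage box with root privileges, not on the gaming PCs.

    import subprocess

    # Hypothetical pool, dataset and device names.
    CMDS = [
        # One mirrored vdev out of the two 2 TB drives.
        ["zpool", "create", "tank", "mirror", "/dev/ada1", "/dev/ada2"],
        # A zvol that the client later sees as a plain block device over iSCSI.
        ["zfs", "create", "-V", "900G", "tank/games"],
        # Snapshot the finished "gold" game install once, then hand out cheap
        # copy-on-write clones instead of copying the full 1 TB per machine.
        ["zfs", "snapshot", "tank/games@gold"],
        ["zfs", "clone", "tank/games@gold", "tank/games-pc2"],
        ["zfs", "clone", "tank/games@gold", "tank/games-pc3"],
    ]

    for cmd in CMDS:
        print("running:", " ".join(cmd))
        subprocess.run(cmd, check=True)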
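On the active/passive FTP question: Python's ftplib defaults to passive mode, which is exactly the case where the server's dynamic data-port range has to be forwarded. A tiny sketch with a made-up hostname and credentials:

    from ftplib import FTP

    ftp = FTP("ftp.example.com")   # hypothetical server
    ftp.login("user", "password")

    ftp.set_pasv(True)    # passive: the server opens a data port from its
                          # configured range, so that range must be forwarded
    # ftp.set_pasv(False) # active: the client has to accept the data
                          # connection instead, which usually breaks behind NAT

    print(ftp.nlst())     # directory listing travels over the data connection
    ftp.quit()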
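To back up the ARC claim, you can check how often reads are actually served from RAM. This sketch assumes ZFS-on-Linux, where the counters live in /proc/spl/kstat/zfs/arcstats; FreeNAS/FreeBSD exposes the same counters via sysctl kstat.zfs.misc.arcstats instead, so adapt accordingly.

    # Minimal ARC hit-ratio check for ZFS-on-Linux (assumption: the kstat
    # file below exists; on FreeBSD/FreeNAS use sysctl instead).
    ARCSTATS = "/proc/spl/kstat/zfs/arcstats"

    stats = {}
    with open(ARCSTATS) as f:
        for line in f.readlines()[2:]:      # first two lines are headers
            name, _kind, value = line.split()
            stats[name] = int(value)

    hits, misses = stats["hits"], stats["misses"]
    print(f"ARC hit ratio: {hits / (hits + misses):.1%}")
    print(f"ARC size: {stats['size'] / 2**30:.1f} GiB")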
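And for the Plex post: a quick way to confirm that TCP 32400 is actually reachable once the port forward is in place. The hostname is a placeholder; point it at your public IP or DDNS name and run it from outside your LAN.

    import socket

    HOST, PORT = "your-public-hostname.example", 32400   # placeholder host

    try:
        with socket.create_connection((HOST, PORT), timeout=5):
            print(f"TCP {PORT} on {HOST} is reachable")
    except OSError as exc:
        print(f"TCP {PORT} on {HOST} is NOT reachable: {exc}")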