Everything posted by 2bitmarksman

  1. Your CPU needs more watts. On battery, wattage is capped at 20 W. When plugged in, you can increase the wattage limit to 60 W or so and be able to hit 3.9 GHz on all cores through Intel XTU or ThrottleStop.
  2. @FloRolf it will only eat power while it's under heavy use, so I'm not really bothered about it while it idles. I will be building this when I get back to the US, so in about a week and a half, assuming deliveries are made on time. Will be sure to post back with build pics and results.
  3. So I'm at a point where I need to build a new computer, and I know that I am going to be moving soon to an area where space, power, and especially noise are at a premium. Thus, instead of having 3 separate boxes for my pfSense router, FreeNAS HTPC/Steam library, and a gaming PC, I'd like to build a single machine running ESXi with all of them running as VMs. Below is the parts list. Note that I already have a lot of the parts, such as the SSDs, RAM, PSU, and some of the NAS drives; they are there for referencing what it will look like when done: https://pcpartpicker.com/user/2bitmarksman/saved/hx49Jx

     ESXi would rest on one of the two 250 GB SSDs. I know this is massive overkill, but I have about 4 250 GB drives I don't know what to do with at the moment, so there's no sense buying a USB drive and taking up a valuable USB slot. The other 250 GB SSD would be for a VMFS datastore (I'll probably do a RAID 1 of 250 GB SSDs, honestly) for pfSense, FreeNAS, and possibly an Ubuntu/Docker VM later on. The 1 TB NVMe drive I already have as well, and it would be passed through via RDM to the Windows VM.

     pfSense would get 4 threads, 4 GB of RAM, and a 32 GB VMDK, along with the 1000GT/VT passed through to it for the WAN/LAN ports. I'll most likely have a 3rd port group for the VMs to communicate, as the Windows VM will be using virtual networking for communication.

     FreeNAS would get 8 threads, 32 GB of RAM, and a 32 GB VMDK. The Perc H200 flashed to IT mode would be passed through, along with the 10g NIC port, to allow unfettered access to the 8 NAS drives. This would store all my Plex data and serve as mass storage for important data and Steam games. I may do some testing and benchmarking to see how good/bad it is to run games from it "locally" vs. having to worry about network overhead, because it doesn't ever hit the wire, so to speak. The 10g NIC is to allow it to connect to another homelab server and be used as a datastore in the future; not 100% sure about that at the moment. Plex and all its plugins (Jackett, Sonarr, Radarr, Ombi) would be loaded as well, though I may have another Linux VM handle those. Unsure how well FreeBSD would handle them until I research them further.

     The Windows gaming VM would get 12 threads and 16 GB of RAM. The 1 TB NVMe drive and the 1080 Ti would be passed through, along with 4 of the USB ports on the back and ideally at least the front panel USB ports (unsure how these show up for passthrough), to allow for USB hotplug. Even if I need a USB card, I have a 1x PCIe slot and a 16x PCIe slot free just in case, so no biggie if things don't go 100% as planned on that front. Note that hypervisor.cpuid.v0=FALSE needs to be set in the Advanced options in order for the Nvidia card to work, as GTX cards throw a Code 43 if the driver detects it is in a virtualized environment (gotta have the product separation from Quadros).

     This also has the implication that any other computer on the network that can run VMware Workstation Pro can connect to the ESXi host and game on the Windows gaming VM with next to no latency (0.1-0.2 ms max for most LANs)! This would allow you to build this in a rackmount server chassis, such as the Rosewill RSV-R4000 or RSV-R4412, stick it in a closet/basement/rack, and have a minimalist desktop running something extremely light, such as an Intel NUC. This means you can optimize the rackmounted PC for cooling and performance and have an office free of fan noise from a power hungry PC.

     Note, however, that this would require purchasing VMware Workstation Pro, which is ~$220-250 USD on VMware's website, or about $35 USD or less if you get it from other sources (however, you WILL NOT be able to contact VMware about any problems if you use these cheaper 3rd party options).

     EDIT: Unsure how this will work with a physical GPU passed through, i.e. whether the remote session will actually show the video feed. It is still possible to make it work, though, either by using Looking Glass to display what the Nvidia GPU is rendering on the virtual GPU while remoted in, or by using NoMachine to remote into it from your LAN with minimal overhead.
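     For anyone wanting to replicate the Code 43 workaround: the setting ends up as a plain key = "value" line in the VM's .vmx file (or under the VM's advanced configuration parameters in the ESXi web UI). A minimal sketch of the relevant lines, assuming the 1080 Ti has already been toggled for passthrough on the host; the pciPassthru numbering is just an example and the arrows are only annotations for this post:

         hypervisor.cpuid.v0 = "FALSE"      <- hides the hypervisor from the guest so the GeForce driver doesn't throw Code 43
         pciPassthru0.present = "TRUE"      <- entry ESXi adds once the passthrough GPU is attached to the VM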
  4. For reference, if you connect a drive via iSCSI, the CPU processing needed to write data to the iSCSI drive is consumed on the PC that is using it as local storage, not the FreeNAS host. Also, to reiterate: unless you have 10g networking, don't bother with the SSD cache, and if you can, try to get an old server that you can cram a ton of RAM into instead, like a Dell R710, if you need a high-speed cache. All the spare RAM that ZFS doesn't need on FreeNAS is used as the ARC cache, which is basically a RAM disk filled with the most-used data you access. Once it fills, it cycles out the least-used/old stuff to the L2ARC, if it has one, or just drops it if it doesn't. For reference, I bought a Dell R710 with 144 GB of RAM without drives and it was 538 bucks. A Chelsio 10g NIC is like 25-50 USD and Perc H200s are around 32-40 USD. HP servers can also be used and are a little cheaper, but they usually have lower RAM capacities (128 GB) and you would need an HBA (they don't have any that are compatible with FreeNAS that aren't for external enclosures). A separate ZIL (SLOG) device is good for lots of small synchronous writes (like running a bunch of VMs that all want to write small bits of data at once). Generally though, this really isn't needed for large file sharing in a home, like Plex and a Steam library, or anything that writes large files all at once. Oh, and make sure you set up jumbo frames if you can. Big performance boost right there for lots of tiny files like textures for games over the network.
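     If it helps anyone, this is roughly how you'd sanity-check the ARC and test jumbo frames from a FreeNAS/FreeBSD shell. ix0 is just a placeholder for whatever your 10g NIC shows up as, and on FreeNAS you'd set the MTU through the GUI (Network > Interfaces) so it survives reboots; the ifconfig line is only a quick test, and jumbo frames have to match end to end (switch and clients too):

         sysctl kstat.zfs.misc.arcstats.size                                   <- current ARC size
         sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses    <- how often reads are being served from RAM
         ifconfig ix0 mtu 9000                                                 <- temporarily enable jumbo frames on the NIC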
  5. A Dell R510 with a Perc H200 flashed to IT mode with the LSI 9211-8i firmware and a 10g NIC for a FreeNAS box, and a Dell R610 (or 2) or Dell R710 (or 2) for your servers, also with 10g NICs, makes for a very cheap peer-to-peer storage and server solution. If you want full 10g networking that uses a switch, I'd suggest the Quanta LB6M. It's a relatively cheap 10g switch if all you want is 10g layer 2 switching. Depending on configurations, you can get all that for between 1200 and 1800 USD (1500 USD without the switch).
  6. I'd recommend FreeNAS running your storage and another separate box for your servers. FreeNAS is very stable as an OS and ZFS is extremely resilient to things like sudden power loss and hardware failure. You just need to make sure the hardware is correct before proceeding. If you want to take that guesswork out, TrueNAS sells servers and configurations that will work very well, but if you want to do it cheaply, getting an old server like a Dell R510 with a Perc H200 flashed to IT mode with the LSI 9211-8i firmware and populating it with new drives works extremely well and is much cheaper, though it requires some investment in learning about the hardware you will be using.
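     For reference, the crossflash itself is done with LSI's sas2flash tool from a DOS or EFI shell. Very rough outline below, assuming you're following one of the usual H200-to-9211-8i writeups; the Dell-specific prep steps and the exact firmware filenames come from whichever guide/firmware package you use, so treat this as a sketch rather than the full procedure:

         sas2flash -listall                            <- confirm the controller is detected
         sas2flash -o -e 6                             <- erase the existing flash (do NOT reboot until the new firmware is written)
         sas2flash -o -f 2118it.bin -b mptsas2.rom     <- write the 9211-8i IT firmware, optionally with the boot ROM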
  7. Agreed with the used parts comment. Most NAS/server options don't need anything fancy, and older parts are both cheaper and generally have better compatibility. Also, DDR3 is cheap and you can generally get a lot of it for the money pretty easily.
  8. The 1 CPU core per Nvidia GPU is interesting, thanks for sharing. As for Windows not liking more than 5 GPUs, I believe it's annoying but can be done up to 8 GPUs. Something I really want to try is using one of those 12 or 13 GPU motherboards with unRAID to run multiple Windows instances, utilizing GPU passthrough to give Windows the cards, and see if it works. Also, if you want a server idea, here's my very bad experiment of seeing if I could repurpose a 4U server case to be a folding/mining rig: https://www.dropbox.com/s/jjkqngm9ol0fk7d/20170707_214846.jpg?dl=0 https://www.dropbox.com/s/sgq84qlew57q3fi/20170708_203701.jpg?dl=0 https://www.dropbox.com/s/yrc2op9sipnjj3c/20170712_204702.jpg?dl=0
  9. 1) I've looked into the '1 core per GPU' claim you've made and I can't find anything that suggests it is true. If you can point me to somewhere on the F@H site that mentions this, I'd like to see it. 2) I'm suggesting that, assuming the core requirement per GPU isn't necessary, you can do this project for either a lot less money, or possibly even build another one with the saved costs (depending on the parts).
  10. Main reason I ask is that if the main focus is F@H/CureCoin/BOINC, then building, in essence, a mining rig is going to give the highest contribution per dollar spent. While you can fold with the CPU, it doesn't provide anywhere close to the contribution of a GPU. Basically, by saving money on the CPU, RAM, and case by buying cheaper parts, you could then build a second (or third) rig with the saved dollars and increase your contribution. That is, if you are willing to give up the ability for it to function as a server. Alternatively, because F@H lets you choose to fold only with the GPUs and leave the CPU untouched, you could build and run your server with the F@H software running in the background utilizing your GPUs, and then turn CPU folding on and off as you need to.
  11. @nicklordzero I would recommend watching some of BitsBeTrippin's YouTube videos if you want some ideas on mining rigs. He also built a multiminer batch file so you can play around with certain mining algorithms and overclock settings. Just make sure you set up your wallets and all that (BBT has vids on how to do these types of things already).
  12. Is there any link between the address used for payment and their real ID? Yes (i.e. a Silk Road address transacting to an exchange address they own, which has their identity on file). If they remain completely off the grid and trade their BTC for fiat currency in person (sketchy as shit, but these are criminals), then it would be much harder.
  13. If you're looking at the short term, yeah, but personally I plan on mining and just holding my coins for several years and building a portfolio, as I anticipate that at least some of the coins I'm mining will grow in value by a large amount. Mine a lot of coins while it's easy and sell later when they're much more expensive and harder to get.
  14. Ideally, you will want to plug your PSU cables directly into your riser, without using any kind of adapters. Meaning, if you have a V006c (PCIe) riser, you would want to plug the 6-pin PCIe cable into the riser rather than use the PCIe-to-SATA adapter that comes with most of those. Also, each connector type (Molex, PCIe, and SATA) is rated for a certain wattage: Molex is around 100 W, SATA is around 75 W, and 6-pin PCIe is 150 W. You need to make sure you don't draw too much power off a single cable or it will melt the connectors and can start a fire. My personal rule of thumb is you can connect 2 risers to a single Molex/PCIe string (never more than that), and 1 to a SATA string (while 2 should be possible, it makes me uncomfortable drawing that much power over a SATA cable).
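     To put rough numbers on that rule of thumb, here's a tiny Python sketch of the math. The ~50 W per riser is my own assumption for what a typical card pulls through the slot/riser (the slot can supply up to 75 W, but most cards take the bulk of their power through their own 6/8-pin plugs):

         # rough per-cable power budget check for powered risers
         RISER_DRAW_W = 50  # assumed slot/riser draw per card; worst case is 75 W

         CABLE_RATING_W = {"Molex": 100, "SATA": 75, "6-pin PCIe": 150}

         def risers_on_cable(cable, count, draw=RISER_DRAW_W):
             total = count * draw
             rating = CABLE_RATING_W[cable]
             status = "fine" if total <= rating else "over the rating"
             return f"{count} riser(s) on one {cable} string: {total} W of {rating} W rated ({status})"

         print(risers_on_cable("Molex", 2))       # right at the 100 W rating - my personal max
         print(risers_on_cable("6-pin PCIe", 2))  # 100 W of 150 W, comfortable
         print(risers_on_cable("SATA", 2))        # 100 W of 75 W - this is why I stick to 1 per SATA string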
  15. Do not mine Bitcoin with GPUs. Nicehash is not a Bitcoin miner; they just pay you in Bitcoin. Instead, I would recommend mining Zcash or Hush (Equihash algorithm) or mining Ubiq, Expanse, or Dbix (Ethash algorithm) while dual mining Siacoin. As for your question of what would be decent: https://pcpartpicker.com/user/2bitmarksman/saved/YXdsYJ - base parts. The PSU, graphics cards, and case/rig frame costs are left out, as these depend on how you want to build your rig and what cards you can get your hands on. And the cards you get will determine the PSU (or PSUs) you will need to support them. As of this writing, the best recommendation for cards I can give is the 1060 3 GB, as there are some that aren't overpriced right now on Newegg ($220) and they give a decent hashrate for all the algorithms I've put up above. At the end of the day man, you gotta do your own research. If you don't understand it, you're going to get burned sooner or later.
  16. Unless mining cards are significantly cheaper than their gaming counterparts, they aren't going to be a very good buy. Mining with 'gaming cards' allows them to retain some resale value after they have been used for mining, whereas I doubt these mining cards will have any resale value. I feel that Nvidia and AMD would have been better off just 'making more cards' rather than switching manufacturing machinery over to mining cards, even if it's only like an additional 20% or something per batch.
  17. - Enable Above 4G Decoding (I see you got that)
      - Set all PCIe slots to Gen 2
      - Make sure you're installing Windows as a UEFI install with a GPT partition, and that UEFI is the preferred BIOS selection for bootable media
      Honestly, you've done most of what I could think the problem may be, outside of reinstalling Windows as UEFI with a GPT partition.
  18. Best more or less depends on what the devs do with it. I will say that Sia is basically 'free' if you dual mine with Claymore on Ethash algorithms (ETH, ETC, EXP, UBIQ, DBIX), and Storj just announced they are teaming up with FileZilla to integrate their blockchain with FileZilla, which is great for proving their use case if it does well.
  19. The server pool randomly dropping? Not really, but it will help prevent you from connecting to a pool that you aren't set up for.
  20. Are you still set on making your rig a combo server + miner, or do you just want to make a mining rig?
  21. Personally I'd recommend using Claymore and mining ETH, EXP, UBIQ, or DBIX with your 480 if you want to mine. Nicehash is easier to set up but usually has lower hashrates and a higher fee than just mining the alt-coin and converting it to BTC on an exchange. Also, you will want to make sure the cooling on your card is sufficient for mining if you want it to live a good amount of time. Try to keep your card under 75C and the fan speed on the card under 50%. You can check and manually set these things with MSI Afterburner to test things out. Above 80C you're gonna start hurting the overclocking ability of your card in the long term, and running above 66% fan speed all the time will heavily wear out the fan as well. It would be a good idea to have some good case airflow if you wanna mine with your personal PC.
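     For reference, a Claymore Dual Miner start.bat looks roughly like this; the pool hostnames and wallets are placeholders you'd fill in from your pool's getting-started page, and you can drop the -dpool/-dwal/-dpsw/-dcoin options (or run -mode 1) if you don't want to dual mine Siacoin:

         EthDcrMiner64.exe -epool <ethash pool>:<port> -ewal <your wallet> -epsw x -dpool <sia pool>:<port> -dwal <your SIA wallet> -dpsw x -dcoin sc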
  22. Go into the miner.cfg and change the other servers to the ones you want to connect to in the event that flypool goes down. Or make flypool the only server in that list and it will only try to connect to that server.
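     Assuming you're on EWBF's Zcash miner (the one most people pointing at flypool use), the failover part of miner.cfg is just a series of [server] blocks that get tried in order, something like the sketch below; key names can vary between versions, so check against the sample config that ships with your miner:

         [server]
         server <flypool server>
         port <port>
         user <wallet>.<worker>
         pass x

         [server]
         server <backup pool>
         port <port>
         user <wallet>.<worker>
         pass x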
  23. The fact that suppliers won't increase production of cards even a little is what annoys me the most. I understand not wanting to take on the additional risk that they make a large supply, the mining craze stops, and suddenly there is a huge pileup of excess stock, but the constant out-of-stock situation, except for vastly overpriced GPUs, is absurd for this long of a timeframe.