
BiTBiTE

Member
  • Posts: 21


BiTBiTE's Achievements

  1. Essentially that's what I was thinking: feeding the loop with chilled water (say your 12 °C) directly before the components would yield results in the short term. I ordered some materials last night, and I have a day off coming up, so hopefully I'll have some time to test the setup. Thanks for the insight, guys!!!
  2. Hm. Maybe altering the flow rate over the TECs would yield results? An external chilled res? I was just caught off guard because, even with every article I've read saying that TECs don't work in a watercooling loop, I still saw a notable difference in temps when using a TEC placed on my flow meter. I've already sealed my test board, so that one will receive a TEC directly on the Q6600 to see what happens. Max wattage of the Q6600 is 180 W. I have a 400 W TEC in the mail, but it hasn't even left China yet!! I'm going to do some actual testing on my day off. It's the only way to know. The worst that can happen is a learning experience that I'm more than up for. I'll have to order some stuff though, which could take a while. I'll keep this updated as the testing goes/parts arrive.
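The flow-rate question can be put in numbers with the standard heat-transfer relation ΔT = Q / (ṁ·c_p): for a fixed amount of heat pumped out of the water, halving the flow doubles the temperature drop per pass, but the total heat removed stays the same. A minimal sketch, assuming a hypothetical 60 W actually pumped out of the water (real figures depend on the module and its hot-side cooling):

```python
# Rough per-pass coolant temperature drop across a TEC cold plate.
# q_cold_w (heat actually pumped out of the water) is an assumed number,
# not a datasheet value; flow rates are typical loop figures.
C_P_WATER = 4186.0  # specific heat of water, J/(kg*K)

def delta_t_per_pass(q_cold_w, flow_l_per_h):
    """Water temperature drop in one pass: dT = Q / (m_dot * c_p)."""
    m_dot = flow_l_per_h / 3600.0  # mass flow in kg/s (1 L of water ~ 1 kg)
    return q_cold_w / (m_dot * C_P_WATER)

for flow in (100, 200, 400):  # L/h
    print(f"{flow:4d} L/h -> {delta_t_per_pass(60, flow):.2f} K per pass")
```

This is why a single 12706 barely moves loop temperature on its own: at typical loop flow rates the per-pass drop is a fraction of a degree, and what matters in steady state is total heat removed versus total heat added, not the drop across any one pass.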
  3. Initially I was just messing around with old components and wanted to see how low I could get a CPU with a TEC. I've tried to find an equation to calculate exactly how much heat is dissipated with a certain TEC given the load put on it, but I've come up empty handed. I know they're seriously inefficient, requiring double the wattage of the component being cooled, but I'm basing this project on two key factors: 1. They are not intended as the sole cooling component, rather an assisting solution that won't be powered at all times (only for tests). 2. With the setup I have in mind they will add tons to the aesthetics of the case. What I'm wondering is how much they will cool the liquid. If I were able to find the equation to calculate the cooling capacity of my loop, I'd be golden. In the end, I'm doing this for the lulz. I've had lots of projects but never posted anything over the years, from my AC PC to my mineral oil computer and the like. I don't have any modding friends and the wife's growing tired of listening to me lol.
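The missing equation is essentially the module's coefficient of performance (COP), which datasheets give as curves against drive current and the temperature difference across the module, so there's no single closed form. The energy bookkeeping itself is simple, though, and it explains the "double the wattage" rule: the hot side must reject the pumped heat plus the electrical input. A sketch with assumed numbers (the 90 W input and COP of 0.5 are placeholders, not datasheet values):

```python
# Back-of-envelope TEC heat balance. COP and input power are assumptions;
# a real COP depends on the dT across the module and the drive current.
def tec_heat_balance(p_electrical_w, cop):
    q_cold = cop * p_electrical_w    # heat pumped from the cold side (the water)
    q_hot = q_cold + p_electrical_w  # heat the hot-side heatsink must dump
    return q_cold, q_hot

q_c, q_h = tec_heat_balance(p_electrical_w=90.0, cop=0.5)
print(f"pumped from loop: {q_c:.0f} W, dumped at hot side: {q_h:.0f} W")
```

Whatever `q_cold` works out to is the number to plug into the loop: it's the heat the TECs subtract from the water, in direct competition with the heat the CPU and GPU add.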
  4. If you look at my rig pic, take note of the flow meter on the bottom. The pipe to the left has a T-line adapter; one side goes to a valve out the back, which is the drain, and the other goes to the components starting with the GPU. I'm going to make a new pipe (actually two) that goes from the inlet of the flow meter (right side) out the back, and another back in for the loop. The external pipes will let me keep the TEC chiller external to the chassis so that any condensation can be dealt with. To cool the Peltiers I've opted for two Hyper 212 EVOs, which will stick out the back of the case with 120mm fans to cool the heatsinks. I was going to watercool the TECs, but I'm trying to keep it as aesthetically pleasing as possible. The EVOs will look amazing out the back and will add to the steampunk look that I have already. I tested one Peltier on top of my flow meter (it's lying down currently), on top of the plexi face, and I noted a ten-degree difference after 3 hours of gaming. Note that I didn't even apply thermal paste, rather just set it on top of the meter. My idea is to buy another vortex flow meter, replace one side with 3mm copper plating to contact the TECs, then position the meter so the flow indicator side stays visible. I think it would look totally rad.
  5. Hey guys!! Just wanted to fill you in on a project I'm working on. Feel free to chime in with anything that comes to mind. I've been experimenting the past day or so with the TEC modules I finally received; the first was a single 12706, then two 12715s. I'm really intrigued by them. I've sealed a board with an old Q6600 I had lying around and was going to mess around with one directly on the CPU, but then I wondered how it would do in conjunction with the watercooling loop in my main rig. I know about the heat dissipation issues of TECs and am currently using a Hyper 212 EVO, which is doing the trick. I know (first hand) about the condensation issues, but the way it's going to be implemented puts the TECs on the outside of the case, with the EVOs out the back. I have an all-copper tube build which is modular, so it wouldn't take much work to change out one pipe. It wouldn't always be powered, rather only when running benchmarks or games and such. I'm placing the Peltiers after my res, before the water enters the GPU and then the CPU. (My CPU/GPU are in a parallel config, which was more for aesthetics, but I did try both parallel and series configs and saw a marginal difference at best.) The point of the TECs is not to be the main cooling component, but to chill the water before it enters the components. As the cold water enters the GPU it will be heated before entering the CPU, negating the need for insulation past the GPU entry. Whatever heat the liquid picks up from the components is then cooled by my rad, then chilled again before entering the components. I'm OC'd at just shy of 5.2 GHz on my 8600K, really hoping to push this chip to 5.3. It's already delidded with a Rockit copper IHS installed with liquid metal. Temps are good; breaking 74 °C after a few hours of Ghost Recon Wildlands, but considering the OC I'm happy.
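One consequence of air-cooling the TEC hot sides is that their waste heat never touches the loop, so the loop's steady-state bookkeeping is just component heat in versus radiator-plus-TEC heat out. A rough sketch with made-up loads (all wattages here are assumptions for illustration, not measurements from this build):

```python
# Steady-state loop bookkeeping (all numbers hypothetical).
# With the TEC hot sides on air-cooled Hyper 212s, TEC waste heat
# is dumped to room air and never enters the water.
q_cpu, q_gpu = 150.0, 180.0  # W the components dump into the water (assumed)
q_tec_from_water = 80.0      # W the TEC chiller pulls out of the water (assumed)

q_radiator_needed = (q_cpu + q_gpu) - q_tec_from_water
print(f"radiator must shed {q_radiator_needed:.0f} W at steady state")
```

In other words, the TECs effectively offload part of the radiator's job; the chilled-water temperature at the component inlets settles wherever the radiator can shed the remainder at the ambient delta it has available.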
  6. I've realized that I missed some very important considerations when choosing my hardware; I should have at least WD Reds or a SAS HBA with some SAS HDDs. Tbh I never really intended to upgrade to 10 Gb, but I saw the cards at a reasonable price and decided to give it a shot. I'm still trying to figure out whether I should get WD Reds or go the SAS route. My main disadvantage is that I can only run my remaining PCIe slots in x4 mode. If I come across a SAS HBA that can run from an x4 PCIe slot, then I may go that route instead.
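For the x4 worry, the slot bandwidth is easy to bound. The per-lane figures below are approximate usable rates after encoding overhead, and the 200 MB/s per-HDD number is a rough sequential figure, so treat this as an estimate rather than a spec:

```python
# Approximate usable bandwidth of an x4 slot per PCIe generation,
# versus what a handful of HDDs can stream. All figures are rough.
PER_LANE_MB_S = {"2.0": 500, "3.0": 985}  # ~MB/s per lane after encoding overhead

for gen, per_lane in PER_LANE_MB_S.items():
    x4 = per_lane * 4
    hdds = x4 // 200  # assuming ~200 MB/s sequential per modern HDD
    print(f"PCIe {gen} x4: ~{x4} MB/s, room for ~{hdds} HDDs streaming flat out")
```

Even a PCIe 2.0 x4 link has headroom for several spinning disks at full sequential speed, so an HBA running at x4 is rarely the bottleneck for an HDD array.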
  7. TBH it's more of a project than anything. I enjoy trying to figure these things out. Besides work and family, all I have are my computers lol. To infinity and beyond?
  8. So from what I've gathered, as Electronics Wizardy said, I think I'll run a disk benchmark on the RAID0 array once I get back home and see what it's at. I was hoping not to have to start upgrading all my drives, but that seems to be what I may have to do. Thanks for the input!!
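For a quick sanity check before reaching for a proper tool like fio or CrystalDiskMark, a few lines of Python can measure raw sequential write speed. The fsync matters: without it you'd mostly be timing the OS page cache. The file path and sizes here are placeholders:

```python
# Quick-and-dirty sequential write benchmark (a stand-in for fio, not a
# replacement). Writes size_mb of zeros in block_kb chunks and times it.
import os
import tempfile
import time

def seq_write_mb_s(path, size_mb=64, block_kb=1024):
    block = b"\0" * (block_kb * 1024)
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb * 1024 // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # ensure data actually reaches the disk
    return size_mb / (time.perf_counter() - t0)

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    target = tmp.name  # placeholder path; point this at the array under test
print(f"sequential write: {seq_write_mb_s(target):.0f} MB/s")
os.remove(target)
```

Run it against a file on the RAID0 mount to get a ballpark for the array's sequential ceiling; a single pass like this won't show queue-depth or random-I/O behavior, which is where fio earns its keep.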
  9. I was afraid of that. I have two follow-up questions: - Would I benefit from creating a RAID0 array on the Windows PC as well? - Would adding a ZIL/L2ARC cache on an SSD help in my scenario? From https://www.ixsystems.com/blog/o-slog-not-slog-best-configure-zfs-intent-log/: "What is needed for performance improvement is a dedicated SLOG, like a low-latency SSD or other similar device (ZeusRAM, etc), so your ZIL-based writes will not be limited by your pool IOPS or subject to RAID penalties you face with additional parity disk writes."
  10. Hey guys! Avid YouTube viewer for a few years, and I thought to myself, who better to ask for some help? Basically the problem I'm having is that I wanted a 10 Gb connection to my FreeNAS server, but I'm experiencing horrible file transfer speeds. What I was expecting was around 800-1000 MB/s on a 40 GB test file, but instead I max out at 170-180 MB/s. I can transfer the same file over my 1 Gb connection at 110-115 MB/s. Here's my setup; I'll list both the 1 Gb and 10 Gb connections. --- FreeNAS server: FreeNAS 11.2, Core i7 3820, 40 GB RAM, 1 TB Seagate Barracuda boot drive, 2x 4 TB Seagate Barracuda in RAID0, 3x 1 Gb NICs in an LACP lagg config (eth0), 1x Mellanox MNPA19-XTR with default drivers (mlnx0) --- Main PC: Windows 10, Core i5 8600K @ 5.0 GHz, EVGA GTX 1080 SC, 16 GB RAM, 256 GB WD Black NVMe boot drive, 500 GB EVO 860 app drive, 2x 4 TB Seagate Barracuda storage drives (no RAID), 1x 1 Gb NIC, 1x Mellanox MNPA19-XTR --- Both PCs are connected together with a Cisco passive SFP+ cable for the 10 Gb network, and the 1 Gb NICs are all connected to a ProCurve 1810G-24 managed switch. What I've done: - Set an MTU of 9000 on the FreeNAS server and set jumbo packets on Windows to 9000 - Max number of RSS processes: number of physical CPU cores - Max number of RSS queues: number of physical CPU cores - Receive buffers: 4096 - Send buffers: 4096. I booted a Linux live environment on the Windows PC and did an iperf test, which confirmed the bandwidth between the two was 9.4 Gb/s. The interesting thing I noticed was that the transfer speed was consistent whether I copied the file to or from the server, and with different drives (remember the two 4 TB drives aren't RAIDed in the Windows PC), but when I tried transferring the file from the SSD on the Windows PC to the FreeNAS server, I was able to achieve transfer rates of about 500 MB/s, though it was really erratic and would drop to zero every few seconds. If anything else is needed I'd be more than happy to provide it.
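A quick unit check suggests where to look: the iperf result and the disk speeds live in different leagues. A sketch with assumed per-disk numbers (a Barracuda's sequential rate varies by model and by position on the platter, so 180 MB/s is a guess):

```python
# Sanity check: where does ~170-180 MB/s sit relative to the link and the disks?
link_gbit = 9.4                   # measured iperf result, Gb/s
link_mb_s = link_gbit * 1000 / 8  # raw network headroom in MB/s
hdd_mb_s = 180.0                  # assumed sequential rate of one Barracuda
raid0_mb_s = 2 * hdd_mb_s         # best case for the 2-disk stripe

print(f"network ceiling ~{link_mb_s:.0f} MB/s, RAID0 ceiling ~{raid0_mb_s:.0f} MB/s")
```

With a ~1175 MB/s network ceiling and a ~360 MB/s best-case RAID0 ceiling, the observed 170-180 MB/s sits right around a single disk's rate, which points at the array (or how the stripe is being hit) rather than the 10 Gb link itself; the SSD-to-server run reaching ~500 MB/s in bursts fits the same story.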