LIGISTX

Member
  • Content Count: 2,320
  • Joined
  • Last visited

About LIGISTX

  • Title
    Junior Member

Profile Information

  • Gender
    Not Telling
  • Location
    California


  1. I don't know what the lights on the GPU mean, but use something like MSI Afterburner to see what the GPU is doing (there's a rough logging sketch at the end of this list). Have you tried any stress tests like Unigine Heaven or Valley? Or 3DMark? If it crashes when it gets to high load, it's possible the card is bad or your PSU is failing. See if you can replicate the issue in a benchmark/stress test, since those are more repeatable.
  2. What GPU do you have? 86°C is pretty warm... Can you turn the fans on the GPU up via Afterburner or EVGA Precision? I would leave the CPU fans tied to the CPU temp; you don't want a heavy CPU load with a light GPU load causing very high CPU temps because the fans won't ramp up... unless you can verify that even at low fan RPM on the radiators the CPU temp stays reasonable. You can easily test this: lock the radiator fans at the lowest RPM they would run at when the GPU is cool, then run a CPU stress test.
  3. Hmm, define "GPU utilization spiking." The GPU should always be at about 99-100% utilization... It will try to pump out as many frames as possible, and thus run at 100% (assuming your CPU is fast enough to keep it fed with data), which your 1800X more or less should be.
  4. You can do push/pull, but if you do, make the fans that blow into each other run at the same RPM; don't let the top outside and top inside fans run at different RPMs, for example. But this also won't help: the heat coming off the radiator will be the same, so the air temperature reaching the GPU will be the same. Think of it this way: the CPU makes the same amount of heat either way, and that heat is going to go into the case. This isn't really something to worry about, though. The GPU won't get too hot; it'll be fine.
  5. I wouldn't do that; that's pulling a lot of amps through the GPU fan headers. Is it possible? Maybe, but I don't know if I would do it. You can always use software like SpeedFan, or hardware like a Corsair Commander Pro, to do this. I use a Commander Pro for the system in my sig, and it works really well. I have everything controlled by GPU temp as well: the top fans on my 3x140 rad run at 600 RPM, and the front 2x140 fans are off unless my GPU goes over 45°C (a rough sketch of that fan-curve logic is at the end of this list). Thankfully, even under full CPU load, the top rad at 600 RPM is enough to keep things plenty cool, helped by the slight airflow the top fans pull through the front rad. SpeedFan is a free way to do this, but it can be a bit of a pain, and a few years back PUBG's anticheat BS flagged it as tampering software, aaaaand that's why I have a Commander Pro now.
  6. You would want to go the other way, AHCI to RAID. https://blog.workinghardinit.work/2018/11/28/moving-from-ahci-to-raid/ That said, you would only have to do this if you plan on using your motherboard's hardware RAID. And remember, if your motherboard dies and you replace it with a different model, you may not be able to get into the drives any longer, since the RAID controller would be different. That is a potential risk to consider. I wish I could help you more with the Windows RAID options; I just don't have any personal experience with them.
  7. That is only required for mobo hardware RAID; for Windows software RAID it shouldn't be needed at all. And it's not set per individual device, it's usually the entire controller, so all SATA ports on the mobo (assuming a single-controller board) would respect the setting. At least, that's how it worked years ago when I used hardware RAID 0 with my short-stroked 10K RPM VelociRaptors.... lol. It's been a while, for sure.
  8. Then you wouldn't need to reformat. You would need to put your SATA devices into RAID mode, and if your boot SSD is SATA, that will cause Windows to stop booting correctly. You can fix this easily, though; just Google "Windows boot AHCI RAID" and guides will pop up.... I have done it many times myself, I just don't remember the steps off hand. But again, why is RAID 5 desired? I assume you want a little data redundancy, but just remember: RAID IS NOT A BACKUP, and RAID 5 REALLY IS NOT A BACKUP. If it's just for movies and such that you can "easily" replace, it's fine. If it's for "important" information, RAID 5 isn't enough redundancy.
  9. Yea, I mean, that will work fine. I run an i3 for a homelab: ESXi with multiple Ubuntu VMs under it, plus FreeNAS. But... depending on the amount of storage space, FreeNAS really does like RAM, which is why 16 GB is recommended, and an Intel NIC is recommended for its drivers... Nothing wrong with an i3 at all; hell, they support ECC (which I use), and mine is plenty fine for a few Plex transcodes with only 2 threads assigned to my Plex VM. Just saying, posting your hardware is always a good idea, as the community can give feedback.
  10. Oh, well, wait. Do you have Windows installed on one of those 4 TB drives? If so, that's your issue; you can't set up RAID after the fact. If Windows isn't on any of the drives you're trying to build into the RAID pool, then it will work fine. Also, though, why are you trying to set up a RAID 5 array? If you intend to run Windows on that array, I wouldn't really recommend it... There is parity math involved in reading and writing to a RAID 5 array (a toy sketch of it is at the end of this list), and the performance hit on already slow mechanical drives wouldn't make for a fun system to use. RAID 5 can read faster than a single drive, but writes, depending on the situation, can be slower.
  11. Does your motherboard support RAID? You can always do it in hardware... That said, software RAID is likely more resilient, since it won't depend on a particular mobo's implementation. I have only done hardware RAID in Windows, so I really don't know much beyond that; the only advice I had was to do it on your mobo, since it almost certainly has RAID 5 support.
  12. Is anything overclocked? Have you tried DDU to remove all the GPU drivers and reinstall them? If not, Google DDU (Display Driver Uninstaller) and download it; it'll reboot you into Safe Mode to fully remove the drivers, then you reinstall them once it reboots back into normal Windows.
  13. I run FreeNAS and it is pretty easy to use and get going, but it's certainly not as hands-free as a Synology would be. Also, please post your specs before you make a choice so we can give you pointers on the hardware for FreeNAS. Although, to be honest, all you need is 16 GB of RAM, any modern i3, and a boot SSD (or a USB stick, but I would grab a relatively cheap ~50 dollar SSD), and that's basically it. Still, I'd be curious what parts you're considering. Also of note, the mobo should have an Intel NIC, which many do; just make sure the one in question does. Intel NIC drivers are much, well, better.
  14. I mean, 66°C is totally fine; I wouldn't waste any money trying to reduce that temp. Is that under load? My way-over-the-top water cooling setup keeps my 8700K in the mid-to-high 50s under game load... granted, I am going for silence, and all of my fans run at ~650-700 RPM under that load, but still. Point is, 66°C at full load is TOTALLY fine, great even, and not at all worth spending money to reduce. CPUs are built to run 24/7/365 at load, and load temps are typically higher than 66°C. Your CPU will last WAY longer than you will want it to, even if it ran at 66°C all day, every day, for the next 5 years straight.
  15. Yea, fo sho. I'd still look into using a hypervisor of some sort: run a NAS-type OS in a VM and Windows next to it... You get the benefits of an OS meant to run a proper file system (FreeNAS and ZFS, for example), and you don't really lose any performance. Also, yea, data integrity is up to you, but the odds of losing the data after a single drive goes down are not negligible. My RAIDZ2 (RAID 6 equivalent) with 10x4 TB drives has dropped two drives before..... it dropped one, and then dropped another while doing a resilver. That was freaky. With 8 TB drives, your rebuild time is even longer, and the probability of a loss during a rebuild is, well, a risk that's up to you to take (a back-of-the-envelope estimate is at the end of this list). RAID 5 is "dead" for this reason, but that obviously depends entirely on how easy that data would be to recover, or how painful/time-consuming it would be.
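
For item 1 above, here is a minimal sketch of how GPU load and temperature could be logged while a stress test runs, so a crash can be matched against what the card was doing at the time. It assumes an NVIDIA card with nvidia-smi on the PATH; the script and the log path are illustrative only, not a tool from any of these posts.

```python
# Minimal sketch: poll nvidia-smi once a second and append readings to a CSV,
# so you can see utilization/temperature right before a crash.
# Assumes an NVIDIA GPU with nvidia-smi available on the PATH.
import subprocess
import time

LOGFILE = "gpu_log.csv"   # hypothetical output path
FIELDS = "timestamp,utilization.gpu,temperature.gpu,power.draw,clocks.gr"

with open(LOGFILE, "a") as log:
    log.write(FIELDS + "\n")
    while True:
        # --query-gpu and --format=csv are standard nvidia-smi query options.
        result = subprocess.run(
            ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        )
        log.write(result.stdout)
        log.flush()           # flush so the last sample survives a hard crash
        time.sleep(1)
```

Stop it with Ctrl+C once the benchmark run is over, then look at the last few rows in the CSV.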
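For item 5, the fan behavior described there (front fans off until the GPU passes 45°C, top fans idling at ~600 RPM) is just a temperature-to-RPM mapping. Below is a rough sketch of that logic; the breakpoints match the post, but the ramp slopes and function names are made up for illustration, since the real control lives in iCUE/SpeedFan, not in a script.

```python
# Rough sketch of a GPU-temperature-based fan curve like the one described.
# Breakpoints roughly follow the post (top rad ~600 RPM baseline, front rad off
# below 45°C); everything here is illustrative, not an actual iCUE/SpeedFan config.

def top_rad_rpm(gpu_temp_c: float) -> int:
    """Top 3x140 radiator fans: quiet baseline, ramp only when the GPU gets warm."""
    if gpu_temp_c < 45:
        return 600                      # near-silent baseline
    if gpu_temp_c < 60:
        # linear ramp from 600 to 1000 RPM between 45°C and 60°C (made-up slope)
        return int(600 + (gpu_temp_c - 45) / 15 * 400)
    return 1000

def front_rad_rpm(gpu_temp_c: float) -> int:
    """Front 2x140 radiator fans: completely off until the GPU exceeds 45°C."""
    if gpu_temp_c <= 45:
        return 0
    return max(500, top_rad_rpm(gpu_temp_c) - 100)   # spin up slightly slower

if __name__ == "__main__":
    for t in (35, 45, 50, 60, 70):
        print(f"GPU {t}°C -> top {top_rad_rpm(t)} RPM, front {front_rad_rpm(t)} RPM")
```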
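For item 10, the "math" in RAID 5 is XOR parity: one parity block per stripe lets any single lost block be rebuilt, but every small write needs the old data and old parity read back before the new data and new parity are written, which is the read-modify-write penalty. A toy illustration, not tied to any specific controller or implementation:

```python
# Toy illustration of RAID 5 parity: parity = XOR of the data blocks in a stripe,
# so any single missing block can be reconstructed from the others.
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# A stripe spread across 3 data drives + 1 parity drive (contents are arbitrary).
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# Simulate losing drive 1: rebuild its block from the survivors plus parity.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == data[1]

# The write penalty: updating one block takes 2 reads (old data, old parity)
# and 2 writes (new data, new parity), which is why small writes are slow.
new_block = b"DDDD"
parity = xor_blocks([parity, data[1], new_block])   # recompute parity incrementally
data[1] = new_block
assert xor_blocks(data) == parity
```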
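For item 15, the usual back-of-the-envelope argument behind "RAID 5 is dead" is that a rebuild has to read every surviving drive end to end, and consumer drives are often specced at roughly one unrecoverable read error (URE) per 10^14 bits. Below is a rough estimate of the chance of hitting at least one URE during a rebuild under those assumed numbers; drives specced at 10^15 (or a rebuild that doesn't touch every sector) change the answer a lot.

```python
# Back-of-the-envelope: probability of hitting at least one unrecoverable read
# error (URE) while rebuilding a degraded RAID 5 array.
# Assumptions (not from the posts): a consumer-drive URE spec of 1 error per
# 1e14 bits read, and that every surviving drive is read in full during a rebuild.

URE_RATE = 1e-14          # errors per bit read (common consumer-drive spec)

def rebuild_failure_probability(drive_tb: float, surviving_drives: int) -> float:
    bits_to_read = surviving_drives * drive_tb * 1e12 * 8   # TB -> bits
    # P(at least one URE) = 1 - (1 - p)^n
    return 1 - (1 - URE_RATE) ** bits_to_read

# Example: a 4-drive RAID 5 of 8 TB disks means reading 3 surviving 8 TB drives.
print(f"{rebuild_failure_probability(8, 3):.0%}")   # roughly 85% with these numbers
```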