HunterAP

Member
  • Posts

    371
  • Joined

  • Last visited

Awards

This user doesn't have any awards

Contact Methods

  • Steam
    Hunter-AP
  • Twitch.tv
    hunter_ap23

Profile Information

  • Gender
    Male
  • Interests
    Tech, Computers, Software Development, Gaming
  • Biography
    I love writing code and playing Dark Souls
  • Occupation
    Senior Software Engineer

System

  • CPU
    Main PC: AMD Ryzen R7 3700X
    Workhorse PC: AMD Ryzen R9 5950X
  • Motherboard
    Main PC: Asrock X470 Taichi
    Workhorse PC: Gigabyte X570 Aorus Master
  • RAM
    Main PC: G.Skill Ripjaws V 4x16GB 3200MHz
    Workhorse PC: G.Skill Trident Z 4x32GB 4000MHz
  • GPU
    Main PC: EVGA RTX 3060TI XC Ultra
    Workhorse PC: EVGA RTX 3090 XC3 Ultra Hybrid
  • Case
    Main PC: be quiet! Dark Base 700 (Black)
    Workhorse PC: Phanteks Eclipse P600S ATX Mid Tower
  • Storage
    Main PC: Intel 660p NVMe (512GB), Samsung 850 Evo SATA 2.5" (1TB), WD Blue SATA 2.5" SSD (2 x 1TB)
    Workhorse PC: Samsung 850 Evo (500GB), Samsung 970 Evo (500GB), WD Black NVMe SN750 (2 x 1TB), WD Black SN750 NVMe (2 x 2TB)
  • PSU
    Main PC: EVGA SuperNOVA P2 750W 80+ Platinum
    Workhorse PC: Corsair HX 1200W 80+ Platinum
  • Display(s)
    Main PC:
    Workhorse PC: Dell S2716DGF 27" 1440p 165Hz IPS & Dell S2716DG 27" 1440p 144Hz TN
  • Cooling
    Main PC: Noctua NH-D15
    Workhorse PC: EK EK-AIO 360 D-RGB
  • Keyboard
    Main PC: Cooler Master Master Keys Pro L RGB (MX Browns)
    Workhorse PC: Logitech G815 Lightsync RGB Wired "Tactile" switches
  • Mouse
    Main PC: Logitech G502 Proteus Spectrum RGB
    Workhorse PC: Logitech G502 Lightspeed Wireless
  • Sound
    GoXLR
  • Operating System
    Windows 10 x64

Recent Profile Visitors

2,117 profile views
  1. So I'm using an MSI MEG X670E ACE motherboard with an MSI RTX 4090 Suprim Liquid. I don't think it'd be possible to use washers to artificially raise the slot height for the Camlink Pro, since, like I mentioned, its 4 HDMI inputs use the full height of the slot. Picture for context: As for mounting stuff behind / in front of the 4090 with another riser, I was thinking of doing that too. The included riser takes up the 2 full-height slots closest to the board, and with the 4090 Suprim Liquid taking up 2 slots I could theoretically mount something in front of it, but getting a riser there would be pretty difficult. Here's a picture of the case without anything in it: The included riser goes to the top x16 slot, so adding a 2nd riser would mean routing it around / through the included one, and I'm not sure how I'd pull that off. You can also see in that pic that the included riser is mounted to a recessed spot, and there aren't any mount points for another riser. My next best guess is to get a small riser and try to fit the capture card through the tiny opening for the half-height slots: Not sure if that's the best idea though.
  2. The Hyte Y60 case has 7 half-height expansion slots alongside the motherboard, and it includes a PCIe riser that lets you mount something like a GPU vertically across 3 full-height expansion slots. I'm trying to mount my RTX 4090 vertically and still find a way to install an Elgato Camlink Pro in the system. The issue is that the Camlink Pro is a full-height card that can't be converted to half-height, since its HDMI inputs run along the full height of the card. Is there some setup with another riser or the like that would get the Camlink Pro to fit in the Hyte Y60, or am I SOL?
  3. I got another idea: compare GPUs with every possible ray tracing / upscaling option available. For example, if a game supports DLSS 3, a DLSS 3 frame generation toggle, AMD FSR, and ray tracing, then run tests with the following combinations:
     - Nvidia GPU without ray tracing and without upscaling
     - AMD GPU without ray tracing and without upscaling
     - Nvidia GPU with ray tracing but without upscaling
     - AMD GPU with ray tracing but without upscaling
     - Nvidia GPU with DLSS 3, no frame generation, without ray tracing
     - Nvidia GPU with DLSS 3 and frame generation, without ray tracing
     - Nvidia GPU with DLSS 3, no frame generation, with ray tracing
     - Nvidia GPU with DLSS 3, frame generation, and ray tracing
     - AMD GPU with FSR and without ray tracing
     - AMD GPU with FSR and with ray tracing
     - Nvidia GPU with FSR and ray tracing
     That way you can measure the performance improvements of:
     - FSR vs DLSS 2 vs DLSS 3 vs DLSS 3 with frame generation
     - Ray tracing performance between generations of Nvidia and AMD cards
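A test matrix like the one above is easy to get wrong by hand (e.g. accidentally listing DLSS runs for AMD cards). As a minimal sketch, the valid combinations can be enumerated programmatically; the feature axes and vendor names here are illustrative, not tied to any particular benchmark suite:

```python
from itertools import product

# Hypothetical feature axes for one game's benchmark matrix.
GPUS = ["nvidia", "amd"]
RAYTRACING = [False, True]
UPSCALERS = [None, "dlss3", "fsr"]
FRAME_GEN = [False, True]

def build_matrix():
    """Enumerate valid (gpu, raytracing, upscaler, frame_gen) runs.

    DLSS 3 is Nvidia-only, frame generation only applies when DLSS 3
    is active, and FSR runs on both vendors.
    """
    runs = []
    for gpu, rt, up, fg in product(GPUS, RAYTRACING, UPSCALERS, FRAME_GEN):
        if up == "dlss3" and gpu != "nvidia":
            continue  # DLSS 3 is Nvidia-only
        if fg and up != "dlss3":
            continue  # frame generation requires DLSS 3
        runs.append((gpu, rt, up, fg))
    return runs
```

Filtering a full cartesian product this way keeps the constraints in one place, so adding a new axis (say, another upscaler version) regenerates the whole matrix instead of requiring a hand-edited list.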
  4. I know lots of people would love to use this tool to measure the performance of their systems and upload the results to a public database. My question is: what's the plan for validating the results, or verifying that the project executable wasn't tampered with? I figure some validation steps would be needed either locally or on the remote machine where the database lives (assuming there will be a public one at all) - any ideas so far?
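One common (if imperfect) approach to the validation question above is signing each submission with an HMAC. A minimal sketch, with a hypothetical key and result schema - note that a key embedded in a client binary can always be extracted by a determined user, so this is only a deterrent and server-side plausibility checks would still matter:

```python
import hashlib
import hmac
import json

# Hypothetical shared secret. In a real client this would be embedded
# in the binary; extraction is possible, so treat this as a deterrent.
SECRET_KEY = b"example-signing-key"

def sign_result(result: dict) -> dict:
    """Attach an HMAC-SHA256 tag over a canonical JSON encoding."""
    payload = json.dumps(result, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"result": result, "hmac": tag}

def verify_result(submission: dict) -> bool:
    """Server-side check: recompute the tag, compare in constant time."""
    payload = json.dumps(submission["result"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, submission["hmac"])
```

`sort_keys=True` makes the JSON encoding canonical so client and server hash identical bytes, and `hmac.compare_digest` avoids timing side channels in the comparison.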
  5. The only reason I'm sticking with this system is that some projects I'm working on use QuickSync, so I can't exactly move to a Ryzen system (unless I get some Intel ARC GPU). I'm definitely open to finding a single x16 card that has chips to support non-bifurcation setups, but I can't seem to find any reputable ones that are PCIe 4.0 - so far I've only found 3.0 cards.
  6. So I have an Intel 12600K system that I'm using for some projects. I don't need a dedicated GPU and have been using the iGPU, and I already own a case that only fits mATX boards. I initially got an Asus Hyper M.2 PCIe 4.0 x16 to four NVMe Gen 4 M.2 adapter card to use as fast storage for some of my projects. The issue is that 12th gen CPUs and the related Z690 motherboards only support bifurcating the PCIe 5.0 x16 slot into x8/x8 mode, and don't support x8/x4/x4 or x4/x4/x4/x4. I can still use two of the four drives, but obviously that's not ideal. I've been looking around and could only come up with two solutions, but actually finding the hardware for them has been pretty difficult - the cards I have found are some combination of too expensive, barely reviewed online, and typically PCIe 3.0 rather than 5.0 or 4.0. Here are my ideas so far:
     1. A PCIe 5.0 / 4.0 x16 to x8/x8 splitter, plus two x8 to dual NVMe M.2 adapters. I can't seem to find any splitter I feel confident would work, nor any PCIe 5.0 / 4.0 x8 to dual M.2 adapters that don't require bifurcation.
     2. A PCIe 5.0 / 4.0 x16 non-bifurcation four M.2 adapter card. I believe I've found a few, but they're either not well reviewed or insanely expensive.
     If anyone has recommendations for these pieces of hardware, or some alternative solutions, I'd be happy to hear them!
  7. 1. Storage Spaces does not offer snapshotting, customizable compression settings, or triple parity, among other features I regularly use in ZFS.
     2. I have no interest in using Windows for my NAS. Linux is arguably better for performance in server applications, and when it comes to kernel updates I can choose when to reboot - Windows reboots automatically whenever it thinks is a good time, which it's often completely wrong about, and it will even reboot while you're using the machine.
     3. Plex works fine anywhere; it's not an argument for Windows over Linux. Also, managing permissions for different datasets in ZFS is easier than in Windows: I can create separate datasets within a pool assigned to a specific user or group, whereas in Windows I have to do more work to get the same result. Just because it would be simpler doesn't mean it's better or suits my needs.
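The per-user dataset setup described in point 3 is a few commands in ZFS. A sketch assuming a hypothetical pool named `tank` and user `alice` - pool, dataset, and property choices here are illustrative:

```shell
# One dataset per user, each with its own compression and quota;
# ownership and delegated permissions are scoped to the dataset.
zfs create -o compression=zstd tank/media
zfs create -o compression=lz4 -o quota=500G tank/alice
chown alice:alice /tank/alice
# Delegate snapshot/mount rights to alice on just her dataset
zfs allow alice snapshot,mount tank/alice
```

Because properties like compression and quota inherit down the dataset tree, per-user policy lives in one place instead of being scattered across ACLs.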
  8. Based on what they seem to be doing at 6:04, it shows all 3 of their GPUs as selectable for the VM to use, so I suppose that means it should be totally doable with just 1 graphics card if I'm running just one VM. Thank you for posting the vid!
  9. 1. Because I'd prefer to use ZFS or BTRFS, which are more reliable forms of redundancy, and I'm already using ZFS on my current Ubuntu NAS. Storage Spaces is just basic software RAID.
     2. Those operating systems are more reliable than Windows (no forced reboots due to updates, better performance when kept online for long periods).
     3. Plex and other apps work just fine on those operating systems too, and I can host a ton of apps in containers - with the above-mentioned reliability.
     4. If I need to shut down / restart my Windows VM, it won't affect my ability to access my NAS shares or any other hosted app from any other machine.
  10. I'm looking into consolidating my PCs by putting my normal Windows desktop into a VM on top of either TrueNAS or unRAID. The idea is that TrueNAS / unRAID would handle all the same stuff my current NAS does (manage storage pools, set up network sharing, Plex library hosting, etc.) while I reserve some CPU and GPU power for my Windows VM. The question I'm unsure about is this: if I have a single GPU (namely an Nvidia RTX 3060 Ti), can it be used by both the host NAS OS and the VM for display output and hardware video decoding at the same time? Or would I need another GPU entirely to allow for this?
  11. So the only situation I'd think would make sense is if you have more RAM slots than memory channels (dual channel with 4 slots, triple channel with 6 slots, or quad channel with 8 slots) and were able to make a RAM disk use only the memory in one channel group. So if you had 4 slots on a dual-channel system, you could carve some memory out of the first channel pair, do the same with the second pair, and then RAID 0 those together. At this point I realize it's all theoretical and not at all applicable - I just find it interesting.
  12. Say you have a system with an enormous amount of system memory, like 256GB, which is more than 95% of people would ever need in a single system. Now what if you're not maxing out RAM usage, but are hitting the maximum read/write speed of a set of drives and need a faster solution? Could you create multiple RAM disks of the same size and then RAID 0 them together? For simplicity, let's assume the contents of the RAM RAID 0 don't need to survive a reboot, as they're copied somewhere else before a shutdown occurs. Would any operating system even understand how to do this? Since RAM disks already appear as disks, and the OS simply sees that X amount of memory is in use by some process/service, would it cause issues? I feel like Windows would have trouble handling this since you'd have to use Striped Volumes, whereas on Linux it might be easier with the available tools. EDIT: Seems like I found some answers from people who have tested this. Since the RAM is treated as one device whose I/O is already spread across all DIMMs by the memory controller, striping RAM disks in RAID 0 wouldn't gain anything.
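As a toy illustration of what RAID 0 actually does - and why layering it on RAM disks is redundant, since the round-robin split mirrors the interleaving the memory controller already performs across DIMMs - here is a minimal striping sketch. The "disks" are plain Python lists standing in for member devices:

```python
def stripe_write(data: bytes, disks: list, chunk_size: int) -> None:
    """RAID 0 write: split data into chunks, deal them round-robin."""
    for i in range(0, len(data), chunk_size):
        member = (i // chunk_size) % len(disks)
        disks[member].append(data[i:i + chunk_size])

def stripe_read(disks: list, chunk_size: int, length: int) -> bytes:
    """Reassemble by reading chunks round-robin from each member."""
    out = bytearray()
    next_chunk = [0] * len(disks)  # per-member read position
    member = 0
    while len(out) < length:
        out += disks[member][next_chunk[member]]
        next_chunk[member] += 1
        member = (member + 1) % len(disks)
    return bytes(out[:length])
```

With real drives the two members can service their chunks in parallel, which is where RAID 0's speedup comes from; with one pool of RAM behind both "disks" there is no second independent device, so the striping adds bookkeeping and no bandwidth.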
  13. I know about positive & negative airflow situations, and in my case I can equalize it and don't get a lot of dust build-up. I suppose I can go with just a push config on both the intake and exhaust 360mm rads and live with that. And unfortunately you can't fit any 3.5" drives behind the motherboard. I have three 3.5" drives and two 2.5" drives; the case has two slots for 2.5" drives and two 3.5" cages in the front under the shroud, and all of those are populated. My last 3.5" drive is in its own cage above the shroud at the front of the case near the intake fans, and I have nowhere else to put it in this case.
  14. In my situation, those two little HDD cages below the PSU shroud are already populated by 3.5" drives; I just have one more mounted above the shroud, which will supposedly be a conflict. I figured push-pull is better than just one or the other, and the case isn't really airflow oriented, so the more air I can get through it the better, no? And I'd be going with two slim 360mm rads since I plan on cranking both the CPU and GPU, or at least letting them both auto OC (PBO + XFR for the 3700X, and cranking the power/temp slider on the RTX 2080). Doing those things already makes my system quite hot, so I haven't had a chance to really OC them manually.
  15. My system has a Ryzen 3700X on a Noctua NH-D15 and an EVGA RTX 2080 XC Ultra, all in a be quiet! Dark Base 700. My CPU idles at 60c and ramps up to 78c under full load, and my RTX 2080 idles at 36c but will crash by hitting >86c unless I force the fans to a much higher speed. I figure I'd go with a custom loop rather than an AIO, since I can fit two 360mm rads (one in front, one up top) in my case if the rads are thin enough. I went through the EKWB configurator and came up with this parts list: I have a few questions.
     1. Could I replace those EK fans with Noctua NF-F12 fans? They're slightly cheaper and I've always had a good experience with Noctua fans.
     2. My case supposedly supports a 360mm rad in the front, but I've seen that I have to remove the PSU shroud cover near the front of the case to fit it, and people say I can't use any of the HDD cages above the shroud. Is that still true with these slim rads?
     3. Following up on #2, do I have enough clearance to do push-pull on both rads?