Everything posted by Zando_

  1. Gotta compromise somewhere. I didn't realize how big the performance difference between FSR and DLSS was; I only use FSR when DLSS or XeSS aren't an option (I have Nvidia and Intel GPUs). If you think you'll be fully dependent on DLSS and Nvidia's frame gen, then just get the best Nvidia GPU you can afford (seems to be the 4070 Ti in this case). I play at 4K60 with a 2060 Super and an Intel Arc A770. The Arc is better at rasterization but has some compatibility issues with the games I play, so I end up using the 2060 Super more often. Enough of my games support DLSS that I don't notice the drop in raw GPU power, and the ones that don't are usually old enough that they'll run playably on not-eyesore settings. It's not exactly the situation you're in, but similar enough that I thought it was worth sharing.
  2. The 7900 XTX has a solid lead over the 6950 XT at 4K, and it'd be even better vs the 6700 XT, considering the 6950 XT already has a massive lead over that. Tom's Hardware has their 4K roundup with both the 7900 XTX and 6950 XT on the charts here: https://www.tomshardware.com/reviews/amd-radeon-rx-7900-xtx-and-xt-review-shooting-for-the-top/4. And the 6950 XT numbers with the 6700 XT on the charts here: https://www.tomshardware.com/reviews/amd-radeon-rx-6950-xt-review/4. 4K wants the most GPU you can throw at it. If you have no reason to prefer Nvidia other than DLSS, I'd say you're better off with more rasterization performance and VRAM: you should be able to get away with a higher native resolution, which should look as good as or better than DLSS upscaling from a lower internal resolution. The only sticking point would be games that only have FSR or DLSS and no XeSS; you'd have to make do with FSR there.
  3. The 4090 is one of the most efficient GPUs for heavy workloads (which is what it's designed for). Using Folding@Home as an example, it's 6th for PPD/kWh: https://folding.lar.systems/gpu_ppd/overall_ranks_power_to_ppd. The only GPUs beating it are other Ada Lovelace (4000 series) cards running further inside their efficiency curve than the 4090; it's more efficient than every GPU before it. The gap between the Ada cards and even the previous generation is pretty wide, and that's a single generation. If you compared it to Maxwell or Kepler (decade-ish old architectures) the results would be comical. And they are: the 4090 does ~2.4M points per kWh, while the Maxwell 980 Ti does... 0.184M points for the same kWh of power draw.
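     For a sense of scale, here's the arithmetic on those two figures (a quick sketch using the rounded PPD/kWh numbers quoted above):

     ```python
     # Rounded PPD-per-kWh figures from the folding.lar.systems ranking above.
     ada_4090 = 2_400_000      # ~2.4M points per kWh
     maxwell_980ti = 184_000   # ~0.184M points per kWh

     print(f"4090 vs 980 Ti efficiency: {ada_4090 / maxwell_980ti:.1f}x")  # ~13.0x
     ```

     Roughly thirteen 980 Tis' worth of work for the same electricity.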
  4. It'll likely be fine. Your CPU cooler should have its own fan(s), as should the GPU, and the motherboard usually doesn't need much, so it should be fine with the air pushed around by the CPU/GPU coolers and just natural radiation (of heat, not nukes). The only thing that might see a slight performance drop is the GPU, if it's a modern one that boosts less and less the hotter it gets. It won't throttle, it'll just boost less, which is usually a ~3-5ish fps difference. Likely not noticeable unless the GPU is already pushed to its absolute limits (like my 2060 Super, which I ask to run 4K; that ~3fps vs stock/base clocks is noticeable because it's sometimes the deciding factor between reaching my display's minimum refresh rate - 40Hz - or not). I've run PCs with not enough fans, or straight up in a box with no fans... they got hot. That's about it, no major throttling; when I did this I was using a Maxwell GPU, so it didn't change its boost clocks at all either. And not hot enough to damage the hardware; as others noted, it'll shut itself off before that happens. Yep. And lots of those work PCs don't just have 0 intake fans, there are no intake vents anywhere on the front panel. The PC lives off what the exhaust fan can feebly pull through the gaps in the chassis. And they run fine for years: I've worked for my company for 7 years, and I believe we've still got some of the Skylake machines kicking that were purchased around the time I was hired. I've had a dead one or two, which isn't bad for $350-500 machines approaching a decade old in sub-optimal conditions (some are office PCs, others are in and out of hot/humid/dusty production rooms or warehouse/pick line locations).
  5. Elden Ring. Helldivers 2 is rapidly getting up there too. Probably Destiny 2 in its prime (Warmind to just before sunsetting), and Star Citizen, though both of those games are very far from perfect.
  6. I believe the NF-A12x25s remain the best airflow/pressure/noise balanced fans... they're just rather pricy. What temps were your NVMe drives actually hitting? A 20C increase isn't bad unless it puts them out of safe operating range.
  7. Like @RONOTHAN## said, there really isn't much better. Noctua has their F series that focuses on static pressure, but IIRC the Redux P12s are based on the older version of that fan anyway, so they're already pressure-optimized fans. You do *not* want the 3000RPM iPPC fans if you're trying to avoid jet engine noises; I have 'em in 140mm form, and they are very much a jet engine when spun up. We haven't invented a way to ignore physics: pushing air harder requires more force, and we get that by spinning fans faster. There's only so much you can do with the blade design itself.
  8. Because most cheap boards only have 4. That's why I mentioned the HBA. Those drives are extremely expensive. Not sure what you mean with the SMART monitoring; that's been a thing since... the mid-90s, looks like. I've got drives out of Vista PCs that spit out SMART data, and that's really the oldest hardware I've personally used. I've never considered a drive not having it, but I guess some must not have at some point. No. SMART is not that reliable, and fluke failures are a thing. We don't do redundancy for fun. OP hasn't mentioned exactly what drives they intend to use (how many, what capacity), what OS they intend to run, or how much data they intend to store, so we can't really advise on what RAID/ZFS/other array type they should use.
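     Reading SMART is easy; the catch is what it can't tell you. A minimal sketch of polling a drive's health, assuming smartmontools (smartctl) is installed and the script runs with sufficient privileges (the device path is just an example):

     ```python
     import subprocess

     def smart_health(device: str) -> str:
         """Return smartctl's overall health assessment for a drive."""
         result = subprocess.run(
             ["smartctl", "-H", device],  # -H prints the overall health test result
             capture_output=True,
             text=True,
         )
         return result.stdout

     print(smart_health("/dev/sda"))
     ```

     A PASSED result there still doesn't rule out a fluke failure tomorrow, which is the whole argument for redundancy.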
  9. Case is one of the cheapest options with 8 3.5" drive bays - in slots too, and right in front of the intake fans, so the drives will stay cool and be more easily swappable. If OP doesn't need that many drives then yes, there are cheaper options. As @DrMacintosh said, it sorta depends on how much exactly OP is intending to store. Good point. You can get that same kit in 2x8GB instead of 2x16GB: https://pcpartpicker.com/product/P4FKHx/silicon-power-sp016gxlzu320bdaj5-16-gb-2-x-8-gb-ddr4-3200-cl16-memory-sp016gxlzu320bdaj5. It's only $20 cheaper though, so if OP will want 32GB in the future, the 32GB kit is the better value: less than 100% more cost for 100% more RAM. Also depends on the OS OP intends to use. For a beefier NAS box I'd prefer TrueNAS Scale, and by default that will only use 50% of the RAM for caching, so ~16GB. I believe you can manually override this, but if you then boot up some containers/VMs or something and forget to change the limit, you can run out of RAM and the system will hard crash.
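     The per-gigabyte math on those two kits, with the 16GB price approximated from the "$20 cheaper" figure above:

     ```python
     # Price per GB; the 16GB kit price is approximated as $20 less
     # than the $53.97 32GB kit from the part list.
     kits = [
         ("2x8GB (16GB)", 16, 33.97),
         ("2x16GB (32GB)", 32, 53.97),
     ]
     for name, capacity_gb, price_usd in kits:
         print(f"{name}: ${price_usd / capacity_gb:.2f}/GB")  # $2.12/GB vs $1.69/GB
     ```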
  10. Something like this:

      PCPartPicker Part List: https://pcpartpicker.com/list/BVCDVW
      CPU: Intel Core i3-12100 3.3 GHz Quad-Core Processor ($121.98 @ Amazon)
      Motherboard: ASRock B660M Pro RS Micro ATX LGA1700 Motherboard ($94.99 @ Newegg)
      Memory: Silicon Power GAMING 32 GB (2 x 16 GB) DDR4-3200 CL16 Memory ($53.97 @ Amazon)
      Case: Antec P101 Silent ATX Mid Tower Case ($109.99 @ Newegg)
      Power Supply: Corsair CX750M (2021) 750 W 80+ Bronze Certified Semi-modular ATX Power Supply ($74.98 @ Amazon)
      Total: $455.91
      Prices include shipping, taxes, and discounts when available
      Generated by PCPartPicker 2024-03-29 14:42 EDT-0400

      That board has 4 SATA ports, the case fits 8 3.5" drives, and the PSU includes 8 SATA connectors, so grab a used HBA and you can use all 8 drive bays (a quick connectivity check is sketched below). The board has PCIe x16 and x4 slots (the latter is physically x16 but with x4 bandwidth), so you should be able to fit both an HBA and a 10Gb NIC if you have a 10G LAN. I picked the i3-12100 because 12th gen chips have good idle power draw, the iGPU can be used for en/decoding (and means you don't need a discrete GPU filling a slot and adding power draw), and most NAS tasks are single-threaded, so there's no need for more cores/threads (if there's something else you want to do that needs them, the i5-12400 is a very good pick).
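      The connectivity check mentioned above; the 8-port HBA figure is my assumption of a typical used card (an LSI 9211-8i is a common example, not part of the list):

      ```python
      # Drive connectivity budget for the build above.
      onboard_sata = 4       # ASRock B660M Pro RS SATA ports
      hba_sata = 8           # assumed 8-port HBA via 2x SFF-8087 breakout cables
      psu_sata_power = 8     # Corsair CX750M SATA power connectors
      drive_bays = 8         # Antec P101 Silent 3.5" bays

      data_ports = onboard_sata + hba_sata
      print(f"Data ports: {data_ports}, power connectors: {psu_sata_power}, bays: {drive_bays}")
      print("All bays usable:", data_ports >= drive_bays and psu_sata_power >= drive_bays)
      ```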
  11. Yep. It'll even snitch on apps now; it will double-check with you on whether you want to allow the app to track you cross-app (to other apps in general, or back and forth from your browser). I always say no to this.
  12. It's the 2nd M.2-ish connector behind the M.2 slot, I believe: the spec sheet notes it's a "High-Speed Custom Solutions Connector (PCIe x4)", which looks like what you'd need for x4 PCIe. EDIT: actually, re-looking, I think that is the SATA M.2 slot, and the one below it looks like an M.2 slot for a wifi card? Unless that's integrated on the board. The custom PCIe connector may be on the other side of the board. You would need a separate PSU to run the drives, yeah. I'd grab a USB 2.0 (not 3.0, it needs to be a 2.0) thumb drive and give Unraid a shot. If you don't need the speed of ZFS - and I assume you don't, as you wouldn't get it over a USB hub to begin with - then Unraid should do what you need as far as NAS duties. It's set up for consumer drives, can handle mismatched drives in an array, AFAIK should be fine with USB hubs, and can do stuff like sleep the drives, which will help with power draw. ZFS keeps them spinning always, and will have issues if you stop it from doing that (drives drop from arrays). Honestly, the drives spinning (assuming you're using HDDs) was probably most of the power draw you were seeing: each drive is ~6-10W, so you're looking at up to 40W for 4 3.5" drives spinning constantly.
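     Rough numbers on that last point; the electricity rate is an illustrative assumption, not from the thread:

     ```python
     # Idle power for 4 spinning 3.5" drives at the ~6-10W-each figure above.
     drives = 4
     rate_usd_per_kwh = 0.15  # assumed rate, substitute your own

     for watts_each in (6, 10):
         total_watts = drives * watts_each
         kwh_per_year = total_watts * 24 * 365 / 1000
         print(f"{total_watts}W constant -> ~{kwh_per_year:.0f} kWh/yr, "
               f"~${kwh_per_year * rate_usd_per_kwh:.0f}/yr")
     ```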
  13. ZFS is built for datacenters, it wants/needs full access to and control over the drives. It won't work with RAID controllers unless they're flashed to function as a basic HBA, no shock that it'd dislike a USB hub. Does the NUC have an M.2 NVMe slot? You can get an M.2 HBA with IIRC 4 or 6 SATA ports.
  14. Yep. CPUs are the same basic tech across the board. Enterprise motherboards can use higher-quality capacitors and be built a tad better overall, as they're intended for 24/7 operation with minimal downtime. If you're worried about that very small percentage chance of failure, you can just get a server board for a mainstream chip; ASRock and Supermicro make some. ^^^ 1st gen Threadripper has poor single-core performance (very important to many game servers, as they are often single-threaded), and the power draw will be quite high vs a mainstream chip; it does add up when run 24/7. Also, if you're running Windows as the host OS, 1st/2nd and 3rd gen TR still have TPM stutter with Windows (the whole system hitches for a couple milliseconds). AMD fixed this for AM4 but never bothered to for the X399 and TRX40 platforms. I believe if you run Windows 10 with TPM off it should dodge that, but W10 will be EOL sooner rather than later, so given there's zero advantage to TR here, I don't see the point of trying to make it work to begin with. The best machine for this sorta thing is usually a 12th gen Intel setup, as you can get DDR4 boards for them (cheaper RAM, though DDR5 is very cheap now so this matters less), they have very low idle power draw, and they have great single-core performance. Anything Ryzen that's Zen 2 or newer is excellent as well. The exact chip you want depends on what board you wanna go with and how many cores/threads you think you need; you can get up to 16c/32t on AM4/AM5.
  15. Not really. As I said, it can work as a neat space heater, but it's not gonna beat a proper HVAC system for anything but a single room. My power bill doubled when I ran a few GPUs 24/7 for a folding event. In winter, so assisting the HVAC, not fighting it.

     Not everyone can do so, nor do they want to. <- that's the only appropriate reaction I have for this statement lmao. You assume everyone's parents can just magically afford a higher bill. Mom and Dad don't poof money out of a hat infinitely for their kids' projects. I've had the privilege of financially stable parents who can both shrug off an increased power bill and have helped me financially in a bunch of other ways; most people do not get that. It is very important to understand that that is a less and less common privilege these days, so you don't come across as an entitled twat.

     I don't get the fixation on F@H. As I said earlier, it seems to be going fine, and they regularly have more hardware enrolled than they can actually use. I'm not sure what point there would be in the masses running F@H on very slow and inefficient devices (really, anything but a decent-to-great GPU is practically useless for F@H; I only run CPU folding when I want the heat output). If you want to affect the world in a positive way, look into how batteries are produced from raw materials to final product (both the environmental and human cost), then come back and advocate for burning through the limited useful life of mobile and laptop batteries to accomplish barely anything, as F@H doesn't scale well on those devices. If they could even be served work units.

     On the speed of F@H: back during the covid push in 2020, the network was 2x faster than the fastest supercomputer on the planet: https://www.tomshardware.com/news/folding-at-home-worlds-top-supercomputers-coronavirus-covid-19. That article also does a good job of mentioning that you can contribute with your hardware if you want, without being too pushy about it.

     On mobile, unless they've changed it, DreamLab seems especially bad on modern phones: https://foldingforum.org/viewtopic.php?t=32951. Relevant bit here: lots of even cheaper phones are often OLED now, and it seems a bad idea to try and kill screens faster for dubious benefit.

     TLDR: Folding@Home seems a weird thing to want to get regular (non-techie) folks into. It's doing very well in its niche (again, AFAIK; I haven't seen some massive "we're losing hardware, help" message from the F@H team, and we have some hardcore folders here on the forum, so it would have been mentioned if so), and it doesn't really fit anywhere else due to a multitude of overlapping concerns that may or may not be an issue depending on the individual non-techie person.
  16. As others said, power is a cost. If you use a laptop or phone and don't leave it plugged in all day, the additional battery cycles are another cost eventually, as you'll have to get that battery replaced or suffer with shite battery life. Temps and noise are also a concern for both; running loud and hot can be a major inconvenience. There's also no great need (AFAIK) for a massive influx of folders: F@H already struggles with distributing work units during some folding events, and they'll often have access to so much hardware that they don't actually have anything for it to do. ^ basically this. It's a fun thing (due to the points system/leaderboard) for computer gearheads to do, like charity events for car or motorcycle gearheads: an excuse to flex the stuff they work on in their hobby for a good cause. Not a practical thing for everyone to engage in. Though it can be practical in the winter if you have cheap power and a very power-hungry computer; you can run F@H as a decent space heater under those conditions. As dedayog noted, it's the opposite when it's warm out: you're fighting your own AC then.
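     To put "power is a cost" in numbers, a sketch for a single GPU folding 24/7; both the wattage and the electricity rate are illustrative assumptions, not figures from the thread:

     ```python
     # Monthly energy and cost for one GPU folding around the clock.
     gpu_watts = 300          # assumed sustained draw
     rate_usd_per_kwh = 0.15  # assumed rate, substitute your own

     kwh_per_month = gpu_watts * 24 * 30 / 1000  # ~216 kWh
     print(f"~{kwh_per_month:.0f} kWh/month, ~${kwh_per_month * rate_usd_per_kwh:.2f}/month")
     ```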
  17. Thermal paste isn't great for CPU die -> IHS; that's typically why people delid to begin with, in order to replace the original thermal paste with liquid metal. Given that plus the load, the temps don't sound too crazy. A little high for that voltage - I've run soldered chips at far higher voltages and similar clocks (4.7GHz) with much lower temps - but A) solder is better than most TIM and B) they were HEDT chips with a larger IHS and CPU die, and thus a lot more surface area to get heat out through.
  18. Yep. The only time they were killer value for gaming was seven years ago, when Ryzen did not exist and you could get a cheap 6-8 core chip with an X-chipset motherboard and easily overclock to 4.2-4.5GHz with IPC that was competitive at the time. Zen/Zen+ endangered these Xeons; Zen 2 wiped them out. EDIT: Worth noting that this is the US market; overseas, I know mainstream Ryzen/Intel chips can be much more expensive, even used, which is why these Chinese boards exist to take advantage of mass Xeon selloffs from upgrading datacenters.
  19. Between the IHS and CPU die, or between the IHS and cooler cold plate? ^^^ Also this. P95 smallFFT is no joke; it will pull some obscene numbers. You can run ASUS Realbench if you want a similarly beefy but less intense load.
  20. What board do you have currently? 4K is not super CPU intensive, if you have a decent B550 board then just throw ~$250 at a 5700X3D and put the rest towards a new GPU (likely new PSU with it depending on what you have rn) and a nice display.
  21. The only cases I know of with that layout are old, large ATX cases. Why not just get a compact mATX or ITX case and simply set it on its side? Lots have airflow options that would allow for this.
  22. Oh that's a good point, hadn't considered that. Where is the max throughput stated? I can only find the throughput for the whole thing (not just the SATA bit), seems to be 6.4GB/s for the higher end chip. Wikipedia claims up to 8GB/s for the later versions. Should be an Athlon 64 system, given the nForce 3 chipsets were made for that platform: https://en.wikipedia.org/wiki/NForce3. Isn't fully compatible with Windows Vista, likely why OP appears to be on Windows XP.
  23. That would be considered hardware RAID. It's running in the firmware for the hardware itself, not in software. It should do RAID 0 fine, given that's an intentional feature of the SATA controller: https://www.anandtech.com/show/1274/8. It allows for hot spares and such as well; if you want to do hardware RAID, it seems a capable chip.
  24. Hold Option + CMD + R when it boots up to boot into Internet Recovery. Or CMD + R (the regular recovery mode keybind, but it will fall back to Internet Recovery if no OS is present). It will usually try to download the launch OS though, which can be annoying to upgrade from. On my mid-2012 MBPs that was OS X Mountain Lion if I booted to Internet Recovery with no OS present.
  25. Not loose. That's basically it: very lightly snug. You don't need to crank 'em down; they just need to keep the board from flopping around. Nope. The equal-length standoffs keep the board even; the screws just hold the board to those standoffs. No need to worry about super precise torquing on the screws.