About porina

  • Title
    Slightly Salty, Fairly Fishy

Profile Information

  • Location
  • Occupation
    electronic/acoustic engineer


  • CPU
  • Motherboard
    Asus Maximus VIII Hero
  • RAM
    G.Skill Ripjaws V 3200 2x8GB
  • GPU
    Gigabyte 1650
  • Case
    In Win 303 NVIDIA
  • Storage
    Samsung SM951 512GB, WD Blue 1TB
  • PSU
    Corsair HX750i
  • Display(s)
    HP LP2475w
  • Cooling
    Noctua D12
  • Keyboard
    Logitech G213
  • Mouse
    Logitech G403
  • Sound
    Cheap Amazon speakers that aren't bad at all
  • Operating System
    Windows 10 home

Recent Profile Visitors

21,578 profile views
  1. Not sure I can find the reference again, but the CX are "better" than the equivalent CXM and usually cheaper. 450W is plenty for the proposed system. Depends on local pricing and availability, but certainly CX450 would be on my shortlist. Most of my "budget" builds use CX or CXM series.
  2. I love those caches. They go a long way towards mitigating the lack of bandwidth internal to Zen 2; I guess we have 7nm to thank for the more generous amount. I've not noticed any adverse effects from having AVX-512 present but unused. The units can use a lot of power under load, but then they're also doing a lot of work. We see the same with Zen 2 and AVX2 workloads: with the improved FPU compared to earlier Zen, Zen 2 does more work, but it very noticeably uses more power to do so. As far as I can tell AMD haven't come up with anything magic in their implementation, other than having a better limiter than Intel to keep it under control. Run some AVX2-heavy code on Zen 2 and you'll see it hit the power limit and clocks drop compared to lighter workloads. It's a better limiter than Intel's offset ratios, although Intel do also have a power limiter; it's just rarely active on enthusiast systems. Outside of specific niches it's only really useful for marketing, and in the real world, efficiency in some form is more important. As long as we don't get back to the P4 era when it comes to chasing clocks...
  3. I recall seeing that when 14+ came out with Kaby Lake, but I hadn't been paying close attention to 14++ on Coffee Lake and beyond. The thought occurs: given Intel's 10nm was supposed to be relatively dense, at least in its original configuration, is it too much of a leap to think that might contribute to the less-than-stellar clock scaling seen on Ice Lake? As more features like AVX-512 extensions are crammed into cores, that might still put pressure on size. I guess that's just a balancing point any CPU design has to make. Presumably 5.0 will still fall back to 4.0 or even 3.0 if needed. While not an area I look at too closely, many of the high-bandwidth interconnects in the HPC space are related to 5.0. Not much point for consumers maybe, but there may eventually be some trickle-down.
  4. Agreed, but I hope I'm wrong on this and they pull something out for mainstream consumer that isn't Skylake-derived again. Also for the generation after, with PCIe 5.0, DDR5, and maybe some chance of 5 GHz, think they might aim for a May 5th launch date? It might be a bit early to add 5nm to the list, perhaps... I've never kept up to date on server parts; are these off-the-shelf or "on request" specials? Skylake might do it with extreme cooling. Kaby Lake could do it overclocked on a good sample. Coffee Lake v1 was the first offered at 5 GHz single-core turbo, with a fair chance of hitting it on all cores with a manual OC. Coffee Lake v2 was the first offered at 5 GHz all-core turbo, if you exclude the special Skylake-X part which was also rated at 5 GHz on all cores. Back to AMD: we don't really know how whatever process they choose will behave. If they continue to optimise not for clocks but for a balance of power efficiency and IPC, then 5 GHz for the masses may remain out of reach. PCIe 4.0 is of limited benefit too, but there seems to be wider industry support around 5.0, so that might gain traction faster. We are/were on 3.0 for rather a long time.
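For context on why 5.0 keeps coming up: the per-lane transfer rate doubles each generation. A quick back-of-envelope using the published transfer rates and 128b/130b encoding (my numbers, not from the thread):

```python
# Approximate usable bandwidth per PCIe lane by generation.
# PCIe 3.0/4.0/5.0 run at 8/16/32 GT/s and use 128b/130b encoding.
GT_PER_S = {"3.0": 8, "4.0": 16, "5.0": 32}

def lane_gbs(gen):
    # GT/s * (128/130 payload fraction) / 8 bits per byte -> GB/s per lane
    return GT_PER_S[gen] * (128 / 130) / 8

for gen in GT_PER_S:
    print(f"PCIe {gen}: x16 ~ {16 * lane_gbs(gen):.1f} GB/s")
# PCIe 3.0: x16 ~ 15.8 GB/s
# PCIe 4.0: x16 ~ 31.5 GB/s
# PCIe 5.0: x16 ~ 63.0 GB/s
```

So a 5.0 x4 link carries roughly what a 3.0 x16 link does today, which is why the HPC interconnect people care.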
  5. I have and use a Huawei Mediapad M5 8" practically daily. It's decent for general use, but it's showing its age when it comes to gaming. As it's getting on for 2 years since launch, I'm not sure how much longer it'll get updates for. If performance matters at all, unfortunately the only other player seems to be Samsung. I believe Huawei have released a newer model, but with the war the US has declared on them, it's practically not an option outside China.
  6. Game storage is more about read performance, and the Intel is fine there. To wear it out before the warranty runs out, you'd have to write 200GB a day, every day. You might write a lot when first setting things up, but after that, writes drop off considerably. You'd probably have to spend more time downloading games than playing them to wear it out.
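That 200GB/day figure is the kind of thing you can back out from an endurance rating. A sketch of the arithmetic, using an assumed TBW rating and warranty length rather than the spec of any particular Intel drive:

```python
# Rough SSD endurance math: writes per day needed to exhaust a rating
# within the warranty. TBW and warranty figures below are assumptions
# for illustration, not the actual rating of a specific drive.
TBW = 365            # assumed endurance rating, terabytes written
WARRANTY_YEARS = 5

days = WARRANTY_YEARS * 365
gb_per_day = TBW * 1000 / days   # TB -> GB, spread over the warranty
print(f"{gb_per_day:.0f} GB/day")  # 200 GB/day
```

Steam libraries are mostly read-only after install, so sustained writes anywhere near that are unrealistic for a game drive.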
  7. I thought the "I" in RAID stood for Inexpensive, although looking it up, Independent is an accepted alternative. Thanks for the info. This kind of stuff is beyond me... as an enthusiast I just want to brute-force it in hardware. Tinkering around in software is just a necessary evil in order to operate the hardware. I wish there were a modern-day version of those battery-backed DRAM storage cards... RAM pricing is low enough to make it viable now.
  8. The "RAID 0"-like comment was essentially striping across platters; the presumption is the disk logic is already coded for this, and it's trivial to implement. If instead we have independent heads for each group of platters, that could only offer a minor improvement to random access: up to 2x in the absolute best case, if the OS or application level is able to balance the load between the two groups. Maybe there's some niche out there with data that fits this scenario. It won't help large transfers within one or both groups, and in the worst case it might increase latency if an access requires data from both parts. My proposal of two groups of heads each able to access all platters would improve both throughput and latency, at the cost of more hardware. That would be more interesting.
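The best-case 2x claim can be sketched with a toy queueing model (hypothetical drive, made-up service time, not modelling any real firmware):

```python
# Toy model: each head group services its queue serially at a fixed
# average access time. SEEK_MS is an assumed figure for illustration.
SEEK_MS = 8.0  # assumed average access time per request, ms

def total_time_ms(n_requests, n_groups, balanced=True):
    # With perfect load balancing the queue splits evenly across groups;
    # with no balancing, everything lands on one group.
    if balanced:
        return (n_requests / n_groups) * SEEK_MS
    return n_requests * SEEK_MS

print(total_time_ms(100, 1))                  # 800.0 ms, single actuator
print(total_time_ms(100, 2))                  # 400.0 ms, best-case 2x
print(total_time_ms(100, 2, balanced=False))  # 800.0 ms, no gain
```

Note the per-request latency is unchanged; only aggregate throughput improves, and a request needing data from both groups still waits for the slower side.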
  9. You'd have to ensure the oil doesn't, directly or indirectly, get cooled enough that condensation starts forming on it. If you're sitting at idle load for a while, say, that could happen.
  10. Got that, which is why I said you still need to cool that contained air. One possible way, as I suggested, is to have a radiator inside there, connected to the cooling loop as well.
  11. While we in the enthusiast space look for improvements every generation, there are other use cases where you just want something you know works, and you don't change it. This leans more towards embedded systems that silently work in the background of our daily lives. It reminds me of an interview, I think with one of the designers of the Mars rovers. Compared to the best technology at the time, the cameras they used were relatively old, but they stuck with them as a known quantity. They knew how they worked, and that they worked in space. Going to a new "better" camera sensor would take more work, introduce uncertainty, and they also faced the ultimate bottleneck: bandwidth. They had no way to transfer more data than they were already capturing with that camera.
  12. I had similar thoughts to this in the past. In essence you need to close off the mobo area to air. Filling it with some kind of sealed oil container is one way, but seems messy to me. I was wondering instead whether it would be possible to use an airtight container. Then you have two options: include some desiccant to absorb any moisture in that air, or replace the air entirely. Personally I think I'd use CO2, since fire extinguisher refills are relatively cheap here (~£5 for 2kg; I have some left over from fishkeeping uses). Either way, you'd need to keep that gas airtight so moisture can't get in. The danger is that the components would warm that trapped gas, so you'd still need some way to remove heat from it efficiently. Maybe an extra rad mounted internally to chill that air.
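On the condensation worry running through these posts, the number that matters is the dew point of the trapped air: chilled surfaces only sweat once they drop below it. The standard Magnus approximation gives a quick estimate (coefficients are the commonly quoted ones, not from the thread):

```python
import math

def dew_point_c(temp_c, rh_percent):
    # Magnus approximation; coefficients valid roughly -45..60 C.
    a, b = 17.62, 243.12
    gamma = math.log(rh_percent / 100) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Ordinary room air, 25 C at 50% RH:
print(round(dew_point_c(25, 50), 1))  # 13.9 C
```

So sub-ambient cooling is safe until something inside the enclosure dips below roughly 14 C in that example; drying the air with desiccant or CO2 pushes that threshold far lower.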
  13. Nvidia's approach is only one route of many. They get some kind of usable RT performance by casting some number of rays, then using the tensor cores to de-noise the result. If you try Quake II RTX and turn off the noise reduction, the image is really heavy with "noise" from the limited number of rays; a brute-force approach would need a load more rays than that. AMD will face a similar consideration. How much hardware do they throw at casting rays? With their chosen ray count, will they need other post-processing to make it look good? That's what we're waiting to find out. They may go for a similar number of rays but implement the denoise in a different way, for example. While we may still be some way off Intel joining the GPU club, they already offer an open RT denoise library. Although it's CPU-based, they're not inactive in this area of software. https://openimagedenoise.github.io/
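The ray-count/noise trade-off can be seen in a toy Monte Carlo estimate (pure illustration, nothing Nvidia or AMD specific): the standard error of the estimate falls only as 1/sqrt(N), which is why quadrupling the rays merely halves the noise, and why everyone reaches for a denoiser instead.

```python
import random
import statistics

random.seed(0)

def estimate(n_rays):
    # Toy "pixel": each ray returns light in [0, 1); the true mean is 0.5.
    return sum(random.random() for _ in range(n_rays)) / n_rays

def noise(n_rays, trials=2000):
    # Spread of the per-pixel estimate across many trials = visible noise.
    return statistics.pstdev(estimate(n_rays) for _ in range(trials))

print(noise(4), noise(16), noise(64))  # each roughly half the previous
```

Brute-forcing that curve gets expensive fast, which is the gap hardware denoise (or Intel's CPU library above) is filling.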
  14. What is your actual sound output device? Make sure it's still set as the default device in Windows sound settings. It may be as simple as output now pointing at the wrong device.
  15. It depends on what they put in them. My experience with both brands is with non-gaming systems, so I'm not in a position to answer that. Dell did buy Alienware, if that counts for anything. Personally, I wouldn't buy a system from either as a first choice.