Chen G

  1. You sound like you won't ever do much overclocking.
  2. 1. It does, because long before a color shift happened, you'd have a very obvious pattern burnt into the screen and you'd stop using it for that reason. 2. This is another one of those things where anecdotal accounts differ wildly and nobody knows exactly what's going on, like battery endurance on phones: some people have a battery bulging and about to fail within a year, while others still have 90% capacity after 5 years. It's a similar situation here: if you look at a single pixel, its decay half-life can differ by a factor of 10, 100, possibly over 1000, depending on how you use it. For example, the most vulnerable red sub-pixels could become visibly dimmer after about 2000 hours at 500 nits, but if you instead run them at 50 nits (roughly minimum brightness), they could last almost forever, or at least longer than these panels have been on the market. Screen savers certainly don't save your OLED screen, but they're not necessarily more harmful than showing the desktop: if one uses all pixels evenly, you'd much rather run that than burn the desktop into your screen.
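To put rough numbers on that brightness dependence: OLED aging is often described with an inverse power law in luminance (lifetime shrinks faster than linearly as you drive the pixel brighter). The exponent and the 2000-hours-at-500-nits reference point below are just illustrative assumptions anchored to the figures in the post, not measured panel data.

```python
# Toy model of OLED sub-pixel decay vs. brightness.
# Inverse power law (lifetime ~ 1/luminance^n) is a common empirical
# form for OLED aging; the exponent n and the reference point are
# assumptions for illustration, not data from any real panel.

def hours_to_visible_dimming(nits, ref_nits=500.0, ref_hours=2000.0, n=1.8):
    """Estimated hours before a sub-pixel dims noticeably at a given brightness."""
    return ref_hours * (ref_nits / nits) ** n

for nits in (500, 250, 100, 50):
    print(f"{nits:3d} nits -> ~{hours_to_visible_dimming(nits):,.0f} h")
```

With these (assumed) numbers, dropping from 500 to 50 nits stretches the estimate from 2000 hours to well over 100,000, which is the "lasts almost forever at low brightness" point in practical terms.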
  3. Extremely cool, but a PCIe extension that long can't be healthy...
  4. SVP, this has been around for at least 5 years.
  5. You cannot solve this with a manual fan curve of any shape, you're not getting it. And I have come up with my own control scheme, which doesn't use silicon temperature to control the fans.
  6. On the OLED TVs, red goes first, then blue/green, with white lasting the longest. If you could run all the pixels completely evenly for a very long time, then yes, the color would shift. That doesn't really happen, for two reasons: 1. no real content runs every sub-pixel for the same amount of time; 2. if you just run a slideshow of static colors, per-pixel brightness is severely limited by ABL, which means you'd have to run it for an extremely long time, many tens of thousands of hours, to see the effect.
  7. Nanocell is still LCD; OLED is still better. Top-end LCD has a bigger "color volume", meaning any pure color can be as bright as the rated peak brightness, whereas commercial OLED TVs can only hit peak brightness in white, not in any pure color. In general, commercial OLED is only unmatched in dark-room environments; as soon as you boost brightness to compensate for ambient light, the gap closes very quickly. OLEDs don't "die": each pixel gets dimmer with use, which depending on the brightness level can take anywhere from a few thousand hours to effectively forever before you notice a difference in brightness (aka burn-in).
  8. No they do not, not past a certain point, hence "unnecessary", meaning past the point at which it would make a difference in performance. You'd want it below the temperature at which you still get full boost, but that costs more money, money which you could've spent on better parts rather than a higher overclock. Just because such a triangle exists doesn't mean every build is optimized. Take my build for example: I could've added something more expensive to it that neither reduces noise nor improves performance. It would then be a less optimized build, because it's only more expensive for no improvement. So just because somebody else doesn't care about noise levels doesn't mean their build is already optimized for cost + performance; maybe they could've had less noise at no cost if some of these principles were followed. Well, the cardboard and packing tape aren't visible from the outside, but I am exploring a more aesthetically appealing solution to sealing the case this time around.
  9. I know about the Raven, but seriously, Silverstone makes the ugliest cases. They're not cheap-plastic low-quality ugly, they're tasteless, no-designer-with-a-proper-artistic-background ugly. Gamers Nexus also showed that you don't need the 90-degree rotation to get Raven-style airflow, and that's basically what the Mac Pro does, as I mentioned in the OP. It's just that most people think more fans + more holes = better cooling, and case manufacturers must pander to that perception rather than to what actually cools better. All you really need is a case with triple 140s on the front with good intake, no other holes except the traditional 120 exhaust, and a bracket that turns the unused expansion slots into a 120 fan mount. There you go, that's your Raven without the 90-degree rotation. Instead we get these crazy cases with holes on top, holes on the bottom, holes going sideways, just tons of wasted airflow. I scrolled through your build log; yeah, there's nothing innately more efficient about 200mm fans. In fact I'm pretty sure the best 120mm fans are more efficient at the same RPM; it's just that you need more of them for the same airflow, which is more expensive and takes up more space (because of wasted space in the frame and fan hub). So if you can afford to stick in 2x480 rads with 8x120 fans, that's certainly a substitute for 2x200mm fans (if you can also somehow solve the fan-against-radiator proximity problem).
  10. That's why I was talking about general principles and not specific parts. A big fan (or more fans) at low speed is always more efficient than a small fan at high speed. A fan pressed right up against a radiator is always less efficient than a fan placed away from the radiator that can still push air through it. It doesn't matter how the fan is designed: move it away from the radiator and it becomes more efficient.
  11. Yes, the traditional approach is the safest approach, but they don't need that safety for pre-assembled systems like laptops and GPUs. And no, it actually wouldn't, because you need a certain amount of power output to maintain a certain heatsink temperature; changing the paste does not change the amount of heat you generate, so it won't make the heatsink hotter. You're thinking about it backwards: changing the paste keeps the heatsink temperature constant while lowering the core temperature, because the delta between them is reduced.
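The steady-state argument above can be sketched with a simple series thermal-resistance model: heatsink temperature is set by ambient plus power times the sink-to-air resistance, and the paste only adds a delta on top of that. All resistance and power values here are illustrative, not from any datasheet.

```python
# Sketch of the steady-state argument, assuming a simple series
# thermal-resistance model (values are illustrative, not measured).

T_AMBIENT = 25.0      # degrees C, room air
POWER = 100.0         # W dissipated by the die

R_SINK_TO_AIR = 0.30  # C/W, heatsink + fan to ambient
R_PASTE_OLD = 0.10    # C/W, stock thermal interface
R_PASTE_NEW = 0.04    # C/W, better thermal interface

def temps(r_paste):
    """Return (heatsink temp, core temp) at steady state."""
    t_sink = T_AMBIENT + POWER * R_SINK_TO_AIR   # independent of the paste
    t_core = t_sink + POWER * r_paste            # paste only adds delta on top
    return t_sink, t_core

print(temps(R_PASTE_OLD))  # sink ~55 C, core ~65 C
print(temps(R_PASTE_NEW))  # sink still ~55 C, core drops to ~59 C
```

Note how `t_sink` doesn't depend on `r_paste` at all: better paste shrinks the core-to-heatsink delta without making the heatsink any hotter, which is exactly the point in the post.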
  12. You've never experienced the fan spinning up just because you're opening an application? You don't think it would be an improvement if it didn't do that? Short bursts of performance are the most realistic workload, whereas constant loads like gaming aren't.
  13. Yes, more stupid than the mess that is controlling all your RGB stuff. Controlling fan speed directly off the silicon temperature is just stupid, especially with modern boosting behaviour: not only is it annoying, it's annoying at basically no benefit to cooling. Think about a load cycle, say one run of Cinebench, broken down into short time intervals. You hit start, and the CPU immediately boosts to peak power; since the silicon is several thermal barriers away from whatever your cooling solution is, it immediately gets hot, say from 30 degrees idle to 60 degrees. So what does your motherboard do? It immediately kicks up the fan, which is, again, annoying. But think about the cooling performance here: you get no significant benefit from this immediate response, because your radiator fins (whether attached to water pipes or heat pipes) are still at almost room temperature, so blowing room-temperature air over them does nothing! During this initial stage the real cooling is not being done by the fans or the fins; you'd get the same cooling with no fans and no fins. All that's happening is heat flowing into your cooler, which isn't yet saturated. It's like a swimming pool where one pump pumps water in (the CPU) and another pumps it back out (fans and radiator), except the second pump's intake pipe doesn't reach the bottom of the pool; it has holes along its length, so it only pumps at its maximum rate when the pool is completely full. Would you, in this case, tie the speed of the second pump to the speed of the first? Of course not, that would just be a waste of electricity. What you really want is to pump when the pool is full or close to full, because that's when you get the most out of your electricity. In computer terms you don't really care about the fan's power draw, but you probably do care about the noise, and also about how much dust it pulls into your system.
Now think about what happens after the run ends: your CPU temperature drops off a cliff, so the fan speed also drops significantly. But you don't really want that, because there's still a lot of heat in your cooling system. It's not a problem if you just go back to watching YouTube videos, since the heat will eventually dissipate even at low fan speeds. However, what if you want to run more Cinebench? Your cooling fins are still at maximum temperature, so any air you blow through them has maximum cooling effect, yet you're not blowing it because it looks like the CPU temperature has dropped. Wouldn't it be much better to just keep blowing air and take advantage of the hot cooling fins? In the swimming-pool analogy, it's like saying "OK, I know the pool is full of water, but since pump 1 has stopped, I'm going to stop pump 2 as well." But what if pump 1 suddenly starts back up, do you just overflow? Or should you keep pumping water out until the pool is no longer almost full? The most logical way to set fan speed is to map it to the temperature of the heatsink, not the silicon. I get that's not very easy for DIY components, but it is very easy for laptops and OEM machines, yet most of them still don't do it for some reason, which is just unbelievable laziness. Also GPUs: you've got plugs for those lights anyway, why not also put a thermal probe in the heatsink for better fan control!? I just got a high-end X570 MB from Gigabyte; it has pins for thermal probes, it has water-pump support, all that good stuff. But they lock the CPU_FAN plug to CPU core temperature only, why? They could've just printed in the manual how to plug in a thermal probe and stick it in your tower cooler or radiator for the best fan control. Not only do they not bother, they don't let you do it.
They could also just put a thermal probe somewhere around the CPU socket, where it would be affected by the air coming off the CPU cooler, indirectly measuring its temperature. But no, they'd rather tell you useless temps like the PCIe slot temperature, just why? Why would I want to know that? (Yes, I know the other fan headers can do this, but why not the CPU header?) In fact, you could use something else that naturally approximates this ideal fan behaviour, for example the VRM temperature. Sure, depending on your cooling solution it wouldn't necessarily take equally long to heat up or cool down, but at least it's not as annoying as the CPU temperature would be, it doesn't take any BIOS code to run, and it doesn't even require any extra hardware! The only downside is that it wouldn't be ideal if you're running an AIO or something, but an AIO shouldn't rely on the MB to set its fan speed anyway! It's an all-in-one; it should just have its own water-temperature sensor and control its own fans based on that.
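The control scheme argued for above can be sketched in a few lines: drive the fan from the heatsink temperature (which drifts slowly) instead of the core temperature (which spikes the instant the CPU boosts). All sensor values, time constants, and curve breakpoints here are made up for illustration; this is a sketch of the idea, not anyone's actual firmware.

```python
# Minimal sketch: a fan curve keyed to heatsink temperature rather than
# core temperature. Breakpoints and constants are illustrative only.

def fan_pct(heatsink_temp, idle_t=30.0, max_t=55.0, floor=20.0):
    """Linear fan curve on heatsink temperature, from `floor`% up to 100%."""
    if heatsink_temp <= idle_t:
        return floor
    if heatsink_temp >= max_t:
        return 100.0
    frac = (heatsink_temp - idle_t) / (max_t - idle_t)
    return floor + frac * (100.0 - floor)

# Crude first-order simulation of a load burst: core power jumps
# instantly, but the heatsink drifts toward its steady-state temperature,
# so the fan ramps up smoothly, and keeps spinning after the load ends
# while the fins are still hot, instead of chasing core-temp spikes.
AMBIENT, R_SINK, ALPHA = 25.0, 0.2, 0.15   # C, C/W, per-step smoothing
heatsink = AMBIENT
for step, power_w in enumerate([0.0] * 3 + [150.0] * 15 + [0.0] * 15):
    steady_state = AMBIENT + power_w * R_SINK
    heatsink += ALPHA * (steady_state - heatsink)
    if step % 5 == 0:
        print(f"t={step:2d}  heatsink={heatsink:5.1f} C  fan={fan_pct(heatsink):5.1f}%")
```

The second half of the loop is the "pool is still full" case from the analogy: the load is gone, but the fan stays up until the heatsink itself has actually cooled off.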