
pyrojoe34

Everything posted by pyrojoe34

  1. This would be strange... unless they plan on selling a blower version as well, but I'm not sure why they would do that to their AIB partners. I'm also not sure why anyone who wants an open-air design would buy a non-AIB GPU, so who would this be for? For anyone who puts GPUs in server racks and/or runs 4+ GPU systems (not gamers, but people like our computational biology lab), blower coolers really are the best option.
  2. The most common causes of this are: (1) a framerate limit or vsync is turned on (check in-game settings, control panel settings, and GPU tools like PrecisionX), or (2) the CPU is running at 100% (either overall or per-thread).
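     If you want a quick way to check the per-thread part, here's a minimal Python sketch (assuming the third-party psutil package is installed; Task Manager or HWiNFO will show the same thing):

         import psutil

         # Sample per-core usage over one second; one core pegged near 100%
         # while the average looks low is the classic single-thread bottleneck.
         per_core = psutil.cpu_percent(interval=1, percpu=True)
         print("Overall:", sum(per_core) / len(per_core), "%")
         print("Per core:", per_core)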
  3. I had an issue like this with my laptop where the trouble was a power limitation: I had been using a 60W brick when it wanted a 90W one. The problem was that even after switching to a 90W brick, clearing the CMOS, updating/downgrading/reinstalling the BIOS, reinstalling Windows, changing BIOS/OS power settings, reapplying thermal paste (even though there were no thermal issues), manually OCing, etc., it still refused to ever go back to the proper clock speed. I ended up fixing it with ThrottleStop, with the service set to autostart on boot, and I now get the expected speeds. Make sure there isn't another, more obvious issue first (go through all the possibilities), but this is a last-ditch option if you need it. Be careful with it though, and keep constantly monitoring temps, since it will no longer auto-throttle clock speeds and if it gets too hot you can destroy the PC.
  4. Ah, I see the problem here, it looks like your computer is on fire... you might want to put that out... Btw, that temp might not actually be real. Check how many sensors are actually on your mobo (it may be in the manual, or you can call tech support and ask). AIDA64 and HWiNFO both find 4 motherboard temps on mine (one of which is like 125C), but there are actually only two physical temp sensors on the board (one on the CPU socket, one on the chipset); the other two are just bugs in the monitoring programs and not real.
  5. They still have better performance per thread than AMD does. It's as simple as that. Unless your workloads consist almost exclusively of multi-threaded operations (still fairly uncommon; most uses are a mix of single- and multi-threaded tasks, and the parallel computations may still have to regularly wait on sequential tasks before continuing), you're better off with faster single cores. I do some bioinformatics work, and although many steps are easily parallelized, my scripts still contain many bottlenecks where a single thread must finish a task before the workload can be distributed across the cluster and benefit from more threads. I'll often run some of these parts on my OCed home rig before sending the job to the HPC, since the HPC has a ton of cores (24c/48t in total, I think) but they're all ~7-year-old 3.0-3.6GHz Xeons, so they're slow at single-threaded tasks. This is the same situation as AMD vs Intel at the moment: AMD leads in raw performance per $, but Intel still leads in performance per thread. Don't get too caught up in raw performance numbers not linked to real-world performance. If those were accurate you would see Vega GPUs destroying the Nvidia lineup based purely on FLOPs, but that is far from the case.
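     To put rough numbers on why those serial bottlenecks matter, here's a quick Amdahl's-law sketch in Python (the 80% parallel fraction is just an illustrative guess, not a measurement from my pipeline):

         # Amdahl's law: speedup = 1 / ((1 - p) + p / n)
         # p = fraction of the work that parallelizes, n = number of threads
         def speedup(p, n):
             return 1 / ((1 - p) + p / n)

         for n in (4, 8, 48):
             print(f"{n} threads: {speedup(0.8, n):.1f}x")
         # Even 48 slow Xeon threads cap out below 5x if 20% of the job is serial,
         # which is why a fast OCed quad-core can win on the serial-heavy steps.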
  6. I am very happy with my Predator XB271HU. G-sync, 1440p, 27", 144Hz native (OC to 165Hz), IPS, 100% sRGB (but only 80% Adobe RGB). It's only 16:9, but paired with a secondary monitor (I use a 29" 21:9 1080p panel) it's a great setup. Snagged the display model at Costco last year for $400... totally worth it.
  7. Agreed, the two that make the biggest difference are dimming the screen and making sure Nvidia Optimus is running (and/or manually making sure that any apps that don't need the dGPU are set to use the iGPU instead). Undervolting would help, but IMO it's more trouble than it's worth unless you know it's severely overvolted (which I don't think is super common anymore).
  8. For a conservative estimate: P = your current average power draw, T = how long your battery lasts right now, d = how much less power the new drive uses (use 1-3W as an estimate). Then: new battery life = (P*T)/(P-d). You'll see that with a low-power system (say a 30W laptop) it can make a bigger difference, but with a gaming laptop that draws 100+W anyway the difference will be something like <5 minutes.
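     Same math as a tiny Python sketch (the wattages and hours below are just example numbers):

         def new_battery_life(P, T, d):
             # Battery energy (P*T) stays the same; the new average draw is P - d.
             return (P * T) / (P - d)

         print(new_battery_life(100, 1.5, 2))  # gaming laptop: ~1.53 h, barely changes
         print(new_battery_life(30, 6.0, 2))   # low-power laptop: ~6.43 h, more noticeable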
  9. It's not a huge difference, since neither takes much power compared to the CPU/GPU and screen. Depending on the drives, the difference is maybe 1-5W (peak) at most, but with modern laptop HDDs it's probably less than 2W. It also depends on active time: a secondary drive is probably not super active, so it spends most of its time at much lower idle power.
  10. There will always be a hysteresis when it comes to that, it’s pretty normal. Heat takes a little bit to build up and dissipate so the temp will lag behind the usage. This is super apparent with beefy coolers and especially liquid cooling where the cooler heats up and cools down much slower than the utilization changes.
  11. Similar issue with drive utilization. It's hard to quantify drive use because there are so many components to it, but the % metric you see is actually "active time" (time the disk is not idle) rather than a measure of read/write speed or latency. It'd be nice if a CPU could actually tell you where the bottleneck is (it would be really useful for people who want to optimize code). Is the bottleneck the cache? Is it the scheduler, the ALU? A specific component (like a hardware encoder)? Is it bottlenecked by RAM latency (poorly predicted caching)? etc. Not sure if any tools like that exist, but hardware-level monitoring of specific intra-component processes would be a really nice feature for many people.
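     For what it's worth, you can sample that same "active time" metric yourself; here's a minimal sketch using psutil (assuming it's installed, and note that busy_time is only exposed on some platforms, e.g. Linux):

         import psutil, time

         # Disk busy time is reported in milliseconds; sampling it over an interval
         # gives the same "% active time" figure Task Manager shows.
         before = psutil.disk_io_counters()
         time.sleep(1)
         after = psutil.disk_io_counters()
         busy_ms = after.busy_time - before.busy_time  # AttributeError where unsupported
         print(f"Disk was active ~{busy_ms / 10:.0f}% of the last second")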
  12. The irony here is that the AIOs that allow you to replace the liquid are the only ones that will have issues with liquid evaporating (within their lifetimes).
  13. This is a very good thing for PC gamers that I've been waiting for for a long time. I was very happy when the latest gen went x86 for the same reason. It means PC ports will suffer much less from being an afterthought: less work for the devs, and builds that are easily ported between all systems, with the only major difference being the default settings, not the core code. It also means that backwards compatibility on future systems will not require emulation with heavy overhead, since the hardware is all based on the same architecture (x86). The only thing that would keep older games from being compatible on future consoles is dev decisions, since almost no actual redesign would be needed to make them work. Sounds like a win for all gamers regardless of which systems you use.
  14. This might be a dumb question... but are you actually saving the settings in the bios before you exit?
  15. What games are you playing? My 1080 @ 1440p is a great match. I'm also not crazy about max fps, so I typically go for the max settings that give me >75fps average. In most games (BF1, PUBG, FC5, ACO, etc.) that means maxed-out settings, which gives me 80-110fps avg (depending on the game). Personally I would say go 1440p, especially if they are the same price. He's right that ultra at 1440p will not get you 144fps in the newest AAA titles, but it is still very good. The one caveat is that I have G-sync, which makes a huge difference in the experience at lower framerates. With G-sync I don't even notice the occasional lower framerates (in the 60s/70s) during high-demand/stress moments. There have only been a small handful of games where I had to turn down a couple of minor settings to get over 75 avg, like Ghost Recon Wildlands and Watch Dogs 2 (both of which have some very over-the-top settings that destroy framerates when enabled) and PUBG before the 1.0 release (now it gets 90-110 with everything at ultra except post-processing, which I don't like anyway).
  16. I don't see why they would. As far as I can tell there's no real use for them in gaming (at least not yet), mostly just machine learning applications, and this way they can push the professional market toward commercial GPUs instead of gaming GPUs. That would be good news for gamers (less non-gaming market competition) but bad news for prosumers and budget academic labs. At some point I could see mixed precision used in gaming for physics and real-time AI, but bringing that into games is likely a few generations away and would be very limited by the lack of experienced developers and the inherent challenges of implementing such complex code.
  17. First thing to do is run some memory tests. Run Windows Memory Diagnostic (extended) and/or memtest86. If everything is at stock settings (no OC), then that BSOD is often due to faulty RAM. I used to get those on my laptop; after a couple of memory diag runs it found a few bad addresses, I sent the RAM in for RMA (lifetime warranty on most RAM), and I haven't seen it since.
  18. Do you really need the extra performance? Going from 4.7 to 4.8GHz only translates to a ~2% increase in raw performance (less in real-world tasks)... Is it worth stressing (and possibly bricking) it for 2% higher benchmark scores? If your goal is to do this for 3DMark leaderboards or bragging rights... then sure, do whatever makes you happy. But if you intend to use this overclock every day for normal tasks... then IMO it makes no sense to push it for 2% (or even 4%) gains... I know this doesn't actually address your question (sorry, I don't have an answer), but that's just my 2¢...
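     The ~2% is just the clock ratio, assuming the best case where performance scales linearly with frequency:

         base, target = 4.7, 4.8  # GHz
         gain = (target - base) / base * 100
         print(f"~{gain:.1f}% more clock at best")  # ~2.1%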
  19. Time to RMA... sounds like a defective/unstable GPU. If the issue went away on its own, it's probably not the panel, so the GPU is the likely culprit. If it's still within the retailer's return policy, exchange it at the store; if not, contact Acer.
  20. I have the same issue trying to update a Win 10 VM (same error code but a different update). DISM scans and all the Windows tools didn't help at all. Still haven't fixed it; all my other computers installed it no problem. I think the issue was trying to install updates out of order or something. Did it happen to you right after a fresh install? Mine happened as it was trying to do a ton of updates after a fresh install. You might just have to reinstall Windows...
  21. Simple answer: Yes, the RAM will be limited to the speed of the slowest RAM in the system. Detailed answer: All the RAM will run at 2133MHz by default. You can use the XMP profiles to bring that up to what the RAM is rated for, but all the RAM will run at the same frequency/timings. You have two options: run all 8 sticks at 2400MHz, or try to manually overclock the 2400MHz sticks to whatever you can get stable. I don't think they'll reach 3000MHz, but you should be able to get 2666 and might manage 2800 if it's a good lot.
  22. Yup, the 2x8GB non-ECC works perfectly fine. So far I have noticed no issues or signs of instability (although it has only been a day... so we'll see). I just can't combine the ECC and non-ECC (which makes sense, I was hoping the ECC would default to non-ECC and work anyway but no luck there). Maybe one day DDR3 prices will go down enough and I will switch back to ECC... since the system is non-mission critical (it's essentially a personal quaternary backup server and VM host for testing code before sending it to the actual compute rig), I'm not overly concerned with having the benefits of ECC.
  23. I run a 2560x1440 144Hz and a 2560x1080 60Hz monitor simultaneously without any issues. It downclocks to idle speeds as normal.
  24. Well I gave it a shot... it does not appear that ECC and non-ECC play nice together. No post... I guess I'm stuck using 16GB in single channel mode for the time being...
  25. So I tried Option 2 and it would not POST. I am now using Option 1 (which works), but I would like to have the extra RAM if possible. Any chance you think Option 3 will work? I don't want to waste more time testing it right now unless you think it's possible...