Maxxtraxx

Member
  • Content Count

    612
  • Joined

  • Last visited

Everything posted by Maxxtraxx

  1. My choice is Noctua NT-H1. According to Tom's Hardware's extensive testing of 39 different thermal compounds, it is in the top 10 for performance and it is quite inexpensive. The article that I mentioned can be found HERE
  2. 8K resolution, or 144 Hz 4K? Not that either of those is really doable or a thing for mortals.
  3. Are you able to find a reference to the setting in your motherboard manual?
  4. Hmmm, the per-core ratio is correct at 32, which makes sense. I just did a quick Google search of this particular issue and found some individuals running into a problem with Intel's Adaptive Thermal Monitor erroneously tripping and downclocking the CPU to 800 MHz. (They specifically reference Z-chipset boards, but we are stuck at 800 MHz also.) Try turning off the Intel Adaptive Thermal Monitor to see if the clocks return to normal. For an ASUS board's BIOS it may be under the heading: Advanced --> CPU Configuration --> Intel Adaptive Thermal Monitor --> Enabled (change to Disabled). Reference for the thread I found Here
  5. Your CPU ratio is set to 8x, which equates to 800 MHz. It should go to 32, or 36 on boost. I'm unsure what automatically controls that with a locked processor, but that seems like a large culprit to me; the ratio should be a minimum of 32.
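A quick sketch of the arithmetic behind that claim (assuming the standard ~100 MHz BCLK on modern Intel platforms; the helper name is my own):

```python
# Core clock = base clock (BCLK) x CPU ratio (multiplier).
# BCLK of 100 MHz is the usual default on this generation of Intel boards.
def core_clock_mhz(bclk_mhz, ratio):
    return bclk_mhz * ratio

stuck = core_clock_mhz(100, 8)     # ratio stuck at 8x -> 800 MHz
normal = core_clock_mhz(100, 32)   # correct base ratio -> 3200 MHz (3.2 GHz)
print(stuck, normal)
```

This is why an 8x ratio on a 3.2 GHz part shows up as exactly 800 MHz in monitoring tools.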
  6. Both are showing a max CPU speed of 799 MHz. Definitely go with a CMOS reset; take a look in your motherboard user's guide if you're unsure how to do it, then rerun the tests and see if the CPU is running at its full speed of 3.2 GHz.
  7. You've got a CPU problem. It's only hitting 800 MHz of its 3.2 GHz maximum.
  8. Definitely not your GPU causing the problem. Its temperatures are fine (max of 40C, the fans aren't even kicking on) and its maximum utilization is only 60%. Please take another screenshot of the CPU numbers; I'm seeing a max CPU speed of 800 MHz poking out from the top there.
  9. We need to attempt to single out the source of the problem in the PC, and that Fire Strike score, combined with the card working in your friend's PC, tends to indicate that your graphics card works fine but is being held back by something else. Some runs in a few GPU and CPU benchmarks/stress tests with temperature and clock speed reporting would be the next best step. HWMonitor would be an excellent first step to give us both CPU and GPU temperature and clock speed readouts under stress (download Here), along with a free GPU benchmark like Unigine Valley (found Here) and a free CPU stress tester like ROG RealBench (found Here). Run these programs individually with HWMonitor open and reporting, take some screenshots immediately after finishing each, then reset the HWMonitor values and run the next.
  10. Common sense... because it's AMD? Because that's not really an Nvidia thing...
  11. I like that homemade racing setup! Is that a seat from a 2000-ish Chrysler/Dodge minivan? I'm judging by the shape, grab bar, and seat cushion pattern.
  12. Price... absolutely, $330 (in the U.S.) is no joke; I had never paid over $120 for a case until my Dancase. Did you notice their Kickstarter campaign for version 2 of the case? It has windowed panels and some minor tweaks.
  13. The Define S should fit the bill, minus the slim optical drive. Find it here
  14. I addressed that particular issue earlier in the thread, twice, with links to articles detailing the similarities and differences between Maxwell and Pascal. The articles conclude that on a hardware level the per-unit throughput of Maxwell and Pascal is identical, but Pascal has some slight improvements on the GPU scheduling side that do help performance over Maxwell, especially in DX12, as well as the benefits of better color compression and much faster memory.
  15. Also, 980 Ti vs 1070: the 980 Ti has 2,816 CUDA cores at 1.5 GHz; the 1070 has 1,920 CUDA cores (~32% fewer cores) at 2.0 GHz (~33% higher clock speed). Sooo, nanosuits is correct: they are similar, and the 1070 ends up with similar performance because its higher clock speed largely offsets its 32% fewer CUDA cores.
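The core-count and clock figures above can be turned into a rough theoretical FP32 throughput comparison (a back-of-the-envelope sketch using the post's OC-level clocks, not official boost specs; the 2 FLOPs/core/cycle factor is the standard FMA assumption):

```python
# Theoretical FP32 throughput: cores x clock x 2 FLOPs per cycle (FMA).
def tflops(cuda_cores, clock_ghz):
    return cuda_cores * clock_ghz * 2 / 1000

gtx_980ti = tflops(2816, 1.5)  # ~8.45 TFLOPS
gtx_1070 = tflops(1920, 2.0)   # ~7.68 TFLOPS
print(gtx_980ti, gtx_1070)
```

On paper they land within about 10% of each other, which matches the "they are similar" conclusion.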
  16. "So if GP104’s per-unit throughput is identical to GM204, and the SM count has only been increased from 2048 to 2560 (25%), then what makes GTX 1080 60-70% faster than GTX 980? The answer there is that instead of vastly increasing the number of functional units for GP104 or increasing per-unit throughput, NVIDIA has instead opted to significantly raise the GPU clockspeed. And this in turn goes back to the earlier discussion on TSMC’s 16nm FinFET process." (quoted from Anandtech page "Running up the clocks")
  17. I can agree to an extent that HBM will get cheaper as it is used more... BUT it's still a new technology and will be for several more years (which means higher costs): the memory stacks themselves seem to be quite problematic to produce and have low yields at the moment, and the silicon interposer is still more expensive to make than a board. When it comes to size and packaging, as parts continue to shrink we may see a new standard for GPU card size emerge, but at the moment the generally accepted GPU card size has no problem fitting the memory chips onto the board. The power consumption differences are IMO small: the Fury X has a memory power consumption of ~14.6 watts, and the memory of a GDDR5X 256-bit card draws ~20 watts (reference Here), so we're talking under 6 watts on a 180 W card. GDDR5X apparently draws similar power to GDDR6, and both are more power efficient than GDDR5. I can see and agree with the potential benefits... but it's not mature yet for the gaming/consumer segment IMO. Add in the thought that all the cards that have thus far used HBM (Fiji and Vega) have been rather underwhelming, combined with GDDR5X and GDDR6 being very competitive with it... I can agree with Nvidia: the advantages do not make sense yet. I think the advantages will outweigh the problems eventually... but not yet.
  18. Well, without the test to show, and the numbers being that close from memory... then they sound identical.
  19. Well, with GDDR6 on the way (Wikipedia quote here): "The first graphics cards to use SK Hynix's GDDR6 RAM are expected to use 12 GB of RAM with a 384-bit memory bus, yielding a bandwidth of 768GB/s." Combine that with color compression algorithms and that gives a total effective throughput of over 1 TB/s in many circumstances. IMO, for data/compute centers HBM is a great way to go... but for gaming, at least on sub-8K displays, do we need anything more than 1 TB/s? I'll take the cost savings myself!
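The 768 GB/s figure in that quote falls straight out of the bus arithmetic (the 16 Gbps per-pin rate is the SK Hynix GDDR6 launch spec; the compression factor below is a hypothetical workload-dependent gain, not a fixed number):

```python
# Raw bandwidth = (bus width in bits / 8 bits per byte) x per-pin data rate.
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

raw = bandwidth_gbs(384, 16)   # 384-bit bus at 16 Gbps/pin -> 768.0 GB/s
effective = raw * 1.35         # with a hypothetical ~35% delta-compression gain
print(raw, effective)
```

With a compression gain in that ballpark, effective throughput crosses the 1 TB/s mark the post mentions.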
  20. Do you have any sources showing this to be the case? Anandtech seems to contradict you. Article: here
  21. I've been thinking and doing some browsing on the subject, and my personal memory and the computational subsystem in my prefrontal cortex spit out some answers... From what I've been reading, the latency of GDDR5/X and HBM should be nearly identical; HBM's primary improvements are power usage, memory system size, and theoretical bandwidth (though GDDR5X and GDDR6 seem to be doing VERY well against it). I've yet to see anything related to a latency improvement. BUT my brain was recalling the horrible frame time/pacing issues that the Fury cards (and many older AMD cards, now that I think of it) have experienced, and was relating that to HBM, which was faulty reasoning and an assumption on my part. After reading the musings of individuals much smarter than I, the more probable cause is AMD's memory schedulers sitting idle for extended periods (possibly caused by driver issues), which is the likely culprit for that problem.
  22. If my limited knowledge serves me at all here (I may be completely wrong), HBM's memory latency can be worse at times. From what I understand, one of the issues with HBM may be that, because of its low clock speed, if the chip misses a memory call the time it has to wait is exaggerated compared to GDDR5 and GDDR5X, due to the longer gap between clock cycles. So while HBM can move a lot of data per clock cycle, if the chip misses a cycle to call for what it needs, it has to wait longer until the next cycle comes up.
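The miss-penalty argument above boils down to cycle time. A small sketch, using example clocks (~500 MHz for first-generation HBM, ~1750 MHz command clock for GDDR5; these are illustrative figures, not exact specs):

```python
# A lower memory clock means each cycle of waiting costs more wall time.
def cycle_time_ns(clock_mhz):
    return 1_000 / clock_mhz  # nanoseconds per clock cycle

hbm_cycle = cycle_time_ns(500)     # ~2.0 ns per cycle
gddr5_cycle = cycle_time_ns(1750)  # ~0.57 ns per cycle
print(hbm_cycle, gddr5_cycle)
```

Waiting one extra cycle on the slower clock costs roughly 3.5x as much time, which is the intuition behind "it has to wait longer until the next cycle comes up."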
  23. I've not heard that before; do you have any information to back that claim up? The only area I am aware of is FP16, where Nvidia did reduce performance to push compute users toward their higher-end cards. I quote from Anandtech: "So if GP104’s per-unit throughput is identical to GM204, and the SM count has only been increased from 2048 to 2560 (25%), then what makes GTX 1080 60-70% faster than GTX 980? The answer there is that instead of vastly increasing the number of functional units for GP104 or increasing per-unit throughput, NVIDIA has instead opted to significantly raise the GPU clockspeed." From what I remember, Pascal included some optimizations to reduce the amount of unused GPU time, such as: 1. Greater memory bandwidth and better color compression: "The net impact then, as NVIDIA likes to promote it, is a 70% increase in the total effective memory bandwidth. This comes from the earlier 40% (technically 42.9%) actual memory bandwidth gains in the move from 7Gbps GDDR5 to 10Gbps GDDR5X, coupled with the 20% effective memory bandwidth increase from delta compression. Keep those values in mind, as we’re going to get back to them in a little bit." 2. Load balancing to eliminate idle time: "In concept it sounds simple, and in practice it should make a large difference to how beneficial async compute can be on NVIDIA’s architectures. Adding more work to create concurrency to fill execution bubbles only works if the queue scheduling itself doesn’t create bubbles, and this was Maxwell 2’s Achilles’ heel that Pascal has addressed." All quotes from Anandtech: Here
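The "70% increase in total effective memory bandwidth" in that quote is just the two gains multiplied together, not added:

```python
# 7 Gbps GDDR5 -> 10 Gbps GDDR5X is the "40% (technically 42.9%)" raw gain.
actual_gain = 10 / 7            # ~1.429
compression_gain = 1.2          # ~20% effective gain from delta compression
total = actual_gain * compression_gain  # ~1.714 -> roughly a 70% effective gain
print(total)
```

Multiplying 1.429 by 1.2 gives ~1.71, which is where Anandtech's "70% increase" figure comes from.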
  24. ITX cases come in many sizes, from VERY small to just a bit smaller than a normal ATX case. The Hyper 212 EVO is designed for an ATX case; if you want a custom CPU cooler for an ITX case, there are many options that are very much smaller than the Hyper 212. Example: the Noctua NH-L9i, extremely low profile, with OK cooling (definitely not for heavy overclocking). See example below: an extremely small form factor ITX build with the NH-L9i, currently running at 4.4 GHz and quite cool.
  25. If you can scratch together the extra $25, it would be a better choice for gaming. I understand the need to meet budget may eliminate that, but if you can stretch somehow, you'll likely be much more pleased.