Everything posted by tabuburn

  1. Username: tabuburn Favorite Videos: https://www.vessel.com/videos/JemZ8O7Hy https://www.vessel.com/videos/LCoY5zfFf Social Media Shares: https://www.facebook.com/logartaj/posts/10153146758379054 https://twitter.com/judethemechkb/status/580383580511027201
  2. The reason I love Zotac is that, here in my country, they and Palit are the only brands that do not price their products with ridiculous mark-ups.
  3. I remember about 3-4 years ago stumbling upon LTT when I was building a gaming rig for myself after so many years of being away from DIY and needed a quick and fun refresher. Since then, I've been steadily hooked and keep coming back for more.
  4. Finally, front-facing speakers are starting to become a thing. Also, the quality of that custom skin looks nice.
  5. The finish on the body is beast. I'm all for aluminum, but this one is in a league of its own with its resistance to smudges and fingerprints.
  6. The back buttons are like a godsend for people with smaller hands.
  7. Gotta click that link for the most beautiful smartphone ever!
  8. The guys over at Corsair did a little memory benchmarking on the Beta version of Battlefield 4 and were surprised to find that they got a significant boost in performance switching from 1600MHz to 2400MHz RAM. These results should be taken with a grain of salt since this was a Beta version of the game and they were using FRAPS instead of frame capturing, but we should still expect to see some improvement when the game comes out. The specs they used were an i7-4770K overclocked to 4.4GHz, two overclocked GeForce GTX 780s in SLI, and 32GB of Dominator Platinum. Spec-wise, you'd think RAM speed wouldn't matter unless you were using integrated graphics, and with two GTX 780s you'd certainly never expect it to, yet it clearly does here. The graphs are really striking: look at those differences! At 1920x1200, the average framerate gets a 22.7% jump while the minimum framerate still improves by a healthy 9.7%. At 5760x1200, the average framerate goes up by 15.2% and the minimum framerate by 22.9% (there's a quick sketch of how percentage gains like these are computed at the end of this list). Maybe this will become a trend with the other upcoming games that require 64-bit operating systems, who knows. Source: http://www.corsair.com/blog/bf4-loves-high-speed-memory/
  9. Maximum PC has done some benchmarks on Samsung's new 840 EVO drive and the results are quite impressive considering its use of TLC flash. At the suggested retail price of $650 for 1TB, it's a lot cheaper than two 512GB MLC-based SSDs. The only downside is the shorter 3-year warranty, but considering it's Samsung, it should still be pretty reliable.
  10. Apparently, the PS4's CPU will be just like the XBone's in that only 6 of the 8 cores will be given to developers. The other 2 cores will be reserved for its OS. Source: http://www.pcper.com/news/General-Tech/PC-Leading-Crew-Next-Generation-Platforms
  11. That's a good point. I still think a wait-and-see approach might be more conservative since we do not yet know how developers will code their games for the next-gen consoles. It's probably a given that not all of them will fully use all those resources, seeing as games are still split between CPU-intensive and GPU-intensive. Some devs might also opt for Ubisoft's approach of coding on PC first and then trickling it down to the consoles, and we don't know what hardware they'll code on, seeing as some of the XBone's demos were running on Intel and NVIDIA hardware. An interesting thing I'm keeping an eye on is how Intel's Hyper-Threading will perform if games become more multi-threaded, since it performs quite well in media-creation applications, which are multi-threaded.
  12. From what I've read about how the CPUs on the PS4 and XBone work, a couple of their cores are allocated to running their respective OS, so in effect you're left with about 6 of the original 8 cores to use for gaming. It's also important to note that the Jaguar cores they will have are not equivalent to their desktop counterparts. Not sure how that will translate to gaming, since Intel's core quality might balance out its lack of quantity. It's still up in the air whether AMD desktop CPUs will really benefit from AMD console CPUs, since publishers will be starting to code for PC and then trickle it down to the consoles because of how PC-like they've become.
  13. Actually, it's in the region of what an FX-8350 @ 5GHz achieves. Bit-tech benchmarked an FX-8350 @ 4.8GHz and it got a Cinebench score of 8.25, so a 5GHz one would be in the 8.30-8.35 range, right about where the FX-9590 sits.
  14. Kitguru has posted a review of AMD's 5GHz CPU, the FX-9590. As far as gaming is concerned, it does pretty well, but when it comes to CPU-intensive applications like 3D rendering and media encoding, even in synthetics, it gets beaten by a 4770K at stock speeds. That simulated Cinebench benchmark using the FX-8350 clocked at 5GHz was spot on, though. At the price point this thing is at, the 3930K/3960X/3970X would be a much better choice considering those CPUs pretty much dominated the testing. Source: http://www.kitguru.net/components/cpu/zardon/amd-fx9590-5ghz-review-w-gigabyte-990fxa-ud5/
  15. The way you worded it suggested running four of them. That's why dual-GPU cards are written as "2 690s" or "2 7990s" when more than one card is being referred to. Besides, you do not need a dual-CPU motherboard to use four 690s or four 7990s. Any motherboard that supports 4 graphics cards will work.
  16. You cannot run four 7990s in Crossfire because that card only has one Crossfire connector. That means you can only run 2 of them in 4-way Crossfire; the 7990 is a dual-GPU card. Technically, you can put 4 of them in any motherboard that supports 4 graphics cards, but the system will only recognize 4-way Crossfire, not 8-way.
  17. Sort of. It's just like what ASUS did with the HD 7970 Matrix: they bin a whole bunch of GPUs and use only the ones that are better than average.
  18. Maybe, but looking at the GTX 680 Classified, a lot of those cards had better-binned GPUs than other models did, so I would expect this one to as well.
  19. Renowned overclocker K|ngP|n has been playing with EVGA's upcoming GTX 780 Classified and was able to hit 1410MHz on air using EVGA's new ACX cooler. He achieved this overclock by using an overvolting tool, possibly EVGA's EVBot, which allowed him to manually push the voltage past the 1.2v limit set by NVIDIA all the way to 1.35v. These results should be taken with a grain of salt and your mileage may vary because of the silicon lottery. Also, remember that K|ngP|n is an overclocker directly sponsored by EVGA and is part of EVGA's overclocking team. But considering the Classified series is EVGA's top of the line, the GTX 780 GPUs used on them will definitely be cherry-picked for that line, so even if not everyone can achieve this much, getting even half or two-thirds of it would still be nice. According to Chiploco's sources, EVGA will be sending out samples for review and have a retail launch sometime next week. Expect it to cost a premium just like previous Classified cards, but if you consider how much overclocking potential these cards have, they will definitely be GTX Titan killers. Source: http://kingpincooling.com/forum/showthread.php?t=2347 http://www.chiploco.com/evga-geforce-gtx-780-classified-overclock-27002/
  20. I still don't understand the topic of this post. The reason it's called "effective" or "usable" VRAM is the example I gave and that you gave yourself: each card in the SLI/Xfire configuration renders its own frame and stores it in its own VRAM, the same way a single GPU does. The cards are just alternating on which frames they render. It's not about how much is being used by both per second or whatever measurement of time; if that were the case, then 2 cards with 2GB of VRAM each in SLI/Xfire would have 4GB available and be able to handle much higher resolutions and/or textures. That's what "effective" and "usable" mean for VRAM.
  21. Actually, the unified memory architecture will be introduced with NVIDIA's upcoming Maxwell series, not with AMD's Volcanic Islands GPUs (9000 series). As to what unified memory architecture is, this is what NVIDIA's CEO said about it during GTC 2013: Source: http://blogs.nvidia.com/blog/2013/03/19/live-jen-hsun-huang-at-gtc-2013/
  22. Check one of PCPerspective's interview and Q&A sessions with NVIDIA. It was explained there that the way SLI works is that the 1st GPU renders the 1st frame and, just before it finishes, the 2nd card starts rendering the next frame. It was also explained that there are a lot of steps to rendering even before reaching the VRAM and then the display. The likely reason the effective VRAM is only equivalent to that of a single card is that each GPU works on a different frame and stores the frame it worked on in its own VRAM: 1st GPU => 1st frame => 1st GPU's VRAM, 2nd GPU => 2nd frame => 2nd GPU's VRAM. Basically, it's like a single-GPU configuration with each card working on its own frame, hence why in 2-way you almost get 2x the performance. The same applies to 3-way and 4-way, but it's more complicated. (There's a small sketch of this alternate-frame-rendering idea at the end of this list.)
  23. Actually, in SLI/Xfire, each card's VRAM stores the same textures/frames/etc. to keep the cards in sync. It is the rendering work before anything reaches the VRAM where they alternate.
  24. By general, I meant general-purpose usability. Unlike NVIDIA's GeForce cards, which are pretty much good only for gaming, AMD's Radeon cards are quite capable in other applications besides gaming, especially OpenCL-based ones, which is why a lot of people buy them for Adobe, Bitcoin mining, Folding@Home, etc.
  25. Sounds about right. Looking at past trends, each generation change usually starts at around a 40% performance jump. Current rumors suggest that AMD will still be using the 28nm process since their manufacturing partner won't be going down to 20nm until Q4 2013 or later, so a doubling of stream processors is unlikely.
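For anyone who wants to sanity-check the percentage figures quoted in the Battlefield 4 memory post above, here is a minimal sketch of how a gain like the 22.7% average-framerate jump is computed. The FPS values are hypothetical placeholders (chosen only so the result lands near 22.7%), not Corsair's actual numbers, which are in the linked blog post.

```python
# Minimal sketch of how percentage framerate gains are computed.
# The FPS values below are hypothetical placeholders; Corsair's real
# numbers are in their blog post.

def pct_gain(slow_fps: float, fast_fps: float) -> float:
    """Percentage improvement going from the slower to the faster config."""
    return (fast_fps - slow_fps) / slow_fps * 100

avg_1600 = 60.0   # hypothetical average FPS with 1600MHz RAM
avg_2400 = 73.6   # hypothetical average FPS with 2400MHz RAM

print(f"Average FPS gain: {pct_gain(avg_1600, avg_2400):.1f}%")  # -> 22.7%
```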
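And for the SLI/Crossfire VRAM discussion above, here is a rough sketch of alternate frame rendering under the assumptions described in those posts: every card mirrors the same assets in its own VRAM while the cards take turns rendering whole frames, so two 2GB cards still give you 2GB of effective VRAM. The classes and numbers are made up for illustration and are not any vendor's actual driver behaviour or API.

```python
# Rough sketch of alternate frame rendering (AFR) as described above.
# Illustrative only; not NVIDIA's or AMD's actual implementation.

class GPU:
    def __init__(self, name: str, vram_gb: int):
        self.name = name
        self.vram_gb = vram_gb
        self.resident_assets: set[str] = set()

    def upload(self, assets: set[str]) -> None:
        # Every card keeps its own full copy of the assets to stay in sync.
        self.resident_assets |= assets

    def render(self, frame: int) -> str:
        return f"{self.name} renders frame {frame} from its own VRAM"


def run_afr(gpus: list[GPU], assets: set[str], frames: int) -> None:
    for gpu in gpus:
        gpu.upload(assets)                # assets are duplicated, not pooled
    for frame in range(frames):
        gpu = gpus[frame % len(gpus)]     # cards alternate whole frames
        print(gpu.render(frame))
    effective = min(g.vram_gb for g in gpus)
    total = sum(g.vram_gb for g in gpus)
    print(f"Effective VRAM: {effective}GB (not {total}GB)")


run_afr([GPU("GPU0", 2), GPU("GPU1", 2)], {"textures", "geometry"}, 4)
```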