cagoblex

Member
  • Posts

    79
  • Joined

  • Last visited

Awards

This user doesn't have any awards

Profile Information

  • Gender
    Male
  • Location
    United States

System

  • CPU
    Xeon Platinum P-8124 18C 36T 3.0GHz
  • Motherboard
    Asrock Rack EPC621D8A
  • RAM
    Samsung 2Rx8 64GB DDR4 2666V Registered ECC
  • GPU
    Quadro M5000


  1. Honestly I think the 11700 would be enough for most games. The 11700K is only $50 more, but then you have to invest more in a Z590 motherboard, and maybe faster RAM. So the whole platform will cost you anywhere from $100-200 extra.
  2. CPU temp wise, yes. But using an air cooler puts a lot of pressure on case airflow, and I've seen many builds with very poorly designed airflow, so that can be a problem. With AIOs, at least all the hot air is blown straight out of the case.
  3. An AIO is the only option for the 11900K. Your case will catch fire if you use a Noctua air cooler and a high-end GPU like a 3080.
  4. We just published the review for the 11700K, and here comes its bigger brother, the 11900K. Unlike previous generations, the i9 flagship 11900K has exactly the same spec as the 11700K. It actually even has a slightly lower base clock: 3.5GHz compared to the 11700K's 3.6GHz. What makes it an i9 is Thermal Velocity Boost: as long as you have adequate cooling, the i9 will boost its frequency an extra 100MHz on top of the already high Turbo Boost 3.0 frequency. But apart from that, it is the same chip as the 11700K. Same core count, same thread count, same cache, same PCI-E support and same iGPU.
The reason this review took so long is the new Intel ABT, or Adaptive Boost Technology. Basically, it overrides the TB2/TB3 multiplier limits and allows a higher all-core boost as long as there is enough thermal and power budget. Long story short, it gives the 11900K an all-core boost of 5.1GHz under ideal conditions, compared to the 4.8GHz default. Asus just released a BIOS update that supports Intel ABT. In this review we will first take a look at this processor without ABT enabled, and then look at the improvement ABT brings separately. And after that comes the important part: can it beat an LTT gold sample 10900K? Let's find out!
Disclaimer on the 5800X scores: the AIO I am using does not have an AM4 mounting bracket, so those scores were tested with a cheap air cooler. With a limited thermal budget, it is not reaching its max PBO frequencies. A lot of readers have pointed that out. I don't have the 5800X on hand right now, so I cannot retest it. It is not that I intentionally lowered the AMD score to make Intel look better; obviously Intel did not sponsor this review, and I am using a 5900X myself while writing this. It is entirely due to the limitations of my testing equipment. The point of this review is not Intel vs AMD anyway, so it shouldn't be too big of a problem.
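The boost behavior described above can be sketched as a toy model. To be clear, the temperature and power limits below are made-up illustrative numbers, not Intel's actual firmware thresholds; only the 4.8GHz default and 5.1GHz ABT clocks come from the review:

```python
def effective_all_core_boost(tb2_ghz, abt_ghz, temp_c, power_w,
                             temp_limit_c=70, power_limit_w=250,
                             abt_enabled=True):
    """Toy model of Adaptive Boost Technology: override the default
    all-core Turbo Boost 2.0 clock with a higher one while thermal and
    power headroom remain. Limit values are illustrative assumptions."""
    if abt_enabled and temp_c < temp_limit_c and power_w < power_limit_w:
        return abt_ghz   # e.g. 5.1 GHz all-core on the 11900K
    return tb2_ghz       # fall back to the 4.8 GHz default

# With headroom, ABT lifts the 11900K from 4.8 to 5.1 GHz all-core:
assert effective_all_core_boost(4.8, 5.1, temp_c=60, power_w=200) == 5.1
# Without thermal headroom (or with ABT disabled), the default applies:
assert effective_all_core_boost(4.8, 5.1, temp_c=85, power_w=200) == 4.8
```

This is also why ABT barely matters in the benchmarks below: once the chip runs out of thermal or power budget, it falls back to the same default boost either way.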
If you really want to know how the 11900K stands against AMD, I will include a bit of 5900X data in the final part of the review so you can get an idea.
First let's take a look at CPU-Z. Thanks to the higher boost clock and the presence of Thermal Velocity Boost, it is slightly faster than the 11700K, but only by a tiny margin. In the Aida64 memory tests, it is pretty much the same as the 11700K.
Next is Cinebench. In Cinebench R15, again it's almost the same as the 11700K; it is technically higher, but within the margin of error. In Cinebench R20 the difference is a little bigger: about 100 points, which makes it about 2% faster than the 11700K in multi core, and about 6% faster in single core. In V-Ray it has a higher average clock speed, 4.68GHz compared to the 11700K's 4.55GHz, and the result is 200 points higher, which is about 1.2%. In Blender, we tested both Classroom and BMW: it is 4 seconds faster in BMW and 28 seconds faster in Classroom. In 7-Zip, it is about 3% faster in decompression while being within the margin of error in compression. In Handbrake, we are transcoding a 1:31 4K video into 1080P H.264; it is 3 seconds faster, which is about 4%. In Geekbench 5, it is 5% faster in single core and 2% faster in multi core. In Y-cruncher, it is only 1 second faster than the 11700K in the single-threaded test, and 11 seconds faster in the multi-threaded test.
Next let's take a look at games. All platforms are paired with an EVGA RTX 3080 FTW3 video card. First is 3DMark; let's just focus on the CPU part. Again they are nearly identical, within the margin of error; it actually got a slightly lower score than the 11700K. In Hitman 2, again it's pretty boring: it is basically an 11700K with negligible gains. In Horizon, it is exactly an 11700K. In Shadow of the Tomb Raider, it's even worse than the 11700K; I retested just to make sure, but got the same result. In Dirt 5, it's 2fps faster than the 11700K.
Yeah, I exaggerated the difference so it looks less boring. OK, this is kind of boring, right? Let's see what Intel Adaptive Boost Technology brings. I just hate it, because it made me run all the benchmarks again. I have included a stock 10900K here to give you an idea of where the 11900K stands against the 10900K. The reason is simple: ABT makes close to no difference to performance. But since I spent all this time testing with ABT, I decided to keep it; otherwise this whole section wouldn't exist unless I added something new to it. Note that the video card used for the benchmarks below is the Asus TUF 6700XT, not the EVGA 3080 FTW3.
And here are the results. The 11900K is definitely a lot faster than the 10900K in single-threaded tests, but the 10900K is still faster in multi-threaded tests thanks to its two extra cores. Next is Cinebench R15. Again, ABT makes almost no difference; the 11th gen is faster in single-threaded tests and slower in multi-threaded tests. And it's the same with R20, nothing really worth talking about. In 7-Zip, the 10900K is faster in decompression while a tiny bit slower in compression. In Blender, 10th gen and 11th gen perform almost the same; again ABT doesn't help in any way. Y-cruncher is where the 11th gen and ABT shine, because it uses AVX512 instructions, which the 10900K lacks. The single-core score is actually lower with ABT enabled, but multi core is a lot faster, and the 10900K is a lot slower than the 11900K here. In V-Ray, ABT makes no difference, and the 10900K is still faster thanks to its two extra cores, but not by a lot. In 3DMark Time Spy, the 11th gen is about 20% faster than the 10900K in CPU score; ABT only gives the 11900K a 1% gain.
OK, that's still boring, right? Now it's time to do something interesting: let's overclock both of them and see how they compete.
For the 10900K, I am using an LTT gold sample, which is a bundle of a cherry-picked 10900K and a Cooler Master ML360 Sub-Zero cooler. It should give excellent overclocking results, so let's give it a try! Here we are in the BIOS: Asus gives it an SP score of 102, which is high, but not the highest I have ever seen on a 10900K. It is predicting that the chip will need 1.398V at L4 LLC to run non-AVX workloads at 5.3GHz, so let's start from that and see how it performs.
With this cooler, we need to install the Cryo software and run it in Cryo mode, so the cooler communicates with the processor and adjusts the cooling in real time. We will make a separate video on this setup if you want to know more details. Right now, at idle, the cooler is pulling about 100W of power from the 8-pin connector, and the CPU temp is around 22 degrees. Let's open the HWiNFO64 monitor: the processor is running at 5.3GHz on all cores, and the VID is actually a little higher, at 1.44V. But VID only means what the CPU thinks it needs at a given frequency, not the actual voltage being fed to it. And the temps are in the low 20s, which is pretty cool.
Now let's run some R20 and see if it passes. Keep in mind R20 uses AVX, so it would normally require higher voltage. And we got a pass, with 6252 points. If you take a closer look at the results, it is still slower than the 5.3GHz all-core 11700K we tested last time. But that's not the full potential of this gold sample. Let's try something extreme: 5.5GHz on four cores and 5.4GHz on the other six, and see what the CPU-Z score looks like. Now we got over 8000 points multi-threaded and 651 single-threaded! That is a very remarkable score without LN2 cooling. Let's try V-Ray. And no, it cannot pass V-Ray. But will it pass Cinebench R20? Let's see. And... it's a freeze. We've restarted; let's lower the frequency a little bit.
Let's do 5.4GHz on four cores and 5.3GHz on the other six. It still freezes in R20. So after some tweaking, the most reasonable combination factoring in temperature, voltage and performance is 5.3GHz if we want to run AVX workloads.
Our 11900K is not as highly binned as the 10900K. It has an SP score of 71 in the BIOS, which predicts it needs 1.6V to reach 5.3GHz all-core for non-AVX workloads. However, when we apply 1.6V, it thermal throttles immediately, even with the Cryo cooler. After some tweaking, the best combination for this particular chip running AVX workloads is 5.2GHz all-core at 1.52V.
And here are the results. In CPU-Z, the 10900K is still 1000 points faster than the 11900K thanks to its extra cores, but in single core the 11900K is 10% faster. In R20, it's the same situation: the 11900K is slightly slower in all-core performance but definitely has a lead in single core. In V-Ray, they are almost the same, with the 10900K 4 points faster.
So, conclusion. I have very mixed feelings about Rocket lake now. Yes, the 11700 non-K, and the 11500 we are currently reviewing, are competitive choices at their respective price points. But for the 11900K, I'm not so sure. In Intel's defense, it achieves roughly the same all-core performance with two fewer physical cores on the same manufacturing node, which is the biggest breakthrough for Intel in the past few years. However, at the same price point, even an overclocked 11900K can't see the taillights of the 5900X. Trying to be competitive at mid range while completely giving up on the high-end market; does that sound familiar? Yes, that was the position AMD was in for almost a decade before Ryzen. So great job on Rocket lake, Intel; on the other hand, you really need to do better than this. You can watch a video version of this review here:
  5. I did not take out the back plate but I will do it tonight and find out!
  6. Hello everyone and welcome to another review. Today we are taking a look at the Asus TUF 6700XT. Yeah, I know I promised to publish the i9 11900K review, but I need a little extra time on it, as I am overclocking it and comparing it against the LTT gold sample 10900K. So today we will do a 6700XT review with the 11900K as the CPU; at least it will give you an idea of how an 11900K performs in gaming. Disclaimer before you look at the data: the RTX 3060 and RTX 3080 scores were tested with an 11700 non-K instead of the 11900K, so you should not compare the scores directly. There are enough good reviews of the 6700XT online if you just want to know how this card performs. The point of this review, however, is whether you should pay almost double the money to get a 'premium' 6700XT. I tried to search for other 6700XT TUF reviews online, but there are none, which is very strange considering the TUF is a pretty popular line. But anyway, here comes the first review.
First impression of the card: it's huge. Here are a few things for comparison: a box for a Founders Edition 3070, an AMD dual-GPU HD7990, in front of those a Founders Edition 2080 with an HP shroud, then a Tesla P4, two future Xeon CPUs that I cannot say too much about except that they are the biggest CPUs Intel has ever made, and at the very front a Kaby Lake 7700K. So you get the idea: it's ridiculously bulky for a mid-range GPU.
Let's take a closer look. It has the same TUF shroud that Asus puts on all its other TUF cards this generation. The aluminum shroud feels solid, and the only RGB is the small TUF logo on the far right of the card, next to the dual 8-pin connectors. On the back is a regular TUF back plate, and the capacitors behind the GPU look very nice. It is a very well-built card for a mid-range GPU with a $479 MSRP. The only problem is, it carries a $799 price tag, which puts it in the same market segment as the Strix RTX 3070.
Now let's run some benchmarks and see how it goes. Again, these were not tested with the same CPU, so please don't use them to compare relative performance.
First, 3DMark Time Spy. It puts in a pretty solid performance here: about 25% faster than a 3060, while about 40% slower than a 3080. In the Unigine benchmarks, we ran Heaven, Valley and Superposition at 1080P Extreme settings. In Unigine Heaven, it is about 50% faster than a 3060 and less than half the performance of a 3080. In Unigine Valley, the 6700XT has an even bigger lead over the 3060 and is only 30% slower than the 3080. In Superposition, things change a little: the 6700XT is only 30% faster than the 3060, while getting only 60% of the 3080's performance.
In actual games, we ran Dirt 5, Horizon Zero Dawn, Shadow of the Tomb Raider and Hitman 2. The 6700XT runs surprisingly well in Hitman 2 at 1080P max settings: it is almost as fast as a 3080 and 50% faster than a 3060. In Horizon Zero Dawn, it's only slightly faster than the 3060, while still much slower than the 3080. In Dirt 5, it's about 30% faster than the 3060 while being about 30% slower than the 3080. In Shadow of the Tomb Raider, it almost doubles the FPS of the 3060 and is only slightly slower than the 3080. In the compute benchmarks, however, Nvidia still has the lead: the 6700XT is slower than the 3060 in both Vulkan and OpenCL, and gets less than half the performance of the 3080.
So these scores look alright for a $479 card. But at $799, what are you getting for the extra $320? A better cooler, for sure. If we look at the temps, it even runs cooler than the Strix. Disclaimer: the temps for all the other 6700XT models are from TechPowerUp's reviews. They did not specify their testing conditions, but it shouldn't make too big of a difference unless under extreme conditions. Yes, the TUF has a great cooler; we already knew that, and the other TUF models have all been praised for their excellent temperatures. Now, what about overclocking?
After some testing, we were able to get it to 2815MHz core and 2150MHz memory. That is about 200MHz higher than what a reference card can do, and very close to TechPowerUp's result on the 6700XT Strix. With the overclock, we are getting a 6% increase in both Unigine Valley and Heaven.
Again, in my opinion there is no real point in overclocking video cards nowadays for gaming. If a card can't run a game smoothly, it won't be smooth even after you overclock it; if it runs a game at 100fps, overclocking to 110fps won't give you much of a noticeable difference. It is really more for fun and benchmarking. If you are spending more money on an expensive model for a better cooler and better RGB, go ahead. But if you are spending the money solely expecting better overclocking, I would say save a little more and just go one SKU up. Even if you overclock the hell out of a 3060, it is still not going to be as fast as a 3060 Ti. Although, talking MSRP, many high-end 3060s have a higher MSRP than low-end 3060 Tis.
So the TUF 6700XT is definitely one of the best 6700XTs 'available' on the market. But for $320 more, should you buy it? I'm not sure. Yeah, I know, the RTX 3070 and 3080 are being sold way over Nvidia's MSRP as well, but it is pointless to discuss the value of a card based on today's inflated prices; nothing is worth buying at the prices they fetch on eBay. Assuming the $479 6700XT actually existed, I would say it is probably a better idea to just buy the reference model instead. That's it for today's review. Thanks for reading! You can watch a video version of the review here:
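The overclock gains above are simple relative arithmetic. As a quick sanity check (the 2615MHz reference clock here is approximated from the "about 200MHz higher" remark, not a measured figure):

```python
def percent_gain(before, after):
    """Relative improvement between two measurements, in percent."""
    return (after - before) / before * 100

# Core clock: 2815MHz on the TUF vs an approximated ~2615MHz reference:
clock_gain = percent_gain(2615, 2815)
assert round(clock_gain, 1) == 7.6   # ~7-8% higher core clock...

# ...yields roughly a 6% benchmark gain, e.g. a 100fps run becoming 106fps:
assert round(percent_gain(100, 106), 1) == 6.0
```

Clock gains rarely translate one-to-one into frame rates, which is part of why the review argues overclocking is more for fun than for a real gaming difference.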
  7. No, it's not just your retailer; it has been happening to retailers around the world. I'm not sure what's going on at this point, but it might be due to supply issues. It seems most OEMs were already selling 11th gen prebuilts on March 30th.
  8. I will look into the new microcode. Yeah, it seems like Intel is selling them before the official embargo.
  9. It's around 60-70C with a Corsair H150 Pro, and the power draw is around 124-150W.
  10. I tried with both a 3080 X Trio and a 3080 FTW3. Same temps. But the board I got is technically a pre-production sample, so that could make a difference.
  11. The 11900T is stuck at 35W in this test. But with the TDP limit removed, it's about the same as the 11700 non-K. It is about 20% better than the 10700 in terms of IPC, and the iGPU is much better, if you care about that. The addition of AVX512 and the new cache algorithm does help in a lot of real-life workloads.
  12. I don't experience the same toasty PCH. Mine stays pretty cool even under a Prime95 workload.
  13. Upgrading from Comet lake to Rocket lake doesn't make much sense. I would say wait for Alder lake, unless you need Thunderbolt 4 or AVX512.
  14. Hello everyone and welcome to another review. Today we will be looking at the i7 11700K and i9 11900K. We have covered the basics of Rocket lake and Z590 in the past, and you can check them out in my earlier reviews. So we finally got the top-of-the-line 11900K, and the second-best Rocket lake chip, the 11700K. We will take a look at the 11700K first and see how it performs. Spec wise it is the same as the 11700 we reviewed last time; the only differences are a higher TDP of 125W, compared to the 11700's 65W, and an unlocked multiplier. The turbo frequency is 4.6GHz all-core and 5.0GHz single-core with Thermal Velocity Boost. Spoiler alert: it is almost exactly the same chip as the 11900K we will review next.
In today's review I am pairing it with the Asus Maximus XIII Hero we reviewed last time, 16GB of G.Skill DDR4 3200MHz CL14 memory, a Western Digital SN850 PCI-E 4.0 SSD and the EVGA 3080 FTW3 video card.
Let's start with CPU-Z. As expected, it's an 8-core, 16-thread part with a 125W TDP. In the CPU-Z benchmark, it gets a slightly higher score than the 11700 non-K, and a higher score than the 5800X in both single core and multi core, so it should be a serious competitor to the 5800X. Moving on to Aida64: Intel still has problems with memory performance on Rocket lake. However, even with the lower memory numbers, it is still pulling ahead of Comet lake in all benchmarks. Next is Blender. We rendered both Classroom and BMW, and here are the results: the 11700K is slightly faster than the 5800X in BMW but about 10% slower in Classroom, and it gets very close to an 11700 with the TDP limit removed. In 7-Zip, again it's very similar to an 11700 with unlocked TDP; it is about 20% faster than the Comet lake 10700, but still slower than the 5800X. Next is Cinebench. We are running both R15 and R20; R20 utilizes AVX instructions, so the AVX frequency offset matters here.
In Cinebench R15, we also tested OpenGL performance with the iGPU. Same as the other Rocket lake chips we've reviewed, it shows a 30% increase over Comet lake, so the new Xe architecture really is something to be excited about. As for the CPU part, it is not much faster than the non-K 11700, but it's still a 25% increase over Comet lake, which is very impressive considering it's still on the 14nm node. In Cinebench R20, it gets the same single-core score as the 5800X while being slightly slower in multi core; still, it is the fastest Intel desktop CPU we've tested so far. Next is V-Ray, a benchmark that tests the CPU's performance in rendering. Again, the 11700K is not as fast as the 5800X, but it's still clearly faster than Comet lake. It has an average clock speed of 4.55GHz, the same as the 11700. In Handbrake, we are transcoding a 1:31 4K 30fps video into 1080P H.264; it is two seconds faster than the non-K version and ties the 5800X. Y-cruncher is a set of benchmarks that can utilize AVX512. Here it's almost exactly the same speed as the non-K version, while being almost 60% faster than the 5800X in the multi-threaded benchmark. So if your workload is AVX heavy, you still have a good reason to choose Intel over AMD.
Next, the gaming performance. First, 3DMark Time Spy: it runs about 2% faster than the i7 11700, but still slightly slower than the 5800X. In Hitman 2, it is about 2fps faster than the non-K version, which is within the margin of error; the 5800X pulls way ahead in Hitman. In Horizon, the trend continues, though this time it's a tie between Rocket lake and the 5800X. In Dirt 5, the 11700K pulls about 3fps ahead of the other two Rocket lake chips, and it is also about 8% faster than the 5800X. In Shadow of the Tomb Raider, it's again the fastest of the Intel group, but still slower than the 5800X in CPU-bound scenarios.
Lastly for stock frequency, let's run Prime95 and check the thermals and power consumption of the 11700K. The cooler we are using today is a Corsair H150 Pro 360mm AIO with three PWM-controlled fans. I enabled AVX512 so that we can see the maximum possible power consumption and heat output. It thermal throttles even at default clock speeds, and the maximum power consumption recorded was 260W. That is ridiculous for a desktop processor, but that's how AVX512 works, and that's why the LGA3647 Xeons limit their AVX512 frequency to a much lower speed, to keep thermals in spec for passively cooled servers.
Alright, enough of the stock benchmarks. Since we have a K version, the whole point is to overclock it, right? As we discovered in the past few reviews, Rocket lake requires much higher voltage than Comet lake; I am seeing voltages close to 1.5V even at stock speeds. Let's enter the BIOS and see what Asus has to say about this chip. We have a very low SP score of 60, and the BIOS predicts it needs close to 1.7V for 5.3GHz all-core. So let's start with 1.5V at 5.2GHz; and it's a freeze. Our goal here is to complete Cinebench R20. I did get it to 5.3GHz all-core with 1.62V, but the chip thermal throttles really badly, and the Cinebench score is actually worse than at 5.2GHz. After some tweaking, the best score I got was at 5.2GHz all-core with 1.55V core voltage. The CPU temperature is in the low 90s, and we are getting great improvements in the CPU-Z and Cinebench R20 tests. We ran Cinebench R15, R20, V-Ray and the CPU-Z benchmark to measure the improvement from the overclock. A 700+ CPU-Z single-core score is the highest of any processor on the market so far. Somehow V-Ray reported 5.5GHz, but that can't be true...
Overall I am very impressed by how it performs. At $399 MSRP, it's very hard to say no to. It trades blows with the 5800X, but adds AVX512 support and a wider motherboard choice.
It also supports Thunderbolt 4, which should become the standard for external devices in the near future. The only drawbacks I can think of are the higher power consumption and the high heat output. But still, even at stock speeds it is way faster than Comet lake. Should you buy one? It depends. If you are upgrading from Kaby lake, it's definitely a game changer. But if you have Coffee lake or Comet lake, you would be better off waiting for Alder lake, which comes out at the end of this year. I understand there are Intel fanboys and AMD fanboys, and it's an endless debate that has already lasted a decade. But let's be honest: Zen 3 is a great architecture, without any doubt. Intel is at least catching up this time with Rocket lake, and keep in mind Intel did it on a 14nm node against AMD's 7nm. That is an almost impossible task, but Intel did it. So a big thumbs up to Intel! You can watch a video version of it here: Thanks!
  15. Hello everyone and welcome to another review. Just like everyone else has been doing, I will do a brief review of the RTX 3060. The 60 series has always been the sweet-spot card for gamers. The GTX 1060 was released back in 2016, and the RTX 2060 two years ago in 2019. All of them use a 192-bit memory bus. The GTX 1060 and RTX 2060 both have 6GB of memory, while Nvidia decided to put 12GB on the RTX 3060. Yes, I know it doesn't seem to make sense, but one theory might explain it: you can't pair 8GB of memory with a 192-bit bus, and 6GB seems a little low for a graphics card in 2021, so why not double the size and make it 12GB?
Price wise, thanks to the mining hype, you can in some cases even make money by selling your two or three year old graphics card, which is so ironic. The GTX 1060 had an MSRP of $249 and the RTX 2060 an MSRP of $349; however, you are welcome to check eBay to see how much they go for right now. The RTX 3060 carries a price tag of $329, but not a single card was released at that price point. You will spend an average of $450 plus tax for an AIB RTX 3060, and that's assuming you can grab one at list price; you might end up spending double that on eBay.
The RTX 3060 uses the GA106 die, the smallest die so far in the Ampere family. It supports all the new features and the second-generation RT cores brought by the new architecture. It has a TDP of 170W, and you can increase the power limit by up to 15% depending on which model you get, so only one 8-pin power connector is present on the board. In this review we will be looking at the Asus TUF RTX 3060. It is one of the more expensive models, priced at $489. To be honest, that is a bad value considering its performance, and it's a horrible value if you are paying $800 on eBay. The cooler design is similar to the TUF 3070 and TUF 3080.
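The bus-width constraint mentioned above is easy to check: a 192-bit bus is six 32-bit memory controllers, and with GDDR6 each controller normally drives one 1GB or one 2GB chip, so 6GB and 12GB are the natural capacities. A minimal sketch of that arithmetic (it deliberately ignores exotic mixed-density configurations):

```python
def valid_vram_sizes_gb(bus_width_bits, chip_sizes_gb=(1, 2)):
    """Each 32-bit memory controller drives one GDDR6 chip, so total
    VRAM = number of controllers x per-chip capacity. Mixed-density
    setups (like the GTX 970's split pool) are ignored here."""
    controllers = bus_width_bits // 32
    return [controllers * size for size in chip_sizes_gb]

assert valid_vram_sizes_gb(192) == [6, 12]   # RTX 2060 (6GB) vs RTX 3060 (12GB)
assert valid_vram_sizes_gb(256) == [8, 16]   # what a 256-bit card could carry
# 8GB is simply not on the 192-bit list, hence the jump straight to 12GB.
```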
In our review we found it's definitely overkill for the small GA106 die. RGB wise it's also the same as its bigger brothers: the TUF logo on the back of the card lights up. In this review we are pairing it with the Rocket lake i7 11700 and an Asus Maximus XIII Hero, with two sticks of G.Skill DDR4 3200MHz CL14 memory. And we are taking a different approach with this one: based on MSRP alone, the RTX 3060 TUF is exactly half the price of the MSI RTX 3080 Gaming X Trio. So for half the price, does it offer more than half the performance? Let's find out.
First, GPU-Z. The TUF runs essentially stock specs, with only a slightly higher boost clock. Despite being a lower-end variant, it still supports PCI-E 4.0 x16. It has 3584 CUDA cores, versus 4864 on the Ti variant. This is actually not the complete GA106 die: the full GA106 has 3840 cores, all of which are enabled on the mobile version, while the desktop version has one group of compute units disabled. Spec wise it's actually less than half of an RTX 3080, and the memory bandwidth is also much lower.
Let's run some benchmarks and see how it performs. First, 3DMark Time Spy. As expected, our RTX 3060 gets roughly half the performance of the RTX 3080; it is almost a perfect 50%. In Unigine Superposition, the trend continues: it gets less than half the FPS and overall score, which is actually similar to a stock RTX 2070. In Unigine Heaven, it's again less than half the performance of the RTX 3080; even an overclocked RTX 2060 can score higher than this. In Unigine Valley, it gets less than half the average FPS of the RTX 3080, although the score is at roughly 75%. In actual game tests, the trend continues: in Dirt 5 it gets 52% of the RTX 3080's FPS, and in Hitman 2 roughly 65%.
In Horizon, it gets about 60% of our RTX 3080's performance, and in Shadow of the Tomb Raider it's still at 50% or less. Lastly, compute performance: it gets about 55%, which is similar scaling to the game tests.
Temperature wise, since it's a much smaller die, it stays very cool. The fan is capped at 50% and it's still whisper quiet. The highest temperature recorded was 60C, and that was with FurMark, which is not a realistic representation of daily workloads. In actual gaming it stays below 55C most of the time, and the fans only start spinning when the card gets above 45C. So overall it's a very quiet card; I didn't notice much difference in fan noise or temperature between the quiet and performance BIOSes.
For overclocking, the TUF has 10% of headroom in the power limit. I only had a couple of hours with this card, so I did not have time to fine-tune the curve, but I was able to get an easy 220MHz overclock on the core, and close to a 10% increase in performance: the 3DMark Time Spy score went from 8084 to 8601.
Lastly, mining. I know, I know, many gamers hate miners because they contribute a lot to the crazy video card prices. Nvidia claims to have nerfed the mining performance on the RTX 3060 so gamers will actually be able to get them. Did they? Yes, and no. Yes, Nvidia did limit the mining performance on this card: in our Ethereum mining test it only gets about 21MH/s, which is half of what it's capable of. However, this is a firmware-level limit, and I have already heard about people cracking the firmware to remove it. In my opinion, the real reason Nvidia limited this card is that they want to sell their dedicated mining cards, which are coming soon.
Just my opinion on mining: I was one of the first miners in cryptocurrency.
Actually, I think I still have about 10 bitcoins, but I lost my wallet years ago. That's about half a million in value at today's bitcoin price, but am I mad? Actually no. I'm glad I got out of crypto mining. Don't get me wrong, it is okay to mine as a business, by which I mean operating a mining farm at a larger scale. Because of my job, I know some miners who operate at data-center scale. For them it is treated as a real business: they have professionals handling everything from purchasing, deploying and maintaining to decommissioning the equipment, much like running an actual cloud data center like AWS, with professional risk management and financial teams behind them.
For individuals, however, let's be honest: the majority of people do not have the experience or the knowledge required to operate an actual business, let alone a high-risk one like this. They don't have proper purchasing channels, and they have no idea of the actual risks involved in mining. Everyone thinks they can get an ROI in 100 days based just on the hardware cost and the current crypto price. But it's a lot more complicated than that, especially when some miners are chasing random cryptos that seem to have a really short ROI on paper. Let's face it: with the pandemic right now, most people cannot afford to throw a couple of grand, or more, into the water if anything goes wrong. If you ask individual miners about the risk, they will tell you they have everything calculated; that answer alone tells you they have no idea of the risk involved. If you are familiar with the financial world, even Warren Buffett can't tell you there is very little risk involved just because he has done his math and research. There are too many factors, and no one human being can draw a conclusion like that. And unlike Buffett, most miners cannot afford the loss if anything goes south.
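The "ROI in 100 days" math criticized above looks something like this. All figures are hypothetical; the point is everything this calculation ignores (difficulty growth, coin price swings, power cost changes, hardware failure, resale value):

```python
def naive_payback_days(hardware_cost, daily_revenue, daily_power_cost):
    """The naive ROI estimate: hardware cost divided by daily profit,
    assuming price, difficulty and costs never change. That assumption
    is exactly what makes this number misleading."""
    daily_profit = daily_revenue - daily_power_cost
    if daily_profit <= 0:
        return float("inf")   # never pays back
    return hardware_cost / daily_profit

# Hypothetical figures: a $1000 card, $12/day of coin, $2/day of power:
assert naive_payback_days(1000, 12, 2) == 100   # the optimistic "100 days"
# Halve the coin price and the same card never breaks even on paper:
assert naive_payback_days(1000, 6, 2) == 250
```

A single input changing turns the picture over completely, which is the review's argument for why individuals underestimate the risk.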
So mining with your spare card or PC? That's fine. But if you are spending $1000 on an RTX 3070 on eBay hoping to make money mining, please, forget it. So, should you buy an RTX 3060? I guess it doesn't matter, because you won't be able to get one anyway. But even if you could get one at MSRP, I would still say the current MSRP is way too high for a card with this kind of performance. Unless you absolutely need a card right now, I would just wait and see. You can watch a video version of this review here: