Everything posted by Man

  1. According to a year-old YouTube video (linked below), it's impossible to run the 4090 below 905mV, for some reason. I was just wondering if that's the case with other RTX cards as well? The reason I'm asking is that most Ada cards have enormous heatsinks and I'd really like to build a passively cooled, super power-efficient custom profile. Ideally, I'd like to run the card at or near idle voltages (~700mV), even if that means running it at ~2GHz or lower. So, is it even possible, or are all Ada cards voltage-locked to 905mV?
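
    In case direct voltage control really is floored at ~905mV, below is a minimal sketch (Python wrapping nvidia-smi) of the usual fallback: cap the clocks and the power limit and let the card settle at the lowest voltage it's allowed to use. The --lock-gpu-clocks / --power-limit / --reset-gpu-clocks flags are real, but the 2,000MHz and 250W targets are just illustrative assumptions, and the commands need admin rights.

      # Sketch: chase efficiency via clock/power caps instead of direct voltage control.
      # The clock and power targets below are placeholders, not tested 4090 values.
      import subprocess

      def run(args):
          print(">", " ".join(args))
          subprocess.run(args, check=True)

      # Lock the GPU core clock into a low range (min,max in MHz); the driver then
      # runs the lowest V/F point it is permitted to use for that clock.
      run(["nvidia-smi", "--lock-gpu-clocks=210,2000"])

      # Cap board power in watts (must stay within the vendor's allowed range).
      run(["nvidia-smi", "--power-limit=250"])

      # To undo the clock cap later:
      # run(["nvidia-smi", "--reset-gpu-clocks"])
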
  2. Actually, you're misinformed! The reason Intel nuked AVX-512 is big.LITTLE. The little cores don't support AVX-512 instructions, only the big cores do. As you can imagine, that created a lot of headaches for users, so instead of fixing the issue by forcing AVX-512 workloads onto the big cores, Intel downright amputated it from their CPUs. Kind of a... d!ck move, if you ask me! (There's a small affinity sketch at the end of this post.)

    That'd be a little... unfair, seeing that the i5-12600K has a pretty sizeable 20MB L3 cache, almost twice as much as Alder/Raptor Lake i3s.

    Umm... no?! The 7900XTX's power consumption is less than 50W more than the 4080's, which is what, +15%? That's a reasonable trade-off 'cause you're also getting a ~10% performance bump in bandwidth-intensive games. Let's not forget that the 7900XTX has a much wider bus (384-bit vs. 256-bit) with 24GB vRAM (12 GDDR6 chips vs. 8 GDDR6X) and a massive cache (96MB of L3 vs. the 4080's 64MB of L2), so it's actually more efficient, especially when you consider that we are comparing a chiplet design with a monolithic die.

    Sure, Intel's doing okay, if you move the goalpost! But the thing is, these unknown Chinese GPUs don't have Intel's money. I mean, Intel nabbed Raja Koduri, the fellow behind AMD's GCN-era graphics and early RDNA work. That's some serious pedigree. So yeah, I don't think Intel is doing great... not with the kind of money and expertise they've (probably) thrown into Arc.

    Well, first-gen K10s (vanilla Phenom) were marred by silicon issues (AMD's 65nm process was a mess), so they couldn't 'rev' up much higher than 2.5GHz. But Phenom IIs, fabbed on 45nm, were actually quite competitive. Not top dogs, but competitive. I mean, AMD was taking pot-shots at premium Core 2 Duos with tri-core Phenom II X3s and quad-core Athlon II X4s with smaller cache pools than full-blown Phenoms. And then, of course, there were hexa-core Phenom II X6s for productivity tasks, up against Intel's pricier quad-cores. Frankly, I regretted getting a Core 2 Duo E8400 instead of a Phenom II X3 or perhaps an Athlon II X4! In any case, it was Bulldozer where things went down the sh!tter. But on the bright side, AMD only had two generations' worth of 'duds.' They quickly learnt their lesson and made all the right choices with Ryzen, unlike Intel, who stuck with Netburst for like what... 7+ years?!
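
    For what it's worth, the "force it onto the big cores" workaround mentioned above is basically CPU affinity. A minimal sketch using the psutil library, assuming the first 16 logical CPUs are the P-core threads (that mapping varies by SKU) and with run_avx512_workload() as a hypothetical stand-in for the actual job:

      # Pin an AVX-512-heavy process to P-cores only, so it never lands on E-cores
      # that lack the instructions. Assumes logical CPUs 0-15 are P-core threads,
      # which holds for an 8P+8E part but must be checked per SKU.
      import psutil

      P_CORE_THREADS = list(range(16))  # assumption: first 16 logical CPUs = P-cores w/ HT

      def pin_to_p_cores(pid=None):
          """Restrict the given process (default: this one) to P-core logical CPUs."""
          proc = psutil.Process(pid)
          proc.cpu_affinity(P_CORE_THREADS)
          return proc.cpu_affinity()

      if __name__ == "__main__":
          print("Now limited to logical CPUs:", pin_to_p_cores())
          # run_avx512_workload()  # hypothetical placeholder for the actual AVX-512 job
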
  3. Sure, by throwing power efficiency out the window and pushing the voltage/frequency curve way too far... not unlike late-model Pentium Ds! Notice the Ryzen 7 7700X's power consumption (198W) and compare it with the i5-13600K's (255W). To put things into perspective, the i5-13600K's power consumption is a mere 7W less than the i9-7900X, a so-called "HEDT" CPU. That's kind of... bad! It's a sign of things to come. Intel finally realizes that it'll take years to catch up with Ryzen. Either that or 14th Gen. is going to be revolutionary on the level of Conroe (Core 2), or at least Sandy Bridge, which I highly doubt! Check the comparison between the i7-7700K and i3-13100F below.

    I can see some point to big.LITTLE in the mobile space, but on desktop? I'd much rather take grown-up P-cores than baby E-cores, which are more or less worthless. Personally, I think Intel mostly bothered with big.LITTLE to skew multi-core synthetic benchmark results in their favor! Plus, they nuked AVX-512 because of big.LITTLE, so...

    Just because Intel calls it "7nm" doesn't mean it can keep up with TSMC's 7nm N7. Besides, N7 is old news. For perspective, N5 was a full node jump over N7 and the upcoming N3 will be a full node jump over N5. Even Samsung is now well ahead of Intel with their 3nm GAAFET, which is 'supposedly' on par with N4 or at least N5, both of which are still FinFET. I can only imagine how things will improve once TSMC finally move to GAAFET.

    They had a shot with Larrabee but nope, they just dumped it in order to focus their foundries' output on the booming Core 2 sales.

    It's an okay choice, I suppose, if you're only going to play modern AAAs. But aside from that it's pretty much useless. However, I "do" want a third player in the dGPU market.

    Everyone does that, mate. i5 CPUs are essentially cut-down i7s!

    While there's indeed an IPC uplift, as a 4.1GHz i3-12100F matches a 5.0GHz i7-7700K, this difference most likely comes primarily from the larger L2 and L3 cache pools (1,250KB/12MB on the i3 vs. 256KB/8MB on the i7), as well as DDR5 memory. The i3-12100F simply has far more bandwidth to play with.
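
    Back-of-the-envelope version of that last point, using the clocks quoted above; the result is just the uplift implied by "matches at 4.1 vs. 5.0GHz", not a measured IPC figure:

      # Implied per-clock (IPC) uplift if a 4.1GHz part matches a 5.0GHz part:
      # equal performance  =>  ipc_new * 4.1 = ipc_old * 5.0
      old_clock_ghz = 5.0   # i7-7700K, as quoted above
      new_clock_ghz = 4.1   # i3-12100F, as quoted above

      implied_uplift = old_clock_ghz / new_clock_ghz - 1
      print(f"Implied per-clock uplift: {implied_uplift:.1%}")  # ~22.0%
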
  4. I've closely followed CPUs for the past two decades (I'm 34, by the way) and, honestly, I believe Intel has never been in a worse position. Their consumer CPU business is in shambles, and even in the datacenter market, their share is steadily declining. They're currently in the midst of a CPU rebrand, desperately retiring the iconic i3/5/7/9 in an attempt to change consumer perception. Their CPU architecture peaked with Kaby Lake (i7-7700K) or perhaps the short-lived i9-9900K, which was completely annihilated by Zen 2 (Ryzen 7 3700X). Their feeble attempt at "big.LITTLE" has failed miserably, lacking any redeeming qualities. Their foundry is falling far behind the likes of TSMC and even Samsung, becoming a cash sinkhole. Their newly established GPU division has struggled to gain any traction, securing less than 1% market share, and their foray into discrete GPUs looks like an even bigger misstep now that crypto mining is dead for the foreseeable future.

    Despite all these challenges, Intel seems unwilling to change their ways. They persist in changing the socket every two generations, locking clock multipliers, restricting user voltage tweaks, limiting overclocking to high-end chipsets, and just screwing over users by doing things like disabling AVX-512. The only thing they're willing to change is the brand, like that's going to help! Given Intel's deep-rooted bureaucracy and anti-consumer practices, I simply don't see how the company can survive AMD's relentless assault.

    Now, some might argue: "What about Netburst, a.k.a. Pentium 4 and Pentium D?" Well, at least back then, their foundry business was a force to be reckoned with. They were rolling out 65nm CPUs in early 2006, at least a year ahead of both TSMC and AMD's own fabs (later spun off as GlobalFoundries). But now... well, I'm sure you already know!
  5. For some reason, I absolutely miss these cartoon characters, even though the teenage me found them extremely cringe (I'm 34). In any case, I doubt you can get much out of that 9500GT, which essentially falls between the GT210 and GT220 in terms of core/texture count. But since it says "512MB GDDR," you've won the vRAM lottery: the 9500GT shipped with either DDR2 or GDDR3, and the latter is enormously faster thanks to its higher effective data rate (rough numbers below). A real shame you didn't get the 9600GT, though, which is literally twice the card and was basically the 3060Ti of its time.
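
    To put a rough number on the DDR2 vs. GDDR3 gap: peak bandwidth is just bus width times effective data rate. A quick sketch assuming the 9500GT's 128-bit bus and typical 800 MT/s DDR2 vs. 1600 MT/s GDDR3 data rates (board partners varied):

      # Peak memory bandwidth = (bus width in bits / 8) * effective data rate in GT/s
      def bandwidth_gb_s(bus_bits, data_rate_gt_s):
          return bus_bits / 8 * data_rate_gt_s

      # 9500GT is a 128-bit card; the data rates below are typical, not universal
      print("DDR2  @ 0.8 GT/s:", bandwidth_gb_s(128, 0.8), "GB/s")  # 12.8 GB/s
      print("GDDR3 @ 1.6 GT/s:", bandwidth_gb_s(128, 1.6), "GB/s")  # 25.6 GB/s
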
  6. Currently have an old Xeon E3-1245. Thinking about moving to either a Ryzen 5 3600 or i3-12100F. Now, I know for a fact that overclocking is possible on R5-3600 + B450/550. However, I'm not sure if the same is true for Intel Alder Lake. I'm not particularly interested in BCLK overclocking, as I'm sure this feature is only supported by high-end motherboards. I just want to overclock that i3 to its peak 4.3GHz frequency on all cores, in case I do end up with that CPU. The question is, do H610 and B660 support turbo overclocking or is it something exclusive to the Z Series motherboards? I'd also like to play around with undervolting so I'm also wondering if it's possible to tweak the vcore voltage of the i3-12100F on H610 and B660 chipsets via either BIOS or (preferably) ThrottleStop. Or is it disabled thanks to the 'Plundervolt' fiasco? Thanks in advance!
  7. Avast? But... Why?! EDIT: Oops, just realized this thread was started a decade ago. All is forgiven! In any case, most people don't need anything 'fancier' than the Windows Security that comes bundled with Windows 10 and 11. It's 100% free and works well enough, as long as you don't wander too far off the beaten path, if you catch my drift.
  8. I don't see how that's relevant. While they definitely weren't free, I doubt you could "buy" these magazines anymore even if you wanted to, unless they still happen to be in print or 'digital circulation' (i.e. ebooks, etc.), which they clearly aren't. It's mere archiving, i.e. preservation of information. It's perfectly legal, though a lot of people (mostly publishers) are hellbent on turning it into a grey area lately!
  9. Are there any MMO gaming mice available that offer full software customizability on a Mac? I'm looking for a mouse with at least 9-10 customizable buttons but it's pretty darn difficult to find one that's actually meant for Macs with full software control etc. Thanks.
  10. Frankly, I don't understand the hate. It's great for the environment, and I'm not even a tree-hugger (so to speak). For example, I had an i5-2400 not too long ago. And as you all know, non-HT quads kinda suck these days. So, instead of sending that CPU to the landfill where it rots till eternity, it would've been a lot better to just allow unlockable hyperthreading, thereby turning that 2400 into an eight-thread chip much like the 2600, which is still "surprisingly" useful in 2022, as long as your aim is 30FPS that is. Ditto for unlocked multipliers, cache, cores, iGPUs and such. I'm a fan of the idea... as long as the fees are reasonable, especially for older products. I mean, it'd be nuts to charge 20 bucks to unlock a 10-year-old CPU. Just unlock them for free once they grow old and weary.
  11. I guess they're going to fire up the ol' 14nm plant at Samsung to fab you that GTX1050 in late 2022... That cost monies, ya know?
  12. 110-150 MHz of OC headroom? I guess Nvidia (or at least the AIB partners) got tired of so-called "pro gamers" returning their new cards within a day or two of purchase. Does anyone else remember the blown GTX 590 fiasco, not unlike the Galaxy Note 7?! "Ooh, gotta overclock!!!" Anyhow, undervolting is the way to go, even if you have to sacrifice a bit of performance. 5% lower performance at 30-40% higher overall efficiency is a great trade-off (quick math below). P.S. I really like Nvidia's current 'reference' designs, which seem like an open-air / blower hybrid. Shame they're a pain to take apart.
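
    The perf-per-watt claim is easy to sanity-check: efficiency is performance divided by power, so a small performance loss plus a big power cut compounds nicely. Quick arithmetic with the 5% / 30% figures from above (illustrative, not measured):

      # Relative efficiency = relative performance / relative power
      perf = 0.95   # 5% performance given up to the undervolt
      power = 0.70  # 30% less board power drawn

      efficiency_gain = perf / power - 1
      print(f"Perf-per-watt improvement: {efficiency_gain:.0%}")  # ~36%
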
  13. Then you completely missed my point! I was talking about 1080p and 1440p ultra-wides. And besides, anything beyond 4K is worthless... unless we are talking about 50" or larger displays. There are only so many pixels the eyes can see. Microscopes exist for a reason, after all!
  14. Cable or WiFi?

    Wired connections sidestep the interference, congestion and retransmissions that plague WiFi, so latency is lower and far more consistent. Cables are a no-brainer... unless you don't mind the added latency.
  15. Well, here's my standard 16:9 1080p monitor running in "ultra-wide" at 1920 x 823. As you can tell, the horizontal resolution is the same at 1920 pixels; only the vertical has been 'squished' from 1080 to 823. So, if you want the ultra-wide aspect ratio on a 16:9 panel, it can be done. Quickly. I accomplished it in under 5 minutes via the freeware ToastyX CRU, after calculating the proper aspect ratio (see the quick calc below). Just add the resolution of your choice in a vacant extension block, hit OK and either restart the PC or (preferably) just run restart64.exe, which is bundled with the CRU package.
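
    The "calculating the proper aspect ratio" step is just one division: keep the panel's native horizontal pixels and derive the letterboxed height from the target ratio. A quick sketch (21:9 is the marketing ratio; most real "ultra-wide" panels are actually 64:27):

      # Letterboxed "ultra-wide" height for a 16:9 panel: keep the native width,
      # shrink the height to the target aspect ratio, then enter the mode in CRU.
      def letterbox_height(native_width, aspect_w, aspect_h):
          return round(native_width * aspect_h / aspect_w)

      print(letterbox_height(1920, 21, 9))   # 823  -> the 1920 x 823 mode above
      print(letterbox_height(2560, 64, 27))  # 1080 -> matches a native 2560 x 1080 panel
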
  16. I was speaking purely from a price-point perspective. Besides, no 16:9 monitor has a horizontal resolution of 3440, which would be 3440 x 1935p. It's either 2560 x 1440p, which is lower than 3440, or 4K which is higher at 3840 x 2160p.
  17. It was an apples-to-apples comparison. You'll need a minimum of 36" 16:9 to break "even" with a 34" ultra-wide. And just for the heck of it, here's a comparison with a 38" 16:9.
  18. I don't think you're quite getting my point. It was merely a reference. You can compare a 34" ultra-wide with a 36" 16:9. Again, I was just trying to convey the point that a 30" ultra-wide is "exactly" as wide as a 31.65" 16:9 would be, hypothetically. Sure. But love is generally biased!
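
    For anyone who wants to check the 30" -> 31.65" (and 34" -> ~36") equivalence, it's just the diagonal split by aspect ratio. A quick sketch, treating ultra-wides as exactly 21:9:

      import math

      def width_inches(diagonal, aspect_w, aspect_h):
          """Physical width of a panel from its diagonal and aspect ratio."""
          return diagonal * aspect_w / math.hypot(aspect_w, aspect_h)

      def equivalent_16_9_diagonal(uw_diagonal, aspect_w=21, aspect_h=9):
          """16:9 diagonal with the same physical width as the given ultra-wide."""
          width = width_inches(uw_diagonal, aspect_w, aspect_h)
          return width * math.hypot(16, 9) / 16

      print(round(equivalent_16_9_diagonal(30), 2))  # ~31.64" -> the "31.65 inch" figure above
      print(round(equivalent_16_9_diagonal(34), 2))  # ~35.86" -> hence "a minimum of 36 inch 16:9"
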
  19. I hear ya. I'll always love my 1050p... or the "1680" as people used to call them back in the ol' days! Would still love to get my hands on a nice 2560×1600, just for nostalgia's sake.
  20. Okay, so I'm looking forward to replacing my aging 21.5" 16:9 monitor (Asus VH222, 1080p). During my search, I stumbled upon some cool 21:9 'ultra-wide' monitors and I was immediately in love. But that was a spur-of-the-moment kind of thing. The more I researched, the more I lost my enthusiasm. And now, I think I'm finally over ultra-wides, for a number of reasons:

    • As the title suggests, they are indeed a lie. Check the first picture below: a 30" ultra-wide is comparable to a 31.6" 16:9! Not only that, but the boring, old 16:9 will also give you more vertical screen space, which is what matters 99.99% of the time (browsing, navigating Windows etc.).

    • A 32" 1440p 16:9 will cost you about as much as a 30" 1080p 21:9 and would be even wider! Check the second picture.

    • The pixel density of a 32" 16:9 1440p is also comparable to a 30" ultra-wide 1080p (92 vs. 93 PPI) - quick math check below.

    • If you really have to have that 'cool' 21:9 look, you can always create a custom resolution via the freeware ToastyX CRU utility and enable GPU scaling in your driver settings. That way, you'll get black bars on the top and bottom, and games will think you've got an ultra-wide when you select 2560 x 1080. Plus, it'd be optional!

    Bottom line: when you get an ultra-wide, you don't get an ultra-wide! You get a 16:9 display that's been chopped off. Frankly, they're just plain stupid. But of course, feel free to disagree and live the ultra-wide fantasy.
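
    And the PPI comparison in the list above falls straight out of the diagonal pixel count; a quick check:

      import math

      def ppi(width_px, height_px, diagonal_in):
          """Pixels per inch from the resolution and the physical diagonal."""
          return math.hypot(width_px, height_px) / diagonal_in

      print(round(ppi(2560, 1440, 32)))  # ~92 PPI -- 32" 16:9 1440p
      print(round(ppi(2560, 1080, 30)))  # ~93 PPI -- 30" 21:9 1080p
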
  21. Absolutely. Larger dies = poorer yields. However, I was really hoping to see large GPUs from AMD now that they've finally transitioned to chiplets. I think they should release a halo product to compete with the 4090, like the R9 Fury X, whose Fiji GPU was nearly 600mm2. They're hardly pushing the boundary with Navi 31 at 533mm2 total (with a 308mm2 main chip), as per TechPowerUp. Imagine if they pushed consumer GPUs to a combined die area of around 700-800mm2! With chiplets, they finally can. Their Epyc CPUs are well over 1,000mm2, after all.
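
    A little arithmetic with the figures above: TechPowerUp's 533mm2 total and 308mm2 GCD imply roughly 37.5mm2 per MCD (Navi 31 carries six of them), which shows how far the current layout sits from a hypothetical 700-800mm2 halo part:

      # Navi 31 die budget, using the TechPowerUp numbers quoted above
      total_mm2 = 533
      gcd_mm2 = 308
      mcd_count = 6  # known Navi 31 MCD count

      mcd_mm2 = (total_mm2 - gcd_mm2) / mcd_count
      print(f"Per-MCD area: ~{mcd_mm2:.1f} mm^2")                          # ~37.5 mm^2
      print(f"Headroom to a 700 mm^2 halo part: {700 - total_mm2} mm^2")   # 167 mm^2
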
  22. I despise (hate is a strong word) GDDR6X with a passion. It just runs too bloody hot for its own good. I'm glad AMD hasn't bothered with it.

    Different architectures have different memory bandwidth requirements. I very much doubt the XTX is going to come anywhere near the 4090, given the number of shader cores and their clock speeds. Ada is touching nearly 3GHz while RDNA3 appears to be capped at around 2.5GHz, for whatever reason.

    DP1.4 doesn't matter to 99.99% of people out there. No one in their right mind would bother with an 8K 120Hz display.

    The 4090 is a halo product, so obviously rich blokes are going to line up to get their mitts on one. It's all about bragging rights.

    AMD's flagships rarely manage to match Nvidia's finest. Why? Because, apparently, AMD doesn't like to make large dies. It's not like they can't design a 600mm2 die; they absolutely can. It's just that they've never bothered... for some reason. It's been like that for as long as I can remember. In my late teens, the GTX280 was all the rage: the fastest bloody GPU in the world, with an enormous (for the time) GT200 die at 576mm2 and humongous 512-bit GDDR3 vRAM. Meanwhile, AMD was taking pot-shots at the "Yamato-class" GTX280 with their much, much smaller 282mm2 RV790 - in the guise of the HD4890 - with a teeny, tiny 256-bit GDDR5 bus and a much lower price tag. That's been AMD/ATI's strategy for well over a decade!
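
    The 512-bit GDDR3 vs. 256-bit GDDR5 point is a nice illustration of why bus width alone doesn't tell the whole story. A quick sketch using roughly the reference memory data rates (~2.2 GT/s GDDR3 on the GTX280, ~3.9 GT/s GDDR5 on the HD4890):

      # Peak memory bandwidth = (bus width in bits / 8) * effective data rate in GT/s
      def bandwidth_gb_s(bus_bits, data_rate_gt_s):
          return bus_bits / 8 * data_rate_gt_s

      print("GTX280, 512-bit GDDR3 @ ~2.2 GT/s:", round(bandwidth_gb_s(512, 2.214), 1), "GB/s")  # ~141.7
      print("HD4890, 256-bit GDDR5 @ ~3.9 GT/s:", round(bandwidth_gb_s(256, 3.9), 1), "GB/s")    # ~124.8
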
  23. While I'm far from being an Nvidia fanboy (my current GPU is an RX6600 and the previous one was an R9-380), I think AMD can't quite compete with Nvidia on the MSRP front... yet. RDNA2 has inferior RT, and on my vanilla RX6600 it's practically a gimmick. I very much doubt RDNA3 is going to finally catch up with Ampere, let alone Ada, when it comes to RT. Then there's the matter of DLSS. As much as I love the open-source nature of FSR 2.0, it just can't compete with DLSS. And FSR 3.0's quality and performance remain to be seen. I'm not holding my breath, suffice it to say.