Man

Member
  • Posts

    339
  • Joined

  • Last visited

Awards

This user doesn't have any awards

About Man

  • Birthday Sep 13, 1990

Contact Methods

  • Reddit
    u/Devgel
  • Twitter
    @MrDevgel

Profile Information

  • Gender
    Male

System

  • CPU
    Core 2 Duo E8400
  • Motherboard
    Intel Q35 Express
  • RAM
    2 x 2GB DDR2 800MHz
  • GPU
    Nvidia GT440 (1GB)
  • Case
    Dell Optiplex 755
  • Display(s)
    Asus VH222
  • Keyboard
    Dell SK-8135
  • Operating System
    Windows 8.1 Pro

  1. According to a year-old YouTube video (linked below), it's impossible to run the 4090 below 905mV, for some reason. I was just wondering if that's the case with other RTX cards as well? I'm asking because most Ada cards have enormous heatsinks and I'd really like to make a passively cooled, super power-efficient custom profile. Ideally, I'd like to run the card at or near idle voltages (~700mV), even if that means running it at ~2GHz or lower. So, is it even possible, or are all Ada cards voltage-locked to 905mV? (There's a rough clock-lock sketch after this list.)
  2. Actually, you're misinformed! The reason Intel nuked AVX-512 is big.LITTLE. The little cores don't support AVX-512 instructions, just the big cores. As you can imagine, that created a lot of headaches for users, so instead of fixing the issue by pinning AVX-512 workloads to the big cores, Intel downright amputated it from their CPUs. Kind of a... d!ck move, if you ask me!

     That'd be a little... unfair, seeing that the i5-12600K has a pretty sizeable 20MB L3 cache, almost twice as much as Alder/Raptor Lake i3s.

     Umm... no?! The 7900XTX's power consumption is less than 50W more than the 4080, which is like what, +10%? That's a reasonable trade-off 'cause you're also getting a ~10% performance bump in bandwidth-intensive games. Let's not forget that the 7900XTX has a much wider bus (384-bit vs. 256-bit) with 24GB of vRAM (12 GDDR6 chips vs. the 4080's 8), and a massive last-level cache (96MB vs. 64MB), so it's actually more efficient, especially when you consider that we are comparing a chiplet design with a monolithic die.

     Sure, Intel's doing okay, if you move the goalposts! But the thing is, these unknown Chinese GPUs don't have Intel's money. I mean, Intel nabbed Raja Koduri, the fellow behind AMD's GCN and early RDNA architectures. That's some serious pedigree. So yeah, I don't think Intel is doing great... considering the kind of money and expertise they've (probably) thrown into Arc.

     Well, first-gen K10s (vanilla Phenom) were marred by silicon issues (GloFo's 65nm was a mess), so they couldn't 'rev' up much higher than 2.5GHz. But Phenom IIs, fabbed on 45nm, were actually quite competitive. Not top dogs, but competitive. I mean, AMD was taking pot-shots at premium Core 2 Duos with tri-core Phenom II X3s and quad-core Athlon II X4s with smaller cache pools than full-blown Phenoms. And then, of course, there were hexa-core Phenom II X6s for productivity tasks, to compete with premium Core 2 Quads. Frankly, I regretted getting a Core 2 Duo E8400 instead of a Phenom II X3 or perhaps an Athlon II X4! In any case, it was Bulldozer where things went down the sh!tter. But on the bright side, AMD only had two generations' worth of 'duds.' They quickly learnt their lesson and made all the right choices with Ryzen, unlike Intel, who stuck with Netburst for what... 7+ years?!
  3. Sure, by throwing power efficiency out the window and pushing the voltage/frequency curve way too far... not unlike late-model Pentium Ds! Notice the Ryzen 7 7700X's power consumption (198W) and compare it with the i5-13600K (255W). To put things into perspective, the i5-13600K's power consumption is a mere 7W less than the i9-7900X, a so-called "HEDT" CPU. That's kind of... bad!

     It's a sign of things to come. Intel finally realizes that it'll take years to catch up with Ryzen. Either that or 14th Gen. is going to be revolutionary on the level of Conroe (Core 2), or at least Sandy Bridge, which I highly doubt! Check the comparison between the i7-7700K and i3-13100F below.

     I can see some point to big.LITTLE in the mobile space, but desktop? I'd much rather take grown-up P-cores than baby E-cores, which are more or less worthless. Personally, I think Intel mostly bothered with big.LITTLE to skew multi-core synthetic benchmark results in their favor! Plus, they nuked AVX-512 because of big.LITTLE, so...

     Just because Intel calls it "7nm" doesn't mean it can keep up with TSMC's 7nm N7. Besides, N7 is old news. For perspective, N5 was a full node jump over N7 and the upcoming N3 will be a full node jump over N5. Even Samsung is now well ahead of Intel with their 3nm GAAFET, which is 'supposedly' on par with N4 or at least N5, both of which are still on FinFET. I can only imagine how things will improve once TSMC finally moves to GAAFET.

     They had a shot with Larrabee but nope, they just dumped it in order to focus their foundries' output on booming Core 2 sales.

     It's an okay choice, I suppose, if you're only going to play modern AAAs. But aside from that it's pretty much useless. However, I "do" want a third player in the dGPU market.

     Everyone does that, mate. i5 CPUs are essentially cut-down i7s!

     While there's indeed an IPC uplift, as a 4.1GHz i3-12100F matches a 5.0GHz i7-7700K, this difference most likely comes primarily from the larger L2 and L3 cache pools (256KB/8MB vs. 1.25MB/12MB), as well as DDR5 memory. The i3-12100F simply has far more bandwidth to play with.
  4. I've closely followed CPUs for the past two decades (I'm 34, by the way) and, honestly, I believe Intel has never been in a worse position. Their consumer CPU business is in shambles, and even in the datacenter market, their share is steadily declining. They're currently in the midst of a CPU rebrand, desperately retiring the iconic i3/5/7/9 in an attempt to change consumer perception. Their CPU architecture peaked with Kaby Lake (i7-7700K) or perhaps the short-lived i9-9900K, which was completely annihilated by Zen 2 (Ryzen 7 3700X). Their feeble attempt at "big.LITTLE" has failed miserably, lacking any redeeming qualities. Their foundry is falling far behind the likes of TSMC and even Samsung, becoming a cash sinkhole. Their newly established GPU division has struggled to gain any traction, securing less than 1% market share, and that foray into discrete GPUs looks like an even bigger misstep now that crypto mining is dead for the foreseeable future.

     Despite all these challenges, Intel seems unwilling to change their ways. They persist in changing the socket every two generations, locking clock multipliers, restricting user voltage tweaks, limiting overclocking to high-end chipsets, and just screwing over users by doing things like disabling AVX-512. The only thing they're willing to change is the brand, like that's going to help! Given Intel's deep-rooted bureaucracy and anti-consumer practices, I simply don't see how the company can survive AMD's relentless assault.

     Now, some might argue: "What about Netburst, a.k.a. Pentium 4 and Pentium D?" Well, at least back then, their foundry business was a force to be reckoned with. They were rolling out 65nm CPUs in early 2006, at least a year ahead of TSMC and everyone else. So, at least that side of the business was at the top. But now... well, I'm sure you already know!
  5. For some reason, I absolutely miss these cartoon characters, even though the teenage me found them extremely cringe (I'm 34). In any case, I doubt you can get much out of that 9500GT, which essentially falls between the GT210 and GT220 in terms of core/texture count. But since it says "512MB GDDR," you've won the vRAM lottery: the 9500GT was available with either DDR2 or GDDR3, and the latter offers far more memory bandwidth. It's a real shame you didn't get the 9600GT, though, which is literally twice the card and was basically the 3060Ti of its time.
  6. Currently have an old Xeon E3-1245. Thinking about moving to either a Ryzen 5 3600 or an i3-12100F. Now, I know for a fact that overclocking is possible on the R5 3600 + B450/550. However, I'm not sure if the same is true for Intel Alder Lake. I'm not particularly interested in BCLK overclocking, as I'm sure that feature is only supported by high-end motherboards. I just want that i3 to run at its peak 4.3GHz turbo frequency on all cores, in case I do end up with that CPU. The question is, do H610 and B660 support that, or is it something exclusive to Z-series motherboards? (There's a quick all-core clock check sketch after this list.) I'd also like to play around with undervolting, so I'm wondering whether it's possible to tweak the vcore of the i3-12100F on H610 and B660 chipsets via either the BIOS or (preferably) ThrottleStop. Or is it disabled thanks to the 'Plundervolt' fiasco? Thanks in advance!
  7. Avast? But... why?! EDIT: Oops, just realized this thread was started a decade ago. All is forgiven! But in any case, most people don't need anything 'fancier' than the Windows Security that comes bundled with Windows 10 and 11. It's 100% free and works well enough, as long as you don't wander too far off the beaten path, if you catch my drift.
  8. I don't see how that's relevant. While they definitely weren't free, I doubt you can "buy" these magazines anymore even if you want to, unless they still happen to be in print or 'digital circulation' (i.e. ebooks, etc.) - which they clearly aren't. It's mere archiving, i.e. preservation of information. It's perfectly legal, though a lot of people - mostly publishers - are hellbent on turning it into a grey area lately!
  9. Are there any MMO gaming mice that offer full software customizability on a Mac? I'm looking for a mouse with at least 9-10 customizable buttons, but it's pretty darn difficult to find one that actually comes with full-featured Mac software for remapping and the like. Thanks.
  10. Frankly, I don't understand the hate. It's great for the environment, and I'm not even a tree-hugger (so to speak). For example, I had an i5-2400 not too long ago. And as you all know, non-HT quads kinda suck these days. So, instead of sending that CPU to a landfill where it rots for eternity, it would've been a lot better to just allow unlockable Hyper-Threading, thereby turning that 2400 into an eight-thread 2600, which is still "surprisingly" useful in 2022 - as long as your aim is 30FPS, that is. Ditto for unlocked multipliers, cache, cores, iGPUs and so on. I'm a fan of the idea... as long as the fees are reasonable, especially for older products. I mean, it'd be nuts to charge 20 bucks to unlock a 10-year-old CPU. Just unlock them for free once they grow old and weary.
  11. I guess they're going to fire up the ol' 14nm line at Samsung to fab you that GTX 1050 in late 2022... That costs monies, ya know?
  12. 110-150MHz of OC headroom? I guess Nvidia - or at least the AIB partners - got tired of so-called "pro gamers" returning their new cards within a day or two of purchase. Does anyone else remember the blown GTX 590 fiasco, not unlike the Galaxy Note 7?! "Ooh, gotta overclock!!!" Anyhow, undervolting is the way to go, even if you have to sacrifice a bit of performance. 5% lower performance at 30-40% higher overall efficiency is a great trade-off (there's a quick bit of back-of-the-envelope math after this list). P.S. I really like Nvidia's current 'reference' designs, which seem like an open-air/blower hybrid. Shame they're a pain to take apart.
  13. Then you completely missed my point! I was talking about 1080p and 1440p ultra-wides. And besides, anything beyond 4K is worthless... unless we are talking about 50" or larger displays. There are only so many pixels the eyes can see. Microscopes exist for a reason, after all!
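Re: the voltage floor in post 1 (and the undervolting talk in post 12): as far as I know, neither nvidia-smi nor NVML exposes a voltage slider at all, so you can't force the card under the ~905mV floor baked into the VBIOS V/F curve. The practical workaround is to lock the core clock to a low point on that curve and drop the power limit, which is roughly what a flattened Afterburner curve achieves. Here's a minimal Python sketch, assuming the nvidia-ml-py ('pynvml') bindings are installed and the script runs with admin/root rights; the clock and wattage numbers are illustrative placeholders, not tuned values.

    # Sketch: cap an RTX card's core clock and power limit for a quiet,
    # low-power profile. NVML has no direct voltage control, so the VBIOS
    # voltage floor still applies; this only pins the clock to a low point
    # on the V/F curve and clamps board power.
    import pynvml

    TARGET_MAX_CLOCK_MHZ = 1500   # illustrative, not a recommendation
    TARGET_POWER_WATTS = 150      # illustrative, not a recommendation

    pynvml.nvmlInit()
    try:
        gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

        # Lock the GPU core clock into a narrow, low range (MHz).
        pynvml.nvmlDeviceSetGpuLockedClocks(gpu, 210, TARGET_MAX_CLOCK_MHZ)

        # Clamp the board power limit, staying inside the allowed range (mW).
        min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(gpu)
        target_mw = max(min_mw, min(TARGET_POWER_WATTS * 1000, max_mw))
        pynvml.nvmlDeviceSetPowerManagementLimit(gpu, target_mw)

        print("Clocks capped at", TARGET_MAX_CLOCK_MHZ, "MHz; power limit",
              target_mw // 1000, "W")
    finally:
        pynvml.nvmlShutdown()

The same effect is available from the command line with nvidia-smi's -lgc (lock GPU clocks) and -pl (power limit) switches, and -rgc restores the default clock behaviour.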
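On the all-core turbo part of post 6: whether a locked chip holds its rated boost on every core mostly comes down to the board's power limits rather than the chipset, so the practical first step is simply to measure what the cores actually run at under an all-core load. A minimal, Linux-only Python sketch that reads the kernel's standard cpufreq sysfs files; the 4300MHz target is just the i3-12100F's rated max turbo, assumed here for illustration, not a guarantee of what any given board sustains.

    # Sketch: report each core's current clock while an all-core load is
    # running, and flag cores that sit noticeably below the target turbo.
    import glob

    TARGET_MHZ = 4300  # rated max turbo of the CPU being checked (assumed)

    paths = sorted(glob.glob(
        "/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"))
    for path in paths:
        core = path.split("/")[5]               # e.g. "cpu0"
        mhz = int(open(path).read()) // 1000    # sysfs reports kHz
        status = "OK" if mhz >= TARGET_MHZ - 100 else "below target"
        print(f"{core}: {mhz} MHz ({status})")

Run it while a stress test is loading every thread; if all cores hover near the target, there's nothing left to 'overclock' on a locked chip anyway.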
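And to put quick numbers on the trade-off from post 12: efficiency is just performance divided by power, so a small performance loss against a large power cut compounds nicely. Back-of-the-envelope math using only the illustrative 5% / 30% figures from that post, not measurements:

    # Sketch: normalised perf-per-watt comparison for an undervolt.
    stock_perf, stock_power = 1.00, 1.00   # baseline (normalised)
    uv_perf, uv_power = 0.95, 0.70         # ~5% perf lost, ~30% power saved

    stock_eff = stock_perf / stock_power
    uv_eff = uv_perf / uv_power

    print(f"Efficiency gain: {uv_eff / stock_eff - 1:.0%}")  # ~36%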