
Intel Demoes i9-11900K Against Ryzen 9 5900X

Random_Person1234

Seems like Intel will sell the 11900K at the 5900X's price point. Makes no sense. Four fewer cores with single-digit gains in gaming? That is nuts.

Also waiting for reviews to show the power usage and temps. Will this be an oven, or will the power usage actually be decent?

If the 11900K and 5900X end up at the same price point, I would always go with the 5900X. Motherboards and cooling are cheaper too.

 

The 5800X is a single-CCD chip, so in some games it should perform better than the 5900X, which has two CCDs.

 

Anyway, I think this 11th gen will be bad. Hopefully the 10nm CPUs will be better. Kinda disappointing, tbh.

QUOTE ME FOR AN ANSWER.

 

Main PC:

Spoiler

| Ryzen 7 3700X, OC to 4.2GHz @ 1.3V (67°C) or 4.4GHz @ 1.456V (87°C) || Asus Strix 5700 XT, +50 core, +50 memory, +50 power (not a great overclocker) || Asus Strix B550-A || G.Skill Trident Z Neo RGB 32GB 3600MHz CL16-19-19-19-39, OC to 3733MHz at the same timings || Cooler Master ML360 RGB AIO || Phanteks P500A Digital || Thermaltake Toughpower Grand RGB 750W 80+ Gold || Samsung 850 250GB and Adata SX6000 Lite 500GB || Toshiba 5400RPM 1TB || Asus ROG Theta 7.1 || Asus ROG Claymore || Asus Gladius 2 Origin gaming mouse || Monitor 1: Asus 1080p 144Hz || Monitor 2: AOC 1080p 75Hz ||

Test Rig.

Spoiler

Ryzen 5 3400G || Gigabyte B450 S2H || HyperX Fury 2x4GB 2666MHz CL16 || Stock cooler || Antec NX100 || Silverstone Essential 400W || Transcend SSD 220S 480GB ||

Just Sold

Spoiler

| i3 9100F || MSI Gaming X GTX 1050 Ti || MSI Z390-A Pro || Kingston 1x16GB 2400MHz CL17 || Stock cooler || Kolink Horizon RGB || Corsair CV550 550W || PNY CS900 120GB ||

 

Tier lists for building a PC.

 

Motherboard tier list. Tier A for overclocking a 5950X, Tier B for overclocking a 5900X, Tier C for overclocking a 5800X, Tier D for overclocking a 5600X, Tier F for 4/6-core CPUs at stock. Tier E: avoid.

(Case airflow also matters, as does whether you are using a downdraft air cooler.)

Spoiler

 

GPU tier list. RTX 3000 and RX 6000 not included since there aren't many reviews yet. Tier S for water cooling, Tiers A and B for overclocking, Tier C for stock, Tier D avoid.

(You can overclock Tier C just fine, but it can get very loud, which is why it is not recommended for overclocking; same with Tier D.)

Spoiler

 

PSU tier list. Tier A for RTX 3000, Vega and RX 6000. Tier B for anything else. Tier C for cheap/iGPU builds. Tiers D and E: avoid.

(RTX 3000/RX 6000 might run just fine on a higher-wattage Tier B unit; an RTX 3070 runs fine with Tier B units.)

Spoiler

 

CPU cooler tier list. Tiers 1 & 2 for power-hungry CPUs with an overclock. Tiers 3 & 4 for overclocking Ryzen 3/5/7 or lower-power Intel CPUs. Tier 5 for overclocking low-end CPUs or 4/6-core Ryzen. Tiers 6 & 7 for stock. Tiers 8 & 9 are Ryzen stock cooler performance. Do not waste your money!

Spoiler

 

Storage tier list. Tier A for moving files/OS. Tier B for OS/games. Tier C for games. Tier D for budget PCs. Tier E: if on sale, not the worst, but not good.

(Take it with a grain of salt; I use a Tier C drive for my OS myself.)

Spoiler

 

Case tier list: work in progress. Most Phanteks airflow-series cases are already done!

Ask me anything :)


I hope I'm not the only one that started reading it as "de-mode" and then realized it's "demo-ed"...

 

Or "de-moes" rather than "demo-os"...

 

Is it time to go to bed yet...


I'm quite sure Rocket Lake is that "stopgap"-esque release. I can't really see 10900K users (and especially Ryzen 5900X users) looking at this and being interested.

 

The one people are really interested in is Alder Lake, because that is by far and away a pretty darn big change, not just for Intel but potentially for x86 processor design. Rocket Lake is significant in the sense that it uses backported Sunny Cove cores, and the 14nm backport likely led to limitations that kept the top-end SKU at 8C/16T.

The Workhorse (AMD-powered custom desktop)

CPU: AMD Ryzen 7 3700X | GPU: MSI X Trio GeForce RTX 2070S | RAM: XPG Spectrix D60G 32GB DDR4-3200 | Storage: 512GB XPG SX8200P + 2TB 7200RPM Seagate Barracuda Compute | OS: Microsoft Windows 10 Pro

 

The Portable Workstation (Apple MacBook Pro 16" 2021)

SoC: Apple M1 Max (8+2 core CPU w/ 32-core GPU) | RAM: 32GB unified LPDDR5 | Storage: 1TB PCIe Gen4 SSD | OS: macOS Monterey

 

The Communicator (Apple iPhone 13 Pro)

SoC: Apple A15 Bionic | RAM: 6GB LPDDR4X | Storage: 128GB internal w/ NVMe controller | Display: 6.1" 2532x1170 "Super Retina XDR" OLED with VRR at up to 120Hz | OS: iOS 15.1


If the 11900K is priced similarly to the 5900X, it's a really big oof for Intel; I'd take four more cores over single-digit improvements that I won't notice. Hopefully their next-generation CPUs will actually be better, especially on power consumption. If it continues like this on the same 14nm process, Intel might end up with consumer CPUs that surpass AMD's FX-9590 and its 220W TDP.


7 minutes ago, Valentyn said:

Me with my 5950X that boosts to 5.2GHz for single core. [screenshot of boost clocks attached]

When you have an SSD and you accidentally restart your PC :)



20 minutes ago, AndreiArgeanu said:

If the 11900K is priced similarly to the 5900X, it's a really big oof for Intel; I'd take four more cores over single-digit improvements that I won't notice. Hopefully their next-generation CPUs will actually be better, especially on power consumption. If it continues like this on the same 14nm process, Intel might end up with consumer CPUs that surpass AMD's FX-9590 and its 220W TDP.

Shit, seems like it would be DOA if you could actually get your hands on a 5900X these days. About the only thing going for it would be that it'll be in stock everywhere.


8 minutes ago, SavageNeo said:

When you have an SSD and you accidentally restart your PC :)

When I finally moved to all-NVMe storage and my only SATA SSDs/HDDs are external RAIDs.

5950X | NH-D15S | 64GB 3200MHz | RTX 3090 | ASUS PG348Q + MG278Q

 


I'm interested in getting 11th gen, if they're priced right, of course.

 

Patiently waiting for reviews

-sigh- feeling like I'm being too negative lately


48 minutes ago, LAwLz said:

Intel bad. 

 

AMD good. 

Well, it's not like the high-end Intel K SKUs aren't already hotter and more power-hungry; these have an even higher TDP and PL2, so it's not like this isn't true.

 

Not that this should stop a silly joke anyway :)

 

Spoiler

Last time I ever cared about power or heat: Never 😉

 


5 hours ago, comander said:

At the same timings, using dual rank RAM can give up to 20% better performance (extreme case is 7-zip... even AOTS benefits by 12%).

This has been known for many years, although it only seemed to be "rediscovered" at the Zen 3 launch for some reason. Note that Intel CPUs also benefit from this. I first encountered it years ago when, by chance, I got dual rank RAM on my first 6700K system and couldn't work out why my second 6700K system, which had single rank RAM, performed much worse. Ideally any testing would be like for like.

 

5 hours ago, comander said:

I think a lot of what happened with reviewers is that the notion of "more RAM doesn't matter" resulted in setups with a limited number of modules (or lower-capacity/lower-rank modules; MORE ranks usually just means they plop in more ICs) being used... and then it was only natural to run those modules FAST. AMD's setup benefits less from this vs. somewhat slower RAM with more ranks.

That's a side question I wonder about at times: just how much RAM should be present in a gaming system today? I don't see a compelling need to go beyond 16GB for general gaming myself. Some specific games, especially if heavily modded, might need more, but I'll leave those exceptions out for the masses. The problem then is that 8GB modules today are all single rank; you'd have to go back years to find dual rank modules at 8GB capacity. Even 16GB modules are going single rank, so that's not even a safe choice for ultimate performance seekers any more. 4GB modules seem to be on the way out, but running 4x4GB could be one way to get 16GB with 2 DPC (not strictly the same as dual rank modules, but for practical purposes it is). 4x8GB is probably the safest way to do it now, but you then get more RAM than might really be necessary.

 

5 hours ago, comander said:

Also... who the heck uses top end cards at 1080p?

That's more a test requirement in order to see the small differences these CPUs offer. IMO you reach "good enough" performance at a lower CPU tier than these best-of-the-best offerings. Still, for the manufacturers it is a marketing win to be able to claim the best-performing CPU in some area.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


On 12/30/2020 at 10:17 PM, CarlBar said:

 

Intel's latest 14nm nodes are actually ahead of their planned 10nm performance. I'd have to dig around to be sure my memory is accurate, but I seem to recall 7nm+ is supposed to have about the same perf-per-watt advantage over 7nm as 5nm does over 7nm+. I believe it works out to a cumulative 60% perf per watt between 7nm and 5nm.

If I remember right, 10nm+ is when they expect to beat 14nm++++.


13 minutes ago, cj09beira said:

If I remember right, 10nm+ is when they expect to beat 14nm++++.

The former is not used any more, and the latter never existed.

 

Look at Tiger Lake for a practical example. This is what Intel now calls 10nm SuperFin (10SF). Compared to the 10nm version used for Ice Lake, they've overcome the clock wall and it runs much more efficiently now. The first Tiger Lake offerings went up to 4.8 GHz, and the just-announced ones nudge 5.0 GHz. Also, these are mobile CPUs; if implemented in a desktop form, more might be possible. Depending on how you count it, Tiger Lake is also 1 to 2 architecture updates on from Skylake, with the IPC improvements that come with that.



@porina

 

Why would single or dual rank affect things? I always thought single rank was better because it usually clocks higher and can run at tighter timings than dual rank RAM. But at the same speed and same timings, why would there be a 20% performance difference between single and dual rank RAM?


6 minutes ago, RejZoR said:

@porina

 

Why would single or dual rank affect things? I always thought single rank was better because it usually clocks higher and can run at tighter timings than dual rank RAM. But at the same speed and same timings, why would there be a 20% performance difference between single and dual rank RAM?

One dual rank DIMM is equivalent to two single rank DIMMs of the same speed and timings.

 

Yes, in some cases single rank DIMMs will OC way better, but the board and CPU IMC still play a key role there, as does the quality of the chips in either RAM setup.

CPU: Intel i7 7700K | GPU: ROG Strix GTX 1080Ti | PSU: Seasonic X-1250 (faulty) | Memory: Corsair Vengeance RGB 3200MHz 16GB | OS Drive: Western Digital Black NVMe 250GB | Game Drive(s): Samsung 970 Evo 500GB, Hitachi 7K3000 3TB 3.5" | Motherboard: Gigabyte Z270x Gaming 7 | Case: Fractal Design Define S (No Window and modded front Panel) | Monitor(s): Dell S2716DG G-Sync 144Hz, Acer R240HY 60Hz (Dead) | Keyboard: G.SKILL RIPJAWS KM780R MX | Mouse: Steelseries Sensei 310 (Struck-out parts are sold or dead, awaiting Zen 2 parts)


13 minutes ago, RejZoR said:

@porina

 

Why would single or dual rank affect things? I always thought single rank was better because it usually clocks higher and can run at tighter timings than dual rank RAM. But at the same speed and same timings, why would there be a 20% performance difference between single and dual rank RAM?

 

Quote

A memory rank is a set of DRAM chips connected to the same chip select, which are therefore accessed simultaneously. In practice all DRAM chips share all of the other command and control signals, and only the chip select pins for each rank are separate (the data pins are shared across ranks).

 

 

Quote

Multi-rank modules allow several open DRAM pages (rows) in each rank (typically eight pages per rank). This increases the possibility of getting a hit on an already open row address. The performance gain that can be achieved is highly dependent on the application and the memory controller's ability to take advantage of open pages.
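
To put a rough number on that open-page effect, here's a toy simulation I threw together (my own illustration, not from the quote or any real memory controller): it assumes a small working set of "hot" rows, a naive open-page policy, and simply counts how often an access finds its row already open. Doubling the ranks doubles the number of banks that can each hold a row open.

```python
# Toy open-page model: more ranks -> more banks -> more rows held open at once.
# Illustrative only; the hot-row working set and access pattern are invented.
import random

def open_page_hit_rate(ranks, banks_per_rank=8, hot_rows=32,
                       accesses=200_000, seed=0):
    rng = random.Random(seed)
    # Pin each hot row to a fixed (rank, bank, row) location.
    locations = [(rng.randrange(ranks), rng.randrange(banks_per_rank), i)
                 for i in range(hot_rows)]
    open_row = {}  # (rank, bank) -> row currently held open in that bank
    hits = 0
    for _ in range(accesses):
        rank, bank, row = locations[rng.randrange(hot_rows)]
        if open_row.get((rank, bank)) == row:
            hits += 1                    # row-buffer hit: no precharge/activate
        open_row[(rank, bank)] = row     # otherwise the bank opens the new row
    return hits / accesses

for r in (1, 2):
    print(f"{r} rank(s): ~{open_page_hit_rate(r):.0%} open-page hit rate")
```

With these made-up parameters the two-rank case lands at a noticeably higher hit rate than the single-rank case, which is the qualitative point of the quote; as it says, real gains depend entirely on the application and the controller's page policy.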

 


Maybe it's #toosoon - the measurement isn't 1080p anymore.

The new standard is 1440p; that should be the new measurement if Intel/AMD are going to put their one-upping of each other on display.

 

I'm surprised Intel touted this, honestly. It doesn't seem to be enough to convert a Zen 2 owner, and certainly not enough to convert a Zen 3 owner.

We can reasonably assume this chip will be hotter, for a minimal 1080p gain in a very limited set of titles.

For the enthusiasts we claim to be, this new chip isn't delivering enough, imo.

 

Until Intel sorts out 7nm, meh.

 

Also, I just bought a 5900X, so I'm kinda sorta talking to myself to justify the purchase.

Potato Revamp

 

CPU: AMD 5900X || GPU: Nvidia RTX 3080 || RAM: 32GB Trident Z Neo CL16 || Case: Fractal Torrent Compact || CPU Cooler: Scythe Fuma 2 || PSU: Corsair RM850 Gold || Storage: ADATA SX8200 Pro (1TB), 2x Samsung Evo 870 (2TB)


2 hours ago, RejZoR said:

@porina

 

Why would single or dual rank affect things? I always thought single rank was better because it usually clocks higher and can run at tighter timings than dual rank RAM. But at the same speed and same timings, why would there be a 20% performance difference between single and dual rank RAM?

The topic has been covered again and again in the past, most recently around the Zen 3 launch. I'd suggest looking up the GN video on it as a starting point. It was one of their least bad ones.



47 minutes ago, porina said:

The topic has been covered again and again in the past, most recently around the Zen 3 launch. I'd suggest looking up the GN video on it as a starting point. It was one of their least bad ones.

It really wasn't. Not to mention how insanely difficult it is to dig out actual info on the RAM sticks you're buying. It's hard to find out if a kit is SR or DR, and finding out which die revision it is, well, that's almost mission impossible. And I always got the impression everyone was praising SR memory sticks. Seriously, I've been building PCs for 20 years and this is the first time I've heard such a thing. Go figure.


30 minutes ago, RejZoR said:

It really wasn't. Not to mention how insanely difficult it is to dig out actual info on the RAM sticks you're buying. It's hard to find out if a kit is SR or DR, and finding out which die revision it is, well, that's almost mission impossible. And I always got the impression everyone was praising SR memory sticks. Seriously, I've been building PCs for 20 years and this is the first time I've heard such a thing. Go figure.

It really isn't new. I first heard of it only about 5 years ago, but it was around far longer than that. Because Zen 3 is the new hotness, it got rediscovered when different people got different benchmark results. The SR > DR thing is mostly a misunderstanding and irrelevant to anyone not overclocking RAM hardcore. In practice DR > SR.



32 minutes ago, RejZoR said:

It really wasn't. Not to mention how insanely difficult it is to dig out actual info on the RAM sticks you're buying. It's hard to find out if a kit is SR or DR, and finding out which die revision it is, well, that's almost mission impossible. And I always got the impression everyone was praising SR memory sticks. Seriously, I've been building PCs for 20 years and this is the first time I've heard such a thing. Go figure.

 

SR can get higher clocks, but it's fair(er) to compare two SR modules to one DR module, as there are only two memory channels, and many motherboard manuals have something like this in them:

[Image: memory configuration/speed support table from a motherboard manual]

Basically, mixing SR/DR will only let you operate the memory at the slower spec, and you only get the higher spec with one module in each channel, because two DR > two SR > four SR > four in any mixed configuration.

 

A DR module is essentially better than two SR modules, but it also means that if you want a higher speed you can't populate the other slot on the same channel.

 

 


9 hours ago, kewtz said:

Maybe it's #toosoon - the measurement isn't 1080p anymore.

The new standard is 1440p; that should be the new measurement if Intel/AMD are going to put their one-upping of each other on display.

These are CPU benchmarks; upping the resolution only increases the chance of encountering a bottleneck somewhere else in the system. For GPU or full-system benchmarks 1440p+ makes sense, but if you're doing direct CPU vs CPU comparisons, then the lower the resolution the better (within reason).
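
To illustrate why with some made-up numbers (the max() model below is a big simplification of how CPU and GPU work actually overlap, and none of the figures come from real benchmarks):

```python
# Crude bottleneck model: per-frame time ~ max(CPU time, GPU time).
# GPU time scales with pixel count; CPU time does not. All numbers invented.

def fps(cpu_ms, gpu_ms_at_1080p, pixel_ratio):
    gpu_ms = gpu_ms_at_1080p * pixel_ratio      # more pixels -> longer GPU frame
    return 1000.0 / max(cpu_ms, gpu_ms)

resolutions = {"1080p": 1.0, "1440p": 1.78, "2160p": 4.0}   # pixels vs. 1080p
for name, ratio in resolutions.items():
    fast = fps(cpu_ms=5.0, gpu_ms_at_1080p=4.0, pixel_ratio=ratio)  # "faster" CPU
    slow = fps(cpu_ms=5.5, gpu_ms_at_1080p=4.0, pixel_ratio=ratio)  # "slower" CPU
    print(f"{name}: {fast:.0f} fps vs {slow:.0f} fps "
          f"({(fast / slow - 1) * 100:+.0f}% gap)")
```

At 1080p the 10% CPU difference shows up directly; at 1440p and above both chips sit behind the same GPU limit and the gap collapses to zero, which is exactly why CPU reviews drop the resolution.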


So from a statistics point of view this is nearly all within the margin of error, with only one game being just barely statistically significant. Ergo, Intel is still creating a worse part overall (when you factor in thermals, die size, etc.).
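
For anyone curious what that margin-of-error check actually looks like, here's a minimal sketch; the run-to-run FPS numbers are invented for illustration, not taken from Intel's slides, and a proper analysis would run a real t-test on the reviewer's raw data.

```python
# Rough two-sample significance check: is the mean difference bigger than
# roughly twice its standard error? The FPS runs below are made up.
from math import sqrt
from statistics import mean, stdev

def significant(runs_a, runs_b):
    se = sqrt(stdev(runs_a) ** 2 / len(runs_a) + stdev(runs_b) ** 2 / len(runs_b))
    diff = mean(runs_a) - mean(runs_b)
    return abs(diff) > 2 * se, diff, 2 * se

fps_11900k = [188, 192, 190, 186, 191]   # hypothetical run-to-run results
fps_5900x  = [185, 191, 187, 189, 190]
sig, diff, margin = significant(fps_11900k, fps_5900x)
print(f"mean difference {diff:+.1f} fps, ~95% margin ±{margin:.1f} fps -> "
      f"{'statistically significant' if sig else 'within margin of error'}")
```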

RIP Intel

CPU - Ryzen 7 3700X | RAM - 64 GB DDR4 3200MHz | GPU - Nvidia GTX 1660 ti | MOBO -  MSI B550 Gaming Plus

