1 minute ago, Firebird_Gaming said:

Do they not? They hold the majority of both CPU and GPU sales for now. 

Who does? 

Intel HEDT and Server platform enthusiasts: Intel HEDT Xeon/i7 Megathread 

 

Main PC 

CPU: i9 7980XE @4.5GHz/1.22v/-2 AVX offset 

Cooler: EKWB Supremacy Block - custom loop w/360mm +280mm rads 

Motherboard: EVGA X299 Dark 

RAM: 4x8GB HyperX Predator DDR4 @3200MHz CL16 

GPU: Nvidia FE 2060 Super/Corsair HydroX 2070 FE block 

Storage:  1TB MP34 + 1TB 970 Evo + 500GB Atom30 + 250GB 960 Evo 

Optical Drives: LG WH14NS40 

PSU: EVGA 1600W T2 

Case & Fans: Corsair 750D Airflow - 3x Noctua iPPC NF-F12 + 4x Noctua iPPC NF-A14 PWM 

OS: Windows 11

 

Display: LG 27UK650-W (4K 60Hz IPS panel)

Mouse: EVGA X17

Keyboard: Corsair K55 RGB

 

Mobile/Work Devices: 2020 M1 MacBook Air (work computer) - iPhone 13 Pro Max - Apple Watch S3

 

Other Misc Devices: iPod Video (Gen 5.5E, 128GB SD card swap, running Rockbox), Nintendo Switch


7 minutes ago, Firebird_Gaming said:

Do they not? They hold the majority of both CPU and GPU sales for now. 

AMD has been fabless since they spun their fabrication operations off into GlobalFoundries (GloFo). AMD also doesn't own or even hold a majority stake in GloFo; another entity does. TSMC and Samsung are the only two manufacturers that can make 7nm parts, and as far as I know, Samsung doesn't produce any AMD parts.

 

https://www.extremetech.com/computing/276169-amd-moves-all-7nm-cpu-gpu-production-to-tsmc


Just now, Firebird_Gaming said:

AMD. 

They don't, AFAIK. Intel still sells the most CPUs, especially in the prebuilt, mobile, server, and IoT spaces (Intel Atoms and such). Nvidia still sells the most GPUs for gaming, workstations, render farms, machine learning, etc. 

 

AMD has a large market share among tech enthusiasts, and among prosumers who are, or know, tech enthusiasts. Their market share is growing, but not massively (though I believe EPYC is doing very well in servers, and Threadripper is now well known for workstations). 

A massive, massive, massive, massive, totally huge number of people aren't very knowledgeable. They've always used Intel, and AMD is hot and cheap and bad, so they continue using Intel. Tons of OEMs will have contracts with Intel and continue using Intel for that reason, and they're also selling to people who implicitly trust Intel, so switching would be dumb anyways. Tons of server farms still run on Intel, and they continue doing so because they're used to it or the higher-ups just can't be arsed to change. The great majority of laptops run on Intel as well, and many IoT devices. 
 

Not everyone is as knowledgeable as you are; there's an insane number of "normies" who just run whatever comes in the magic internet box, and that's always had a blue "Intel Inside" sticker on it, so they trust those ones. Most noobs I've seen running Ryzen are doing so because I or another techie told them it was good, lmao. 



To add to what @Zando Bob said, both Intel and NVIDIA are well diversified and entrenched in what they offer. Intel isn't just a CPU manufacturing company; they make SSDs, wired and wireless network controllers (in fact, people say theirs are the best), and FPGAs and other ASIC prototyping parts. NVIDIA, while they mostly make GPUs, also offers a wide variety of software to take advantage of their hardware, which lets them get into a bunch of sectors.

 

Also of note, hardware companies need software to make their products actually matter. AMD likes to push for open standards, which is fine and all, but I haven't found anything they offer that actually lets a lazy developer make full use of their product. People may balk at things like NVIDIA's GameWorks, but when you're a developer who's pressed for time and your bosses demanded something yesterday, anything that gets your product out the door sooner is seen as helpful.


Nope. 2021 probably, and 2022 is a solid bet for them to be back.

Primary Laptop (Gearsy MK4): Ryzen 9 5900HX, Radeon RX 6800M, Radeon Vega 8 Mobile, 24 GB DDR4 2400 Mhz, 512 GB SSD+1TB SSD, 15.6 in 300 Hz IPS display

2021 Asus ROG Strix G15 Advantage Edition

 

Secondary Laptop (Uni MK2): Ryzen 7 5800HS, Nvidia GTX 1650, Radeon Vega 8 Mobile, 16 GB DDR4 3200 Mhz, 512 GB SSD 

2021 Asus ROG Zephyrus G14 

 

Meme Machine (Uni MK1): Shintel Core i5 7200U, Nvidia GT 940MX, 24 GB DDR4 2133 Mhz, 256 GB SSD+500GB HDD, 15.6 in TN Display 

2016 Acer Aspire E5 575 

 

Retired Laptop (Gearsy MK2): Ryzen 5 2500U, Radeon Vega 8 Mobile, 12 GB 2400 Mhz DDR4, 256 GB NVME SSD, 15.6" 1080p IPS Touchscreen 

2017 HP Envy X360 15z (Ryzen)

 

PC (Gearsy): A6 3650, HD 6530D , 8 GB 1600 Mhz Kingston DDR3, Some Random Mobo Lol, EVGA 450W BT PSU, Stock Cooler, 128 GB Kingston SSD, 1 TB WD Blue 7200 RPM

HP P7 1234 (Yes It's Actually Called That)  RIP 

 

Also, I'm happy to answer any Ryzen Mobile questions if anyone is interested! 


They have a HUGE budget; they will invest a shit ton of money in innovation and the development of new chips, and drop prices as well. Also, AMD doesn't have its own nodes; they're dependent on TSMC, while Intel has its own. That means Intel has much bigger production capacity and can't run out of stock (like AMD is right now). TSMC can't just focus on AMD's production because they have other customers. Also, while TSMC is working on 5nm and Samsung is working on 3.5nm, I'm pretty sure that Intel already HAS 7nm chips right now somewhere in their labs. They're just improving them now, because they don't want to release them imperfect. The reason 10nm is so delayed is that they weren't satisfied with scaling and frequency, but they surely moved on and started work on 7nm at some point. It's like Nvidia: Nvidia could release the Ampere RTX 3000 series today if they wanted to, and they're already working on its successor, even though Ampere is not even released yet.


3 hours ago, Mira Yurizaki said:

To add to what @Zando Bob said, both Intel and NVIDIA are well diverse and entrenched in what they offer. Intel isn't just a CPU manufacturing company, they make SSDs, wired and wireless network controllers (in fact, people say they're the best), and FPGAs and other ASIC prototyping parts. NVIDIA, while they mostly make GPUs, also offer a wide variety of software to take advantage of their hardware, which lets them get into a bunch of sectors.

 

Also of note, hardware companies need software to make their products actually matter. AMD likes to push for open standards, which is fine and all, but I haven't found anything they offer to actually make someone who's a lazy developer make full use of their product. People may balk at such things like NVIDIA's GameWorks, but when you're a developer who's stressed for time and your bosses demanded something yesterday, anything to get your product out the door sooner is seen as helpful.

Edit: I meant to quote @Zando Bob. Whoops.

 

I've used AMD and Intel on and off over the years, and I don't really have any brand loyalty. I've probably owned equal amounts of AMD and Intel products. I currently have both Intel Coffee Lake systems and one Ryzen system. That said, it's impossible for anyone to be completely unbiased. And IMO, people would be dumb if they didn't have some bias, because we learn from the past and try to base our decisions on it. It's only natural to have some preference, and it's a lie for anyone to claim complete objectivity.

 

I do have a small preference towards Intel systems, because in my experience (and to each person this is the most important metric, their own experience) AMD motherboards are a lot more finicky than Intel motherboards. This isn't a new, unknown issue; large review sites also acknowledge that even with the same branding (Aorus, etc.), the quality delta between Intel and AMD motherboards was terrible up until X470. Then there are memory issues, etc.

 

That said, I'm not going to ignore obvious advantages when they are staring me in the face. A small margin of extra cost or slightly worse performance I am more than willing to trade for stability and user friendliness, but when the value proposition is this bad, it isn't easy to overlook. As it stands now, between 9th gen and AMD Ryzen, I can't recommend Intel to pretty much anyone.

 

As for GPUs, I tend to prefer AMD to Nvidia. They tend to offer more value and great performance, and stability seems less of an issue than with motherboard chipsets. But I've had my share of both.

 

So when I hear guys say, "Intel is better even if it costs more," or, "AMD sucks because they are wonky", I can understand where they are coming from and can relate.

 

It's only when they can't even consider the other option, and shut you down completely without even listening to why, that they become stupid fanboys.

 

 

Before you reply to my post, REFRESH. 99.99% chance I edited my post. 

 

My System: i7-13700KF // Corsair iCUE H150i Elite Capellix // MSI MPG Z690 Edge Wifi // 32GB DDR5 G. SKILL RIPJAWS S5 6000 CL32 // Nvidia RTX 4070 Super FE // Corsair 5000D Airflow // Corsair SP120 RGB Pro x7 // Seasonic Focus Plus Gold 850w //1TB ADATA XPG SX8200 Pro/1TB Teamgroup MP33/2TB Seagate 7200RPM Hard Drive // Displays: LG Ultragear 32GP83B x2 // Royal Kludge RK100 // Logitech G Pro X Superlight // Sennheiser DROP PC38x


2 hours ago, Plutosaurus said:

Edit: I meant to quote @Zando Bob. Whoops.

 

I've used AMD and Intel on and off over the years, and I don't really have any brand loyalty. I've probably owned equal amounts of AMD and Intel products. I currently have both Intel Coffee Lake systems and one Ryzen system. That said, it's impossible for anyone to be completely unbiased. And IMO, people would be dumb if they didn't have some bias, because we learn from the past and try to base our decisions on it. It's only natural to have some preference, and it's a lie for anyone to claim complete objectivity.

 

I do have a small preference towards Intel systems, because in my experience (and to each person this is the most important metric, their own experience) AMD motherboards are a lot more finicky than Intel motherboards. This isn't a new, unknown issue; large review sites also acknowledge that even with the same branding (Aorus, etc.), the quality delta between Intel and AMD motherboards was terrible up until X470. Then there are memory issues, etc.

 

That said, I'm not going to ignore obvious advantages when they are staring me in the face. A small margin of extra cost or slightly worse performance I am more than willing to trade for stability and user friendliness, but when the value proposition is this bad, it isn't easy to overlook. As it stands now, between 9th gen and AMD Ryzen, I can't recommend Intel to pretty much anyone.

 

As for GPUs, I tend to prefer AMD to Nvidia. They tend to offer more value and great performance, and stability seems less of an issue than with motherboard chipsets. But I've had my share of both.

 

So when I hear guys say, "Intel is better even if it costs more," or, "AMD sucks because they are wonky", I can understand where they are coming from and can relate.

 

It's only when they can't even consider the other option, and shut you down completely without even listening to why, that they become stupid fanboys.

 

 

Well said

 

Same here, multiple Intel and AMD systems over time (currently 2 AMD, 1 Intel system for CPUs). While AMD is killing it and it's awesome, I still have some reservations. I love what AMD is doing, but honestly, for me most of their high-end SKUs are overkill. I probably piece together a video a week (whether from skiing, cycling, or just GoPro or other videos from trips). It is nice to have everything work smoothly and render quickly. That said, my 9900k crushes it in Premiere and Photoshop; it hasn't really hindered me. But what's more important to me is that when I game, it's the best experience. If my once- or twice-a-week render takes a minute longer, meh. I want the best gaming experience when I have the time to get on. If I were a content creator or full-time streamer, I would probably lean toward a 3900x or 3950x (though streamer-side performance/fps is still maybe best with a 9900k while streaming).

 

I have had a few AMD "wonky" experiences at this point, and I can't chalk it up to random chance. They make some good stuff, with lots of cores, but it's finicky. If I have the choice of AMD vs Intel for a CPU, I weigh the cost against what I am going to do and at what resolution, then make a decision. For GPUs, I pretty much avoid AMD (which is truly terrible, because I despise Nvidia's price gouging and BS segmentation). Too many finicky experiences, bad drivers, and shaky performance that I have personally experienced. While sometimes AMD beats Nvidia or Intel on performance, I have never had a wonky/shaky/RMA part from Intel or Nvidia ever; always fully functional, no effort required on my part, flawless. I have had 2 bad experiences with AMD: one CPU, one GPU. And their releases are riddled with bugs and bad drivers. I have extra money to spend, and I want to spend my time doing what I want, not making a part work the way it should.  

 

Intel will probably make a comeback; if they don't, they deserve to have AMD sh*t all over them. It's their job to innovate and make a good product.  

 

 

El Zoido: 9900k + RTX 4090 / 32 GB 3600MHz RAM / Z390 Aorus Master 

 

The Box: 3900x + RTX 3080 / 32 GB 3000MHz RAM / B550 MSI Mortar 


Intel make a comeback?

How, when they are selling out of everything?

They can't sell more of what they don't have.

 

They're competitive despite not having a node shrink in half a decade.

They banked, and are still banking, considering 5 years is a long time in tech.

 

AMD needs to keep pushing for that bank to handle Intel's money.

And we all win. 

 

 


2 hours ago, chaqs3 said:

if Intel still uses 14nm...

 

And the 9900k is proof that the process still works.

 

It's not a huge stride forward for them compared to themselves, but remember, it took until 2017 for AMD to catch up to Haswell, and only now with the 3000 series has AMD actually caught up to Coffee Lake in single-threaded performance.

 

The problem with present Intel isn't their node but their product segmentation and prices.

 

We can sit on 14+++ for another year as long as the options are competitive.

 

If 10nm isn't working out as planned, it is better to go with what they know works than put out a bad product.

 

AMD did that, and they are never truly going to live it down. People will never forget FX, no matter how many Ryzen 3000s AMD puts out.

 

Hopefully 10th gen will have compelling configurations and prices, and they come out swinging in 2021 with 7nm.

 

And hopefully AMD keeps churning out refinements on Zen. Maybe in another 5 years they'll be on Zen3+++ and it will still be amazing. And that's fine.



I rate CPUs for gaming by their Cinebench R15 single core performance.

 

Right now my i7 8086k scores 222. That is about the same as an i9 9900KS. From tests with a regular i9 9900k, I know I don't need the extra cores yet.

 

The CPU that beats it is the AMD Ryzen 9 3950X, with a score of 226. It is out of my price range for a gaming CPU, so if one of my i7 8086ks dies I would get an i9 9900KS.

 

I would like to upgrade to a CPU with a score of 250 or better at around $500 and I don't care who does it.
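For what it's worth, that "score of 250 or better at around $500" rule is easy to mechanize. A throwaway Python sketch of the idea; all the scores and prices below are made-up placeholders, not real benchmark results:

```python
# Toy sketch of the upgrade rule described above: accept any CPU with a
# Cinebench R15 single-core score of 250+ for about $500, vendor ignored.
# Every entry here is an illustrative placeholder, not real data.
candidates = [
    {"name": "current 8086k", "r15_single": 222, "price": 420},
    {"name": "hypothetical A", "r15_single": 226, "price": 750},
    {"name": "hypothetical B", "r15_single": 255, "price": 490},
]

def pick_upgrade(cpus, min_score=250, max_price=500):
    """Return CPUs that clear the single-core score floor within budget."""
    return [c for c in cpus
            if c["r15_single"] >= min_score and c["price"] <= max_price]

print([c["name"] for c in pick_upgrade(candidates)])  # -> ['hypothetical B']
```

Obviously real buying decisions weigh more than one synthetic number, but as a first-pass filter it captures the rule.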

RIG#1 CPU: AMD, R 7 5800x3D| Motherboard: X570 AORUS Master | RAM: Corsair Vengeance RGB Pro 32GB DDR4 3200 | GPU: EVGA FTW3 ULTRA  RTX 3090 ti | PSU: EVGA 1000 G+ | Case: Lian Li O11 Dynamic | Cooler: EK 360mm AIO | SSD#1: Corsair MP600 1TB | SSD#2: Crucial MX500 2.5" 2TB | Monitor: ASUS ROG Swift PG42UQ

 

RIG#2 CPU: Intel i9 11900k | Motherboard: Z590 AORUS Master | RAM: Corsair Vengeance RGB Pro 32GB DDR4 3600 | GPU: EVGA FTW3 ULTRA  RTX 3090 ti | PSU: EVGA 1300 G+ | Case: Lian Li O11 Dynamic EVO | Cooler: Noctua NH-D15 | SSD#1: Corsair MP600 1TB | SSD#2: Crucial MX300 2.5" 1TB | Monitor: LG 55" 4k C1 OLED TV

 

RIG#3 CPU: Intel i9 10900kf | Motherboard: Z490 AORUS Master | RAM: Corsair Vengeance RGB Pro 32GB DDR4 4000 | GPU: MSI Gaming X Trio 3090 | PSU: EVGA 1000 G+ | Case: Lian Li O11 Dynamic | Cooler: EK 360mm AIO | SSD#1: Crucial P1 1TB | SSD#2: Crucial MX500 2.5" 1TB | Monitor: LG 55" 4k B9 OLED TV

 

RIG#4 CPU: Intel i9 13900k | Motherboard: AORUS Z790 Master | RAM: Corsair Dominator RGB 32GB DDR5 6200 | GPU: Zotac Amp Extreme 4090  | PSU: EVGA 1000 G+ | Case: Streacom BC1.1S | Cooler: EK 360mm AIO | SSD: Corsair MP600 1TB  | SSD#2: Crucial MX500 2.5" 1TB | Monitor: LG 55" 4k B9 OLED TV


23 minutes ago, jones177 said:

I rate CPUs for gaming by their Cinebench R15 single core performance.

 

Right now my i7 8086k scores 222. That is about the same as an i9 9900KS. From tests with a regular i9 9900k, I know I don't need the extra cores yet.

 

The CPU that beats it is the AMD Ryzen 9 3950X, with a score of 226. It is out of my price range for a gaming CPU, so if one of my i7 8086ks dies I would get an i9 9900KS.

 

I would like to upgrade to a CPU with a score of 250 or better at around $500 and I don't care who does it.

 

Except even with the CB single-core score, the 8700k and up still beat the 3900x in most games.

 

It's not just straight synthetic benchmarks, there are other factors.

 

Unless you are doing something specifically that requires more cores there's absolutely no reason to upgrade from an 8086k. At all.

 

Your gaming experience will be worse or unchanged by going to a 3900x.



On 12/2/2019 at 7:57 PM, Firebird_Gaming said:

After the launch of the 3rd gen Ryzen 9s and Threadrippers, I really have doubts about Intel's future as a competitive processor manufacturer (as do many others). However, I managed to find a roadmap of Intel's new 10th gen, 14nm processors. They are definite, logical improvements over the current, ninth-generation processors. The 14nm process has another + tacked on, hopefully bringing better per-thread performance and new architecture upgrades. Also, 7th gen HD Graphics is featured on these chips; hopefully that's good news for budget gamers (i3 10100 and 10300 buyers). I also took note of the increased L3 cache, and... different socket... someone is gonna need a new motherboard. But the better news is that it also has more cores and hyper-threading throughout the lineup, which should bring Intel closer to matching AMD in multi-threaded performance across the board. But is it good enough? Maybe. I think Intel still holds the lead in single-core performance, and more cores and more hyper-threading can only help close the gap to AMD's equivalent chips. The i9 and i7 may be good enough to close the gap with the R7 and R9 (3900x). However, with 4th gen Ryzen coming soon, I doubt how long they can hold a lead. However, in June, Ice Lake will be released. The 10nm architecture could be quite competitive, assuming Intel holds their single-thread advantage, high core clocks, and 4-6-8-10 core lineup. I am interested to see if anyone shares my views. A lot of people say that Intel is dead, but is it really? Sure, it has lost its lead for now, but I am sure that the future holds potential. 

  

To get at the original question here: Intel is challenged in the enthusiast space right now and they won’t have a significantly better product until 10nm enthusiast CPUs actually launch (which might end up being 2022 in the worst scenario). Cascade Lake-X is likely a sign of future 14nm improvements, which is to say the new products will have essentially unchanged IPC with core clocks being broadly similar. The main levers Intel has to work with for the next few years are pricing and enabling hyperthreading. Meanwhile, AMD will make substantial architecture revisions in 2020 and 2021, potentially even moving to a new process for Zen 4. 

 

This isn't to say that Intel is doomed - 9900k is still the best gaming CPU on the market and a theoretical 6C/12T i5 CPU at around $200 USD would be competitive. But Intel has very serious problems innovating in the enthusiast space for the next few years which will leave it vulnerable to any improvements in AMD’s products. The lightly threaded performance gap isn’t that large, after all.

 

Outside of the enthusiast space it’s looking better for Intel. They’ve got the best mobile CPUs, they’ve got good vendor relationships, and AMD’s fab capacity is constrained by their being a TSMC customer. 

AMD Ryzen 7 3700X | Thermalright Le Grand Macho RT | ASUS ROG Strix X470-F | 16GB G.Skill Trident Z RGB @3400MHz | EVGA RTX 2080S XC Ultra | EVGA GQ 650 | HP EX920 1TB / Crucial MX500 500GB / Samsung Spinpoint 1TB | Cooler Master H500M


Some food for thought regarding games and core/thread usage. I came across these graphs a while ago that looked interesting:

[Histogram: number of threads running concurrently in Ashes of the Singularity]

 

[Histogram: number of threads running concurrently in GTA V]

The first graph is a histogram of how many threads were running at once in Ashes of the Singularity; the second is the same thing for GTA V. I believe I've seen people say GTA V doesn't improve a whole lot past four cores/threads. This is the reason why.

 

Unfortunately, this data was obtained with a CPU profiler that I can't find, and we probably won't have access to one without paying someone more money than is reasonable to spend on something like this.
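For anyone curious how a histogram like that gets built: a profiler periodically samples how many threads are in the running state. Here's a rough Python toy of the sampling idea; it only watches its own fake worker threads (a real profiler like VTune samples OS-wide), and all the thread counts and timings are made up for illustration:

```python
import random
import threading
import time
from collections import Counter

# Toy sketch: sample how many of our own worker threads are alive at
# regular intervals, and tally the counts into a histogram. A real
# CPU profiler does the same kind of sampling, but system-wide.

def worker():
    time.sleep(random.uniform(0.01, 0.05))  # stand-in for a burst of work

histogram = Counter()  # maps concurrent-thread count -> samples observed
threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
    time.sleep(0.005)  # stagger the starts a little

while True:
    running = sum(t.is_alive() for t in threads)
    histogram[running] += 1  # one sample of current concurrency
    if running == 0:
        break
    time.sleep(0.002)  # sampling interval

for t in threads:
    t.join()

print(dict(histogram))  # counts vary run to run
```

The shape of that dictionary, plotted, is exactly the kind of graph above: tall bars at low thread counts mean the game rarely keeps many cores busy at once.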

 

Just now, melete said:

Outside of the enthusiast space it’s looking better for Intel. They’ve got the best mobile CPUs, they’ve got good vendor relationships, and AMD’s fab capacity is constrained by their being a TSMC customer. 

And let's be real here, the enthusiast space is a drop in the bucket compared to everything else.


@Mira Yurizaki At this stage I suspect that being vertically integrated is what is holding Intel back. From everything I've read, Intel's Sunny/Willow Cove architecture is a powerhouse, but it was so tied to the 10nm process that when that side of the house dropped the ball, Intel ended up stuck on the Skylake architecture while they've been trying to get a handle on it. Further, Intel's issues producing enough 14nm products to meet demand have ceded market share to AMD that Intel did not have to give up. 

 

Meanwhile, TSMC has been developing improved 7nm and now 5nm processes for their high-end products, and I'm hearing hints that those may be ready in the 2021 timeframe. So it is not so much Intel vs AMD as it is the Intel architecture division vs AMD, and the Intel fab shop vs TSMC/Samsung.

 

From what I'm hearing, I'm expecting the Cannon Lake chips to be a stopgap while they back-port Willow Cove onto 14nm for Tiger Lake, and I expect the Tiger Lake chips to be, core for core, competitive with and possibly even better than what we are seeing speculated for the Zen 3 chips. But I'm also expecting them to be more expensive for their core count, with lower maximum core counts. I'm thinking that will keep Intel in the game while they're sorting out their 10nm/7nm processes and decoupling their node and architecture, but I'm expecting Zen 3 to do and sell very well. 

 

At that point, I'm thinking the big showdown will be in 2021 when Intel is trying to release the post 14nm nodes while AMD is going to the AM5/Zen 4 platform. If either of them stumble there, it could be a Bulldozer/Netburst level failure. 


26 minutes ago, Harry Voyager said:

@Mira Yurizaki At this stage I suspect that being vertically integrated is what is holding Intel back. From everything I've read, Intel's Sunny/Willow Cove architecture is a powerhouse, but it was so tied to the 10nm process that when that side of the house dropped the ball, Intel ended up stuck on the Skylake architecture while they've been trying to get a handle on it. Further, Intel's issues producing enough 14nm products to meet demand have ceded market share to AMD that Intel did not have to give up. 

 

Meanwhile, TSMC has been developing improved 7nm and now 5nm processes for their high-end products, and I'm hearing hints that those may be ready in the 2021 timeframe. So it is not so much Intel vs AMD as it is the Intel architecture division vs AMD, and the Intel fab shop vs TSMC/Samsung.

 

From what I'm hearing, I'm expecting the Cannon Lake chips to be a stopgap while they back-port Willow Cove onto 14nm for Tiger Lake, and I expect the Tiger Lake chips to be, core for core, competitive with and possibly even better than what we are seeing speculated for the Zen 3 chips. But I'm also expecting them to be more expensive for their core count, with lower maximum core counts. I'm thinking that will keep Intel in the game while they're sorting out their 10nm/7nm processes and decoupling their node and architecture, but I'm expecting Zen 3 to do and sell very well. 

 

At that point, I'm thinking the big showdown will be in 2021 when Intel is trying to release the post 14nm nodes while AMD is going to the AM5/Zen 4 platform. If either of them stumble there, it could be a Bulldozer/Netburst level failure. 

The good news though is that no matter how bad the next chips are, our current minimum level of graphics realism/immersion is already so good that even if it takes a few years to get some big gains, it won't really be that bad.

 

And it's likely that in terms of gaming, hardware GPU-level RTRT will be the next big thing, and that won't really care about CPUs anyway.

 

If the market mainstream is potent 6/12s, game developers already have way more resources to work with than they ever had.



2 minutes ago, Plutosaurus said:

The good news though is that no matter how bad the next chips are, our current minimum level of graphics realism/immersion is already so good that even if it takes a few years to get some big gains, it won't really be that bad.

 

And it's likely that in terms of gaming, hardware GPU-level RTRT will be the next big thing, and that won't really care about CPUs anyway.

Very much yes. And as high core count CPUs have gone mainstream, we're starting to see more and more developers implement multi-threading.

 

In my primary game, IL-2, the dev team finally broke building rendering out into a separate thread, so now you can see for absolutely miles without taking a serious performance hit. RAM takes a bit of a beating (around a 20 GB load on high), but RAM's cheap anyways. I'm hoping that with the new AI programmer they're able to streamline and parallelize the AI enough that we could do missions with hundreds of active aircraft. I don't know that they'll go as far as being able to do the Schweinfurt-Regensburg raids or Black Thursday, but there were a lot of very large engagements that we really just need the CPU power to run in all their insanity. 
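The pattern described (splitting one subsystem onto its own thread so the main loop never stalls on it) looks roughly like this producer/worker sketch in Python. The "rendering" here is a made-up stand-in, not anything from IL-2's actual engine:

```python
import queue
import threading

# Minimal sketch of moving one subsystem off the main loop: the main
# thread hands "scenery chunks" to a worker via a queue and keeps going,
# while the worker does the slow part in the background.

jobs = queue.Queue()     # main thread -> worker
results = queue.Queue()  # worker -> main thread

def scenery_worker():
    while True:
        chunk = jobs.get()
        if chunk is None:               # sentinel: shut the worker down
            break
        results.put(f"rendered {chunk}")  # stand-in for real rendering work
        jobs.task_done()

t = threading.Thread(target=scenery_worker, daemon=True)
t.start()

for chunk in ["tile_0", "tile_1", "tile_2"]:
    jobs.put(chunk)  # main loop hands work off without blocking

jobs.join()          # a game would poll each frame instead of blocking here
jobs.put(None)
t.join()

rendered = []
while not results.empty():
    rendered.append(results.get())
print(rendered)
```

The payoff is exactly what the IL-2 change delivered: the main loop's frame time no longer includes the slow subsystem, at the cost of some queue bookkeeping and extra memory for in-flight work.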


9 minutes ago, Harry Voyager said:

Very much yes. And as high core count CPUs have gone mainstream, we're starting to see more and more developers implement multi-threading.

 

In my primary game, IL-2, the dev team finally broke building rendering out into a separate thread, so now you can see for absolutely miles without taking a serious performance hit. RAM takes a bit of a beating (around a 20 GB load on high), but RAM's cheap anyways. I'm hoping that with the new AI programmer they're able to streamline and parallelize the AI enough that we could do missions with hundreds of active aircraft. I don't know that they'll go as far as being able to do the Schweinfurt-Regensburg raids or Black Thursday, but there were a lot of very large engagements that we really just need the CPU power to run in all their insanity. 

WoW used to be notorious for not giving a shit past 2-3 threads. Meant it ran like shit on anything but a highly clocked Intel.

 

Recently, with the Ryzen launch, after like 13 fuckin years they updated it and now it's much better. It can now take a lot more advantage of available resources. Still not perfect, but I'll take it.


17 minutes ago, Harry Voyager said:

Very much yes. And as high core count CPUs have gone mainstream, we're starting to see more and more developers implement multi-threading.

 

In my primary game, Il-2, the dev team finally broke out building rendering into a separate thread, so now you can see for absolutely miles without taking a serious performance hit. RAM takes a bit of a beating (around a 20GB load on high settings), but RAM's cheap anyways. I'm hoping with the new AI programmer they're able to streamline and parallelize the AI enough that we could do missions with hundreds of active aircraft. I don't know that they'll go as far as the Schweinfurt-Regensburg raids or Black Thursday, but there were a lot of very large engagements that we really just need the CPU power to run in all their insanity. 

Though I wonder how many games would actually do something at that scale. For a game like IL-2, having hundreds of entities flying around makes sense, but not for something like a corridor shooter or, hell, even GTA. While higher populations would be nice, from a gameplay perspective I think having more people would be a burden. Granted, it might be lulzy to drive through LA traffic without a care while running away from the cops, but I'd imagine it'd get annoying real fast if you had to deal with actual real-world traffic conditions in that game.

 

But on another note, I'm wondering what else we're overlooking. Half-Life 2 had pretty decent combat AI that could support a decent number of entities at once, and it was originally designed to run on a single-core CPU from the early 2000s. And then there's STALKER's AI, which supposedly could beat the game itself if it wasn't toned down.

Didn't HL handle that by essentially designating cover points for the AI on each map?
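That's my understanding too: designers baked hint/cover nodes into the maps, and at runtime the AI just queries them, which is cheap. A toy sketch of the idea (entirely made-up names and coordinates, not Valve's actual node-graph code): pick the nearest designer-placed point that's farther from the threat than you are.

```python
import math

# Designer-placed cover points baked into the map (illustrative coordinates).
COVER_POINTS = [(2.0, 1.0), (8.0, 3.0), (5.0, 9.0)]

def pick_cover(npc_pos, threat_pos):
    """Return the nearest cover point that moves the NPC away from the threat."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Only consider points farther from the threat than the NPC currently is.
    candidates = [p for p in COVER_POINTS
                  if dist(p, threat_pos) > dist(npc_pos, threat_pos)]
    if not candidates:
        return None  # nowhere safer to go
    return min(candidates, key=lambda p: dist(p, npc_pos))

print(pick_cover((3.0, 2.0), (0.0, 0.0)))  # → (8.0, 3.0)
```

The point is the per-frame cost: instead of reasoning about geometry, the AI does a handful of distance checks against pre-authored data, which is why it ran fine on early-2000s single-core hardware.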

 

I recall reading that one of the things the original Il-2 did was treat all of the gunner positions on a bomber as one single gunner, aiming all the guns with a single entity. That made them effective, but it also limited the bombers to single-direction defense and meant that if one gunner hit you, every gunner hit you, making most bombers rather suicidal to go up against. 
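If that's accurate, the mechanic would look roughly like this (illustrative Python, not actual IL-2 code): one accuracy roll computed for the whole aircraft and applied to every turret, so all guns hit together or all miss together.

```python
import random

class AggregateGunner:
    """One AI entity aiming every defensive gun on the bomber at once."""

    def __init__(self, num_turrets, skill=0.3):
        self.num_turrets = num_turrets
        self.skill = skill  # chance the single aiming solution is on target

    def fire_at(self, rng):
        # ONE accuracy roll for the whole aircraft...
        on_target = rng.random() < self.skill
        # ...shared by every turret: there is never a partial hit.
        return [on_target] * self.num_turrets

rng = random.Random(42)
gunner = AggregateGunner(num_turrets=5)
volley = gunner.fire_at(rng)
print(all(volley) or not any(volley))  # → True: all-or-nothing fire
```

You can see why it was done: one AI update instead of five per bomber. You can also see the side effect the post describes, since a single lucky roll means every gun connects at once.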

2 hours ago, Plutosaurus said:

 

Except even with the Cinebench single-core score, the 8700K and up still beat the 3900X in most games.

 

It's not just straight synthetic benchmarks, there are other factors.

 

Unless you are doing something that specifically requires more cores, there's absolutely no reason to upgrade from an 8086K. At all.

 

Your gaming experience will be worse or unchanged by going to a 3900x.

That is the sad part.

There is no upgrade for you or me.

 

I want 6GHz.  

That's not too much to ask, is it?

RIG#1 CPU: AMD, R 7 5800x3D| Motherboard: X570 AORUS Master | RAM: Corsair Vengeance RGB Pro 32GB DDR4 3200 | GPU: EVGA FTW3 ULTRA  RTX 3090 ti | PSU: EVGA 1000 G+ | Case: Lian Li O11 Dynamic | Cooler: EK 360mm AIO | SSD#1: Corsair MP600 1TB | SSD#2: Crucial MX500 2.5" 2TB | Monitor: ASUS ROG Swift PG42UQ

 

RIG#2 CPU: Intel i9 11900k | Motherboard: Z590 AORUS Master | RAM: Corsair Vengeance RGB Pro 32GB DDR4 3600 | GPU: EVGA FTW3 ULTRA  RTX 3090 ti | PSU: EVGA 1300 G+ | Case: Lian Li O11 Dynamic EVO | Cooler: Noctua NH-D15 | SSD#1: SSD#1: Corsair MP600 1TB | SSD#2: Crucial MX300 2.5" 1TB | Monitor: LG 55" 4k C1 OLED TV

 

RIG#3 CPU: Intel i9 10900kf | Motherboard: Z490 AORUS Master | RAM: Corsair Vengeance RGB Pro 32GB DDR4 4000 | GPU: MSI Gaming X Trio 3090 | PSU: EVGA 1000 G+ | Case: Lian Li O11 Dynamic | Cooler: EK 360mm AIO | SSD#1: Crucial P1 1TB | SSD#2: Crucial MX500 2.5" 1TB | Monitor: LG 55" 4k B9 OLED TV

 

RIG#4 CPU: Intel i9 13900k | Motherboard: AORUS Z790 Master | RAM: Corsair Dominator RGB 32GB DDR5 6200 | GPU: Zotac Amp Extreme 4090  | PSU: EVGA 1000 G+ | Case: Streacom BC1.1S | Cooler: EK 360mm AIO | SSD: Corsair MP600 1TB  | SSD#2: Crucial MX500 2.5" 1TB | Monitor: LG 55" 4k B9 OLED TV
