
Intel Icelake Server delayed

11 minutes ago, Brooksie359 said:

Yeah, the main reason they are behind is latency and clock speeds. 4.2 GHz vs 5 GHz is a pretty big gap.

But on the flip side AMD's clocks on the server side are better in some cases (if I remember correctly)



8 minutes ago, Taf the Ghost said:

AMD is mostly all-in on jumping to the next node in the GPU space. GCN just doesn't scale up as well as Nvidia's base CUDA approach, so they're looking to carve out space in the middle until they can move to an entirely new architecture.

Well, that's what I mean about their GPU roadmap being bleak. AMD is stuck on GCN for 2-3 years more. That's a terrible proposition. 

I mean GCN is a decent workhorse architecture, but it's too far behind; Nvidia is soldiering on while AMD really isn't moving forward in any meaningful way. The node isn't helping them much when their chips are bottlenecked and relatively inefficient when pumping up the numbers.


3 minutes ago, The Benjamins said:

But on the flip side AMD's clocks on the server side are better in some cases (if I remember correctly)

On a per-cost basis, yes. Intel does have higher clocking server parts, but you're going to be paying a lot of money for them. Mostly as they're the outright best silicon that Intel produces.

1 minute ago, Trixanity said:

Well, that's what I mean about their GPU roadmap being bleak. AMD is stuck on GCN for 2-3 years more. That's a terrible proposition. 

I mean GCN is a decent workhorse architecture, but it's too far behind; Nvidia is soldiering on while AMD really isn't moving forward in any meaningful way. The node isn't helping them much when their chips are bottlenecked and relatively inefficient when pumping up the numbers.

I wouldn't say it's bleak. The issue is scaling up with larger dies. That's where AMD gets hurt with GCN. They can't compete in high-end Gaming graphics, but they can in high-end Compute.

 

No, what is bleak is anyone getting another AMD GPU for a reasonable price for the next 3 years.


1 minute ago, Taf the Ghost said:

On a per-cost basis, yes. Intel does have higher clocking server parts, but you're going to be paying a lot of money for them. Mostly as they're the outright best silicon that Intel produces.

I wouldn't say it's bleak. The issue is scaling up with larger dies. That's where AMD gets hurt with GCN. They can't compete in high-end Gaming graphics, but they can in high-end Compute.

 

No, what is bleak is anyone getting another AMD GPU for a reasonable price for the next 3 years.

It is bleak if there is no real competition for 3 years. If your arch has hit a wall and you need many years to work around it, you're in a bad spot. It's not all that dissimilar to Intel's 10nm, which we've discussed in this very thread. The difference is that Intel has had a node advantage for so long and competitors are only now catching up because Intel hit a wall, whereas AMD has been at a disadvantage for years and will continue to be, without being able to do anything to catch up. In essence: competitors catching up to fab leader Intel vs AMD falling even further behind the GPU leader.

 

Not being able to compete with Nvidia 1:1 in every segment, whether for gaming or compute, is bad. Even at price parity, many consumers feel AMD is the worse deal, whether because of Nvidia's broader game performance advantages or lower power consumption. That's not good. It means that even if AMD can keep re-releasing similar chip configurations every year to compete with Nvidia's latest midrange, it's pretty bad if they do so with a worse value proposition. At some point they'll have to release another 64 CU card to compete with Nvidia's midrange, and then they're fucked if post-GCN isn't ready yet.

 

I really don't see any positives for AMD on the GPU side. The compute thing isn't a sure thing either: Nvidia leads through CUDA and developer resources, and AMD's performance is too workload-dependent for them to take the lead. I'd give AMD the win here if they were ahead in compute workloads across the board, which they're not.

 

As far as I can see they're stuck between a rock and a hard place in current and future scenarios. They deprioritized GPUs and are paying for it now.


So what we're going to have is like what we had in the early 2000s, with AMD having new products and Intel having no response.



7 hours ago, GDRRiley said:

So what we're going to have is like what we had in the early 2000s, with AMD having new products and Intel having no response.

We don't have a Netburst just yet. Who knows, maybe they have another core up their sleeve...

 

Or maybe it's just a Tejas.


23 hours ago, Trixanity said:

It is bleak if there is no real competition for 3 years. If your arch has hit a wall and you need many years to work around it, you're in a bad spot. It's not all that dissimilar to Intel's 10nm, which we've discussed in this very thread. The difference is that Intel has had a node advantage for so long and competitors are only now catching up because Intel hit a wall, whereas AMD has been at a disadvantage for years and will continue to be, without being able to do anything to catch up. In essence: competitors catching up to fab leader Intel vs AMD falling even further behind the GPU leader.

 

Not being able to compete with Nvidia 1:1 in every segment, whether for gaming or compute, is bad. Even at price parity, many consumers feel AMD is the worse deal, whether because of Nvidia's broader game performance advantages or lower power consumption. That's not good. It means that even if AMD can keep re-releasing similar chip configurations every year to compete with Nvidia's latest midrange, it's pretty bad if they do so with a worse value proposition. At some point they'll have to release another 64 CU card to compete with Nvidia's midrange, and then they're fucked if post-GCN isn't ready yet.

 

I really don't see any positives for AMD on the GPU side. The compute thing isn't a sure thing either: Nvidia leads through CUDA and developer resources, and AMD's performance is too workload-dependent for them to take the lead. I'd give AMD the win here if they were ahead in compute workloads across the board, which they're not.

 

As far as I can see they're stuck between a rock and a hard place in current and future scenarios. They deprioritized GPUs and are paying for it now.

GCN has been around since the HD 7000 series, right? So 2011. You have to say that for a single base architecture it was quite forward thinking; up until Fury or so there really hadn't been very many changes to it. AMD seems to have made an architecture that can be scaled up, whereas Nvidia seems to make the big chip first and then scale down from there. The advantage is that you know your biggest core and can work from there. GCN seems to fall off when it comes to 56 CUs; 64 seems to be well into the area of diminishing returns for gaming and, to a lesser extent, compute. In all honesty, GCN was built for general compute applications as much as for graphics, if not more so.
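To put that 56/64 CU diminishing-returns point into rough numbers, here's a small Python sketch of GCN's paper FP32 throughput (CUs × 64 shaders per CU × 2 FMA ops per clock × clock). The clock speeds are approximate launch boost figures from memory, so treat the whole thing as an illustrative assumption rather than official specs:

# Rough sketch: paper FP32 throughput for a few GCN parts.
# Clocks are approximate launch boost figures (assumptions, not official specs).
def gcn_fp32_tflops(cus, clock_ghz, shaders_per_cu=64):
    """Peak FP32 = CUs * shaders per CU * 2 ops per clock (FMA) * clock."""
    return cus * shaders_per_cu * 2 * clock_ghz / 1000

parts = {
    "Fury X  (64 CU @ ~1.05 GHz)": (64, 1.05),
    "Vega 56 (56 CU @ ~1.47 GHz)": (56, 1.47),
    "Vega 64 (64 CU @ ~1.55 GHz)": (64, 1.55),
}

for name, (cus, clk) in parts.items():
    print(f"{name}: ~{gcn_fp32_tflops(cus, clk):.1f} TFLOPS")

The paper numbers scale almost linearly with CU count and clock, which is exactly what gaming performance on the big GCN dies doesn't do - hence the diminishing-returns impression.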


43 minutes ago, Ben Quigley said:

GCN has been around since the HD 7000 series, right? So 2011. You have to say that for a single base architecture it was quite forward thinking; up until Fury or so there really hadn't been very many changes to it. AMD seems to have made an architecture that can be scaled up, whereas Nvidia seems to make the big chip first and then scale down from there. The advantage is that you know your biggest core and can work from there. GCN seems to fall off when it comes to 56 CUs; 64 seems to be well into the area of diminishing returns for gaming and, to a lesser extent, compute. In all honesty, GCN was built for general compute applications as much as for graphics, if not more so.

Forward thinking - somewhat. It's an iterative approach, though, so it's meant to be. Nvidia did have a mobile-first philosophy, but it may have been dropped somewhere along the way, so it's really not a matter of whether something scales up or down. However, we don't really know as much about Nvidia's archs and their relation to each other as we do about GCN; I find they obfuscate their stuff in some aspects. And given the current situation, the approach really doesn't matter because Nvidia has the crown. Building for compute was smart enough at the time but turned out to be a bit like Bulldozer, in the sense that they thought they were on to something but it never really panned out. The difference is that Bulldozer had bigger problems than the CMT paradigm, whereas GCN has shown itself to be pretty versatile but ultimately a jack of all trades that never truly shone. They built an architecture that's better in a guesstimate of 20% of all scenarios. As I said earlier, if they were dominant in compute we couldn't fault AMD, but Nvidia still wins there as well despite a somewhat weaker compute arch. With that being said, AMD dropped the ball for gaming and it shows. Nvidia is starting to split their arch and AMD has said they'll have to do the same. That should tell you something.

 

I'm not gonna go into details on the pitfalls of GCN's scaling because we've been there already. However, AMD may redeem themselves if their 7nm Vega can win over the enterprise segment; otherwise it's gonna be a long wait for AMD's GPU side - not unlike their wait for Zen to be ready.


So did it get hit by an iceberg? Or did it drown in a 'lake'?



On 5/18/2018 at 12:23 AM, Okjoek said:

Now it feels more like this when talking AMD vs Intel:

And that's a good thing

AMD's power level is over NINE THOUSAND! Boy I'm glad I'm still a Dragonball fan :')


30 minutes ago, Voids said:

AMD's power level is over NINE THOUSAND! Boy I'm glad I'm still a Dragonball fan :')

I love how that meme is as perfectly dated for the Dragon Ball universe as the FX 9000 series is for AMD CPUs xD.


13 minutes ago, leadeater said:

I love how that meme is as perfectly dated for the Dragon Ball universe as the FX 9000 series is for AMD CPUs xD.

It never gets old; it's outside of time itself at this stage haha.


2 hours ago, leadeater said:

I love how that meme is as perfectly dated for the Dragon Ball universe as the FX 9000 series is for AMD CPUs xD.

 

1 hour ago, Voids said:

It never gets old; it's outside of time itself at this stage haha.

Don't forget how CPUs can now alter their power level at will

 


7 hours ago, Okjoek said:

 

Don't forget how CPUs can now alter their power level at will

 

Intel = Kakarot-sama and AMD = Vegeta-sama

 

All we need is a Yamcha


16 minutes ago, Voids said:

All we need is a Yamcha

We did: Cyrix. He died and there is no bringing him back - he's been wished back already and Porunga won't do it as he's unworthy.

 


 

Edit:

Also what's more popular, Dragon Ball or Intel CPUs lol.


1 hour ago, leadeater said:

We did: Cyrix. He died and there is no bringing him back - he's been wished back already and Porunga won't do it as he's unworthy.

 


 

Edit:

Also what's more popular, Dragon Ball or Intel CPUs lol.

You'd need the Super Dragon Balls to revive Cyrix lol. Dragon Ball without a doubt; I think America and Japan are like 40ish on the list, but it has a big following in the likes of Mexico, I think more so than the US.


On 5/17/2018 at 3:51 PM, Trixanity said:

Forward thinking - somewhat. It's an iterative approach, though, so it's meant to be. Nvidia did have a mobile-first philosophy, but it may have been dropped somewhere along the way, so it's really not a matter of whether something scales up or down. However, we don't really know as much about Nvidia's archs and their relation to each other as we do about GCN; I find they obfuscate their stuff in some aspects. And given the current situation, the approach really doesn't matter because Nvidia has the crown. Building for compute was smart enough at the time but turned out to be a bit like Bulldozer, in the sense that they thought they were on to something but it never really panned out. The difference is that Bulldozer had bigger problems than the CMT paradigm, whereas GCN has shown itself to be pretty versatile but ultimately a jack of all trades that never truly shone. They built an architecture that's better in a guesstimate of 20% of all scenarios. As I said earlier, if they were dominant in compute we couldn't fault AMD, but Nvidia still wins there as well despite a somewhat weaker compute arch. With that being said, AMD dropped the ball for gaming and it shows. Nvidia is starting to split their arch and AMD has said they'll have to do the same. That should tell you something.

 

I'm not gonna go into details on the pitfalls of GCN's scaling because we've been there already. However, AMD may redeem themselves if their 7nm Vega can win over the enterprise segment; otherwise it's gonna be a long wait for AMD's GPU side - not unlike their wait for Zen to be ready.

I wouldn't necessarily blame GCN for the lack of scaling. I think the problem is that when they decided not to focus too much on dedicated GPUs, they stopped worrying about adding more shader engines (plus the necessary changes on other parts to accommodate them), thus limiting GCN to 64 CUs, with bad scaling.

Something new will come in 2020, it seems. Until then maybe Navi can find a way to improve scaling a bit, but I don't expect too much since they are changing architectures right after it, so investing in GCN wouldn't be too smart (this is assuming the newer one isn't GCN 2.0, which is a possibility).


17 minutes ago, cj09beira said:

I wouldn't necessarily blame GCN for the lack of scaling. I think the problem is that when they decided not to focus too much on dedicated GPUs, they stopped worrying about adding more shader engines (plus the necessary changes on other parts to accommodate them), thus limiting GCN to 64 CUs, with bad scaling.

Something new will come in 2020, it seems. Until then maybe Navi can find a way to improve scaling a bit, but I don't expect too much since they are changing architectures right after it, so investing in GCN wouldn't be too smart (this is assuming the newer one isn't GCN 2.0, which is a possibility).

GCN is too compute-heavy for consumers, or rather it's not good enough at graphical workloads. Part of that is how GCN was designed: how the CUs, the geometry engines and the ROPs were designed and how they interact. The blocks are too intrinsically linked, so you can't change the number of units you have (as an example), resulting in lopsided designs that bottleneck themselves. If Navi can de-link these, it may solve some of the issues GCN has, but it's obviously a waste to spend too much time doing that when it's allegedly being replaced the year after by a brand new arch.
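As a rough illustration of how lopsided that balance is, here's a small Python sketch putting paper FP32 throughput next to peak pixel fill rate for Vega 64 and a GTX 1080. The shader counts, ROP counts and clocks are approximate launch figures from memory, so take them as assumptions for illustration only:

# Rough sketch: compute throughput vs pixel fill rate.
# Unit counts and clocks are approximate launch figures (assumptions).
def fp32_tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000   # 2 ops per shader per clock (FMA)

def fill_rate_gpix(rops, clock_ghz):
    return rops * clock_ghz                 # 1 pixel per ROP per clock

cards = {
    "Vega 64 ": (4096, 64, 1.55),           # shaders, ROPs, ~boost clock in GHz
    "GTX 1080": (2560, 64, 1.73),
}

for name, (shaders, rops, clk) in cards.items():
    tflops = fp32_tflops(shaders, clk)
    gpix = fill_rate_gpix(rops, clk)
    print(f"{name}: ~{tflops:.1f} TFLOPS FP32, ~{gpix:.0f} Gpixel/s, ~{tflops * 1000 / gpix:.0f} FLOPs per peak pixel")

By this crude measure Vega 64 carries roughly 60% more compute per unit of fill rate than the 1080, which is one way of seeing the "too compute-heavy for consumers" complaint in numbers.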

 

AMD completely misread where the industry was going: they focused on compute, thinking games would start to need more compute power than graphical power and thinking dedicated chips would go away in favor of integrated designs. I don't think AMD could have been more wrong, and I have to wonder who came up with that? I mean, if designs became integrated, market leader Nvidia would have had nowhere to put their designs and would have been fucked; AMD should have realized it wouldn't be that easy. AMD really messed up on both CPU and GPU predictions and direction around the same timeframe. It's incredible they're still going, but then they did have a lot of valuable assets that they've sold to keep going, and they have made a turnaround. They still need to execute in both areas, particularly GPUs, hence I'm highly critical until I see something concrete. AMD has had a clear tendency to overhype and stir the rumor mill only for products to not hit the mark. Zen has been a welcome departure from that.

 

Anyway, I feel like we're retreading the same ground. This is all well established. It just doesn't sit well with me that we've had five years of dubious competition and we'll have at least another two. The problem compared to the CPU business is that, unlike Intel, Nvidia isn't slowing down and has had time to expand its business considerably, whereas AMD is still very focused on its core business and core markets. Those margins need to go up, and they can't do that without breaking new ground or doing some major leapfrogging of their competitors.


1 hour ago, Trixanity said:

GCN is too compute-heavy for consumers, or rather it's not good enough at graphical workloads. Part of that is how GCN was designed: how the CUs, the geometry engines and the ROPs were designed and how they interact. The blocks are too intrinsically linked, so you can't change the number of units you have (as an example), resulting in lopsided designs that bottleneck themselves. If Navi can de-link these, it may solve some of the issues GCN has, but it's obviously a waste to spend too much time doing that when it's allegedly being replaced the year after by a brand new arch.

 

AMD completely misread where the industry was going: they focused on compute, thinking games would start to need more compute power than graphical power and thinking dedicated chips would go away in favor of integrated designs. I don't think AMD could have been more wrong, and I have to wonder who came up with that?

Had DX12 taken off, a compute focused architecture would have been close to ideal. For consoles, GCN worked out quite well. In DX11, Nvidia is definitely better.



11 hours ago, Trixanity said:

I don't think AMD could have been more wrong

When AMD thought the server industry would switch to ARM and dropped all R&D on x86 server CPUs?


6 hours ago, leadeater said:

When AMD thought the server industry would switch to ARM and dropped all R&D on x86 server CPUs?

Are you talking about skybridge, K12 or something else?

 

Although, when we say x86, don't we all pretty much mean Intel anyway? The refusal to switch away from Intel is just as big as the refusal to switch to ARM, meaning AMD would have faced the same challenge whether it was a Zen or a K12 server chip.


36 minutes ago, Trixanity said:

Are you talking about skybridge, K12 or something else?

 

Although, when we say x86, don't we all pretty much mean Intel anyway? The refusal to switch away from Intel is just as big as the refusal to switch to ARM, meaning AMD would have faced the same challenge whether it was a Zen or a K12 server chip.

A while back, before Bulldozer came out, AMD went a bit loopy and thought the server world would switch to ARM completely for most tasks, since it was more power efficient and most common workloads, like web servers, would run on ARM very easily. Because of that, CPU development wasn't focused on high-end features or performance. It was a rather fatal move.

 

Before this decision was made, AMD was actually doing very well in the enterprise and supercomputer markets, gaining heaps of ground, and had some very clear advantages over Intel, which was the key driver for the shift. They literally dropped the baton in the relay race.

 

Quote

Opteron processors first appeared in the top 100 systems of the fastest supercomputers in the world list in the early 2000s. By the summer of 2006, 21 of the top 100 systems used Opteron processors, and in the November 2010 and June 2011 lists the Opteron reached its maximum representation of 33 of the top 100 systems. The number of Opteron-based systems decreased fairly rapidly after this peak, falling to 3 of the top 100 systems by November 2016, and in November 2017 only one Opteron-based system remained.[12][13]

Several supercomputers using only Opteron processors were ranked in the top 10 systems between 2003 and 2015, notably:

Other top 10 systems using a combination of Opteron processors and compute accelerators have included:

The only system remaining on the list (as of November 2017), also using Opteron processors combined with compute accelerators:

 

Having 33 of the top 100 back in the heyday of IBM Power and Sun SPARC dominating the top end of the list, not Intel, is a rather big deal.

