
AMD '#Rekt son' reply to Intel's "four glued together" statement

5 hours ago, AnonymousGuy said:

[attached image: latency-pingtimes.png]

 

This is the one. It doesn't include IF hops to another chip on the package. Going up 3.5x in latency depending on your workload pretty well qualifies as the "inconsistent performance" mentioned on the Intel slides.

Ah, PCPer only tested Ryzen at 2400MHz there again.

 

It'll be interesting to see Threadripper's latency per module hop, and how RAM speed affects it.

Epyc and Threadripper support 2666MHz JEDEC out of the box, and Alienware is already showing 64GB at 2933MHz on their Threadripper system pre-order.

 

Hopefully RAM compatibility is better on Threadripper, given the time they've had to work through the issues on Ryzen.
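The RAM-speed dependence mentioned above can be sketched numerically. On first-generation Zen the Infinity Fabric clock is tied to the memory clock (half the DDR4 transfer rate), so faster RAM directly shortens cross-CCX round trips. The cycle count below is a made-up illustrative figure, not a measurement:

```python
# Illustrative sketch: Zen's Infinity Fabric clock tracks the memory clock
# (half the DDR4 transfer rate), so cross-CCX latency scales with RAM speed.
def fabric_clock_mhz(ddr_transfer_rate):
    """DDR4-2400 -> 1200 MHz memory clock -> 1200 MHz fabric clock."""
    return ddr_transfer_rate / 2

def hop_latency_ns(ddr_transfer_rate, fabric_cycles=160):
    # fabric_cycles is an assumed illustrative figure, not a measured one;
    # a fixed number of fabric cycles takes less wall time at a higher clock.
    return fabric_cycles / fabric_clock_mhz(ddr_transfer_rate) * 1000

assert fabric_clock_mhz(2400) == 1200
assert fabric_clock_mhz(3200) == 1600
assert hop_latency_ns(3200) < hop_latency_ns(2400)  # faster RAM, lower latency
```

Which is why the 2400MHz-only testing above understates what the platform can do with faster memory.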

5950X | NH D15S | 64GB 3200MHz | RTX 3090 | ASUS PG348Q+MG278Q

 


6 hours ago, Hunter259 said:

Just saying that calling Intel childish and wrong over this is silly.

Smearing a competitor's products by name (which is illegal, btw) with false claims ("no ecosystem") and derogatory terms ("4 desktop dies glued together") is indeed childish and silly (on Intel's part, not ours).

 

(The only reason it's not breaking the law here is because these were leaked internal slides.)


6 hours ago, Hunter259 said:

Even people with degrees are idiots. I'm going into my second year of Computer Engineering and I see plenty of people who can get the grades but have no common sense. I also never said it had bad performance, just that it is inherently inferior.

How is it inferior if it allows the company to sell CPUs at half the price of the competition? It's a way to offset the cost increases that come with newer nodes. And in the case of Epyc it allows a single socket, which means lower latencies while still having huge amounts of PCIe.


2 hours ago, leadeater said:

Probably isn't an issue anyway; as we know, roughly 80% of the market is dual-socket systems, so businesses could just keep buying dual-socket servers with Intel CPUs and not care what you can do with a single-socket AMD.

 

One of the problems AMD has to compete with is people just going +1 on their last order. Why think when you can just reuse your last purchase order? You know it's a working platform and it fits in with your current management framework. We do it: we have a standard server configuration list (Storage Server, ESXi host, etc.) and just keep ordering the same configurations until we decide it's time to update the list, and that is usually only when HPE brings out a new server generation or something else big happens.

Wouldn't Epyc qualify as something big?

But I do agree that people not following the news might be a problem for AMD.


9 hours ago, Hunter259 said:

How is saying it's "glued together" some childish statement? They are, in a way, glued together. Intel did it in the past when they couldn't figure out how to make a 4-core die without it being a money sink. They left that approach as soon as they could. It's an inherently technologically inferior way of making a CPU.

There is a difference between a scalable architecture and quite literally gluing two chips together. As for your other comments on latency, last I heard the Infinity Fabric was faster than the ring bus, lol. The multi-die solution brings production costs down loads, so I'd say the small amount of added latency is worth the cut-down costs.

I spent $2500 building my PC and all I do with it is play no games atm, watch anime at 1080p (finally), watch YT and write essays... nothing, it just sits there collecting dust...

Builds:

The Toaster Project! Northern Bee!

 

The original LAN PC build log! (Old, dead and replaced by The Toaster Project & 5.0)

Spoiler

"Here is some advice that might have gotten lost somewhere along the way in your life. 

 

#1. Treat others as you would like to be treated.

#2. It's best to keep your mouth shut and appear to be stupid, rather than open it and remove all doubt.

#3. There is nothing "wrong" with being wrong. Learning from a mistake can be more valuable than not making one in the first place.

 

Follow these simple rules in life, and I promise you, things magically get easier. " - MageTank 31-10-2016

 

 


1 hour ago, cj09beira said:

Wouldn't Epyc qualify as something big?

But I do agree that people not following the news might be a problem for AMD.

Not really, because the trigger point needs to be on our end, like 30 or more ESXi hosts coming up for replacement.

 

We can't tailor every server to every specific use case we get (well, we sort of do), because that would be too costly in staff time. I'll give an example of what the process would be if a department came to us and asked for a storage server and we wanted to use an AMD system; we'll assume the server is $10K.

 

Our Vendor and Client Engagement Specialist would have to collect the requirements and budget information from the department, then hand that on to a Systems Engineer. The Systems Engineer would have to part out the server while verifying that the performance and scalability are adequate, likely having to engage with our Systems Integrator contract company and maybe Pre-Sales Engineers from HPE. Then, once a basic server configuration has been put together, another Systems Engineer needs to check it over before the proposed configuration goes back to the Engagement Specialist. The Engagement Specialist will then take that configuration, obtain three quotes from three different suppliers, evaluate each of them and pick the best one. The suppliers may propose changes to the configuration, which will need to be verified by a Systems Engineer.

 

Once the preferred supplier has been picked, all the information about the client requirements and the quote is given to a Project Manager, who then has to write a Business Case justifying the purchase, which gets presented to the Project Board for approval. After it has been approved, a Request For Purchase document is created, basically a more condensed form of the Business Case, and that is submitted for approval to the client's Finance Officer, who needs to sign off on the purchase of the equipment.

 

Finally, our Engagement Specialist can accept the quote from the preferred supplier. We can ignore the rest of the process and the setup costs, as they will be the same either way, maybe slightly higher for the first AMD server.

 

As you can see, to purchase a $10K server we have probably spent $20K, probably more, on the business process of that purchase, to save maybe $1K-$2K in hardware cost. Or we could have just asked "How much storage do you need?", "Great, that will be X HDDs", copy-pasted the previous storage purchase documentation, updated a few things (client, etc.) and ordered our standard storage server configuration from the preferred supplier that was arranged when we created our standard configuration list.
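The trade-off in that last paragraph, worked out with the post's own rough figures (illustrative estimates, not real accounting data):

```python
# Back-of-envelope version of the procurement trade-off above, using the
# post's own rough estimates (illustrative figures only).
server_cost = 10_000       # the AMD storage server itself
process_cost = 20_000      # bespoke spec/quote/approval process, "probably more"
hardware_saving = 2_000    # best-case saving over the standard Intel config

net = hardware_saving - process_cost
assert net == -18_000      # the bespoke process costs far more than it saves
```

Which is the whole argument for just reordering the standard configuration.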

 


5 minutes ago, leadeater said:

Not really, because the trigger point needs to be on our end, like 30 or more ESXi hosts coming up for replacement.

We can't tailor every server to every specific use case we get (well, we sort of do), because that would be too costly in staff time. I'll give an example of what the process would be if a department came to us and asked for a storage server and we wanted to use an AMD system; we'll assume the server is $10K.

Our Vendor and Client Engagement Specialist would have to collect the requirements and budget information from the department, then hand that on to a Systems Engineer. The Systems Engineer would have to part out the server while verifying that the performance and scalability are adequate, likely having to engage with our Systems Integrator contract company and maybe Pre-Sales Engineers from HPE. Then, once a basic server configuration has been put together, another Systems Engineer needs to check it over before the proposed configuration goes back to the Engagement Specialist. The Engagement Specialist will then take that configuration, obtain three quotes from three different suppliers, evaluate each of them and pick the best one. The suppliers may propose changes to the configuration, which will need to be verified by a Systems Engineer.

Once the preferred supplier has been picked, all the information about the client requirements and the quote is given to a Project Manager, who then has to write a Business Case justifying the purchase, which gets presented to the Project Board for approval. After it has been approved, a Request For Purchase document is created, basically a more condensed form of the Business Case, and that is submitted for approval to the client's Finance Officer, who needs to sign off on the purchase of the equipment.

Finally, our Engagement Specialist can accept the quote from the preferred supplier. We can ignore the rest of the process and the setup costs, as they will be the same either way, maybe slightly higher for the first AMD server.

As you can see, to purchase a $10K server we have probably spent $20K, probably more, on the business process of that purchase, to save maybe $1K-$2K in hardware cost. Or we could have just asked "How much storage do you need?", "Great, that will be X HDDs", copy-pasted the previous storage purchase documentation, updated a few things (client, etc.) and ordered our standard storage server configuration from the preferred supplier that was arranged when we created our standard configuration list.

 

That's why I like smaller companies, less bureaucracy.


Both are acting silly, IMO. 

 

But at least it's interesting. 


1 minute ago, cj09beira said:

That's why I like smaller companies, less bureaucracy.

But if we didn't have all the red tape, what would all the Business Analysts and Project Managers do? And who is it that creates the red tape...? ;)


Just now, leadeater said:

But if we didn't have all the red tape, what would all the Business Analysts and Project Managers do? And who is it that creates the red tape...? ;)

I have seen many companies destroyed by having too many people running them, doing things like "hey, let's stop buying these expensive quality tires that last a long time and use the Chinese ones instead", or "let's not resurface that good tire, just buy a new Chinese one". Well, that company is in the toilet right now.


10 hours ago, RadiatingLight said:

TBH that ecosystem slide looks very unimpressive.

HP, Lenovo, Dell and Supermicro are basically the four horsemen of the server world. I am an HP ProLiant fanboy and those servers are amazing. As for the ecosystem, they listed hardware and software separately, and both covered the largest and most popular segments of hosting and server applications. It is safe to say Intel might be screwed unless they seriously adjust their prices and start focusing on what Xeons are better at; otherwise EPYC could kill Xeon in the single- and dual-socket range if Intel fucks it up.

Yours faithfully


The Intel slides were meant for internal use and they were leaked.

CPU: Intel Core i7-5820K | Motherboard: AsRock X99 Extreme4 | Graphics Card: Gigabyte GTX 1080 G1 Gaming | RAM: 16GB G.Skill Ripjaws4 2133MHz | Storage: 1 x Samsung 860 EVO 1TB | 1 x WD Green 2TB | 1 x WD Blue 500GB | PSU: Corsair RM750x | Case: Phanteks Enthoo Pro (White) | Cooling: Arctic Freezer i32

 

Mice: Logitech G Pro X Superlight (main), Logitech G Pro Wireless, Razer Viper Ultimate, Zowie S1 Divina Blue, Zowie FK1-B Divina Blue, Logitech G Pro (3366 sensor), Glorious Model O, Razer Viper Mini, Logitech G305, Logitech G502, Logitech G402


3 minutes ago, PCGuy_5960 said:

The Intel slides were meant for internal use and they were leaked.

They should feel bad about distributing that shit to their employees anyway.

Personal Desktop:

CPU: Intel Core i7 10700K @5ghz |~| Cooling: bq! Dark Rock Pro 4 |~| MOBO: Gigabyte Z490UD ATX|~| RAM: 16gb DDR4 3333mhzCL16 G.Skill Trident Z |~| GPU: RX 6900XT Sapphire Nitro+ |~| PSU: Corsair TX650M 80Plus Gold |~| Boot:  SSD WD Green M.2 2280 240GB |~| Storage: 1x3TB HDD 7200rpm Seagate Barracuda + SanDisk Ultra 3D 1TB |~| Case: Fractal Design Meshify C Mini |~| Display: Toshiba UL7A 4K/60hz |~| OS: Windows 10 Pro.

Luna, the temporary Desktop:

CPU: AMD R9 7950XT  |~| Cooling: bq! Dark Rock 4 Pro |~| MOBO: Gigabyte Aorus Master |~| RAM: 32G Kingston HyperX |~| GPU: AMD Radeon RX 7900XTX (Reference) |~| PSU: Corsair HX1000 80+ Platinum |~| Windows Boot Drive: 2x 512GB (1TB total) Plextor SATA SSD (RAID0 volume) |~| Linux Boot Drive: 500GB Kingston A2000 |~| Storage: 4TB WD Black HDD |~| Case: Cooler Master Silencio S600 |~| Display 1 (leftmost): Eizo (unknown model) 1920x1080 IPS @ 60Hz|~| Display 2 (center): BenQ ZOWIE XL2540 1920x1080 TN @ 240Hz |~| Display 3 (rightmost): Wacom Cintiq Pro 24 3840x2160 IPS @ 60Hz 10-bit |~| OS: Windows 10 Pro (games / art) + Linux (distro: NixOS; programming and daily driver)

12 hours ago, RadiatingLight said:

I'm willing to believe that Hitachi was just an honest mistake, if it was really the only duplicated brand.

And ASRock was on there twice!


1 hour ago, Lord Nicoll said:

HP, Lenovo, Dell and Supermicro are basically the four horsemen of the server world. I am an HP ProLiant fanboy and those servers are amazing. As for the ecosystem, they listed hardware and software separately, and both covered the largest and most popular segments of hosting and server applications. It is safe to say Intel might be screwed unless they seriously adjust their prices and start focusing on what Xeons are better at; otherwise EPYC could kill Xeon in the single- and dual-socket range if Intel fucks it up.

Even if Intel brings prices down, I don't know if that's enough to fight the EPYC single-socket platform.

EPYC single-socket CPUs (the ones with a P at the end) offer 128 PCIe lanes, 8 memory channels and 32C/64T. And they cost around $2000.

Intel can't offer something similar even if they wanted to.

 

Yes, 1P servers aren't popular, but workstations are also a thing and usually have 1 CPU, so in that market they seem to be, at least on paper, the winner.

If you want my attention, quote meh! D: or just stick an @samcool55 in your post :3

Spying on everyone to fight against terrorism is like shooting a mosquito with a cannon


11 hours ago, Jito463 said:

I can't say I agree with that.  As with anything in life, there are pros and cons to both methods.

 

True, but recall that the Intel method was flawed, due to their reliance on the NB bus for interconnects between the dies. While IF does scale with your RAM speed, the actual communication between CCXs still occurs on-die (and the communication between dies on TR/Epyc occurs on the package). Yes, there will be a latency hit, but the extent of the latency determines whether it's a "worse way". The primary thing holding back Ryzen isn't IF, it's the silicon process from GloFo. If they could breach the 4GHz barrier, Ryzen would be a knockout.

 

Being a different method, does not automatically make it an inferior method.

But that will spell disaster for Ryzen's power-saving advantage against Kaby or SKL. IIRC, the process Ryzen was manufactured on was designed for mobile devices, which also explains why Ryzen hits a voltage wall around 4.1+ GHz.

The norms in which determines the measure of morality of a human act are objective to the moral law and subjectively man/woman's conscience


51 minutes ago, samcool55 said:

Even if Intel brings prices down, I don't know if that's enough to fight the EPYC single-socket platform.

EPYC single-socket CPUs (the ones with a P at the end) offer 128 PCIe lanes, 8 memory channels and 32C/64T. And they cost around $2000.

Intel can't offer something similar even if they wanted to.

Yes, 1P servers aren't popular, but workstations are also a thing and usually have 1 CPU, so in that market they seem to be, at least on paper, the winner.

This is true, the PCIe lanes are insane, and since AMD is using 8-core dies, they could in theory also release a 256-PCIe-lane, 64-core version for 4-socket use, which might happen, but 8 dies would be hell to cool. However, do note that the TR4 socket is overkill, so maybe they plan to take on 4-way and 8-way with even higher core count models, however unlikely.

Yours faithfully


12 minutes ago, Lord Nicoll said:

This is true, the PCIe lanes are insane, and since AMD is using 8-core dies, they could in theory also release a 256-PCIe-lane, 64-core version for 4-socket use, which might happen, but 8 dies would be hell to cool. However, do note that the TR4 socket is overkill, so maybe they plan to take on 4-way and 8-way with even higher core count models, however unlikely.

Would likely need more dies per socket and even more pins; 50% of the PCIe lanes are already taken up by the inter-socket link, so a 4-socket setup would leave at a minimum -32. Without a PCIe switch (a terrible idea for CPU linking), you need 32 lanes per CPU link, and that's 3 links per CPU in a 4-socket system. Basically, 64 lanes is only two-thirds of the 96 required to link 4 CPUs, with 0 left for PCIe slots.

 

Edit:

Bad math, fixing now, hold fire lol. Corrected, I think?

 

Edit 2:

That's what you already said, omfg. I'm going to bed, it's 2am.
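The lane arithmetic being corrected above can be written out explicitly. Assuming, as in the post, 128 lanes per package with half of them (64) usable for socket-to-socket links as on 2P Epyc, an x32 link per CPU pair, and a full mesh (every socket linked directly to every other socket):

```python
# Full-mesh socket-link lane budget (assumptions from the post: 64 lanes per
# socket available for inter-socket links, x32 per link, every socket linked
# directly to every other socket).
def link_lane_surplus(sockets, link_budget=64, lanes_per_link=32):
    needed = (sockets - 1) * lanes_per_link  # one x32 link per peer socket
    return link_budget - needed              # negative = lanes short

assert link_lane_surplus(2) == 32    # 2P: one x32 link fits with room to spare
assert link_lane_surplus(4) == -32   # 4P: 96 lanes needed vs 64 available
```

So under those assumptions a 4-socket full mesh comes up 32 lanes short, matching the "-32" above.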


I'm not looking to build a server, but I sure do hope AMD does well with Epyc; they need the $$$ from the large companies to stay in the market and compete with Intel in the long term.

 

The only question I have is... where can I get the AMD Epyc shirt?

I don't normally wear merch, but the shirt the AMD guys were wearing looks good.


1 hour ago, zeraine00 said:

But that will spell disaster for Ryzen's power-saving advantage against Kaby or SKL. IIRC, the process Ryzen was manufactured on was designed for mobile devices, which also explains why Ryzen hits a voltage wall around 4.1+ GHz.

No, Ryzen was designed for servers first, then scaled down... They hit a voltage wall because of GloFo's process.

CPU: Intel i7 7700K | GPU: ROG Strix GTX 1080Ti | PSU: Seasonic X-1250 (faulty) | Memory: Corsair Vengeance RGB 3200Mhz 16GB | OS Drive: Western Digital Black NVMe 250GB | Game Drive(s): Samsung 970 Evo 500GB, Hitachi 7K3000 3TB 3.5" | Motherboard: Gigabyte Z270x Gaming 7 | Case: Fractal Design Define S (No Window and modded front Panel) | Monitor(s): Dell S2716DG G-Sync 144Hz, Acer R240HY 60Hz (Dead) | Keyboard: G.SKILL RIPJAWS KM780R MX | Mouse: Steelseries Sensei 310 (Striked out parts are sold or dead, awaiting zen2 parts)


1 hour ago, leadeater said:

Would likely need more dies per socket and even more pins; 50% of the PCIe lanes are already taken up by the inter-socket link, so a 4-socket setup would leave at a minimum -32. Without a PCIe switch (a terrible idea for CPU linking), you need 32 lanes per CPU link, and that's 3 links per CPU in a 4-socket system. Basically, 64 lanes is only two-thirds of the 96 required to link 4 CPUs, with 0 left for PCIe slots.

Edit:

Bad math, fixing now, hold fire lol. Corrected, I think?

Edit 2:

That's what you already said, omfg. I'm going to bed, it's 2am.

The 2U w/ 8 packages is the most the Infinity Fabric can do with only 2 hops core to core. They'd also need more bandwidth to cover that situation. Introducing 3 hops pushes them out to 64 CCXs? One of the aspects scales as 2^n. However, above 2U is a small chunk of the market (I believe it was said 25%). AMD just needs 5% total market share for the time being.


9 minutes ago, XenosTech said:

No ryzen was designed for the server then scaled down... They hit a voltage wall because of GoFlo's process

I think what was meant was that the silicon fabrication process was licensed from Samsung and was originally designed for manufacturing mobile processors. That doesn't make it bad or anything, and is actually why Zen uses so little power at lower clock rates and lower voltages.


1 minute ago, leadeater said:

I think what was meant was that the silicon fabrication process was licensed from Samsung and was originally designed for manufacturing mobile processors. That doesn't make it bad or anything, and is actually why Zen uses so little power at lower clock rates and lower voltages.

Now that you word it that way, yeah, it's totally fine for anything as long as it delivers a good power punch while not making your PSU and wallet scream bloody murder.

CPU: Intel i7 7700K | GPU: ROG Strix GTX 1080Ti | PSU: Seasonic X-1250 (faulty) | Memory: Corsair Vengeance RGB 3200Mhz 16GB | OS Drive: Western Digital Black NVMe 250GB | Game Drive(s): Samsung 970 Evo 500GB, Hitachi 7K3000 3TB 3.5" | Motherboard: Gigabyte Z270x Gaming 7 | Case: Fractal Design Define S (No Window and modded front Panel) | Monitor(s): Dell S2716DG G-Sync 144Hz, Acer R240HY 60Hz (Dead) | Keyboard: G.SKILL RIPJAWS KM780R MX | Mouse: Steelseries Sensei 310 (Striked out parts are sold or dead, awaiting zen2 parts)

