
AMD announces open source ray tracing at GDC *Interview update*

8 minutes ago, Notional said:

AMD is doing great on the CPU side now. Sure, the IPC isn't identical to Intel's, but it doesn't have to be either, when AMD has other upsides. The high-end clock frequency seems to be similar with the Ryzen 2000 series now. Zen 2 is out next year, so they aren't resting on their laurels. But honestly, with ALL Radeon cards being sold to miners and such, I doubt RTG is lacking funding per se.

 

We shall see. It will certainly be difficult, but never impossible.

Agreed. I do think AMD achieved what they wanted with Mantle: steer the industry in a direction that favours AMD more than before. But for devs to take RTG seriously, they need not only market share, but specifically high-end market share.

 

 

Momentary peaks of profitability are not great, man. They need to keep that going and ensure it stays that way. They don't know what Intel has in store for the future. They can't take that risk; one misstep on their end (which also includes Intel making a leap forward), and they will be back to where they were before. That can't happen, because this time GPUs are not going to save them.

 

Mantle was the wrong direction lol, there was no way MS and Sony would have accepted Mantle as is. If you look at things now (Vulkan, DX12), the advantage for AMD was momentary. By the next generation, nV turned that disadvantage into near parity (Pascal). This always happens with APIs: when a company has an advantage, it's because the other company wasn't able to create a competitive enough architecture, and once they understand where their downfalls are, they correct them in their next generation. Weaknesses become strengths. The same will happen with Volta and its ilk; from the looks of Volta's white paper, it's going to be even stronger in future APIs.


Just now, Razor01 said:

Momentary peaks of profitability are not great, man. They need to keep that going and ensure it stays that way. They don't know what Intel has in store for the future. They can't take that risk; one misstep on their end (which also includes Intel making a leap forward), and they will be back to where they were before. That can't happen, because this time GPUs are not going to save them.

 

Mantle was the wrong direction lol, there was no way MS and Sony would have accepted Mantle. If you look at things now (Vulkan, DX12), the advantage for AMD was momentary. By the next generation, nV turned that disadvantage into near parity. This always happens with APIs: when a company has an advantage, it's because the other company wasn't able to create a competitive enough architecture, and once they understand where their downfalls are, they correct them in their next generation. Weaknesses become strengths.

But I think that is exactly what AMD is doing. Zen+ is coming out next month with better clock frequencies, tighter timings, higher natively supported RAM speeds and probably better memory compatibility as well. Zen 2 will be out next year on 7nm, so I think they are gathering momentum and doing just great in the long run too. Same for Threadripper and EPYC.

 

The problem is that x86 is so old and past retirement that we are mostly seeing specialized instruction sets, higher clock rates (more difficult to achieve now) and newer process nodes (even more difficult to achieve). x86 mostly has to rely on added cores now. Quantum computing can't get here fast enough. The point is that I think people are vastly overestimating Intel's technological advantages over AMD. They have vast R&D funding, so they can quickly take a new architecture, for instance, and change the process node it was meant for (like Coffee Lake). But their advantages are dwindling fast.



7 minutes ago, Notional said:

But I think that is exactly what AMD is doing. Zen+ is coming out next month with better clock frequencies, tighter timings, higher natively supported RAM speeds and probably better memory compatibility as well. Zen 2 will be out next year on 7nm, so I think they are gathering momentum and doing just great in the long run too. Same for Threadripper and EPYC.

 

The problem is that x86 is so old and past retirement that we are mostly seeing specialized instruction sets, higher clock rates (more difficult to achieve now) and newer process nodes (even more difficult to achieve). x86 mostly has to rely on added cores now. Quantum computing can't get here fast enough. The point is that I think people are vastly overestimating Intel's technological advantages over AMD. They have vast R&D funding, so they can quickly take a new architecture, for instance, and change the process node it was meant for (like Coffee Lake). But their advantages are dwindling fast.

 

When a company has the money and talent, they should never be underestimated either lol. Look what happened with the Core lineup against AMD. AMD should have been able to figure out Intel was going to make the Core lineup when they saw the Pentium M, a full year before the Core lineup was released; that was the first gaming laptop I was extremely impressed with. When I saw that XPS laptop from Dell, it was a no brainer, and I replaced my desktop for about a year with that laptop, an Athlon X2 desktop btw.


9 hours ago, Crunchy Dragon said:

I swear, AMD's been milking the Radeon name since they acquired ATI. I've seen it on GPUs, SSDs, RAM, and now ray tracing?

 

Next thing you know, they're gonna have refrigerators under the Radeon branding.

 

In all seriousness, I'm interested in seeing just how good of a thing this whole ray tracing business turns out to be. Also, if RTX requires dedicated hardware, what's stopping companies from making, like, a PCIe add-in card to process it? Or does it have to be on the same physical PCB as the GPU?

AMD's refrigerators, now with 100% more RADeON


2 minutes ago, Razor01 said:

When a company has the money and talent, they should never be underestimated either lol. Look what happened with the Core lineup against AMD. AMD should have been able to figure out Intel was going to make the Core lineup when they saw the Pentium M, a full year before the Core lineup was released; that was the first gaming laptop I was extremely impressed with. When I saw that XPS laptop from Dell, it was a no brainer, and I replaced my desktop for about a year with that laptop, an Athlon X2 btw.

Oh for sure. I just mean that there are some technological limitations for both Intel and AMD (mostly process node and clock frequency). And those are probably the two worst.

As for the Pentium M: by then, Intel's anti-competitive practices had already done massive damage to AMD. Intel had illegally acquired most of the market share and mind share, and AMD's finances were in shambles as a result. Their R&D simply could not keep up.

 

Today, AMD hit the jackpot with their ingenious Infinity Fabric, making multicore CPUs much cheaper to produce than Intel's. That has hit Intel hard across ALL their multicore chips (at least from 4 cores and up).



9 hours ago, Crunchy Dragon said:

Next thing you know, they're gonna have refrigerators under the Radeon branding.

RefridgeOn. xD

 

But there is a point to it: Get the branding out as much as possible and create mind share in the market.



25 minutes ago, Notional said:

Oh for sure. I just mean that there are some technological limitations for both Intel and AMD (mostly process node and clock frequency). And those are probably the two worst.

As for the Pentium M: by then, Intel's anti-competitive practices had already done massive damage to AMD. Intel had illegally acquired most of the market share and mind share, and AMD's finances were in shambles as a result. Their R&D simply could not keep up.

 

Today, AMD hit the jackpot with their ingenious Infinity Fabric, making multicore CPUs much cheaper to produce than Intel's. That has hit Intel hard across ALL their multicore chips (at least from 4 cores and up).

 

 

Well, right now they are still further behind Intel than they were back then.  AMD was making around 5 billion in net revenue per year from the time the Pentium M was released to the time the Core lineup was released.  To reach that again, they must make about 1.25 billion per quarter in net revenue.

 

Infinity Fabric is just an interconnect.  I don't give much importance to interconnects.  It makes things easier and cheaper for AMD, yeah, but that's all it does.  Intel can go that route too if they use their own mesh technologies, which they have as well.  I don't think Intel will go this route, because it's pretty easy to see what non-local data does to such devices.

 

AMD went this route because they really didn't have much choice; the amount of money needed to create larger multicore CPUs wasn't within their grasp with Zen.

 

AMD had the lead with the Athlon X2 in terms of technology; Intel caught up with the Pentium M, and Phenom vs the first Core lineup chips was close.  Then AMD went the wrong direction with their design, and that is what killed them.  Yeah, Intel stopped AMD from achieving their full potential in the OEM space, but that isn't what hurt them with the chips after Phenom; they had the money and the talent, but management did the unthinkable and overruled the engineers.


7 hours ago, laminutederire said:

Personally I'm just sick of bad software.  For crying out loud, it's possible to have Doom run at furiously high frame rates with high image quality. 

The problem is, when games don't run as well as Doom, how many people say "I need a new GPU" as opposed to "I need a new game"?

 

We see hardware reviewed all the time in terms of how well it runs a given set of games or programs, but we don't see many game or software reviews in terms of how well they utilize a set of existing hardware. While sometimes we do see complaints about how badly a game runs in general (usually in some form of pre-release state), in other cases games that run terribly even on high-end hardware get some sort of "aspirational aura" from it.

There's simply more pressure on hardware to achieve high FPS on every game and setting, no matter how ridiculous, than there is for games to run well on every hardware configuration, no matter how ridiculous.


26 minutes ago, Razor01 said:

Well, right now they are still further behind Intel than they were back then.  AMD was making around 5 billion in net revenue per year from the time the Pentium M was released to the time the Core lineup was released.  To reach that again, they must make about 1.25 billion per quarter in net revenue.

 

Infinity Fabric is just an interconnect.  I don't give much importance to interconnects.  It makes things easier and cheaper for AMD, yeah, but that's all it does.  Intel can go that route too if they use their own mesh technologies, which they have as well.  I don't think Intel will go this route, because it's pretty easy to see what non-local data does to such devices.

 

AMD went this route because they really didn't have much choice; the amount of money needed to create larger multicore CPUs wasn't within their grasp with Zen.

 

AMD had the lead with the Athlon X2 in terms of technology; Intel caught up with the Pentium M, and Phenom vs the first Core lineup chips was close.  Then AMD went the wrong direction with their design, and that is what killed them.  Yeah, Intel stopped AMD from achieving their full potential in the OEM space, but that isn't what hurt them with the chips after Phenom; they had the money and the talent, but management did the unthinkable and overruled the engineers.

AMD made over 5 billion in 2017. Of course that includes RTG and semi custom. Then again, EPYC hasn't even launched properly yet.

 

You should give importance to interconnects, especially when it allows AMD to make a 32-core CPU at over 90% yields. AMD can almost sell those chips for less than Intel spends on making theirs (ok, I exaggerate, but still). If this interconnect can enable high-end APUs down the line and be used in a meaningful way on GPUs, and even better, between the two, it is a huge trump card for AMD.
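Roughly, the chiplet yield argument looks like this. The sketch below assumes a simple Poisson defect model; the defect density and die areas are made-up illustrative numbers, not AMD's or Intel's actual figures.

import math

def poisson_yield(area_mm2, defects_per_mm2):
    # Fraction of dies with zero defects under a simple Poisson defect model.
    return math.exp(-area_mm2 * defects_per_mm2)

defect_density = 0.001               # defects per mm^2 -- illustrative only
chiplet_area = 200.0                 # one small multi-core die in mm^2 -- illustrative only
monolithic_area = 4 * chiplet_area   # a hypothetical monolithic die with 4x the cores

y_chiplet = poisson_yield(chiplet_area, defect_density)
y_monolithic = poisson_yield(monolithic_area, defect_density)

print(f"small chiplet yield:      {y_chiplet:.1%}")      # ~82% with these numbers
print(f"4x monolithic die yield:  {y_monolithic:.1%}")   # ~45% with these numbers
# Four good chiplets are still needed per big package, but defective chiplets are
# discarded individually, so far less silicon is thrown away than with one huge die.

Whatever the real defect density is, the exponential means a die four times the size yields dramatically worse than four small ones, which is the core of the cost argument.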

 

AMD probably didn't have much of a choice, but they turned that downside into an upside. Intel's mesh is a "mess" right now o.O



9 minutes ago, Notional said:

AMD made over 5 billion in 2017. Of course that includes RTG and semi custom. Then again, EPYC hasn't even launched properly yet.

 

You should give importance to interconnects, especially when it allows AMD to make a 32-core CPU at over 90% yields. AMD can almost sell those chips for less than Intel spends on making theirs (ok, I exaggerate, but still). If this interconnect can enable high-end APUs down the line and be used in a meaningful way on GPUs, and even better, between the two, it is a huge trump card for AMD.

 

AMD probably didn't have much of a choice, but they turned that downside into an upside. Intel's mesh is a "mess" right now o.O

 

 

Well, they grew other parts of their business once ATI was bought, so that needs to be taken into account, as you stated.

 

The yields are only important if Intel can't get the yields for their largest chips to acceptable limits.  And right now they are getting well above acceptable limits for their largest chips. 

 

It makes no sense to add in such a weakness as data locality performance, which can cause anywhere from 50 to 75% drops in performance in existing software.  This is the ONLY reason Ryzen doesn't perform as well in gaming :/ too, although the hit isn't as severe.  They took the lesser of two evils for them: avoid the potential loss in yields on a new process and take a slight hit on desktop performance.  But with EPYC we will see much larger hits on specific databases and workloads that need data locality, that is a certainty. 

 

Well, using Infinity Fabric to connect a GPU to a CPU, it's no longer an APU, it's a dGPU; the only differences are in the interconnect and pathways.  And doing this, if they want to share memory they've got to use HBM, which is cost prohibitive for the most part.  For higher-performance segments, you've got to have a lot of HBM, which shouldn't be shared.  We can see even with sharing that the cost of Kaby Lake G systems is quite high.


Oh neat, hopefully it's up to snuff with RTX in terms of quality and performance



Because this game changing tech can be done via software, after 10 years of failing to do so.

And even though it has been known for a decade, AMD manages to make it work in pure software... just a day after NVidia manages to solve the decade-old problem with hardware.

 

Yeah. Sure. 

 

I totally buy that, and I am not gonna claim only fanbois believe that, nope. Not gonna claim that.


Oh boy... I am already getting tired of all the ray tracing hype and it has barely even started.

How many more overhyped game technologies need to come out before people realize they are never as groundbreaking as they are made out to be?

 

 

PhysX

TressFX

Hybrid Crossfire

TrueAudio

DirectX 12 and how amazing the performance would be.

Most of the stuff in GameWorks (like FaceWorks).

 

 

How many times do we need to get videos like this before people stop getting excited over tech demos and other pre-release marketing material?

 


5 hours ago, Razor01 said:

 

I really don't know why you hate AMD so much, but it remains that they had superior products for quite a few generations without ever getting more than half of the market share, thanks to Intel's bribes. They could have earned 50% more if Intel had played fair, and they would have had enough money to take more risks when they were down, because they went down really fast once they had an inferior product.

 

What you don't take into account is that the IPC is quite nice anyway; it's the clocks that are lacking with Zen right now. However, node shrinks with Zen will be much easier for AMD than for Intel thanks to their "bad interconnect". The fact is that Intel hasn't done a node shrink for quite a while because their yields aren't good enough for them to really afford it. They end up making huge monolithic dies which cost a fortune for high core counts, and it costs them more to push a smaller node on their quad-cores than it does to push 2 more cores on their current node. That says a lot about the yields on smaller nodes.

Granted 7nm goes right with GloFo, AMD could have 5GHz+ CPUs with lower power consumption and a slightly lower IPC. With that you get to a point where Intel only has a small IPC advantage and still a price disadvantage. Looking at the situation like this, AMD isn't the bad company you try to depict anymore.


6 hours ago, Razor01 said:

The yields are only important if Intel can't get the yields for their largest chips to acceptable limits.  And right now they are getting well above acceptable limits for their largest chips. 

I guess you don't often look at E7 Xeons like the E7-8890v4 then, damn those are not cheap. The new Platinum Xeons actually cost more than the last generation too.

 

We have HPE DL580 Gen9 servers with 4x E7-8890v4's in them and they are not cheap, not by a long shot. A dual-CPU EPYC system is in a lot of cases equally capable, more so in some and less in others.

 

Production yields are a closely guarded secret though, Intel will never tell you what they are. When they talk about it it's always in relation to how well it's doing compared to the last technology and projections on when it's going to match that.

 

6 hours ago, Razor01 said:

It makes no sense to add in such a weakness as data locality performance, which can cause anywhere from 50 to 75% drops in performance in existing software.  This is the ONLY reason Ryzen doesn't perform as well in gaming :/ too, although the hit isn't as severe.  They took the lesser of two evils for them: avoid the potential loss in yields on a new process and take a slight hit on desktop performance.  But with EPYC we will see much larger hits on specific databases and workloads that need data locality, that is a certainty. 

No that is not the case. That was only a theory which got perpetuated so much it's become 'fact', actual benchmarks of EPYC processors do not show these claimed performance issues. In fact for any workload that is not pure AVX-512 the comparable AMD systems are faster.

https://www.servethehome.com/dual-amd-epyc-7601-processor-performance-and-review-part-1/2/

 

If you're worried about GPU acceleration and NUMA nodes then that is true, but only if you're stupid and don't configure your cluster correctly. It's a 1%-6.5% performance loss on the non-preferred NUMA node in this set of tests.

https://www.servethehome.com/nvidia-gpu-in-a-amd-epyc-server-tips-for-tensorflow-and-cryptomining/

 

Quote

Since we need to abide by the new NVIDIA EULA for CUDA 9, we then “moved the server to the data center”, where we are OK running CUDA for crypto mining in the data center.

lol had to quote that, too funny not to

 

Though this isn't actually an issue because proper HPC clusters are aware of this and you can configure workloads to run in optimal nodes and compute resource locations. Accounting for NUMA was already a thing in the HPC space, there's just more NUMA nodes to configure now.
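For what it's worth, "configuring workloads to run on the right nodes" can be as simple as CPU pinning. Here's a minimal, illustrative sketch on Linux, assuming the usual /sys/devices/system/node layout; real clusters would normally do this through numactl, cgroups or the job scheduler rather than by hand.

import os

def cpus_of_node(node):
    # Parse a kernel cpulist like "0-7,16-23" into a set of CPU ids.
    with open(f"/sys/devices/system/node/node{node}/cpulist") as f:
        cpulist = f.read().strip()
    cpus = set()
    for part in cpulist.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

node0_cpus = cpus_of_node(0)
os.sched_setaffinity(0, node0_cpus)   # 0 = the current process
# With default first-touch policy, memory the process allocates from here on
# will normally land on node 0 as well, keeping data local to the pinned CPUs.
print(f"Pinned to NUMA node 0 CPUs: {sorted(node0_cpus)}")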

 

Not only that Microsoft considers them more than good enough to replace the Intel driven L series Azure VMs with AMD EPYC in the new Lv2 series VMs.

 

Quote

We’re thrilled to have AMD available as part of the Azure Compute family. We’ve worked closely with AMD to develop the next generation of storage optimized VMs called Lv2-Series, powered by AMD’s EPYC™ processors. The Lv2-Series is designed to support customers with demanding workloads like MongoDB, Cassandra, and Cloudera that are storage intensive and demand high levels of I/O.

https://azure.microsoft.com/en-us/blog/announcing-the-lv2-series-vms-powered-by-the-amd-epyc-processor/

 

And then there is what HPE has to say about AMD EPYC

 

Quote

The benchmarks are SPECrate 2017_fp_base and SPECfp_rate2006, HPE says. An AMD Epyc dual-CPU 7601-based DL385 system scored 257 on SPECrate2017_fp_base (throughput) and 1980 on the SPECfp_rate2006, both higher than any other two-socket system score published by SPEC.

 

A Skylake DL380 with dual Xeon Gold 6152 22-core CPUs scored 197 on the SPECrate2017_fp_base in comparison.

HPE's DL385 Epyc release says: "The combination of core-count and features attains up to 50 per cent lower cost per virtual machine (VM) HPE sees over traditional server solutions." Those traditional server products are not identified.

https://www.theregister.co.uk/2017/11/21/hpe_brings_amds_epyc_processor_to_mainstream_2p2u_server_box/

 

Apologies if that all sounds like a sales pitch for AMD EPYC, but we don't need misrepresentations about products to perpetually spread unchecked. AMD EPYC is fine for server workloads; they compete well and will easily find a place in the market. What matters more is how the software is licensed: per core, go with Intel; per socket, go with AMD (highly generalized).


6 hours ago, Razor01 said:

Well, using Infinity Fabric to connect a GPU to a CPU, it's no longer an APU, it's a dGPU; the only differences are in the interconnect and pathways.

How is that any different from the current way they are interconnected? Infinity Fabric doesn't make something not an APU; being on package is what defines that. Same goes for Kaby Lake G: it's connected by PCIe, and Infinity Fabric is a transport-agnostic protocol that currently runs over PCIe in the implementations in use at the moment.

 

For something to be dedicated it has to do and contain only one thing, that's why there is a distinction between APUs and CPU + dGPU. I can't upgrade or customize the Vega M in a Kaby G because it's not a dedicated part.

 

6 hours ago, Razor01 said:

And doing this, if they want to share memory they've got to use HBM, which is cost prohibitive for the most part. 

Infinity Fabric is just a transition technology while Gen-Z gets finalized; it's based on the research for that. Gen-Z and its memory sharing does not require the same memory type at all; the whole purpose of it is that everything is treated as memory (memory semantic), where every device in the system talks the same native language, be it a GPU, CPU, SSD or NIC. If they all support Gen-Z then no protocol translation is required and they are all working on the same common fabric, reducing latency and improving inter-device bandwidth.

 

If you're worried Gen-Z is yet another vapor product going nowhere, it's backed by pretty much every tech giant other than Intel and Nvidia. HPE already has working hardware using it too.

https://venturebeat.com/2017/05/16/hp-enterprise-unveils-single-memory-160-terabyte-computer-the-machine/

https://www.pcworld.com/article/3197054/hardware/hpe-shows-off-the-machine-prototype-without-memistors.html


3 hours ago, leadeater said:

I guess you don't often look at E7 Xeons like the E7-8890v4 then, damn those are not cheap. The new Platinum Xeons actually cost more than the last generation too.

 

We have HPE DL580 Gen9 servers with 4x E7-8890v4's in them and they are not cheap, not by a long shot. A dual-CPU EPYC system is in a lot of cases equally capable, more so in some and less in others.

 

Production yields are a closely guarded secret though, Intel will never tell you what they are. When they talk about it it's always in relation to how well it's doing compared to the last technology and projections on when it's going to match that.

 

No that is not the case. That was only a theory which got perpetuated so much it's become 'fact', actual benchmarks of EPYC processors do not show these claimed performance issues. In fact for any workload that is not pure AVX-512 the comparable AMD systems are faster.

https://www.servethehome.com/dual-amd-epyc-7601-processor-performance-and-review-part-1/2/

 

If you're worried about GPU acceleration and NUMA nodes then that is true, but only if you're stupid and don't configure your cluster correctly. It's a 1%-6.5% performance loss on the non-preferred NUMA node in this set of tests.

https://www.servethehome.com/nvidia-gpu-in-a-amd-epyc-server-tips-for-tensorflow-and-cryptomining/

 


lol had to quote that, too funny not to

 

Though this isn't actually an issue because proper HPC clusters are aware of this and you can configure workloads to run in optimal nodes and compute resource locations. Accounting for NUMA was already a thing in the HPC space, there's just more NUMA nodes to configure now.

 

Not only that Microsoft considers them more than good enough to replace the Intel driven L series Azure VMs with AMD EPYC in the new Lv2 series VMs.

 

https://azure.microsoft.com/en-us/blog/announcing-the-lv2-series-vms-powered-by-the-amd-epyc-processor/

 

And then there is what HPE has to say about AMD EPYC

 

https://www.theregister.co.uk/2017/11/21/hpe_brings_amds_epyc_processor_to_mainstream_2p2u_server_box/

 

Apologies if that all sounds like a sales pitch for AMD EPYC, but we don't need misrepresentations about products to perpetually spread unchecked. AMD EPYC is fine for server workloads; they compete well and will easily find a place in the market. What matters more is how the software is licensed: per core, go with Intel; per socket, go with AMD (highly generalized).

 

Intel CPUs, if you were to buy them in bulk, you can get them for 60% less than the MSRP.

 

I know this well because my IT department said that when they upgrade the CPUs in our servers they don't pay anything close to MSRP, and actually I could buy parts at their prices too if I wanted to.

 

Where is the distributed database testing in all that?  Yeah, the thing that pretty much runs everything on the web and is used extensively in HPC?

 

This isn't about NUMA and GPU's.

 

That is a damn sales pitch, because you showed everything that plays to EPYC's strengths, and not where it FAILS.

 

AnandTech did talk about this in their EPYC preview.

 

https://www.anandtech.com/show/11544/intel-skylake-ep-vs-amd-epyc-7000-cpu-battle-of-the-decade/18

 

[AnandTech benchmark charts from the linked EPYC review]

 

Come on, what server software in the cloud or on the web doesn't use a large database?  What companies don't use large distributed databases anymore that will push EPYC like this?  HPC even uses large databases most of the time, because they need to store the data and retrieve it at different times!

 

And there really is no fixing this outside of turning off CCX modules or programming which CPU is doing what and where the data is being stored, which is a pain in the ass!
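To be fair, "knowing where the data is being stored" starts with knowing the node layout, which Linux exposes under /sys. A small, purely illustrative sketch (numactl --hardware prints the same information):

import glob, os, re

for node_dir in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    node = os.path.basename(node_dir)
    # Which CPUs belong to this node.
    with open(os.path.join(node_dir, "cpulist")) as f:
        cpulist = f.read().strip()
    # How much memory is attached to this node.
    mem_kb = 0
    with open(os.path.join(node_dir, "meminfo")) as f:
        for line in f:
            m = re.search(r"MemTotal:\s+(\d+) kB", line)
            if m:
                mem_kb = int(m.group(1))
    print(f"{node}: cpus {cpulist}, {mem_kb // 1024} MiB total")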

 

Infinity Fabric doesn't solve shit when it comes to the tech, guys; I don't get why anyone would think so, it's a damn interconnect!  We just had different interconnects before for different things; now it's just one thing for AMD systems.

 

Get a check on what you guys are talking about!

 

I'm not worried about Gen-Z because, again, as I stated, this does not mean RAM will be allocated differently; this isn't even what I was talking about.  You still need to allocate memory for each component, they can't share RAM address space.

 


10 minutes ago, Razor01 said:

I'm not worried about Gen-Z because, again, as I stated, this does not mean RAM will be allocated differently; this isn't even what I was talking about.  You still need to allocate memory for each component, they can't share RAM address space.

You need to read more into Gen-Z, because that's exactly what it allows. It's literally what HPE is doing with The Machine.

 

10 minutes ago, Razor01 said:

Intel CPUs, if you were to buy them in bulk, you can get them for 60% less than the MSRP.

You really think we don't buy in bulk ;).

 

11 minutes ago, Razor01 said:

I know this well because my IT department said that when they upgrade the CPUs in our servers they don't pay anything close to MSRP, and actually I could buy parts at their prices too if I wanted to.

Well considering I'm a Systems Engineer who buys and installs these systems I know exactly what they cost, we get even cheaper prices because we're a university.

 

12 minutes ago, Razor01 said:

Where is the distributed database testing in all that?  Yeah, the thing that pretty much runs everything on the web and is used extensively in HPC?

HPC tests are in the reviews I linked.

 

13 minutes ago, Razor01 said:

That is a damn sales pitch, because you showed everything that plays to EPYC's strengths, and not where it FAILS.

It's only strong in the price-equal comparison; Intel still takes over in the most high-end systems because they can do 4-8 sockets, which is something AMD cannot do.

 

It's not my problem you stated something woefully untrue about EPYC processors, here's the rebuttal and you can counter it if you like but actual evidence will be required. 


18 minutes ago, leadeater said:

You need to read more into Gen-Z, because that's exactly what it allows. It's literally what HPE is doing with The Machine.

 

You really think we don't buy in bulk ;).

 

Well considering I'm a Systems Engineer who buys and installs these systems I know exactly what they cost, we get even cheaper prices because we're a university.

 

HPC tests are in the reviews I linked.

 

It's only strong in the price-equal comparison; Intel still takes over in the most high-end systems because they can do 4-8 sockets, which is something AMD cannot do.

 

It's not my problem you stated something woefully untrue about EPYC processors, here's the rebuttal and you can counter it if you like but actual evidence will be required. 

 

 

Those tests you linked only show specific characteristics of HPC, not a complete test of what most people will need.

 

EPYC, Ryzen, all their problems stem from using Infinity Fabric; there is NO DAMN way around these problems unless they do what I stated: turn off CCX modules or program around the specific problem.

 

Well, here at NBC we have no problem getting a 60% discount on top-end Xeon parts ;)  we do use quite a bit of hardware for our online services, and even have dedicated servers just for the Super Bowl lol.

 

This has nothing to do with Gen-Z; Gen-Z will not solve the memory address issues by itself, programming will also have to be done differently!


18 minutes ago, Razor01 said:

 

 

Those tests you linked only show specific characteristics of HPC, not a complete test of what most people will need.

 

EPYC, Ryzen, all their problems stem from using Infinity Fabric; there is NO DAMN way around these problems unless they do what I stated: turn off CCX modules or program around the specific problem.

 

This has nothing to do with Gen-Z; Gen-Z will not solve the memory address issues by itself, programming will also have to be done differently!

[quoted memory subsystem benchmark chart]

See, that's a memory subsystem test. What it shows is nice, but it does not relate all that well to actual application performance from software that is aware of the CPU architecture; otherwise you would see it in the real-world tests I linked you. Those guys aren't just some Joe-average know-nothings despite being named Serve the Home.

 

Show me an actual application performance benchmark where the same priced Intel CPU performs significantly better, if it isn't using AVX-512 you won't find one. Also not the one done by Intel.

 

Really, why do you even care? The CPUs perform very well, but not at everything, and the same can be said for Intel. But CPU performance alone is almost meaningless in most purchasing decisions; more important factors come in, like the licensing model of the applications running on the server, which cost more than the CPUs do. I couldn't really care if the AMD CPU performs better if it costs me 40% more in software licenses than if I had gone with Intel, and it's the same in reverse.

 

For example if going with AMD saves me $2000 in hardware costs but costs me $12000 more in licensing I'm just not going to do it am I, not for a $10000 net deficit. Hardware is cheap, software is expensive.
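A toy illustration of that trade-off, with entirely made-up hardware and licence prices (the point is the structure of the comparison, not the numbers):

def total_cost(hw_cost, sockets, cores_per_socket, per_core_fee=0, per_socket_fee=0):
    # Hardware plus software licensing, licensed either per core or per socket.
    return hw_cost + sockets * cores_per_socket * per_core_fee + sockets * per_socket_fee

# Per-core licensed software: the higher-core-count box pays for every core.
amd   = total_cost(hw_cost=18000, sockets=2, cores_per_socket=32, per_core_fee=300)
intel = total_cost(hw_cost=20000, sockets=2, cores_per_socket=22, per_core_fee=300)
print(f"per-core licence : AMD ${amd:,} vs Intel ${intel:,}")    # Intel ends up cheaper overall

# Per-socket licensed software: core count is free, so the cheaper, denser box wins.
amd   = total_cost(hw_cost=18000, sockets=2, cores_per_socket=32, per_socket_fee=5000)
intel = total_cost(hw_cost=20000, sockets=2, cores_per_socket=22, per_socket_fee=5000)
print(f"per-socket licence: AMD ${amd:,} vs Intel ${intel:,}")   # AMD ends up cheaper overall

With these made-up numbers the per-core case favours the lower-core-count Intel box and the per-socket case favours the denser AMD box, which is the "per core go with Intel, per socket go with AMD" generalization from earlier.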

 

You're talking to me like I literally haven't actually used AMD EPYC processors, which I have. You can tell me as much as you like how bad they are but I don't have to take your word for it since I've done all the testing I need to and know how they perform and what they are good at and what they are not.


48 minutes ago, Razor01 said:

HPC even uses large databases most of the time, because they need to store the data and retrieve it at different times!

No they do not use databases, they use distributed file systems. Examples would include Lustre, HDFS, GPFS, Ceph etc. These all run on commodity server hardware and work on massive parallelism and are typically interconnected using Infiniband. Very interesting stuff, I bet you'd like it.

 

I personally like Ceph the most, but it's not the most common; in the last few years it's really improved a lot performance-wise, which is what had kept it out of the real front-end HPC storage deployments. Historically it's been used more for object storage and for OpenStack.


1 minute ago, leadeater said:

No they do not use databases, they use distributed file systems. Examples would include Lustre, HDFS, GPFS, Ceph etc. These all run on commodity server hardware and work on massive parallelism and are typically interconnected using Infiniband. Very interesting stuff, I bet you'd like it.

Man, I used to have Infiniband... :) 

... with NFS :P 

 

Now I use LACP on dual Gigabit :( But the computers are faster :D 


5 hours ago, LAwLz said:

TressFX

Don't agree with that one. Both in Tomb Raider, and specifically Rise of the Tomb Raider, it made a huge difference. The hair on Lara Croft is a huge part of her character and personality. Snow in the hair, her hair getting wet, and her fixing her wet hair after getting out of water, really made her come alive. Same can be said about Deus Ex Mankind Divided. Heck, I'd even say HairWorks is awesome in Witcher 3, if it didn't curb stomp performance.

 

10 hours ago, Razor01 said:

The yields are only important if Intel can't get the yields for their largest chips to acceptable limits.  And right now they are getting well above acceptable limits for their largest chips. 

Which is exactly the situation Intel is in. Why do you think their high core count CPU's are so much more expensive than AMD's? It's not just Intel being greedy. And I'm not talking MSRP either. If you are a big customer, you never pay MSRP.



4 minutes ago, SpaceGhostC2C said:

Man, I used to have Infiniband... :) 

... with NFS :P 

 

Now I use LACP on dual Gigabit :( But the computers are faster :D 

You went from Infiniband to dual GbE? What? That sounds like 1 step forward, 2 steps back lol. Also, not the first person I've heard of who's using NFS instead of a parallel file system; sometimes it's not worth trying to tame the beast if you don't need it.


58 minutes ago, leadeater said:

See, that's a memory subsystem test. What it shows is nice, but it does not relate all that well to actual application performance from software that is aware of the CPU architecture; otherwise you would see it in the real-world tests I linked you. Those guys aren't just some Joe-average know-nothings despite being named Serve the Home.

 

Show me an actual application performance benchmark where the same priced Intel CPU performs significantly better, if it isn't using AVX-512 you won't find one. Also not the one done by Intel.

 

Really, why do you even care? The CPUs perform very well, but not at everything, and the same can be said for Intel. But CPU performance alone is almost meaningless in most purchasing decisions; more important factors come in, like the licensing model of the applications running on the server, which cost more than the CPUs do. I couldn't really care if the AMD CPU performs better if it costs me 40% more in software licenses than if I had gone with Intel, and it's the same in reverse.

 

For example if going with AMD saves me $2000 in hardware costs but costs me $12000 more in licensing I'm just not going to do it am I, not for a $10000 net deficit. Hardware is cheap, software is expensive.

 

You're talking to me like I literally haven't actually used AMD EPYC processors, which I have. You can tell me as much as you like how bad they are but I don't have to take your word for it since I've done all the testing I need to and know how they perform and what they are good at and what they are not.

 

 

My original statement: Infinity Fabric introduced certain problems for Ryzen and EPYC (we got sidetracked a little bit).  Those problems can't be overlooked when talking about Infinity Fabric and multiple dies on the same package.  That is what I was getting at.  Infinity Fabric is not a cure-all for multiple dies. 

 

We shouldn't even think of it that way.

 

Just because AMD marketed the hell out of it as the next big thing. 

 

Infinity Fabric, what it solves:

 

1) Ease of integration of multiple different components as one subsystem

2) Cost savings because of the above

 

That is all it does.  That is all you can ask an interconnect to do.

 

All the other hubbub about it increasing performance or solving deep-seated problems with programming models?

 

Nada, it doesn't do any of that.

 

And then we have the other downside: it's harder to program for, since it requires NUMA-aware software.

