Mac Pro and XDR display orders available now + unboxing

williamcll
1 minute ago, hishnash said:

At this point they are very deeply embedded into Thunderbolt 3, and Zen 2 still does not have great support for it. Zen 3 will have USB4, so that would free things up.

That said, Apple could put in the work to get TB3 working well on Zen 2, but that would most likely take more than a few months.

Also, the Mac Pro has been in development for the last 2000 days, so I do not expect there were any Zen 2 samples AMD could have sent over at that time. Shame Apple had such a long lead time on this machine; it would have been much better if they had released it 2 years ago and then been able to iterate onto Zen this year.

I think the debate about whether they can or can't is moot. It's Apple we are talking about, not Acer: they can if they want to. They haven't because they choose not to, and all of us trying to find reasons why they couldn't is kind of silly, because at the end of the day it's a fairly trivial step for them, just like adding SD card support. They don't for reasons unrelated to contracts or ability.



16 minutes ago, mr moose said:

 

It would really just come down to the motherboard they chose; sometimes they just don't support many PCIe slots.

Apple are not selecting a motherboard from the shop window; they are building their own, after all. So they could have made a beast of a machine: six PCIe x16 Gen 4 slots with direct-to-CPU connections, and still have had plenty of PCIe left over for USB4, SSDs and networking. No need for the very costly PLX switch chip on the current Mac Pro.
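
As a rough sanity check on that claim, here is a back-of-the-envelope lane budget in Python. The 128-lane figure is EPYC Rome's published PCIe 4.0 lane count; the specific split across slots, drives, networking and USB4 is purely an illustrative assumption, not an actual Apple design.

```python
# Back-of-the-envelope PCIe lane budget for the hypothetical single-socket
# EPYC (Zen 2) build described above. 128 is EPYC Rome's published PCIe 4.0
# lane count; the peripheral allocations below are illustrative guesses.

total_lanes = 128                  # single-socket EPYC "Rome", PCIe 4.0

gpu_slots  = 6 * 16                # six full x16 Gen 4 slots, direct to CPU
nvme_ssds  = 2 * 4                 # two x4 NVMe drives
networking = 1 * 8                 # e.g. an x8 10GbE / 100GbE controller
usb4_tb    = 2 * 4                 # two x4 links for USB4 / Thunderbolt hosts

used = gpu_slots + nvme_ssds + networking + usb4_tb
print(f"used {used} of {total_lanes} lanes, {total_lanes - used} to spare")
# -> used 120 of 128 lanes, 8 to spare -- no PCIe switch (PLX) needed
```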


8 minutes ago, hishnash said:

Apple are not selecting a motherboard from the shop window; they are building their own, after all. So they could have made a beast of a machine: six PCIe x16 Gen 4 slots with direct-to-CPU connections, and still have had plenty of PCIe left over for USB4, SSDs and networking. No need for the very costly PLX switch chip on the current Mac Pro.

 

Given how new EPYC is, it makes sense there aren't huge options. Intel has been king of the hill for so long that motherboard vendors are more willing to invest in designs with crazy options, because they will sell. I dare say Supermicro wanted to see how the demand was before committing to such markets.

 



1 hour ago, mr moose said:

Given how new EPYC is, it makes sense there aren't huge options. Intel has been king of the hill for so long that motherboard vendors are more willing to invest in designs with crazy options, because they will sell. I dare say Supermicro wanted to see how the demand was before committing to such markets.

Lots of GPUs in a server node actually isn't a common thing, not high-end, high-power ones anyway. Two is common, four is a thing, and many will prefer using SXM between the GPUs (Nvidia) rather than PCIe bandwidth, as the most common configuration is 2 GPUs per CPU. There are workloads that just cannot handle communicating across PCIe through the QPI links.

 

Nvidia's DGX-2 with 16 V100s is more of a showpiece than something people actually use: too much power in a single chassis and too hard to utilize, just monstrous. You're better off splitting that down into more nodes running concurrently but slower, and getting better overall resource utilization for a negligible power increase from the extra CPUs. Depending on design and requirements you can even just have 1:1 CPU:GPU, e.g. the Cray XC50, because a lot of the PCIe lanes can be used for multiple high-speed interconnect cards, which do actually use all of (and technically can exceed) the PCIe slot bandwidth.
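
For the "can exceed PCIe slot bandwidth" point, a minimal arithmetic sketch, assuming a PCIe 3.0 x16 slot and a dual-port 100 Gb/s (EDR-class) adapter; other slot generations and cards will shift the numbers.

```python
# Quick arithmetic behind "interconnect cards can exceed PCIe slot bandwidth".
# Assumes a PCIe 3.0 x16 slot and a dual-port 100 Gb/s (EDR-class) adapter.

pcie3_x16_gbps = 16 * 8 * (128 / 130)   # 16 lanes x 8 GT/s x 128b/130b encoding
pcie3_x16_GBps = pcie3_x16_gbps / 8     # ~15.8 GB/s per direction

nic_ports = 2
nic_GBps  = nic_ports * 100 / 8         # 2 x 100 Gb/s ports = 25 GB/s line rate

print(f"PCIe 3.0 x16 slot : ~{pcie3_x16_GBps:.1f} GB/s per direction")
print(f"dual-port 100G NIC: ~{nic_GBps:.1f} GB/s aggregate line rate")
# The card's combined line rate exceeds what a single slot can feed, which is
# why such nodes spend PCIe lanes on several interconnect cards instead of GPUs.
```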

 

From what I have seen, MPP systems will go with smaller nodes and scale out with node count, whereas cluster systems will deploy nodes in the cluster to suit each workload type and let the job scheduler place work on the appropriate node types (CPU-heavy, GPU-heavy, RAM-heavy, etc.).
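
As a toy illustration of that scheduling idea, here is a minimal sketch in Python. The node kinds and jobs are made up; real schedulers such as Slurm express this through partitions and node features rather than anything like this code.

```python
# Toy "place each job on a node of the matching kind" scheduler sketch.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    kind: str          # "cpu", "gpu", or "highmem"
    free: bool = True

@dataclass
class Job:
    name: str
    needs: str         # which node kind the workload wants

def schedule(jobs, nodes):
    """Place each job on the first free node of the matching kind."""
    placements = {}
    for job in jobs:
        for node in nodes:
            if node.free and node.kind == job.needs:
                node.free = False
                placements[job.name] = node.name
                break
        else:
            placements[job.name] = "queued"   # no suitable node is free
    return placements

nodes = [Node("n01", "cpu"), Node("n02", "gpu"), Node("n03", "highmem")]
jobs  = [Job("render", "gpu"), Job("solver", "cpu"), Job("assembly", "highmem")]
print(schedule(jobs, nodes))
# -> {'render': 'n02', 'solver': 'n01', 'assembly': 'n03'}
```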

 

The most cut-and-dried win I can see for AMD (lots of PCIe lanes) is extremely high-speed, low-latency NVMe storage servers with a single CPU. Otherwise the number of PCIe lanes and GPUs per system doesn't really matter; that's a workstation thing.
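
The lane math behind that storage-server case, again as a rough sketch: the 128-lane figure is single-socket EPYC's, while the NIC and per-drive allocations are illustrative assumptions.

```python
# Lane math behind the NVMe storage-server case. 128 PCIe 4.0 lanes is the
# single-socket EPYC figure; the NIC and per-drive widths are assumptions.

total_lanes = 128
nic_lanes   = 2 * 16          # two x16 slots for 100/200 Gb networking
lanes_per_drive = 4           # each NVMe SSD hangs off a x4 link

max_drives = (total_lanes - nic_lanes) // lanes_per_drive
print(f"{max_drives} directly attached x4 NVMe drives, no switch required")
# -> 24 directly attached x4 NVMe drives, no switch required
```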
