Intel announces Cascade Lake AP

porina
12 minutes ago, cj09beira said:

these will be limited to 2 socket systems though 

They always have 8S capable Xeons in their lineup. Give it time.


20 minutes ago, Amazonsucks said:

They always have 8S capable Xeons in their lineup. Give it time.

If they're using UPI to connect the two dies to each other, then a 2S config is already effectively 4S, and a 4S would in reality be 8S, so that's where I'd expect them to top out.


2 minutes ago, Amazonsucks said:

They always have 8S capable Xeons in their lineup. Give it time.

A 4-socket system of these is effectively 8 sockets anyway. I'd think the only thing limiting them from doing that is socket size/pin count for the UPI links.


9 hours ago, leadeater said:

Yea, that's a chassis that can fit 4 of those 2S hybrid blades into it; the socket is already the narrow kind. Great for density, as you get double that of a 1U server and quadruple that of a 2U server while still having a full complement of 24 2.5" bays in the front. You also only need 2 PSUs rather than the 8 you would need for the same number of traditional servers.

Maybe a candidate for a TechQuickie video: what types of servers are used in what kinds of use cases? I'm aware of the blade concept, but, for lack of a better description, are full-width servers still a thing?

 

I still see a potential benefit: if you're currently using a 4S system, going to this may offer some space savings, but I don't know how that might scale beyond that. This assumes the workload benefits from the sockets being directly connected rather than sitting in a separate physical instance.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


23 minutes ago, porina said:

Maybe a candidate for a TechQuickie video: what types of servers are used in what kinds of use cases? I'm aware of the blade concept, but, for lack of a better description, are full-width servers still a thing?

 

I still see a potential benefit: if you're currently using a 4S system, going to this may offer some space savings, but I don't know how that might scale beyond that. This assumes the workload benefits from the sockets being directly connected rather than sitting in a separate physical instance.

Yea, single-system servers are still a thing; there are also the more traditional blades and the hybrid ones that I showed. Traditional blades are front mounted, typically come with more per chassis, and don't have large amounts of storage (they use remote storage). Hybrid blades are the in-between: multiple systems per chassis but fewer of them, plus a decent amount of storage. The hybrid ones are very popular as virtual server hosts, particularly hyper-converged systems where each node contributes to a larger distributed pool of storage along with being a standard virtual host server.

 

Hybrid blades also have more room for PCIe devices like GPUs; however, those tend to be 2U height and half width, meaning 2 per chassis rather than 4.

 

Large multi-socket servers (4S/6S/8S) are still used where scale-out is not as effective, can't be used at all, or the system resources required to run the job won't fit on a 2S server. You can see an example of that in the video Linus did when he visited that Canadian university. Some workloads just need very large amounts of RAM, more than 2S can give. These Cascade Lake-AP parts would be able to cover that in 2S given the memory configurations possible with them.
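To make that memory point concrete, here's a rough capacity sketch using the die/memory-controller figures quoted later in the thread (2 dies × 6 memory controllers per package). The DIMM size and channel population are illustrative assumptions, not Intel specs:

```python
# Rough memory-capacity sketch for a 2S Cascade Lake-AP box.
# Dies and memory controllers per package come from the figures
# quoted in this thread; DIMM size and DIMMs-per-channel are
# assumptions for illustration only.
dies_per_package = 2
channels_per_die = 6
sockets = 2
dimm_gb = 64            # assumed DIMM capacity
dimms_per_channel = 1   # assumed population

channels = dies_per_package * channels_per_die * sockets
capacity_tb = channels * dimms_per_channel * dimm_gb / 1024
print(channels, "channels,", capacity_tb, "TB")  # 24 channels, 1.5 TB
```

Even with modest 64 GB DIMMs and one DIMM per channel, a 2S system lands at 1.5 TB, which is the kind of footprint that previously pushed people to 4S.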

 

Have a look at the HPE Apollo range of servers; there are even more weird and wonderful server types and configurations optimized for different tasks, like 8-way NVLink GPUs or 4U large-HDD-capacity multi-node/blade (not really blade-like) servers.

https://www.hpe.com/nz/en/product-catalog/servers/apollo-systems.hits-12.html


https://twitter.com/chiakokhua/status/1059831360930508800

 

Quote

So Cascade Lake has 3X UPI linking 2 24C XCC dies with 6 MC each. It's like a 2P in 1 package, comparable to a 2S Naples. So bisection bw between the dies is 10.6x16x3/8 = 63.6GB/s. That is very low compared to 2S Naples at 152 GB/s.

7:34 AM - 6 Nov 2018

@leadeater

 

@porina

 

Caught a discussion of the UPI connections for the AP part while waiting on the AMD event.
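The tweet's bisection-bandwidth figure can be reproduced with a one-liner; the inputs (3 UPI links, 10.6 GT/s, 16 bits wide) are taken straight from the quoted tweet:

```python
# Reproducing the die-to-die bisection bandwidth arithmetic from the
# quoted tweet: 3 UPI links x 10.6 GT/s x 16 bits wide, / 8 bits per byte.
upi_links = 3
gt_per_s = 10.6
width_bits = 16

bisection_gbs = upi_links * gt_per_s * width_bits / 8
print(round(bisection_gbs, 1), "GB/s")  # 63.6 GB/s
```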


2 hours ago, Taf the Ghost said:

Caught a discussion of the UPI connections for the AP part while waiting on the AMD event.

Thanks. Now I have to look up what bisectional bandwidth is.


1 hour ago, porina said:

Thanks. Now I have to look up what bisectional bandwidth is.

You mean what it is generally?  Or for some specific system?


6 minutes ago, Amazonsucks said:

You mean what it is generally? Or for some specific system?

I hadn't heard the term before, and have now looked it up. Not something you need to worry about on single socket systems...


1 minute ago, porina said:

I hadn't heard the term before, and have now looked it up. Not something you need to worry about on single socket systems...

Yeah, it's once you have to start talking over a network fabric to a lot of nodes.
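For anyone else looking the term up: bisection bandwidth is the total link bandwidth crossing the narrowest cut that splits a fabric into two equal halves. A toy calculator for two textbook topologies (link bandwidth values are made up for illustration):

```python
# Toy bisection-bandwidth calculator for two simple fabrics.
# Link bandwidths here are illustrative, not real hardware figures.

def ring_bisection(link_bw_gbs):
    # Cutting a ring into two equal halves always severs exactly 2 links.
    return 2 * link_bw_gbs

def mesh_2d_bisection(k, link_bw_gbs):
    # Cutting a k x k 2D mesh down the middle severs k links.
    return k * link_bw_gbs

print(ring_bisection(25))        # 50 (GB/s)
print(mesh_2d_bisection(8, 25))  # 200 (GB/s)
```

The point from the thread holds: on a single socket there's no cut to make, so the number only starts mattering once traffic crosses inter-die, inter-socket, or inter-node links.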


14 minutes ago, leadeater said:

Did you also see this on his Twitter?

 

[image: UPI topology diagram from the same Twitter thread]

Yes, it was actually his interactions with others in the CPU design/leak Twitter-sphere that I got the links from. There's a thread on the AnandTech forum where they've been going back and forth for a month on different aspects. It seems there is no L4 cache and the I/O will act like a switch.

 

It's also his layouts that convinced me the 2-die pairs would likely happen, as it drops the number of direct attachments, and those chiplets would be better off communicating directly in a 4-way connection. Zen leverages 4-directional links, given their mathematical efficiency for connectivity. (Plus they also have years of design work with that 4-way done, so they know how to use it well.)

