AMD Zen server platform detailed

NumLock21
1 hour ago, it_dont_work said:

I would think in our current hardware ecosystem, power consumption and heat output would be a higher priority for large data centers. *IF* these chips are more efficient and require less power than running multi-CPU (2/4 CPU) boards, while putting out less heat, that could be a big selling point. Large data centers pay more to cool the server rooms than to run the machines.

Only the data centers that haven't migrated to recirculated water-cooled server racks; that gets the cooling cost down a lot.
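
For anyone unfamiliar, that cooling overhead is usually expressed as PUE (Power Usage Effectiveness = total facility power / IT power). A quick sketch with made-up numbers, just to show what moving from an air-conditioned room to water-cooled racks does to the bill:

# Rough PUE comparison with assumed numbers (not real facility data).
# PUE = total facility power / IT equipment power; 1.0 would be "perfect".
it_load_kw = 1000            # assumed IT load of the server hall
pue_air   = 1.7              # assumed: traditional air-conditioned room
pue_water = 1.2              # assumed: recirculated water-cooled racks

for name, pue in [("air cooled", pue_air), ("water cooled", pue_water)]:
    total_kw = it_load_kw * pue
    overhead_kw = total_kw - it_load_kw
    print(f"{name}: {total_kw:.0f} kW total, {overhead_kw:.0f} kW of that is cooling/overhead")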

As legit as this looks, this is WCCF so I won't click.

We have a NEW and GLORIOUSER-ER-ER PSU Tier List Now. (dammit @LukeSavenije stop coming up with new ones)

You can check out the old one that gave joy to so many across the land here

 

Computer having a hard time powering on? Troubleshoot it with this guide. (Currently looking for suggestions to update it into the context of <current year> and make it its own thread)

Computer Specs:

Spoiler

Mathresolvermajig: Intel Xeon E3 1240 (Sandy Bridge i7 equivalent)

Chillinmachine: Noctua NH-C14S
Framepainting-inator: EVGA GTX 1080 Ti SC2 Hybrid

Attachcorethingy: Gigabyte H61M-S2V-B3

Infoholdstick: Corsair 2x4GB DDR3 1333

Computerarmor: Silverstone RL06 "Lookalike"

Rememberdoogle: 1TB HDD + 120GB TR150 + 240 SSD Plus + 1TB MX500

AdditionalPylons: Phanteks AMP! 550W (based on Seasonic GX-550)

Letterpad: Rosewill Apollo 9100 (Cherry MX Red)

Buttonrodent: Razer Viper Mini + Huion H430P drawing Tablet

Auralnterface: Sennheiser HD 6xx

Liquidrectangles: LG 27UK850-W 4K HDR

 

8 hours ago, Matias_Chambers said:

32 cores, that's better than anything Intel has to offer. Intel has a 24-core Xeon E7 8890 v4 for $7,000+.

AMD has always been loved on the server side of the market. It's one of the reasons they survived.

4 hours ago, Bouzoo said:

Here you go. 

 

 

Too bad the BRIX sucked back then.

MSI GE72 Apache Pro-242 - (5700HQ : 970M : 16gb RAM : 17.3" : Win10 : 1TB HDD : Razer Anansi : Some mouse) - hooked up to a 34UM58-P (WFHD) in dual screen

 

iPad Air 2 (for school)

iPhone 6

Xbox One Forza 6 Limited Edition Blue

8 hours ago, TheGlenlivet said:

So it will play minecraft then...?

It will. And with 4 CPUs (32 cores and 64 threads each, for a total of 128 cores and 256 threads) it can pretty much take care of the graphics computation, better than a GPU can, if coded correctly.
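
Just to spell out the arithmetic for a quad-socket box (per-socket figures as above, SMT assumed at 2 threads per core):

sockets = 4
cores_per_socket = 32
threads_per_core = 2                             # SMT assumed

total_cores = sockets * cores_per_socket         # 128
total_threads = total_cores * threads_per_core   # 256
print(total_cores, total_threads)                # 128 256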

9 hours ago, Matias_Chambers said:

32 cores, that's better than anything Intel has to offer. Intel has a 24-core Xeon E7 8890 v4 for $7,000+.

Is that the one Linus dropped?

Corsair 600T | Intel Core i7-4770K @ 4.5GHz | Samsung SSD Evo 970 1TB | MS Windows 10 | Samsung CF791 34" | 16GB 1600 MHz Kingston DDR3 HyperX | ASUS Formula VI | Corsair H110 | Corsair AX1200i | ASUS Strix Vega 56 8GB | Internet http://beta.speedtest.net/result/4365368180

5 hours ago, leadeater said:

Only the data centers that haven't migrated to recirculated water-cooled server racks; that gets the cooling cost down a lot.

OK, just found out this is more mainstream than I thought. Also found a place in Iceland that just pipes air in from outside.

 

 

Silent build - You know your pc is too loud when the deaf complain. Windows 98 gaming build, smells like beige

6 hours ago, leadeater said:

Only the data centers that haven't migrated to recirculated water-cooled server racks; that gets the cooling cost down a lot.

Which a lot haven't, because of the required initial investment and server/service downtime. Facebook built a megasite that uses traditional AC cooling... (yeah, it's madness). Discovery had a documentary on it in 2012, so unless they rebuilt an entire server centre since then, one of America's largest data centres is still air cooled.

1 hour ago, Prysin said:

Which a lot haven't, because of the required initial investment and server/service downtime. Facebook built a megasite that uses traditional AC cooling... (yeah, it's madness). Discovery had a documentary on it in 2012, so unless they rebuilt an entire server centre since then, one of America's largest data centres is still air cooled.

You sure they haven't upgraded at least a part of their datacenter in (now) 5 years?

The ability to google properly is a skill of its own. 

15 minutes ago, Bouzoo said:

You sure they haven't upgraded at least a part of their datacenter in (now) 5 years?

Upgrades like those are quite hard to do during operations. Remember, you've got to lay all the pipes and infrastructure whilst everything is in operation, and of course data centres are extremely fragile to physical interference. The loss of a server or accidental power loss from damaging electrical wires can cause huge problems. Then there is the issue of leak testing everything on site in close proximity to hardware. You also need to take apart each server rack and install the brackets and waterblocks on the required components, all while not endangering the live operations of the servers. Not to mention the cost of thousands of waterblocks, pipes and coolant that need to be bought, filled, tested and installed. We are talking thousands of hours of work. You also need to disassemble the air cooling systems to make space for some of the watercooling gear, and you need pumps, electrical wiring, control systems and fail-safe systems run all over the place to maintain the right pressure and flow for efficient operation. And if you already have efficient air cooling (yes, you can have that if the data centre is in a cool region or underground), then you may not see sufficient gains in cooling performance to warrant the outright insane labor and equipment cost.

 

So in 5 years? Considering that Facebook is in ever more need of data-crunching power, I don't think so... We would have heard about it if Facebook services were being sluggish for weeks on end.

3 minutes ago, Prysin said:

So in 5 years? Considering that Facebook is in ever more need of data-crunching power, I don't think so... We would have heard about it if Facebook services were being sluggish for weeks on end.

Actually, when I think about it you are right, it would be all over the place if FB did such an upgrade. But as for the rest, it's not like they can't afford the manpower and infrastructure.

The ability to google properly is a skill of its own. 

14 hours ago, AnonymousGuy said:

I'm TeamViewered into my desktop so copy-paste was being a bitch and I had to edit-fix it :)

 

It's still broken, but googling seems to indicate Naples is a "fake" 32 core where it's 4x8 cores on the same package to give them 128 lanes.

AMD already admitted the 32-core will use an MCM.

That is two 16-core dies on a single package. This is because of yields. It doesn't make it a "fake" 32-core processor, just not a monolithic one.

AMD Ryzen R7 1700 (3.8ghz) w/ NH-D14, EVGA RTX 2080 XC (stock), 4*4GB DDR4 3000MT/s RAM, Gigabyte AB350-Gaming-3 MB, CX750M PSU, 1.5TB SDD + 7TB HDD, Phanteks enthoo pro case

This will be a big hit for customers that need PCIe lanes under space constraints, as it lets you get as many lanes in a quarter of the space of an Intel solution.

In case the moderators do not ban me as requested, this is a notice that I have left and am not coming back.

16 hours ago, Matias_Chambers said:

32 cores, that's better than anything Intel has to offer. Intel has a 24-core Xeon E7 8890 v4 for $7,000+.

Then again, AMD doesn't have pricing available yet ;)

 

And seeing the prices socket G34 still goes for, I doubt AMD is gonna be undercutting Intel in that field by large margins.

1 hour ago, Bouzoo said:

You sure they haven't upgraded at least a part of their datacenter in (now) 5 years?

 

1 hour ago, Prysin said:

Upgrades like those are quite hard to do during operations. Remember, you've got to lay all the pipes and infrastructure whilst everything is in operation, and of course data centres are extremely fragile to physical interference. The loss of a server or accidental power loss from damaging electrical wires can cause huge problems. Then there is the issue of leak testing everything on site in close proximity to hardware. You also need to take apart each server rack and install the brackets and waterblocks on the required components, all while not endangering the live operations of the servers. Not to mention the cost of thousands of waterblocks, pipes and coolant that need to be bought, filled, tested and installed. We are talking thousands of hours of work. You also need to disassemble the air cooling systems to make space for some of the watercooling gear, and you need pumps, electrical wiring, control systems and fail-safe systems run all over the place to maintain the right pressure and flow for efficient operation. And if you already have efficient air cooling (yes, you can have that if the data centre is in a cool region or underground), then you may not see sufficient gains in cooling performance to warrant the outright insane labor and equipment cost.

 

So in 5 years? Considering that Facebook is in ever more need of data-crunching power, I don't think so... We would have heard about it if Facebook services were being sluggish for weeks on end.

There are actually two kinds of data center water cooling techniques: rack only and direct to chip. The second is much less common, way more expensive and higher risk, but more effective. Only the real hardcore do this.

 

One of the other reasons rack water cooling is more popular is that you don't have any storage array options that could even support direct-to-chip AND HDD cooling; IBM did make a crazy custom one.

 

Rear-door active water cooling can be retrofitted into existing racks, and there isn't any significant risk to the rest of the room/equipment during installation.

http://www.chilleddoor.com/discover-chilleddoor

http://www.chilleddoor.com/files/uploads/2014/10/MOT_chilledDoor_9-2016_FINAL.pdf

 

Another more common way to reduce data center cooling requirements is to use shipping containers and make each one a fully self-sufficient closed system. It's cheaper since you can more accurately determine the cooling required, there is less volume to cool, and because of the smaller volume you get better air movement and no heat inversion zones/dead zones.

 

You can just stack containers on top of each other, as they are designed to do, as well as put them in rows.

 

Our disaster recovery (DR) data center is a container; if required it can be moved, which makes it well suited to this task.

12 hours ago, AluminiumTech said:

Xeon Phi at basically Atom potato cores

 

But 1 potato core can turn into 4 potato cores.

Intel Xeon E5 1650 v3 @ 3.5GHz 6C:12T / CM212 Evo / Asus X99 Deluxe / 16GB (4x4GB) DDR4 3000 Trident-Z / Samsung 850 Pro 256GB / Intel 335 240GB / WD Red 2 & 3TB / Antec 850w / RTX 2070 / Win10 Pro x64

HP Envy X360 15: Intel Core i5 8250U @ 1.6GHz 4C:8T / 8GB DDR4 / Intel UHD620 + Nvidia GeForce MX150 4GB / Intel 120GB SSD / Win10 Pro x64

 

HP Envy x360 BP series Intel 8th gen

AMD ThreadRipper 2!

5820K & 6800K 3-way SLI mobo support list

 

18 hours ago, AnonymousGuy said:

I'm TeamViewered into my desktop so copy-paste was being a bitch and I had to edit-fix it :)

 

It's still broken, but googling seems to indicate Naples is a "fake" 32 core where it's 4x8 cores on the same package to give them 128 lanes.

That's just how they are arranged inside; it doesn't mean they are fake.

7 hours ago, cj09beira said:

That's just how they are arranged inside; it doesn't mean they are fake.

And if yields are a problem, breaking up a 32-core into smaller dies should mean a higher success rate per wafer? As long as the performance loss is not significant, I would guess that this would not be a factor.

 

12 minutes ago, LoE Ferret said:

And if yields are a problem, breaking up a 32-core into smaller dies should mean a higher success rate per wafer? As long as the performance loss is not significant, I would guess that this would not be a factor.

 

It definitely helps with yields.

But it also means AMD doesn't need to have another die being made at GlobalFoundries.

3 hours ago, cj09beira said:

It definitely helps with yields.

But it also means AMD doesn't need to have another die being made at GlobalFoundries.

There are two server architectures, Zeppelin (APU) (8c, 12c(?), 16c) and Naples (CPU) (16c(?), 24c and 32c). Keep in mind the details about both of these are all over the damn place, with tons of contradicting details between sources and even from the same source; online news sites need to do a way better job of amending articles or taking them down when new information comes to light.

 

I've seen some say both are based off 8-core modules, multi-chip package when required, and others saying Naples uses 16-core modules, multi-chip package when required. I guess both are viable, but I'm going to bet on 16-core dies for Naples.

 

A multi-chip package for a 24-core offering will be much cheaper using two 16-core dies than three 8-core dies or a single die (where one defect can invalidate the entire chip). Defective cores can be disabled when they fail to validate; for a 16/32-core package you would need 3 perfect 8-core dies or 1 large perfect die. I guess the chance of defects in a larger 16-core die is higher. Joining 2 dies in a single package is cheaper than 3, and less complex.
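
To put rough numbers on the yield argument, here's a minimal sketch using the simple Poisson yield model (chance of a defect-free die = exp(-D x A)); the defect density, die areas and wafer area are placeholder assumptions, not real Zen figures. The point is that small dies get tested and binned before being paired into a package, so one defect never writes off 32 cores' worth of silicon:

import math

# Simple Poisson yield model: P(die has zero defects) = exp(-D * A).
# All numbers below are assumptions for illustration, not real Zen figures.
defect_density = 0.2                  # defects per cm^2 (assumed)
area_16c = 3.0                        # cm^2, hypothetical 16-core die
area_32c = 2 * area_16c               # cm^2, hypothetical monolithic 32-core die
wafer_area = 700.0                    # cm^2 of usable silicon, roughly a 300 mm wafer

good_16c_dies = (wafer_area / area_16c) * math.exp(-defect_density * area_16c)
good_32c_dies = (wafer_area / area_32c) * math.exp(-defect_density * area_32c)

# MCM: pair up known-good 16-core dies after test, two per 32-core package.
mcm_packages = good_16c_dies / 2

print(f"monolithic 32-core chips per wafer: {good_32c_dies:.0f}")
print(f"MCM 32-core packages per wafer:     {mcm_packages:.0f}")

With these made-up numbers the MCM approach yields roughly twice as many sellable 32-core parts per wafer, before even counting the salvage of dies with a dead core.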

 

The downside to multi-die packages is you lose some cache coherency efficiency, but there is already internal separation between groups of cores and cache anyway: NUMA nodes. This exists in Intel processors and in these AMD processors (from what I've read, 4 cores per node).

 

As core counts get higher and process nodes get smaller, with die size and complexity getting higher (or the same size but more transistors), I think monolithic dies are becoming a worse choice than multi-chip packages from a technical perspective, and way more so from a business perspective (yields etc.).

 

Huge dies and many cores are really complex, and making full use of them is a challenge. It's easy in the virtualization world, but that is an area where monolithic vs multi-chip makes zero difference unless you assign more cores to a VM than a single chip module has. This is already an issue anyway if you assign more cores than are on a single CPU, so no change here at all.

 

With a multi-chip design you can focus on making each chip extremely efficient, make the design compiler- and OS-aware, then just add dies to give the core/performance package required. Cross-die core communication still happens through the L3 cache as it would in a monolithic die, but instead of there being a single L3 cache there is L3 cache per die; this can also be a good thing since it will prevent a subset of cores stealing all the L3 cache.
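
On the "OS aware" point: Linux already exposes the per-die split as NUMA nodes, so NUMA-aware software can keep itself on one die. A minimal sketch (assuming Linux and the standard sysfs layout; node 0 is just an example):

import os

# Pin this process to the CPUs of one NUMA node (one die/module) so its
# threads keep memory and L3 traffic local. Linux-only sketch.
def cpus_of_node(node):
    with open("/sys/devices/system/node/node{}/cpulist".format(node)) as f:
        cpus = set()
        for part in f.read().strip().split(","):
            if "-" in part:
                lo, hi = part.split("-")
                cpus.update(range(int(lo), int(hi) + 1))
            else:
                cpus.add(int(part))
    return cpus

os.sched_setaffinity(0, cpus_of_node(0))   # 0 = the current process
print("now restricted to CPUs:", sorted(os.sched_getaffinity(0)))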

39 minutes ago, TheCherryKing said:

Will this use the AM4 socket or an LGA-Style socket like G34?

Naples uses LGA, not PGA (AM4)

On 12/01/2017 at 5:06 AM, RexinOridle said:

It will. And with 4 CPUs (32 cores and 64 threads each, for a total of 128 cores and 256 threads) it can pretty much take care of the graphics computation, better than a GPU can, if coded correctly.

No, it cannot outmatch a GPU at graphics (especially with mods), because memory bandwidth would be a limiting factor.
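
For rough scale, a back-of-envelope sketch (assuming 8 channels of DDR4-2400 per socket, the figure floating around for Naples, and a 1080 Ti class card as the GPU reference; all numbers are rounded assumptions):

# Back-of-envelope bandwidth comparison; figures are assumptions, not verified specs.
bytes_per_transfer = 8                         # one 64-bit DDR4 channel

# One socket, assuming 8 channels of DDR4-2400.
channels, mt_per_s = 8, 2400e6
cpu_bw = channels * mt_per_s * bytes_per_transfer    # bytes/s, ~154 GB/s

gpu_bw = 484e9                                 # ~484 GB/s, roughly a GTX 1080 Ti

print(f"one CPU socket:   ~{cpu_bw / 1e9:.0f} GB/s system RAM bandwidth")
print(f"one high-end GPU: ~{gpu_bw / 1e9:.0f} GB/s VRAM bandwidth")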

1 minute ago, Prysin said:

no it cannot outmatch a GPU at the graphics (especially with mods) because memory bandwidth would be a limiting factor.

 

Yeah, you are right. Didn't think of it that way.
