CarlBar

Buildzoid X570 First Look

Recommended Posts

3 minutes ago, Stefan Payne said:

Scraping the bottom of the barrel with an 11-year-old chipset; we're talking over a decade-old technology, it can hardly be compared to technology today. If anything it makes them look worse for hitting 28W

 

Come on Stefan, you're better than that


My old X48 chipset even has its own dedicated IHS and it looks so freaking awesome! They need to add an IHS to those X570 boards so they look just as awesome, or maybe add RGB to bring it up to date. And all of a sudden the price of boards goes up by $50. 😑

 

 

(attached image: x48-8.jpg)


Intel Xeon E5 1650 v3 @ 3.5GHz 6C:12T / CM212 Evo / Asus X99 Deluxe / 16GB (4x4GB) DDR4 3000 Trident-Z / Samsung 850 Pro 256GB / Intel 335 240GB / WD Red 2 & 3TB / Antec 850w / RTX 2070 / Win10 Pro x64

HP Envy X360 15: Intel Core i5 8250U @ 1.6GHz 4C:8T / 8GB DDR4 / Intel UHD620 + Nvidia GeForce MX150 4GB / Intel 120GB SSD / Win10 Pro x64

 

HP Envy x360 BP series Intel 8th gen

AMD ThreadRipper 2!

5820K & 6800K 3-way SLI mobo support list

 


Lol, and people think this on X570 is extreme. We used these chipset coolers from Thermalright to cool our shit back in the day :D Granted, it was for the northbridge back then when they were still a thing, but still.


So you had a Thermalright TRUE cooler on the CPU and this HR05 underneath it on the NB :D

(attached image: tr-hr05.jpg)

2 minutes ago, Arika S said:

Scraping the bottom of the barrel with an 11-year-old chipset; we're talking over a decade-old technology, it can hardly be compared to technology today. If anything it makes them look worse for hitting 28W

 

Come on Stefan, you're better than that

Just shows 2 things:

a) We already had chipsets that consumed 30W or more and ran more or less reliably without a fan. X58 would be a bit more appropriate here, since X48 has the memory controller inside and X58 does not - but X58 bridged QPI to PCIe with a ton of PCIe lanes (manufactured in 65nm) at 24W, also without a fan.

 

b) Early implementations of a certain technology are not necessarily the most efficient ones, and there might (or might not) be PCIe 4.0 implementations that don't consume as much power in the future. Or, for example, a new "low interference" connector specifically for M.2 and a "short distance" PHY, which also consumes power.

 

Bottom line:
We should all hold our horses and wait for the final product.

Right now we do not know anything about the insides of the Chipset...

 

It looks like it's at least a 20-lane PCIe 4.0 switch (4x in, 16x out)

 

The Biostar leak lists 3x PCIe x1 (=3 lanes),

2x M.2 PCIe Gen 4 x4 (=8),

1x PCIe x16 slot with 4 lanes from the SB,

and the missing 1 lane is used for the GBit Ethernet chip.

 

That means that the Southbridge has 16 PCIe 4.0 Lanes in total - from the Chipset.

PCIe Switches tend to add all possible PCIe Lanes...
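The lane accounting above can be sanity-checked with a quick sketch. The counts come straight from the Biostar leak as quoted; nothing here is a confirmed spec:

```python
# Downstream PCIe 4.0 lane budget of the X570 chipset,
# per the Biostar leak quoted above (unconfirmed numbers).
downstream = {
    "3x PCIe x1 slots": 3 * 1,
    "2x M.2 Gen4 x4": 2 * 4,
    "PCIe x16 slot wired as x4": 4,
    "Gigabit Ethernet chip": 1,
}

uplink_in = 4  # PCIe 4.0 x4 link up to the CPU
total_out = sum(downstream.values())

print(total_out)              # 16 downstream lanes
print(uplink_in + total_out)  # 20 ports total, matching a "20x" switch
```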


"Hell is full of good meanings, but Heaven is full of good works"

Posted · Original Poster (OP)

Ah the good old days of the Northbridge.


Buildzoid did specify the heating issues only occur with certain storage configurations, which apparently include a lot of M.2 drives, so I'd guess it is indeed related to PCIe 4.0. The power draw of any given electrical link in semiconductor electronics tends to scale much faster than linearly with the gain in frequency.

 

That's why 7nm can deliver either half the power or 20% more frequency: scaling the frequency by a mere 20% (with the voltage it requires) causes the silicon to draw roughly twice as much power as at the old frequency. And PCIe 4.0 doubles the transfer rate compared to PCIe 3.0, so you're probably looking at a massive uptick in power draw and the thermal load that comes with it.
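That "20% frequency, twice the power" claim can be illustrated with the classic dynamic-power model P = C·V²·f. The ~29% voltage increase below is an illustrative assumption chosen to make the numbers work out, not a measured figure:

```python
# Dynamic power model for CMOS: P = C * V^2 * f.
# Illustrative values only; real voltage/frequency curves vary per process.
def dynamic_power(c, v, f):
    return c * v**2 * f

base = dynamic_power(c=1.0, v=1.0, f=1.0)

# Assume a 20% frequency bump needs ~29% more voltage:
# 1.29^2 * 1.2 ≈ 2.0, i.e. roughly double the power.
boosted = dynamic_power(c=1.0, v=1.29, f=1.2)
print(boosted / base)  # ≈ 2.0
```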

30 minutes ago, Stefan Payne said:

Still: we don't know what the Chipset has integrated, how many PCIe 4.0 goes in, how many PCIe 4.0 Lanes are provided by the Chipset, how many S-ATA and other goodies are integrated.

Presumably it will be the same number as the PCIe 3.0 lanes previously, as that will be a limit of AM4 unless they dig out some reserved pins not used before. Still, a doubling in bandwidth could mean many more things can be dangled off the chipset, thus increasing the load when it's used.


Main rig: Asus Maximus VIII Hero, i7-6700k stock, Noctua D14, G.Skill Ripjaws V 3200 2x8GB, Gigabyte Windforce 980Ti, Corsair HX750i, In Win 303 NVIDIA, Samsung SM951 512GB, WD Blue 1TB, HP LP2475W 1200p wide gamut

Gaming system: Asrock Z370 Pro4, i7-8086k stock, Noctua D15, G.Skill TridentZ 3000C14 2x8GB, Asus 1080 Ti Strix OC, Fractal Edison 550W PSU, Corsair 600C, Optane 900p 280GB, Crucial MX200 1TB, Sandisk 960GB, Acer Predator XB241YU 1440p 144Hz G-sync

Ryzen rig: Asrock B450 ITX, R5 2600, Noctua D9L, Corsair Vengeance LPX 3000 2x4GB, Vega 56, Corsair CX450M, NZXT Manta, Crucial MX300 525GB, Acer RT280K

VR rig: Asus Z170I Pro Gaming, i7-6600k stock, Silverstone TD03-E, Kingston Hyper-X 2666 2x8GB, Zotac 1070 FE, Corsair CX450M, Silverstone SG13, Samsung PM951 256GB, HTC Vive

Gaming laptop: Asus FX503VD, i5-7300HQ, 2x8GB DDR4, GTX 1050, Sandisk 256GB SSD

Total CPU heating: i7-7800X, 2x i7-6700k, i7-6700HQ, i5-6600k, i5-5675C, i5-4570S, i3-8350k, i3-6100, i3-4360, 2x i3-4150T, E5-2683v3, 2x E5-2650, R7 1700, 1600

5 minutes ago, porina said:

Presumably it will be the same number as the PCIe 3.0 lanes previously,

No, see above.

The Chipset has 16 Lanes

Question is: half 4.0 and the other half 3.0, or all 4.0?

5 minutes ago, porina said:

as it will be a limit of AM4 unless they dig out some reserved pins not used before. 

There are other possibilities; you can use the HDMI/DP ports and switch that. Though it's not recommended, it is possible.



22 minutes ago, Stefan Payne said:

Just shows 2 things: [...] That means that the Southbridge has 16 PCIe 4.0 lanes in total - from the chipset. PCIe switches tend to add all possible PCIe lanes...

Are you sure there's a PCI-E switch? I was under the impression AMD dropped all plans for that because it's expensive (artificially expensive from what I can understand). 

 

Maybe they've created a DMI equivalent?


Is there any reason to get the new chipset for something like: 8-core CPU, 2x8GB RAM, 1 GPU, 1 NVMe SSD, 1 SATA SSD, 1 HDD? Seems like there isn't?


MSI GX660 + i7 920XM @ 2.8GHz + GTX 970M + Samsung SSD 830 256GB

6 minutes ago, Trixanity said:

Are you sure there's a PCI-E switch? I was under the impression AMD dropped all plans for that because it's expensive (artificially expensive from what I can understand). 

In the Chipset, yeah.

How else could they have that amount of PCIe Lanes??

Quote

Maybe they've created a DMI equivalent?

You mean something such as Infinity Fabric? ;)



7 minutes ago, Stefan Payne said:

No, see above.

The Chipset has 16 Lanes

I was thinking of the link between CPU and chipset only. What else is connected to the chipset is the "extra stuff" but we now have double the bandwidth to use with it.
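For rough numbers on that doubling, here's a back-of-envelope sketch of the usable bandwidth of an x4 CPU-to-chipset link at PCIe 3.0 vs 4.0 transfer rates, assuming 128b/130b line encoding (protocol overhead beyond encoding is ignored):

```python
# Approximate usable bandwidth (GB/s) of a PCIe x4 link.
# 8 GT/s = PCIe 3.0, 16 GT/s = PCIe 4.0; 128b/130b line encoding.
def x4_bandwidth_GBps(gt_per_s):
    per_lane = gt_per_s * (128 / 130) / 8  # GB/s per lane
    return 4 * per_lane

print(round(x4_bandwidth_GBps(8), 2))   # PCIe 3.0 x4: ~3.94 GB/s
print(round(x4_bandwidth_GBps(16), 2))  # PCIe 4.0 x4: ~7.88 GB/s
```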



5 hours ago, GoldenLag said:

i mean no one cares their 9900k is drawing between 150-200 watts. and when it's limited to 95 watts, in multicore workloads it runs like an r7 2700x.

 

people don't care about power, people care about the side effects. which can all be mitigated.

Chief, TDP isn't power drawn. Ryzen also draws more than its TDP.


Seagull eat fish. But fish belong to Mafia. Mafia punch seagull for not respecting Mafia. Seagull say "No, please! I have child!"

Mafia punch seagull with child.

Pyo.

2 hours ago, LAwLz said:

Not sure what might have happened to make the northbridge run so hot and power hungry all of a sudden.

PCIe 4.0 and a decent number of lanes; it's more or less the only thing it can be. Considering storage configuration is specifically mentioned as the cause, it would have to be high-throughput NVMe going through the chipset.

Just now, Stefan Payne said:

In the Chipset, yeah.

How else could they have that amount of PCIe Lanes??

No, they (=ATI) always used PCIe between NB and SB, and it's still PCIe now.

Well, I'm not saying that the lanes aren't there but if they do use a switch then (according to Ian Cutress) we can look forward to the cheapest motherboards costing at least $300-400 because a switch costs $200 to add to a board.

 

That's why I'm questioning the move unless the switch is optional at which point most boards will still be limited to x4 lanes. You either need a switch or use a different interface to split the lanes like that - Intel does the latter.

 

ATI doesn't exist anymore and is hardly relevant to the discussion.

1 hour ago, Stefan Payne said:

can give you one with 30W:

I'm already afraid for Threadripper's chipset...


PSU Tier List//Graphics card (cooling) tier list//Build Guide Megathread//Motherboard Tier List//Linux Guide//Build logs//Before troubleshoot//Mark Solved//Off Topic//Community standards

Don't forget to quote or mention me

 

Primary PC:

Spoiler

CPU: I5-8600k  @4.5 ghz  GPU: GTX 1070 ti EVGA SC Gaming   RAM: 8+8 3360 mhz DDR4 Trident Z   MOBO: MSI Gaming Pro Carbon AC   HDD: 1 TB 7200 RPM Seagate Baracudda, 1 TB 5400 RPM Samsung ECOGREEN   SSD: Samsung 860 EVO 500 GB   Soundcard: built in   Case: Cooler Master Masterbox Lite 5 RGB   Screen: Salora 40LED1500

 

Secondary PC: Cedar mill

Spoiler

CPU: i3-2130   GPU: Intel HD graphics   RAM: 4+2 GB 1333 mhz DDR3    MOBO: HP H series   HDD: 320 GB WD Black 7200 RPM   PSU: HP 250 watt   Soundcard: built in   Case: Sunbeam Quarterback   Screen: IIyama Prolite T2240MTS, Samsung SyncMaster710N

 

Server: CookieVault

Spoiler

CPU: core2dual E8400   GPU: Intel HD graphics   RAM: 2+1+1+1 gb 1333 mhz ddr3   MOBO: HP Q series   HDD: 4x 1tb 5400 RPM Samsung Spinpoint Ecogreen   Soundcard: built in   Case: Compaq 6000 pro mt   Screen: Samsung SyncMaster710n

 

Laptop: Acer TravelMate 8573t

Spoiler

CPU: I3-2330M   GPU: Intel HD graphics   RAM: 8+2 GB 1333 mhz DDR3   MOBO: Acer   SSD: 250 gb mx500 sata   Soundcard: built in   Case: Acer TravelMate 8573t   Screen: TN 768p

 

Consoles:

Spoiler

PS4 slim glacier white 500 gb, PS4 FTP Special Edition 500 gb, Xbox, 3 DS lites, DSI XL, Gameboy Advanced Color, PS Vita v2, Wii, PS3 500 gb

 

3 minutes ago, GoldenLag said:

Chief, if it wasnt obvious in my comment, i know this.

What you said heavily suggests otherwise.



1 minute ago, Trixanity said:

Well, I'm not saying that the lanes aren't there but if they do use a switch then (according to Ian Cutress) we can look forward to the cheapest motherboards costing at least $300-400 because a switch costs $200 to add to a board.

That's the PLX stuff.

Intel also has a PCIe Switch in their Chipset and they don't cost 300-400€.

1 minute ago, Trixanity said:

That's why I'm questioning the move unless the switch is optional at which point most boards will still be limited to x4 lanes. You either need a switch or use a different interface to split the lanes like that - Intel does the latter.

Um, no. You misunderstand, it seems.

Not THAT kind of Switch...



1 minute ago, Stefan Payne said:

That's the PLX stuff.

Intel also has a PCIe Switch in their Chipset and they don't cost 300-400€.

Um, no. You misunderstand, it seems.

Not THAT kind of Switch...

Possibly.

 

What kind of switch is it if not a PCI-E switch?

40 minutes ago, Trixanity said:

Possibly.

What kind of switch is it if not a PCI-E switch?

The kind that's integrated into the chipset and mass-produced ;)

40 minutes ago, leadeater said:

That'll actually be much less of a problem, TR is much more of a SoC and doesn't have much in the way of PCIe in the chipset. EPYC doesn't even have a chipset.

True, but doesn't that limit TR to 4 SATA ports while EPYC has 8, due to arbitrary limitations?



3 minutes ago, Stefan Payne said:

The Kind that's integrated to the Chipset and mass produced ;)

True, but doesn't that limit TR to 4 SATA ports while EPYC has 8, due to arbitrary limitations?

Yeah, I get that, but X370 and X470 have been hard-limited to x4, so what's new with this one that it can suddenly go from x4 3.0 to x16 4.0 (well, according to the sheet it's 4x x4, but whatever)?


The reality of power draw is that as long as it isn't insane, cooling won't be a problem.

As far as chip(set) power draw…performance matters, but so does the market.

 

Your general laptop user (not these chips, I know) won't care about the wattage directly, but they will care about the battery life or extra power brick if things get out of hand.  That's generally far more on the gaming graphics cards than the CPU/chipset though, so YMMV.

 

Your gamer enthusiast will care about top performance, power be damned, and will probably overclock it if relatively easily possible, at the expense of even more power draw.  Your average gamer will be the same, buying the best performance for dollar amount they can afford that is in front of them at the store, but probably not overclock.

 

Your average generic home user won't care directly about power draw, though they will care if the system is pushing significantly more heat out of it all the time and increasing their electric bill vs their old system.

 

"Servers" are where power draw really gets interesting.  Your average HOME server user will care greatly, because as a 24/7 always on workload, efficiency makes a fairly noticeable change in power bills fairly quickly.  Your average on site small business server, or metered co-lo is similar.  But, once you get to data center style, then efficiency of space becomes just as important as power use for ongoing cost, and you're not looking at desktop chips anyway.

 

I expect to be building up a server that is a low power small business style server, and also a mostly matching system that instead uses a higher clock/core chip and discrete graphics as a gaming rig (which will only be running when gaming, so power draw won't matter much).

 

So, to say power draw doesn't matter is…silly.  But to say that everybody has it at the top of their list of important things is just as silly.

 

 

Also, I would agree that the likely cause of the chipset's extra draw is PCIe 4.  But, we'll know for sure soon. :)

8 minutes ago, justpoet said:

The reality of power draw is that as long as it isn't insane, cooling won't be a problem.

Especially since the 16-core (and possibly the smaller core counts, depending on chiplet configuration) will spread the heat into two separate portions of the IHS, it may actually be easier to cool even with a slightly higher power draw.

