Intel Z590 with 10GbE LAN enough?

1988fido

Hi

Short version:
If I buy a Z590 Intel motherboard with onboard 10GbE, will I get the full speed of the 10GbE?
Because the platform only has 24 PCIe lanes:
16 for the GPU,
4 for an NVMe SSD (a somewhat budget PCIe 3.0 drive),
and 4 going to the chipset, which splits them into 24. Realistically, though, it's 4 physical lanes from the CPU to the Z590 chipset, and those feed the SATA ports, USB, the 10GbE LAN, etc.
So did I get that right?


Long version:
If I did get it right, then the 10GbE LAN port is a waste on Z590 as long as it's wired through the chipset. And if it's connected directly to the CPU, it will drop my GPU to x8 or the NVMe to x2 instead of x4, or the 10GbE will get x2 PCIe instead of x4 PCIe 3.0.

😕 Is it the same situation with the AMD X570 platform? Is it only possible to properly utilize 10GbE LAN on HEDT systems like Intel i9-X and Threadripper?

Basically I have 30TB+ on my gaming computer, which I will upgrade soon.
Right now it's an 8700K + 1080 Ti, not enough for my work. I also want to try ray tracing.

Anyhow, I'm checking pricing on the 10850K; it's very cheap for the performance I get, but I'm wondering how it will handle what I'll use it for.

I am building a NAS/Plex server to offload the Plex workload from my main computer so I can do rendering, many other things, and gaming. I want to transfer huge files back and forth between the two machines, so 10GbE is a must. Right now my 30TB+ of hard disks have no backup and they are full, so I will need to add at least another 10TB to be able to expand (40TB total), then put 40TB in the NAS/server.

So will Z590 be able to handle SATA SSDs + hard disks (all SATA ports occupied) + onboard 10GbE (not an add-in card) + my GPU? Of course I won't be gaming while transferring files, but how will the PCIe lanes be divided? I can't afford Threadripper for this machine, because my NAS/server will be a Threadripper 1950X or 1920X (found them cheap with motherboards).

On the server side I'm safe: there are many, many PCIe lanes to handle 10GbE + an NVMe SSD (PCIe 3.0 to save $$), as many SATA drives as I want to use, and as many add-in PCIe cards as I can afford, hhhh.
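
Rough numbers behind the question, as a sanity check (a minimal sketch with approximate per-lane and link rates, not a benchmark):

```python
# Back-of-the-envelope bandwidth budget for the question above (approximate figures).
PCIE3_LANE_GBPS = 0.985   # PCIe 3.0: ~985 MB/s per lane after 128b/130b encoding
DMI_LANES = 4             # DMI 3.0 is electrically x4 PCIe 3.0 (x8 with an 11th gen CPU on Z590)
TEN_GBE_GBPS = 10 / 8     # 10 Gbit/s line rate ~= 1.25 GB/s, before protocol overhead

dmi_budget = DMI_LANES * PCIE3_LANE_GBPS
print(f"DMI x{DMI_LANES} budget per direction: {dmi_budget:.2f} GB/s")
print(f"10GbE at line rate:           {TEN_GBE_GBPS:.2f} GB/s")
print(f"Headroom left on the DMI:     {dmi_budget - TEN_GBE_GBPS:.2f} GB/s")
```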

Electronics Wizardy

The 10GbE will run off the chipset and won't affect the GPU or NVMe link speeds. The chipset has plenty of spare bandwidth, so it won't make a difference.

 

 

1988fido

25 minutes ago, Electronics Wizardy said:

The 10GbE will run off the chipset and won't affect the GPU or NVMe link speeds. The chipset has plenty of spare bandwidth, so it won't make a difference.

 

 

It doesn't have extra bandwidth.

Electrically, Intel's DMI 3.0 has only 4 traces coming from the CPU, so it's only x4 PCIe 3.0 lanes.

All the extra stuff they claim is firmware and software trickery scheduling the data transfers. You will see it choking if you try to use all the USB ports and some SATA ports, then transfer files and use the LAN to transfer files at the same time, then add NVMe on top of that. You will see it choking for real.

The 10GbE takes at least x2 PCIe to reach 1.25 GB/s, because one PCIe 3.0 lane is about 980 MB/s, so to go faster it has to use another lane. So that's 2 lanes.

That leaves you with 2 lanes from the CPU to share across everything that relies on the chipset: SATA, Bluetooth, Wi-Fi (if used), etc.

2 SATA SSDs should get you around 1.2 GB/s, so that's also 2 lanes' worth of bandwidth from the CPU. Nothing left for hard disks, USB, or peripherals.

I know Intel's marketing says 24 lanes from the chipset. Those are just the lines and traces showing how it divides the total of 4 lanes from the CPU.

Let me explain it another way:

CPU ——> sends 4 lanes to the chipset (the link is named DMI 3.0)

Now, via chipset processing and magic:

Chipset ——> 24 lanes to all the connected stuff (LAN + USB + SATA ...)

🙂 So the bottleneck is still there, which is the DMI link (x4 PCIe). No matter what kind of magic they do, it is still limited by it.
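
Putting the lane-counting argument above into numbers (a minimal sketch of the model argued in this post, with approximate device figures; whether the chipset really reserves whole lanes this way is exactly what is debated below):

```python
# The per-device lane-reservation model argued above: every chipset device is assumed
# to reserve whole PCIe 3.0 lanes out of the x4 DMI uplink. (The replies below argue
# the chipset actually shares bandwidth rather than reserving lanes like this.)
PCIE3_LANE_GBPS = 0.985

def lanes_reserved(throughput_gbps: float) -> int:
    """Whole lanes a device would claim if bandwidth were reserved per lane (1, 2, 4, ...)."""
    lanes = 1
    while lanes * PCIE3_LANE_GBPS < throughput_gbps:
        lanes *= 2
    return lanes

demands_gbps = {"10GbE NIC": 1.25, "2x SATA SSD in RAID 0": 1.2}  # assumed figures
total = sum(lanes_reserved(v) for v in demands_gbps.values())
print(f"Lanes 'reserved' under this model: {total} of the 4 DMI lanes")
```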

Electronics Wizardy

1 minute ago, 1988fido said:

 

Electrically, Intel's DMI 3.0 has only 4 traces coming from the CPU, so it's only x4 PCIe 3.0 lanes.
 

It splits that DMI bandwidth across about 24 lanes. If you want the DMI to be faster, you can get an 11th gen chip for 2x the DMI bandwidth.

And the DMI is more than enough bandwidth for HDDs being copied out over 10GbE.

Since you're not maxing out all the devices connected to the chipset at any one time, the DMI is more than enough speed here.

With a 10GbE NAS there is no way you will be able to fill the DMI on this system, and you can max out the network connection.

1988fido

21 minutes ago, Electronics Wizardy said:

It splits that DMI bandwidth across about 24 lanes. If you want the DMI to be faster, you can get an 11th gen chip for 2x the DMI bandwidth.

And the DMI is more than enough bandwidth for HDDs being copied out over 10GbE.

Since you're not maxing out all the devices connected to the chipset at any one time, the DMI is more than enough speed here.

With a 10GbE NAS there is no way you will be able to fill the DMI on this system, and you can max out the network connection.

Not really, because I wrote 10850K by mistake; I meant the 11900K. Everything I wrote applies to the 11900K (11th gen).

Intel's older generations were worse: 20 total PCIe CPU lanes (4 go to the chipset via DMI, 16 to the GPU, so even adding an NVMe drive makes the GPU drop to x8, or the chipset drop to x2 🙂 which is why many people lose USB ports if they try to plug in every SATA and USB device, like me).

And on the idea that HDDs are slow: I am adding cheap SATA SSDs, some of them RAIDed, to actually make use of the 10GbE if possible.

How can you say I can't max out the system?
2 SATA SSDs are 1.2+ GB/s, which is effectively more than one PCIe 3.0 lane, and 10GbE is 2x PCIe 3.0.
What's left is 1x for everything else, which is not enough, and we're only talking about 2 SATA SSDs.

Right now I have 30TB of data in total, and there are no SATA SSDs that fit 30TB in just 2 disks, so I must have at least several SATA SSDs plus a bunch of hard disks. I'm going to use all 8 SATA ports; at least 2 will be SSDs, and if the speed allows it I'll go for 4. But you can see it's already throttling with just 2 SSDs.

The math says the DMI will be full just from occupying the SATA and USB ports plus transferring files over 10GbE:

The 10GbE transfer alone takes 2 CPU lanes' worth of bandwidth.
2 SATA SSDs take another 2 CPU lanes' worth, because lanes only divide down to 2 or 1, and 1 CPU lane can't give 2 SATA drives full speed.
And then I still have to add everything else!

So is 1x enough for the entire system? I won't even be able to copy from one hard disk to another (to organize or sort files); I'd have to wait for the network transfer to finish, because it would be choking if I did both.

And by the way, my Maximus X Hero has almost all its SATA ports used, and I lose USB ports when I want to keep everything plugged in: monitor, RGB, mouse, keyboard, etc. So I already see how the chipsets struggle (but that's the old one, so the new one will handle it). However, the new system also means I want to expand storage and use 10GbE 😄 so, again, it won't handle it.

It won't handle the port sharing, and it won't handle the bandwidth of the data transfers (at least in theory).

If anyone has a 10GbE Z590 and can share their data, I would appreciate it.
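
A quick check of the SATA arithmetic used above (a minimal sketch with nominal SATA and PCIe figures):

```python
# Quick check of the SATA arithmetic above (nominal figures).
PCIE3_LANE_GBPS = 0.985   # one PCIe 3.0 lane
SATA_SSD_GBPS = 0.60      # SATA III tops out around 550-600 MB/s per drive

pair = 2 * SATA_SSD_GBPS
print(f"2x SATA SSD: {pair:.2f} GB/s vs one PCIe 3.0 lane: {PCIE3_LANE_GBPS:.2f} GB/s")
print("one lane is enough" if pair <= PCIE3_LANE_GBPS else "one lane is not enough")
```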

Electronics Wizardy

1 minute ago, 1988fido said:

How can you say I can't max out the system?

If you are copying data to your NAS, you're copying it at 10Gbit at most, and the DMI has more than enough bandwidth for that. Then those 10Gbit go back to the drives over the DMI. Maybe a bit faster with compression and overhead, but still, you're not maxing it out.

 

3 minutes ago, 1988fido said:

So is 1x enough for the entire system? I won't even be able to copy from one hard disk to another (to organize or sort files); I'd have to wait for the network transfer to finish, because it would be choking if I did both.

That should be fine; the SATA drives max out at 6Gbit each, and if you're using 10Gbit for the LAN, you're well within the DMI's limit.

 

You will probably hit the bandwidth limit of the HDDs well before the DMI becomes the limit.

 

 

I think you will basically never hit the max speed of the DMI, as you're not maxing out all the storage devices at once. Yes, you can be limited by the DMI if you try to, but I don't think it will be common or an issue here.

1988fido

18 minutes ago, Electronics Wizardy said:

If you are copying data to your NAS, you're copying it at 10Gbit at most, and the DMI has more than enough bandwidth for that. Then those 10Gbit go back to the drives over the DMI. Maybe a bit faster with compression and overhead, but still, you're not maxing it out.

That should be fine; the SATA drives max out at 6Gbit each, and if you're using 10Gbit for the LAN, you're well within the DMI's limit.

You will probably hit the bandwidth limit of the HDDs well before the DMI becomes the limit.

I think you will basically never hit the max speed of the DMI, as you're not maxing out all the storage devices at once. Yes, you can be limited by the DMI if you try to, but I don't think it will be common or an issue here.

I think you're missing some info.

1- These are SATA SSDs, not just hard drives, and they will be RAIDed to reach 10GbE speeds.

2- It's multiple transfers at once. Let's say I'm using the NVMe drive to send to hard disks and some SSDs, and at the same time I'm sending to the NAS via 10GbE (from 2 RAIDed SSDs, to reach 10Gbps)! How can the chipset's 4 PCIe lanes handle that? Can you tell me what information I'm missing, or where my math is wrong?

A single SATA SSD can do 600 MB/s, so 2 SSDs are 1.2 GB/s, which is more than 1x worth. And lanes are allocated as 4, 2, or 1; they don't split into something like 1.5 lanes. So either the SATA drives get their full 1.2 GB/s by utilizing 2x PCIe lanes, or they utilize 1x and only reach ~900 MB/s, which is what I don't want to happen.

3- All of the above is just 2 SSDs, and I'm telling you I have 30TB+ which I want to expand to 40TB if possible, plus add more SSDs to reach 4, and copy at full 10GbE speed.

So doing that local transfer = 2x of PCIe bandwidth gone; then doing 10GbE over the network at the SAME TIME adds another 2x of PCIe bandwidth = 4x PCIe gone. Nothing left for anything else 🙂?

Where is my math wrong?

Electronics Wizardy

Just now, 1988fido said:

How can the chipset's 4 PCIe lanes handle that? Can you tell me what information I'm missing, or where my math is wrong?

First, the DMI speed is doubled with Z590 and an 11th gen CPU, so it's the equivalent of x8 PCIe gen 3.

 

 

Next, I think it's very unlikely you will be maxing out 10GbE and copying from NVMe drives at the same time, and when you are, it will be for short periods. If this is a major issue, buy something like a dual Xeon system, Threadripper, or EPYC. The consumer platforms are not made to have all their I/O pushed to the limit at once.

 

2 minutes ago, 1988fido said:

And lanes are allocated as 4, 2, or 1; they don't split into something like 1.5 lanes.

 

Well, in this case they kind of do. The chipset splits the bandwidth between the devices, not the lanes.

 

Also, PCIe is full duplex, so you can send and receive data at full speed at the same time. So you can copy from one NVMe SSD to another at full speed over the DMI.

 

4 minutes ago, 1988fido said:

So doing that local transfer = 2x of PCIe bandwidth gone

I don't think you understand how the chipset splits PCIe. It doesn't split lanes, it splits bandwidth, so you can easily copy from 10GbE to an SSD just fine. You will be network limited, not DMI limited.

 

4 minutes ago, 1988fido said:

then doing 10GbE over the network at the SAME TIME adds another 2x of PCIe bandwidth

 

PCIe is full duplex, so the copy and receive can happen at the same time.

 

5 minutes ago, 1988fido said:

Nothing left for anything else 🙂?

You don't use up those x4 DMI lanes, because their bandwidth is shared. This system will work fine here.
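
One way to picture the full-duplex, shared-bandwidth point (a minimal sketch; the transfer figures and the split into up/down directions are simplified assumptions, not a model of the real DMI arbitration):

```python
# Each chipset transfer modelled as (upstream to CPU/RAM, downstream from CPU/RAM) in GB/s.
# PCIe/DMI is full duplex, so the two directions have independent budgets.
PCIE3_LANE_GBPS = 0.985
DMI_LANES = 8                        # Z590 with an 11th gen CPU: DMI 3.0 x8
budget = DMI_LANES * PCIE3_LANE_GBPS

transfers = {
    # receiving a file from the NAS at 10GbE and writing it to a chipset-attached SATA SSD
    "10GbE NIC -> RAM":       (1.25, 0.00),
    "RAM -> SATA SSD write":  (0.00, 1.25),
    # a local HDD-to-HDD copy running at the same time (assumed ~200 MB/s per disk)
    "HDD read -> RAM":        (0.20, 0.00),
    "RAM -> HDD write":       (0.00, 0.20),
}
up = sum(u for u, _ in transfers.values())
down = sum(d for _, d in transfers.values())
print(f"Upstream used:   {up:.2f} of {budget:.2f} GB/s")
print(f"Downstream used: {down:.2f} of {budget:.2f} GB/s")
```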

1988fido

38 minutes ago, Electronics Wizardy said:

First, the DMI speed is doubled with Z590 and an 11th gen CPU, so it's the equivalent of x8 PCIe gen 3.

Next, I think it's very unlikely you will be maxing out 10GbE and copying from NVMe drives at the same time, and when you are, it will be for short periods. If this is a major issue, buy something like a dual Xeon system, Threadripper, or EPYC. The consumer platforms are not made to have all their I/O pushed to the limit at once.

Well, in this case they kind of do. The chipset splits the bandwidth between the devices, not the lanes.

Also, PCIe is full duplex, so you can send and receive data at full speed at the same time. So you can copy from one NVMe SSD to another at full speed over the DMI.

I don't think you understand how the chipset splits PCIe. It doesn't split lanes, it splits bandwidth, so you can easily copy from 10GbE to an SSD just fine. You will be network limited, not DMI limited.

PCIe is full duplex, so the copy and receive can happen at the same time.

You don't use up those x4 DMI lanes, because their bandwidth is shared. This system will work fine here.

I wrote a long reply, but here is the short version:

My workstation/desktop, the i9 Z590 machine, will have these disks, for example:
Cluster A (2 SATA SSDs in RAID 0)
Cluster B (2 SATA SSDs in RAID 0)
Cluster C (a bunch of hard disks for huge capacity)
And D (NVMe SSDs, 1 in the NAS and 1 in the computer connected to the NAS)

My NAS/server will have these disks, for example (made it the same to keep it simple):
Cluster A (2 SATA SSDs in RAID 0)
Cluster B (2 SATA SSDs in RAID 0)
Cluster C (a bunch of hard disks for huge capacity)
And D (NVMe SSDs, 1 in the NAS and 1 in the computer connected to the NAS)

Or reverse the whole thing, or swap some clusters for C or B or A, etc. Got it?

So having 2 simultaneous transfers is the minimum.
Can x4 PCIe handle that? In numbers, it doesn't seem like it.
The only way to handle it is to use the system the way many people do, which is not good: basically go AFK until the computer finishes one transfer, then do the rest in order after the previous ones finish (meaning I have to wait a long time) and work slowly.

Long version in this spoiler:

Spoiler

1- 2 SATA SSDs can make the 10GbE worth using, i.e. go faster than 5Gbps and faster than 2.5Gbps, so the board has to use 2x PCIe worth of CPU bandwidth to feed those SATA ports.

10GbE is around 1.25 GB/s, right? 2 SATA SSDs can reach 800+ MB/s, which already justifies the 10GbE, so there is no point in even buying a board with 2.5Gbps or 5Gbps.

https://www.pcworld.com/article/2365767/feed-your-greed-for-speed-by-installing-ssds-in-raid-0.html

Don't forget I am not going to put in only 2 SSDs; I wouldn't make this entire thread, nor build the entire server, just to copy files from 2 SSDs. At the very least, the minimum, I'm going to have 2 SSDs, and that gives me no headroom to grow, which is why I don't think I should stop at 2. As I wrote in previous replies, I want to try 4 if possible ("add more SSDs to reach 4").
And that's not even the important part. What you are not considering is:

2- What mode am I going to run the LAN in?

A- 1Gbps = 1x PCIe (selected manually in the BIOS if that's even an option, but either way it should be done automatically by the board)
B- 2.5Gbps = same as above
C- 5Gbps = 2x PCIe (same as above: manual if possible, otherwise handled automatically by the board)
D- 10Gbps = same as C

😄
So the motherboard doesn't care whether I saturate it 100% or 90% or even 60%; as long as it sees the LAN running in a mode above 2.5Gbps, it allocates the lanes for it, which is 2 lanes of the chipset uplink rather than 1, right?
OK, so the chipset has speed to give to whom? To the LAN, to send it over the network.
That way I've literally killed 50% of the entire bandwidth coming from the CPU. If I do anything else in the same direction (I didn't say it's not full duplex, but for some reason you assumed I don't know that), it will eat more of the total x4 bandwidth, and then it's done. It will easily be loaded to 100% if I even try to use 2 more SSDs and some hard disks.

3- You don't read what I write. Across multiple replies it shows you didn't really read:

"I can't afford Threadripper for this machine, because my NAS/server will be a Threadripper 1950X or 1920X (found them cheap with motherboards)"
"I want to try 4 if possible: add more SSDs to reach 4"
and a few more sentences which you missed or didn't read before replying. It seems like you are just wasting my time; I thought you could give me something useful.
I wrote that, and yet you suggest I go EPYC or Threadripper XD or Xeon, etc.

The computer is for some work + some gaming and projects, etc.

The NAS is a server and will help with work.

Buying 2 Threadrippers or 2 HEDT systems is NOT within the budget.

"You don't use up those x4 DMI lanes, because their bandwidth is shared. This system will work fine here."

4- I don't think it will work fine. It will be slow, things won't be smooth, and USB transfers from a thumb drive, or even hard disk to hard disk, will be slow.
I copy from the NVMe to the SSDs, then from the RAID 0 SSDs to the NAS, which means I have to sit there waiting: whenever I do something, I have to check the speed and make sure I am not near the limit before I start the next transfer.
Even full duplex won't fix that. Splitting bandwidth is not magic extra bandwidth; you still get a bad experience from the way it's done, and the total amount of bandwidth will still be low for the number of simultaneous transfers that are going to happen.

Let's put together a realistic scenario:

[Image: can Z590 handle it.jpg]

1988fido

Current problem: I also have 2 more SATA SSDs that I keep swapping, because I can't even plug everything into my computer (not enough SATA ports).
My workflow is at turtle speed right now:
0 backup,
0 redundancy,
and no cloud storage.
Also, the free space you see is after deleting the maximum I could delete and holding everything back = zero work done, zero new footage, zero projects, all because there's no space.
I got the QVO recently; I'm filling it, then I'll swap it with one of the extra SSDs afterwards (which are full).

So..

[Image: current problem.jpg]

1988fido

2 hours ago, Electronics Wizardy said:

First, the DMI speed is doubled with Z590 and an 11th gen CPU, so it's the equivalent of x8 PCIe gen 3.

Next, I think it's very unlikely you will be maxing out 10GbE and copying from NVMe drives at the same time, and when you are, it will be for short periods. If this is a major issue, buy something like a dual Xeon system, Threadripper, or EPYC. The consumer platforms are not made to have all their I/O pushed to the limit at once.

Well, in this case they kind of do. The chipset splits the bandwidth between the devices, not the lanes.

Also, PCIe is full duplex, so you can send and receive data at full speed at the same time. So you can copy from one NVMe SSD to another at full speed over the DMI.

I don't think you understand how the chipset splits PCIe. It doesn't split lanes, it splits bandwidth, so you can easily copy from 10GbE to an SSD just fine. You will be network limited, not DMI limited.

PCIe is full duplex, so the copy and receive can happen at the same time.

You don't use up those x4 DMI lanes, because their bandwidth is shared. This system will work fine here.

Even if we go by the diagram, the x8 you're talking about still has the same issue:

x16 to the GPU
x8 to the Z590

Total: 24 😄
And from the Z590, 4 will go to the NVMe SSD, and the remaining 4 go to the network + I/O (which is where I'm trying to see whether my math is wrong, so I know how to solve it).

[Image: Intel-Z590-Chipset-Block-Diagram.jpg]

Electronics Wizardy

2 hours ago, 1988fido said:

I wrote a long reply, but here is the short version:

My workstation/desktop, the i9 Z590 machine, will have these disks, for example:
Cluster A (2 SATA SSDs in RAID 0)
Cluster B (2 SATA SSDs in RAID 0)
Cluster C (a bunch of hard disks for huge capacity)
And D (NVMe SSDs, 1 in the NAS and 1 in the computer connected to the NAS)

My NAS/server will have these disks, for example (made it the same to keep it simple):
Cluster A (2 SATA SSDs in RAID 0)
Cluster B (2 SATA SSDs in RAID 0)
Cluster C (a bunch of hard disks for huge capacity)
And D (NVMe SSDs, 1 in the NAS and 1 in the computer connected to the NAS)

Or reverse the whole thing, or swap some clusters for C or B or A, etc. Got it?

So having 2 simultaneous transfers is the minimum.
Can x4 PCIe handle that? In numbers, it doesn't seem like it.
The only way to handle it is to use the system the way many people do, which is not good: basically go AFK until the computer finishes one transfer, then do the rest in order after the previous ones finish (meaning I have to wait a long time) and work slowly.

First, the DMI is an x8 connection.

 

Next, it will be more than fast enough.

 

Since the speed limit for these copies is basically always the 10GbE link, that's well under the limit of the DMI, and you won't be able to fill up the DMI doing copies over the network no matter what drive config you have.

 

1 hour ago, 1988fido said:

Even if we go by the diagram, the x8 you're talking about still has the same issue:

x16 to the GPU
x8 to the Z590

Total: 24 😄
And from the Z590, 4 will go to the NVMe SSD, and the remaining 4 go to the network + I/O (which is where I'm trying to see whether my math is wrong, so I know how to solve it).

The CPU has 20 PCIe lanes: 16 go to your GPU and 4 go to your NVMe drive. There is also the DMI on the CPU.

 

The chipset hanging off the DMI has another 24 PCIe lanes (depending on configuration), for a total of 44 lanes on the platform. This is more than enough for all the devices you're connecting.

 

1 hour ago, 1988fido said:

the remaining 4 go to the network + I/O (which is where I'm trying to see whether my math is wrong, so I know how to solve it)

There aren't only 4 remaining; there are about 24 remaining to run the network, USB, Wi-Fi, SATA, and other devices. You have plenty of I/O here.
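
Summing up the platform as described in this reply (a minimal sketch; lane counts as stated above, bandwidth figures approximate):

```python
# Platform I/O as described in this reply (bandwidth figures approximate).
PCIE3_LANE_GBPS = 0.985

cpu_lanes = {"GPU": 16, "CPU-attached NVMe": 4}   # 20 CPU PCIe lanes, DMI not counted
dmi_lanes = 8                                     # DMI 3.0 x8 uplink to the chipset
chipset_lanes = 24                                # chipset PCIe lanes (configuration dependent)

print(f"CPU lanes:             {sum(cpu_lanes.values())}")
print(f"Chipset lanes:         {chipset_lanes} (all sharing the DMI uplink)")
print(f"Platform total:        {sum(cpu_lanes.values()) + chipset_lanes}")
print(f"DMI uplink bandwidth:  {dmi_lanes * PCIE3_LANE_GBPS:.2f} GB/s per direction")
```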

 

2 hours ago, 1988fido said:

Current problem: I also have 2 more SATA SSDs that I keep swapping, because I can't even plug everything into my computer (not enough SATA ports).
My workflow is at turtle speed right now:
0 backup,
0 redundancy,
and no cloud storage.
Also, the free space you see is after deleting the maximum I could delete and holding everything back = zero work done, zero new footage, zero projects, all because there's no space.
I got the QVO recently; I'm filling it, then I'll swap it with one of the extra SSDs afterwards (which are full).

Why do you want this platform for a NAS?

I'd get something like the ASRock Rack board with X570. That way you get ECC support and IPMI.

Also, why does the backup machine have SSDs? I'd go HDD-only for the backup; the SSDs won't make a difference speed-wise.

Alex Atkin UK

As I understand it, the key point is that an x4 PCIe device going through the chipset is not "stealing" x4 of DMI lanes, because the chipset muxes from DMI to PCIe.

 

So, as mentioned above, the chipset effectively has up to 24 PCIe lanes it can allocate to devices connected to it. Because it's muxed, that x8 of DMI bandwidth is shared across those 24 lanes, rather than an x4 device hogging x4 of the uplink even when it isn't being used.


1988fido

On 7/15/2021 at 3:07 PM, Alex Atkin UK said:

As I understand it, the key point is that an x4 PCIe device going through the chipset is not "stealing" x4 of DMI lanes, because the chipset muxes from DMI to PCIe.

So, as mentioned above, the chipset effectively has up to 24 PCIe lanes it can allocate to devices connected to it. Because it's muxed, that x8 of DMI bandwidth is shared across those 24 lanes, rather than an x4 device hogging x4 of the uplink even when it isn't being used.


yes its 4x pcie lanes shared across 24 lanes , so if am going to use the computer like this it will throttle !?

the 10gbe alone won't throttle it i know its basic math, but what about the rest of the disks copying data?
and the reason i want to do what is in the picture , its simple when i back things up to nas i will free some space on the local drive so i will re organize the files and sort them that will make me copy paste / cut paste etc..
so i dont want to be waiting for the lan transfer to finish to be able to transfer files fast between the local disks
can Z590 handle it.jpg
