
Why can't it just work? [10GbE FreeNAS]

Hey all,

I've been fighting with FreeNAS and 10GbE for a while now, and sometimes it gives me hope while mostly it's depressing as hell.

 

In my signature you can see the Boss-NAS I'm using. Right now it's equipped with an Intel X540-T2 10GbE network card.

One port goes to the Netgear XS708E 10GbE switch, while the other goes (for testing purposes) to another Intel X540-T2 in my test PC, which has an i5-2500K and some MX500 SSD or whatever and runs Windows 7. The other port of this PC also goes to the same switch as the FreeNAS.

 

Theoretically I'd have two separate 10GbE connections now, but in the real world it's nothing like that.

 

Running iperf (as well as CrystalDiskMark against the RAID0 SSDs or the NVMe SSD) gives me ~3.5Gbit/s on the read side and ~8.5Gbit/s on the write side.

That isn't reflected in real-world performance either, but I think the issue there is probably the SSD in the test PC.

 

 

Here are the iperf runs I did:

first: PC as client, p2p

second: PC as client, over the switch

third: PC as server, p2p

fourth: PC as server, over the switch

[screenshot: iperf results for the four runs]
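(Roughly what I ran, from memory; the IP and the flags here are placeholders rather than my exact invocation:

iperf3 -s                       # on the receiving machine
iperf3 -c 192.168.1.10 -t 30    # on the sending machine, 30-second TCP run

Then swap which box runs the server to test the other direction.)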

 

 

As you can see, it doesn't matter whether it's connected p2p or through the switch: reads always s*ck, while writes show there is actually enough bandwidth.

I DID get a real 10GbE connection between the PCs once with a p2p SFP+ connection, where I could see my RAID0 SSDs hitting a full 1GB/s in both reads and writes, but now that I've "upgraded" to a full 10GbE RJ45 environment everything seems to be collapsing.

 

 

 

I DID also enable MTU 9000 on both ends, just not on the switch, as that apparently has it set automatically and I can't find it anywhere in the setup.
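(If anyone wants to check their own ends, this is roughly how I verified it; the interface names are examples, yours will differ:

ifconfig ix0 mtu 9000                      # FreeNAS/FreeBSD shell; the X540 usually shows up as ix0/ix1
netsh interface ipv4 show subinterfaces    # Windows; lists the active MTU per interface

On FreeNAS it's better to put "mtu 9000" in the interface options in the GUI so it survives a reboot.)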

Even worse is the fact that my other PC, a 2600K with an X540-T2 and Windows 10, doesn't even remotely reach that, but stays at 3Gbit/s in both directions, even just from PC to PC without the NAS (switch in between, obviously).

 

 

 

So please, does anyone ( @Windows7ge) have an idea why everything is so terrible? lol.

 

 

 

EDIT: What I think I'll try next is connecting the FreeNAS via my SFP+ card to the single SFP+ port on the switch, and going from there with RJ45 to the X540-T2s in the test PC, as SFP+ once upon a time gave me 10GbE.

 

Gaming HTPC:

R5 5600X - Cryorig C7 - Asus ROG B350-i - EVGA RTX2060KO - 16gb G.Skill Ripjaws V 3333mhz - Corsair SF450 - 500gb 960 EVO - LianLi TU100B


Desktop PC:
R9 3900X - Peerless Assassin 120 SE - Asus Prime X570 Pro - Powercolor 7900XT - 32gb LPX 3200mhz - Corsair SF750 Platinum - 1TB WD SN850X - CoolerMaster NR200 White - Gigabyte M27Q-SA - Corsair K70 Rapidfire - Logitech MX518 Legendary - HyperXCloud Alpha wireless


Boss-NAS [Build Log]:
R5 2400G - Noctua NH-D14 - Asus Prime X370-Pro - 16gb G.Skill Aegis 3000mhz - Seasonic Focus Platinum 550W - Fractal Design R5 - 
250gb 970 Evo (OS) - 2x500gb 860 Evo (Raid0) - 6x4TB WD Red (RaidZ2)

Synology-NAS:
DS920+
2x4TB Ironwolf - 1x18TB Seagate Exos X20

 

Audio Gear:

Hifiman HE-400i - Kennerton Magister - Beyerdynamic DT880 250Ohm - AKG K7XX - Fostex TH-X00 - O2 Amp/DAC Combo - 
Klipsch RP280F - Klipsch RP160M - Klipsch RP440C - Yamaha RX-V479

 

Reviews and Stuff:

GTX 780 DCU2 // 8600GTS // Hifiman HE-400i // Kennerton Magister
Folding all the Proteins! // Boincerino

Useful Links:
Do you need an AMP/DAC? // Recommended Audio Gear // PSU Tier List 


Did you verify that jumbo packets are working with:

ping X.X.X.X -l 8972 -f

If it tells you that it needs to be fragmented but DF is set, then jumbo packets aren't working end-to-end.
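The payload is 8972 because the 9000-byte frame still has to carry 20 bytes of IP header and 8 bytes of ICMP header. If you want to test from the FreeNAS side too, the FreeBSD equivalent should be something along these lines (double-check the flags against the man page):

ping -D -s 8972 X.X.X.X    # -D sets don't-fragment, -s sets the payload size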

 

What does your pool consist of? When reading, ZFS caches data in RAM, so your reads for frequently accessed files ought to significantly outperform writes.


4 minutes ago, Windows7ge said:

Did you verify that jumbo packets are working with:


ping X.X.X.X -l 8972 -f

If it tells you that it needs to be fragmented but DF is set, then jumbo packets aren't working end-to-end.

What does your pool consist of? When reading, ZFS caches data in RAM, so your reads for frequently accessed files ought to significantly outperform writes.

This is pinging the NAS through the switch.

[screenshot: ping results]

 

So 4 packets sent, 4 received, 0 lost. Looks fine to me, doesn't it?

 

The pool I'm focusing on is a RAID0 of two 500GB Samsung 850 Evos, or alternatively a single 250GB 970 Evo. My other one, a RaidZ2 of 6x4TB WD Reds, once managed 500MB/s read/write too, but is limited to the same ~300MB/s reads now. It would make sense with the ZFS cache and everything, but reads are actually significantly LOWER than writes.



10 minutes ago, FloRolf said:

This is pinging the NAS through the switch.

 

So 4 packets sent, 4 received, 0 lost. Looks fine to me, doesn't it?

The pool I'm focusing on is a RAID0 of two 500GB Samsung 850 Evos, or alternatively a single 250GB 970 Evo. My other one, a RaidZ2 of 6x4TB WD Reds, once managed 500MB/s read/write too, but is limited to the same ~300MB/s reads now. It would make sense with the ZFS cache and everything, but reads are actually significantly LOWER than writes.

It's fine; jumbo packets are working.

 

I had a similar experience working with ZFS, except it wasn't reads that were hindered, it was writes (right around 350MB/s), and it didn't matter what my pool consisted of or how I configured it. I spent a couple of years going through different forums getting help and advice from a lot of people, and in the end nobody could figure out the root of the problem.

Ignoring iperf, I would copy a large file over (something like a movie), then restart the server; this will flush the file from RAM. Then copy it to your computer and document the performance. Then delete it from your desktop and copy it from the server to your desktop again. This time it should be reading from RAM instead of from disk (or SSD), and you can verify that by checking the ARC: if the used memory has increased by the size of the file, it successfully cached. If the performance is identical, I suspect something's wrong at the network or client level. If the performance shoots up to 1GB/s, then something's probably wrong at the storage level on the server side.
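A quick way to watch the ARC from the FreeNAS shell is polling the arcstats counter, something like this (a standard FreeBSD ZFS sysctl, but verify it exists on your build):

sysctl kstat.zfs.misc.arcstats.size    # current ARC size in bytes

Run it before and after the copy and compare the difference to the file size.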

 

You can also try alleviating the potential bottleneck on the client end by making a RAM disk if you're not using an NVMe SSD.
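On Windows, ImDisk is one way to do that; a minimal sketch (size and drive letter are placeholders):

imdisk -a -t vm -s 4G -m R: -p "/fs:ntfs /q /y"    # create a 4GB RAM disk as R: and quick-format it NTFS

Then point the transfer at R: so the client disk is out of the equation.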


34 minutes ago, Windows7ge said:

It's fine; jumbo packets are working.

I had a similar experience working with ZFS, except it wasn't reads that were hindered, it was writes (right around 350MB/s), and it didn't matter what my pool consisted of or how I configured it. I spent a couple of years going through different forums getting help and advice from a lot of people, and in the end nobody could figure out the root of the problem.

Ignoring iperf, I would copy a large file over (something like a movie), then restart the server; this will flush the file from RAM. Then copy it to your computer and document the performance. Then delete it from your desktop and copy it from the server to your desktop again. This time it should be reading from RAM instead of from disk (or SSD), and you can verify that by checking the ARC: if the used memory has increased by the size of the file, it successfully cached. If the performance is identical, I suspect something's wrong at the network or client level. If the performance shoots up to 1GB/s, then something's probably wrong at the storage level on the server side.

You can also try alleviating the potential bottleneck on the client end by making a RAM disk if you're not using an NVMe SSD.

Okay, that's something at least...

Doing that transfer thingy right now, but I don't expect anything fundamental to happen. I also reboot the NAS every night anyway, as it turns off at midnight and turns on again at 6; I just wanna save some energy.

 

I do have an NVMe in there but it's currently stand-alone and not used as a cache or whatever.

 

 

So instead of using my test PC, whose SSD itself only benched at 300/200 r/w (don't buy Crucial BX drives!), I used my other PC's 250GB 840 Evo, which is also RAM-cached.

Now I copied a 3GB file over to the PC, and I did see the RAM/ARC go up by exactly 3GB as well. Transfer speed was 250MB/s FLAT.

Second run, now using the cache: also 250MB/s FLAT. I used my NAS's NVMe drive for this.

Next I copied the same file from the PC to the NAS's NVMe and also got 250MB/s flat, with a little rise to 330MB/s at the end.

 

 

Result: there is something seriously wrong with the network. What? I don't know. I thought it was the switch at first, but since p2p gives the same result, I don't believe that anymore.

 

 

 

 

EDIT: Could the issue be that I've replaced the switch's fans and now the fan LED is on? I keep temps down, though, and IIRC speeds were slow even before replacing the fans.



Did you only discover these issues with your new NICs or were these problems apparent with your old ones?

 

It is interesting to see that there is indeed a bottleneck somewhere with a real-world test. Something is limiting the connection to 250MB/s read and write (ignoring the little boost at the end).

What are the server's specs?


1 hour ago, Windows7ge said:

Did you only discover these issues with your new NICs or were these problems apparent with your old ones?

 

It is interesting to see that there is indeed a bottleneck somewhere with a real-world test. Something is limiting the connection to 250MB/s read and write (ignoring the little boost at the end).

What are the server's specs?

Well, no, not really, but I've never fully had the chance to test it.

Previously I had SFP+ NICs in all 3 systems, but I didn't like having all the different p2p connections, so I wanted a 10GbE router. I got my hands on the 708E, which only has one SFP+ port, so I had to get new NICs. At first I only had one X540-T2, which went into my NAS, with another SFP+ card in the test PC. There I already saw the shitty speeds, but I thought it might be an issue having both SFP+ and RJ45 in one network. So I just got two more X540-T2s for the PCs, and speeds are still shit, as we saw.

I do know, though, that the NAS is capable of delivering 10GbE speeds, because it worked with the old SFP+ p2p connections.

Now, my NAS's specs are in the sig under 'Boss-NAS', but it's an R5 2400G, 16GB of RAM and all the drives I already listed. I haven't seen any of that maxed out in FreeNAS, though correct monitoring for Ryzen was only introduced with 11.2-U5 a couple of days ago.

 

 

 

EDIT: Also weird: between the two Windows PCs running X540-T2s I only get 3Gbit/s. It doesn't matter which one is the server and which is the client in iperf. I'll just set up a p2p connection between the two PCs and see how that goes.
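(For the p2p link I'll just give both cards static IPs in their own subnet, something like this; the adapter names and addresses are whatever your systems show:

netsh interface ipv4 set address "Ethernet 2" static 192.168.10.1 255.255.255.0    # PC 1
netsh interface ipv4 set address "Ethernet 2" static 192.168.10.2 255.255.255.0    # PC 2

No gateway needed since it's a direct cable.)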

 

EDIT2: So yeah, even with a p2p connection between the two PCs, iperf3 maxes out at 3.8Gbit/s. Are the NICs broken?

 

EDIT3: Okay, now shit's getting out of hand. I ran the EDIT2 test with 4K jumbos accidentally and still had the other NIC ports connected to the switch with each other. Now I have ONLY the p2p connection going and 9K jumbos enabled on both ends, and the speed dropped to 1Gbit/s. What?!
Thought it was maybe temps, but Intel reads "Temperature: Normal", so I don't even know anymore.



5 minutes ago, FloRolf said:

Well, no, not really, but I've never fully had the chance to test it.

Previously I had SFP+ NICs in all 3 systems, but I didn't like having all the different p2p connections, so I wanted a 10GbE router. I got my hands on the 708E, which only has one SFP+ port, so I had to get new NICs. At first I only had one X540-T2, which went into my NAS, with another SFP+ card in the test PC. There I already saw the shitty speeds, but I thought it might be an issue having both SFP+ and RJ45 in one network. So I just got two more X540-T2s for the PCs, and speeds are still shit, as we saw.

I do know, though, that the NAS is capable of delivering 10GbE speeds, because it worked with the old SFP+ p2p connections.

Now, my NAS's specs are in the sig under 'Boss-NAS', but it's an R5 2400G, 16GB of RAM and all the drives I already listed. I haven't seen any of that maxed out in FreeNAS, though correct monitoring for Ryzen was only introduced with 11.2-U5 a couple of days ago.

708E? Are you referring to the Netgear XS708E switch? If yes, what did you pay for it?

Yeah, switching between SFP+- and Ethernet-based connections shouldn't make any difference so long as they're both running at 10Gbit. What type of Ethernet cable are you using? (As much as that information probably doesn't matter based on the iperf test.)

As much as I don't want to admit it, I'm out of ideas. If this is an issue specific to the new NICs in use, I'm going to call in someone who will likely have better in-depth knowledge than me: @Electronics Wizardy. I've only recently started using the X540, but I have it built into a server I have running Windows Server 2016, and it's been working a treat there. If I had to throw out a guess, I'm wondering if it's a communication or configuration issue on the server side between the X540 and FreeNAS (or perhaps an issue with the AMD platform?). You could rule this out or confirm it by putting the SFP+ NIC back in the server, using the switch's (or router's) SFP+ port, then Ethernet to the PC.


4 minutes ago, Windows7ge said:

708E? Are you referring to the Netgear XS708E switch? If yes, what did you pay for it?

Yeah, switching between SFP+- and Ethernet-based connections shouldn't make any difference so long as they're both running at 10Gbit. What type of Ethernet cable are you using? (As much as that information probably doesn't matter based on the iperf test.)

As much as I don't want to admit it, I'm out of ideas. If this is an issue specific to the new NICs in use, I'm going to call in someone who will likely have better in-depth knowledge than me: @Electronics Wizardy. I've only recently started using the X540, but I have it built into a server I have running Windows Server 2016, and it's been working a treat there. If I had to throw out a guess, I'm wondering if it's a communication or configuration issue on the server side between the X540 and FreeNAS (or perhaps an issue with the AMD platform?). You could rule this out or confirm it by putting the SFP+ NIC back in the server, using the switch's (or router's) SFP+ port, then Ethernet to the PC.

Yeah, the Netgear one. Sadly not the V2. :/ I paid like 200€, so I think that's alright (if it would just work).

Thanks for the help though, m8; I really don't know what's going on, as you can see from my edits, lol.

I bought brand spanking new CAT6a cables as well, so they should really, really do 10GbE, especially since they're only 3 meters long.

I'll try the SFP+ thing now, I think, even though the connection between the PCs isn't even 10Gbit, not even without the damn switch involved.



What Emulex NIC do you have? Can you try a different model? Some older NICs seem to just have weird issues; from my experience, the Mellanox ConnectX cards (the ConnectX-2 and -3 are 10GbE and cheap) work best. A lot of the cheap NICs are pretty old and have weird issues.

What are you using to connect the SFP+ ports? Fiber or DAC? What transceivers?

How much hardware do you have to test with? I'd keep changing hardware until you find the issue. Do you have another system you can put that NIC in?


3 minutes ago, FloRolf said:

Yeah, the Netgear one. Sadly not the V2. :/ I paid like 200€, so I think that's alright (if it would just work).

Thanks for the help though, m8; I really don't know what's going on, as you can see from my edits, lol.

I bought brand spanking new CAT6a cables as well, so they should really, really do 10GbE, especially since they're only 3 meters long.

I'll try the SFP+ thing now, I think, even though the connection between the PCs isn't even 10Gbit, not even without the damn switch involved.

So only about $225. I was going to say that at current market value you could have gotten the US-16-XG-US or the 16-XG-US for cheaper and kept your SFP+ NICs; under those circumstances it would have saved you money. Although I believe I saw somewhere that the XS708E comes with a stronger CPU, so you could use it more for routing purposes, which I believe justifies the higher cost. The Ubiquiti switches would only be good for switching.


3 minutes ago, Electronics Wizardy said:

What Emulex NIC do you have? Can you try a different model? Some older NICs seem to just have weird issues; from my experience, the Mellanox ConnectX cards (the ConnectX-2 and -3 are 10GbE and cheap) work best. A lot of the cheap NICs are pretty old and have weird issues.

What are you using to connect the SFP+ ports? Fiber or DAC? What transceivers?

How much hardware do you have to test with? I'd keep changing hardware until you find the issue. Do you have another system you can put that NIC in?

For SFP+ I have 2x Chelsio N320E-SR and 1x Emulex OCE14102.

For Ethernet it's 3x Intel X540-T2.

The SFP+ cards aren't even connected right now, and (from the client side) they also didn't do 10GbE through the switch.

The SFP+ links were DAC though, with Cisco H10GB-CU3M cables. But again, with p2p SFP+ there was no issue.

That's all the hardware I have for testing on the networking side. Also only the two PCs and the NAS.



2 minutes ago, FloRolf said:

For SFP+ I have 2x Chelsio N320E-SR and 1x Emulex OCE14102.

For Ethernet it's 3x Intel X540-T2.

The SFP+ cards aren't even connected right now, and (from the client side) they also didn't do 10GbE through the switch.

The SFP+ links were DAC though, with Cisco H10GB-CU3M cables. But again, with p2p SFP+ there was no issue.

That's all the hardware I have for testing on the networking side. Also only the two PCs and the NAS.

Just saying, SFP+ is Ethernet too.

I'm guessing you don't have any more switches?

Do you get full speed with RJ45 on the switch?

I'm guessing it might be an issue with the switch, but you can't really check that unless you have another switch lying around, or test via SFP+ with the H10GB-CU3M DAC.


2 minutes ago, Electronics Wizardy said:

Just saying, SFP+ is Ethernet too.

I'm guessing you don't have any more switches?

Do you get full speed with RJ45 on the switch?

I'm guessing it might be an issue with the switch, but you can't really check that unless you have another switch lying around, or test via SFP+ with the H10GB-CU3M DAC.

Yeah, sorry, lol.

No more switches, and I never got full speed out of the RJ45 cards, neither through the switch nor p2p between two PCs. That's the point that makes me question whether it's really the switch.

I'll do something weird now, I think: I'll put an X540-T2 on the same network together with itself and iperf it. If it works, that should theoretically tell me whether the card is 10GbE-capable at all.
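(Plan is to give each port its own address and bind iperf3 to them, roughly like this; the IPs are made up:

iperf3 -s -B 192.168.20.1                 # server bound to port 1
iperf3 -c 192.168.20.1 -B 192.168.20.2    # client bound to port 2

No idea yet whether Windows will actually push that traffic over the cable instead of shortcutting it internally through the loopback.)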



Just now, FloRolf said:

Yeah, sorry, lol.

No more switches, and I never got full speed out of the RJ45 cards, neither through the switch nor p2p between two PCs. That's the point that makes me question whether it's really the switch.

I'll do something weird now, I think: I'll put an X540-T2 on the same network together with itself and iperf it. If it works, that should theoretically tell me whether the card is 10GbE-capable at all.

Since you've got 3 SFP+ NICs and 3 computers, why not just get an SFP+ switch and run fiber between the systems?

How about a switch like this one: https://mikrotik.com/product/crs305_1g_4s_in


7 minutes ago, Electronics Wizardy said:

Since you've got 3 SFP+ NICs and 3 computers, why not just get an SFP+ switch and run fiber between the systems?

How about a switch like this one: https://mikrotik.com/product/crs305_1g_4s_in

Because I want to extend the network over longer distances with RJ45 in the future. And I have the switch now. :P Oh, and I only have one DAC cable.

So here's what just happened when I connected an X540-T2 to itself:

[screenshot: iperf results, X540-T2 looped back to itself]

 

The thing can even do more than 10GbE with itself. I also tried the one in the Win7 PC, and that does 8.6-9.1Gbit/s with itself.
Between the two PCs I only get 3Gbit/s, though.



2 minutes ago, FloRolf said:

Because I want to extend the network over longer distances with RJ45 in the future. And I have the switch now. :P Oh, and I only have one DAC cable.

You can go much farther with fiber (up to around 10km with 10GBASE-LR), and it's cheaper than Cat 5e as well. Fiber is just the better standard these days.

 

3 minutes ago, FloRolf said:

The thing can even do more than 10GbE with itself.

How are you testing that? With 2 NICs in one system? I'd be worried the data isn't actually going over the NICs; that seems like an error.

 

3 minutes ago, FloRolf said:

Between the two PCs I only get 3Gbit/s, though.

I'd check cables here. What cables do you have?

Does that Netgear switch have a cable test?


21 minutes ago, Electronics Wizardy said:

You can go much farther with fiber (up to around 10km with 10GBASE-LR), and it's cheaper than Cat 5e as well. Fiber is just the better standard these days.

How are you testing that? With 2 NICs in one system? I'd be worried the data isn't actually going over the NICs; that seems like an error.

I'd check cables here. What cables do you have?

Does that Netgear switch have a cable test?

That was actually one NIC in a single system connected to itself. Could be that the data isn't actually going over it, yeah, but I don't know anymore.

Brand new CAT6a cables, 3m long. The switch has a cable tester, and it says everything is fine.

 

 

 

EDIT: Could the old 1Gbit Intel driver for the onboard NIC be the issue here?
Nope, it's not.



On 7/8/2019 at 6:42 PM, Windows7ge said:

Any progress here? Did it behave any differently with the SFP+ NIC back in the server?

Okay, I made some progress, but no success.

I swapped the X540-T2 in the NAS for a Chelsio SFP+ card and tested two different X540-T2s in my main system (Win10). Both times similar results in CrystalDiskMark: around 390/330 MB/s again, same as with all-RJ45.
However, I got one run with a mad spike to 540/360 that I can't really explain.

[screenshot: CrystalDiskMark run with the SFP+ NIC in the NAS]

 

 

On the Win7 machine the speeds are a solid 850/750 again. Next I'll try the Win7 machine's card in my Win10 machine to see if maybe the other two are broken (95% they're not).

[screenshot: CrystalDiskMark run on the Win7 machine]

 

 

I've also enabled SMB1 in Win10 to see if that changes things, but it doesn't.

From the other, not madly weird, runs you can see Task Manager is really hitting a limit there, but I have no idea where that limit comes from.

[screenshot: Task Manager network utilization hitting a flat ceiling]

 

 

 

Any ideas? :( 

 

@Electronics Wizardy



3 minutes ago, FloRolf said:

Okay, I made some progress, but no success.

I swapped the X540-T2 in the NAS for a Chelsio SFP+ card and tested two different X540-T2s in my main system (Win10). Both times similar results in CrystalDiskMark: around 390/330 MB/s again, same as with all-RJ45.
However, I got one run with a mad spike to 540/360 that I can't really explain.

[screenshot: CrystalDiskMark run with the SFP+ NIC in the NAS]

On the Win7 machine the speeds are a solid 850/750 again. Next I'll try the Win7 machine's card in my Win10 machine to see if maybe the other two are broken (95% they're not).

[screenshot: CrystalDiskMark run on the Win7 machine]

I've also enabled SMB1 in Win10 to see if that changes things, but it doesn't.

From the other, not madly weird, runs you can see Task Manager is really hitting a limit there, but I have no idea where that limit comes from.

[screenshot: Task Manager network utilization hitting a flat ceiling]

 

 

 

Any ideas? :( 

 

@Electronics Wizardy

I'd do the testing with iperf, as SMB with disks just adds variables.

I'd also give a Linux live disk a shot to find out whether it's a hardware or a software issue, as that's a separate driver stack.
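From the live session something like this is all you need (the interface name will differ per system):

sudo ip link set enp1s0 mtu 9000    # match the jumbo frame setting if you're testing with it
iperf3 -c <server ip> -t 30         # plain TCP run against the other box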

 

This is just point to point, right? No switch.

I've used about 4 different 10G NICs on various systems and always get >9Gbit, so I don't know what's wrong here.


8 minutes ago, Electronics Wizardy said:

I'd do the testing with iperf, as SMB with disks just adds variables.

I'd also give a Linux live disk a shot to find out whether it's a hardware or a software issue, as that's a separate driver stack.

This is just point to point, right? No switch.

I've used about 4 different 10G NICs on various systems and always get >9Gbit, so I don't know what's wrong here.

From what I've seen, CrystalDiskMark and iperf give me pretty much the exact same results, because I'm testing with the NVMe drive, which can easily do 10G.

Those results are all with the switch in between! I know that p2p SFP+ worked, but p2p RJ45 didn't really (same results as with the switch).

 

I'll try Linux tomorrow; maybe I just need a Windows reinstall or something. I think it's very weird, though, that the Win7 machine with a slower CPU, RAM, and SSD is faster than the Win10 one with all-better components.

 

 

EDIT:

Jesus f-ing christ. I didn't change a single thing on the Win7 machine, or literally anywhere, and now I only get 470/760 there. Still faster than Win10, but WHYYYYY?
That system's Task Manager now also shows a flat curve on the network utilization...

[screenshot: flat network utilization curve in Task Manager on the Win7 machine]



Inconsistent results are always the worst. Everything you've shown matches what I've experienced in the past going between W10 and FreeNAS. As I said, I was never able to diagnose what causes this issue, but at least I'm not alone here.

 

I've since stopped using FreeNAS as my primary network storage (sadly), but what I can try is getting it going on an old server board with an SFP+ Mellanox ConnectX-2 card, through a 10G switch, to my desktop with a Broadcom BCM57810S, and testing that with iperf to see how different my results are. What version of FreeNAS are you using? 11.2-U5? I'm used to the legacy interface, so I'm not sure how hard it'll be if I have to navigate the new one.


  • 2 weeks later...
On 7/12/2019 at 6:56 PM, Windows7ge said:

Inconsistent results are always the worst. Everything you've shown matches what I've experienced in the past going between W10 and FreeNAS. As I said, I was never able to diagnose what causes this issue, but at least I'm not alone here.

I've since stopped using FreeNAS as my primary network storage (sadly), but what I can try is getting it going on an old server board with an SFP+ Mellanox ConnectX-2 card, through a 10G switch, to my desktop with a Broadcom BCM57810S, and testing that with iperf to see how different my results are. What version of FreeNAS are you using? 11.2-U5? I'm used to the legacy interface, so I'm not sure how hard it'll be if I have to navigate the new one.

Just did a quick test running Ubuntu and got the exact same result of roughly 350MB/s up and down. At this point I really don't know anymore. Either Win10 and Ubuntu both can't handle 10G while Win7 can, or my system's hardware is going nuts.

Yeah, I'm running 11.2-U5, but IIRC you can always use the legacy navigation, so if you have the time to waste, why not. :P

 

 



5 minutes ago, FloRolf said:

Just did a quick test running Ubuntu and got the exact same result of roughly 350MB/s up and down. At this point I really don't know anymore. Either Win10 and Ubuntu both can't handle 10G while Win7 can, or my system's hardware is going nuts.

Yeah, I'm running 11.2-U5, but IIRC you can always use the legacy navigation, so if you have the time to waste, why not. :P

Like I said, I troubleshot this for the longest time and never found an answer, but those numbers match the numbers I used to get for real-world transfers, so I have to assume you have the same problem.

You might like to test whether it scales: get two transfers going at the same time from different clients and see if you get 350MB/s twice, or if it gets cut in half per client.
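If you don't have two client machines handy, iperf3 can approximate the same thing from one box with parallel streams (the -P flag):

iperf3 -c <server ip> -P 4    # four parallel TCP streams

Two real clients are still the better test, since that also rules out a single client being the limit.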

