
Question Regarding Dual Ethernet on ASRock Motherboards.

EnSabahNur

Hello LTT Community, 

 

I recently purchased an ASRock Fatal1ty X99M Killer/3.1 Micro ATX LGA2011-3 motherboard, and I wanted to ask the knowledgeable community here what the dual (x2) Ethernet ports are for, or what they could be used for on this motherboard. I read a few reference articles on using both at the same time, but I'm still unsure how to take advantage of both Ethernet ports simultaneously.

 

Moreover, as you can see, this board comes with a Killer network card. Can anyone who has experience with it, or is an expert themselves, describe the performance improvements this card has over standard ones?


It's unfortunately more of a sales & marketing tool than something actually productive. If you have a managed switch, you can theoretically team them together. Server 2012 also has support for NIC teaming, either load-balanced or failover, with some network cards, though those are normally limited to Intel and Broadcom workstation & server chipsets, not consumer ones.
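
For what it's worth, teaming on Server 2012 is handled by the built-in NetLbfo cmdlets. A minimal sketch (the adapter names "Ethernet" and "Ethernet 2" are just examples; whether it actually helps depends on your switch and NICs):

    # Team two adapters, switch-independent, balancing flows by TCP/UDP port
    New-NetLbfoTeam -Name "Team0" -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

    # Or demote the second adapter to a hot standby for pure failover
    Set-NetLbfoTeamMember -Name "Ethernet 2" -AdministrativeMode Standby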


As Windspeed has said, this will often not be very useful. I have a server board with four ports, and I run multiple virtual machines on that computer. So instead of teaming, I can give different VMs different Ethernet ports.



You give VMs an Ethernet port? What hypervisor do you use?




Regular Arch Linux as the host OS, then QEMU/KVM on top of that (also Arch Linux guests, at least so far). I run more VMs than I have ports, so it's not strictly one port per VM, but I balance them out enough that those VMs which actually need bandwidth have it available when needed without clashing with each other.

I define bridge interfaces with netctl on the host, then pass the bridge to QEMU. Each QEMU VM gets its own MAC address, even if they use the same bridge. Then inside the guest, I just define a regular network profile via netctl for the network adapter it sees.
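
In rough outline, a minimal sketch of that setup (br0, eth0, the MAC address, and disk.img are placeholder names, not an actual config):

    # /etc/netctl/br0 -- netctl bridge profile on the host
    Description='Bridge for VM traffic'
    Interface=br0
    Connection=bridge
    BindsToInterfaces=(eth0)
    IP=dhcp

    # Bring the bridge up, and whitelist it for QEMU's bridge helper
    netctl start br0
    echo 'allow br0' >> /etc/qemu/bridge.conf

    # Attach a guest to the bridge with its own MAC address
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -netdev bridge,id=net0,br=br0 \
        -device virtio-net-pci,netdev=net0,mac=52:54:00:aa:bb:01 \
        disk.img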


Ah, I see. So basically I should just use the Killer NIC port, as it is the higher-performance NIC?


I believe the Intel NIC is the higher-performance NIC overall. I think Killer prioritizes certain network traffic, which may or may not be a good thing, and I believe the Intel one has more offloading features.



What @Windspeed36 said above applies, although I can see one other use being redundancy. If you've got two ISPs providing your residence with internet, you could, in theory, connect one LAN port to ISP A and the other port to ISP B. That way, if one ISP ever went down, you'd be sure to maintain your connection.

 

This is kind of pointless from a server standpoint, however, as each ISP connection would have a different IP address, so if you had www.domain.name.com pointing at your server, you'd have to update the record and wait for DNS to propagate, or run some kind of managed network switch or load balancer in front.

 

If you understood none of what I just said, then that's another reason not to have dual LAN ports on a home PC :P


I define bridge interfaces with netctl on the host, then pass the bridge to QEMU. […]

Ahh, I'm just so used to ESXi with vSwitches, VM networks, and port groups on my Supermicro server.
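
For comparison's sake, the rough ESXi equivalent of that bridge setup via esxcli (vSwitch1, vmnic1, and the port group name are just example names):

    # Create a standard vSwitch and hook a physical NIC to it as an uplink
    esxcli network vswitch standard add -v vSwitch1
    esxcli network vswitch standard uplink add -u vmnic1 -v vSwitch1

    # Add a port group; VMs attached to it then share that uplink
    esxcli network vswitch standard portgroup add -p "VM Network 2" -v vSwitch1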

