Search the Community
Showing results for tags '10gbe'.
-
Hi guys, I have just bought a handful of Mellanox ConnectX-3 10GbE SFP+ cards on the cheap. I tried them all in a Linux server to check whether any were DOA, and happy days, all of them registered and show up beautifully in Unraid. But when I take the same cards and put them in a Windows 10 PC, they just don't show up in the BIOS or in Device Manager. At first I thought they might not be supported, but I can find drivers for my exact Windows 10 version on the Mellanox driver page. Any help, or the reason why I'm an idiot who doesn't see the obvious problem, would be greatly appreciated.
- 3 replies
-
- Tagged with: networking, sfp+ (and 2 more)
-
Hello guys, I think current networking is very much limited by the commonly used 1Gbit. So I watched the video "10x Your Network Speed.. On a Budget!" and thought that 10GbE is still child's play. I googled alternative technologies and found that InfiniBand can also do normal IP networking, so I bought two InfiniBand adapters with 40Gbit network speed each and a 7-metre QSFP cable, all in all for less than 100 dollars. Something like this: http://www.ebay.com/itm/QLogic-QLE7340-Single-Port-40GBp-s-QDR-Infiniband-HCA-PCI-E-x8-w-Warranty-/182288795362?hash=item2a714246e2:g:r4YAAOSwCGVX4qUG I really find the technology fascinating, as normal 40GbE adapters are VERY expensive and I had never heard of InfiniBand before. The two cards show up with 32Gbit, and file-sharing speed is at about 5 GBytes/s (not gigabit)! I can't find anything wrong with this solution: it works, it is cheap, and it behaves like any other NIC (because of IP over InfiniBand), so normal SMB and other network stuff works as usual. So my question is whether anyone here on this channel has experience with similar technology, or also wants faster network speed (let's put the NEED aside here). Please share your setup and experience. (This is an MHQH19B-XTR currently in use.) With a dual-adapter card and NIC teaming you could easily achieve 80Gbit/s.
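A side note on why a "40Gbit" QDR card reports 32Gbit: QDR InfiniBand signals at 40 Gbit/s but uses 8b/10b encoding, so the usable data rate is 32 Gbit/s, or roughly 4 GB/s. The short Python sketch below just works through that arithmetic; the encoding figures are standard QDR values, not something stated in the post. It also suggests that sustained file-sharing speeds much above 4 GB/s on a single link would have to be coming from cache rather than the wire.

```python
# Rough QDR InfiniBand link-rate arithmetic (standard QDR figures,
# not taken from the post above).
signalling_gbit = 40.0        # 4 lanes x 10 Gbit/s QDR signalling
encoding_efficiency = 8 / 10  # QDR links use 8b/10b encoding
data_gbit = signalling_gbit * encoding_efficiency   # usable data rate
data_gbyte = data_gbit / 8                          # convert bits to bytes

print(f"Usable data rate: {data_gbit:.0f} Gbit/s (~{data_gbyte:.0f} GB/s)")
```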
-
Can anyone here recommend, or does anyone know of, an adapter that converts USB 3.1 to 10GbE? Many adapters from USB 3.0 to Gigabit already exist, and USB 3.1 supports 10Gbit data transfer, so such adapters should be possible. It should also be as cheap as possible without sacrificing anything said above. Thx
-
Can anyone here recommend a reliable NAS with 1 or more 10Gbit ports? It should be capable of 1.2 GByte/s transfer speed and hold at least 8 drives in RAID 0. It should also be as cheap as possible without sacrificing anything said above. Thx
-
Can anyone here recommend a reliable 10Gbit add-in card for any PCIe slot? It should be capable of 1.2 GByte/s transfer speed. It should also be as cheap as possible without sacrificing anything said above. Thx
-
Preface: This is not a personal build, but rather a build I'm leading for the company I work at. We're a large photography studio down in Southern California, and we also operate one of the industry-leading photography educational sites. I'm sure y'all can guess who we are! Previous build log I neglected a couple of years ago on OTT: I'm treating this build as if it were my own, and would love to share back with the LTT community, as quite a bit of knowledge was learned from Linus's videos regarding network rendering and Adobe Premiere. We're actually not too far from each other in the way we build our machines, except LTT does get the HW hookups, we don't hah! This will be an ongoing project, and I will try to document what I can without divulging too much about our internal configuration, just, you know, legal reasons. Phase 1, shared in this post, is where everything currently stands; this was done over the course of a few days to move from our old location to the new location, and there's still plenty more to come!

Rack 1 networking gear: The 1st switch is an Edge-Core AS4610-52T running Cumulus Linux. This is a 48-port PoE+ switch with 4x SFP+ 10G and 2x QSFP+ 20G stacking ports; unfortunately the stacking ports are not supported on Cumulus. This switch will be uplinked via 4x10G with VLAN trunks to a core 10/40G switch in Rack 2, shown later. The 2nd switch is a Juniper EX3300-48T pulled from our old office; we just needed additional 1G ports, as really we'll only have 8-10 devices on PoE+ (Aruba APs and PoE+ IPTV cameras). The 3rd switch is a Cisco 3560; this one is for management traffic on Rack 1 (IPMI, PDU/UPS, back-end management of switches, etc.). The 4th switch is a Netgear XS712T that I quite hate, but kept so we don't have to add new network cards to 2 of our existing ZFS units, as they have 2x 10GBase-T onboard. Below that is a pfSense router. This router will eventually be replaced with a redundant 2-node pfSense cluster running E3-1270V2s, with 10G networking for the LAN portion and 2x1G to the EX4200-VC cluster on Rack 2 for the internet uplink. The four 4U servers below are Supermicro 24-bay units running ZFS, each with 128GB of RAM. One of the servers is running dual-ported 12G SAS 8TB Seagate drives with a 400GB NVMe drive as L2ARC. Total raw storage: ~504TB in spinning drives.

Rack #2: Disclaimer: I do not recommend storing liquids on top of IT equipment regardless of whether it is powered on or not; in hindsight I probably shouldn't have included this picture, but it's already on reddit and I took the flaming... lesson learned. 32x 32GB Hynix DDR4 2400 ECC REG DIMMs for the VM hosts, plus 8x 16GB DDR3 1866 DIMMs for the upgrade of the old ZFS nodes (64GB to 128GB). Total RAM: 1024GB of DDR4 + 128GB of DDR3. It's crazy how dense DRAM is now. 8x QSFP+ SR4 40GbE optics: initially I was going to run 2x 40GbE to the ZFS nodes but ended up running out of 10GbE ports, so it's 1x 40GbE per ZFS node (2x total), and the rest of the 40GbE ports will be split out to 10GbE over PLR4 long-range 4x10GbE optics to end devices. Top switch: an Edge-Core AS5712 running Cumulus Linux again, 48x 10GbE SFP+ and 6x QSFP+ 40GbE, loaded with single-mode FiberStore optics (not pictured). Sitting above it is a 72-strand MTP trunk cable going to our workstation/post-producer area; still awaiting the fiber enclosures and LGX cassettes so we can break this out into regular LC/LC SMF patch cords to the switch. Below that are two EX4200-48Ts sitting in a Virtual Chassis cluster. These will be used primarily for devices that should have a redundant uplink, so I'm spanning LACP across the two "line cards"; this will also be used in LACP for our VM host nodes for Proxmox cluster/networking traffic. Ceph/storage traffic will be bonded 10GbE for the nodes that need it. Again, a wild Cisco 3560 10/100 switch with 2x 1GbE LACP uplinks to the EX4200 for management traffic. Rack #2 will eventually be filled with 4x 2U 12-bay Supermicro units loaded with SSDs as Ceph nodes, and then some E3 hosts for smaller VM guests. Below that you can spy a Supermicro FatTwin 4U chassis with 4x dual Xeon V3 nodes. Each node will have 256GB of RAM, and depending on the node they will have either dual E5-2676 V3 or E5-2683 V3 CPUs. There will be minimal if not zero local storage, as all VM storage will live on the Ceph infrastructure. 4x 10GbE everywhere.

Random photos of our IT room: 4x Eaton 9PX 6kVA UPSs, 2 per rack, fed with 208V AC. The PDUs are Eaton G3 managed EMA107-10 units.
-
Hi guys, I am building a FreeNAS box where I need fast transfers for big files, and I would like to use 10GbE. I read on the FreeNAS forum that Chelsio is the choice over Intel, but I would like to know your opinion on the subject. I am thinking of going for a Chelsio S320E-SR or Chelsio S310E-CR. I live in the UK, and Chelsio is something you cannot get here, so I would have to order it from the US. Did any of you experience problems with Intel? Do you use 10GbE Intel with FreeNAS? Let me know. Many thanks, Bartosz
-
Any recommendations for a simple Intel 10GbE NIC? There are so many options out there; I'm thinking about the eBay route. Are Dell/HP-branded cards fine?
-
Hello, as 10GbE is rather new at the consumer level, I thought it would be useful to start a generic thread for sharing experiences, setups, tips... My own experience is 3-4 months old, based on two Intel X520-DA2 cards and two Intel X710-DA4 cards. One X710 is in an RS3614xs from Synology; the other cards are in workstations and hooked to a used Arista 7124S switch grabbed on eBay (very, very noisy... but it does the job for learning!). Everything is linked with twinax direct attach cables (1m, 3m and 10m DAC with SFP+ connectors). I had a bad experience with some 10m active DAC cables from Cisco (reference SFP-H10GB-ACU10M, which despite the missing "+" are SFP+ cables): those cables are not recognized by the switch. I had the chance to test them in other switches and it was the same (though they do work in a direct machine-to-machine link). It seems vendors are not very willing to standardize in this field...

So far, getting 10Gb/s performance is not too difficult when testing the pure network side with the iperf tool (https://iperf.fr/). This tool lets you take the bottleneck of the drives out of the picture. Tuning the Intel drivers offers a lot of knobs; you feel like a newbie in the cockpit of a jetliner, and all the online help basically says "leave at default", which is not such a good way to educate users, I guess. What is your experience with that, and with other vendors' hardware?

The rather difficult part is Samba and shares from the NAS: although the theoretical performance of the drives (SSDs loaded in the NAS) should allow saturating the link, it does not seem that simple, and read performance is a little under what is expected, around 850-900 MB/s instead of the expected 1200MB/s (thanks to the cache, write performance is OK even for rather large files/transfer chunks). Also there is the LACP issue on Windows 10 and Intel, where Microsoft and Intel each say the other is responsible, basically leaving Windows 10 without VLAN or link aggregation capability (not a pure 10GbE point, though). That's all for now; the aim was to start this thread and see what we can share!
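For anyone who wants to do the same kind of pure-network test without installing iperf, here is a minimal Python sketch of a memory-to-memory TCP throughput test between two hosts. The address, port and transfer size are placeholders to adapt to your own setup, and a proper iperf run will give more trustworthy numbers; this only illustrates the idea of measuring the link with the drives out of the path.

```python
import socket, sys, time

HOST, PORT = "192.168.2.1", 5201   # placeholder address/port; adjust to your own link
CHUNK = 1 << 20                    # 1 MiB send/receive buffer
TOTAL = 10 * (1 << 30)             # push 10 GiB straight from memory (no disks)

def server():
    # Accept one connection, count the bytes received, report the rate.
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        received, start = 0, time.time()
        with conn:
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
        secs = time.time() - start
        print(f"{received / secs / 1e6:.0f} MB/s "
              f"({received * 8 / secs / 1e9:.2f} Gbit/s)")

def client():
    # Send TOTAL bytes of zeroes from memory to the server.
    buf = bytes(CHUNK)
    with socket.create_connection((HOST, PORT)) as conn:
        sent = 0
        while sent < TOTAL:
            conn.sendall(buf)
            sent += CHUNK

if __name__ == "__main__":
    server() if sys.argv[1:] == ["server"] else client()
```

Run it with "server" as the first argument on one machine, then run it without arguments on the other (with HOST pointed at the first machine's address).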
-
Hello, when do you think 10 Gb/s will overtake/replace the 1 Gb/s LAN ports currently found on virtually all motherboards/systems sold today? Do you have any doubts that it will ever happen? Do you have any doubts that it will be available through the 10GBase-T "RJ45" standard?
-
Hello all, I wanted to log my successes and failures with setting up a direct-connect 10GbE network between a "server" PC and my workstation PC. First, my original equipment.

Workstation PC:
Intel i7-3930K (six cores, hyper-threaded = 12 logical cores)
Asus Sabertooth X79 rev2 motherboard
Intel X540-T1 10GbE network interface card
GTX 580
32GB Corsair DDR3 memory
Samsung 840 Pro 1TB SSD
120GB SSD
1TB Western Digital Black
2x 2TB Western Digital Black in RAID 0
Corsair Obsidian 750D case
Corsair 1000-watt power supply

"Server" PC:
AMD Phenom II X4 quad core (no hyper-threading, so 4 logical cores) [replaced with Intel i7-3930K]
Asus M5A99FX PRO R2 [replaced with P9X79 PRO]
32GB DDR3 memory
SanDisk Extreme Pro 240GB SSD
Intel X540-T1 10GbE network interface card
LSI MegaRAID 9266-8i RAID controller
XFX 750-watt power supply
Rosewill 12-bay 4U server case
2x 2TB Western Digital Black

Now to the story. I started with the typical install of all the parts. I have my existing workstation on Windows 10 (upgraded from Windows 7), and I installed Server 2012 R2 Essentials on the "Server" PC. My first challenge was configuring the two cards to communicate with each other in a way I could easily understand. They auto-detect each other and connect automatically, but I wanted to be able to map drives and everything without any trouble, so I edited the IPv4 configuration so that each would have a static IP (separate from my onboard 1GbE domain and DNS). I chose 192.168.2.1 for the "Server" PC and 192.168.2.2 for the workstation. I let the subnet auto-populate, and I chose the "Server" computer to be the DNS, so I entered 192.168.2.1 on both the "Server" PC and the workstation. With the two machines talking to each other in an easy-to-remember way, I could start testing and configuring the Intel X540-T1 NIC in both the server and the PC.

The first issue I ran into was getting iperf to even register the two. After hours of fiddling, I found that the firewall built into Server 2012 R2 Essentials was preventing me from getting any throughput; I disabled the firewall and could get a connection. Now that I knew I was getting throughput, it was time to test the connection speed with a file transfer. I used SoftPerfect's RAM Disk to create 10GB RAM disks on both my "Server" PC and my workstation and did direct transfers via shared drives. This way my storage hardware wouldn't be the bottleneck and I could truly test the 10GbE interface speed. I found I was getting sub-par speeds (200MB/s with 95% CPU utilization). After a long time fiddling, I got some advice via this forum and Cinevate's blog to disable the security policies for encryption; being a newbie to Server 2012, I hadn't realized how sophisticated and out of my league this OS is. I continued testing. I adjusted the Intel X540-T1 settings on both the "Server" PC and the workstation: turned on jumbo packets at their largest setting, maxed the transmit and receive buffers, and set the RSS queues to match the number of logical cores (4 on the "Server" PC, and 8 on my workstation, that being the only option without going over its 12 logical cores). I was able to break the 200MB/s barrier, but ran into a 500MB/s cap, with 50% CPU usage on my "Server" PC and less than 15% on my workstation. With a lot of trial and error, and literally writing down the results I got with different configurations, I came to the conclusion that my bottleneck was actually the CPU in my "Server" PC.
With only 4 cores I was limiting the number of data streams I could send down the pipe, preventing me from reaching maximum transfer speeds. My solution was to find a better processor and motherboard, and I was fortunate to track down the same chipset motherboard and six-core processor that my workstation has. I installed the new hardware, locked in and configured everything, and presto! I reached 900MB/s transfers instantly, and after some more tweaking I could get 1.1 GB/s with no problem.

Now, I have been putting quotation marks around "Server" PC this entire article because I wanted something more user-friendly than a server OS, so I installed a copy of Windows 10 on the "Server" PC. I ran into a couple of hiccups that were Microsoft issues with SSDs and high utilization; I eventually just swapped my SSD for another, which has mostly solved the problem. The NIC configuration carried over and the results haven't changed: everything works flawlessly. Right now I'm in the process of ensuring stability with Windows 10, as it oftentimes gives me high utilization on my OS SSD, which eventually calms down; I suspect Windows Update, since I noticed it was downloading automatically.

I'm also testing RAID operations. I had a RAID 0 in the workstation which I removed, and I had a RAID 0 set up on my LSI card. I ran into some trouble whose solution I hope will help many: I needed to use the diskpart command to clean the drives and remove any data relating to a previous RAID on them. You can Google how to use the commands; it's very easy, just be careful to select the proper disk when using it. I was able to build a RAID 0 with 4x WD 2TB Black and got 600-750MB/s transfers between the RAID 0 on my "Server" PC and a RAM disk on my workstation. From here I'm going to test how image editing and video editing work with an H.264, CDNG, and RAW workflow in Adobe products.

My end goal is to use the server case for 8 bays of RAID 6 storage for redundancy and large backups using WD RE or Red 4TB drives, and to install another LSI MegaRAID 9266-4i card to create a RAID 0 with my 4x 2TB WD Black drives as a working platform.

Now, I did leave out a lot of my troubleshooting, but this is just a general guide to what limited me in my pursuit of true 10GbE performance. Server 2012 R2 had security policies that blocked my file transfers or created too much overhead, and my CPU was the bottleneck in the end. So keep that in mind when you are looking to build something capable of 10GbE transfers: consider your logical core count. You could probably get away with a quad-core hyper-threaded processor and get 1GB/s transfers, but if you have a dual 10GbE NIC you will need twice that to handle the throughput, mainly for the receive-side scaling (RSS) queues. I'm pleased with my setup, but it came at a cost: the Intel X540-T1 NICs came in at $300 each; I got lucky with the motherboard and CPU, getting those for $400, though I could probably find a cheaper but adequate option for that price; and the RAID controller was $500 plus drives (which I can verify is the best option, especially if your hardware needs to be changed: I went from one motherboard to another without losing my RAID 0, and it worked perfectly in the new/revised setup).
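A quick back-of-envelope check on the numbers above (my own arithmetic, not the OP's): 10GbE carries at most 1.25 GB/s of raw bits, and with typical Ethernet/IP/TCP overhead the practical single-link ceiling is roughly 1.1-1.2 GB/s. So the 1.1 GB/s result is essentially line rate, while the earlier 500 MB/s cap was well under half of it, which fits the conclusion that the sending CPU was the limit. A short Python sketch of that arithmetic, assuming ~94% protocol efficiency:

```python
# Rough 10GbE throughput ceiling (assumed ~94% protocol efficiency for
# Ethernet + IP + TCP framing; an estimate, not a measured figure).
link_gbit = 10.0
raw_gbyte = link_gbit / 8              # 1.25 GB/s of raw bits on the wire
protocol_efficiency = 0.94
ceiling_gbyte = raw_gbyte * protocol_efficiency

print(f"Raw link rate:       {raw_gbyte:.2f} GB/s")
print(f"Practical ceiling:   {ceiling_gbyte:.2f} GB/s")        # ~1.17 GB/s
print(f"1.1 GB/s observed =  {1.1 / ceiling_gbyte:.0%} of the ceiling")
print(f"0.5 GB/s early cap = {0.5 / ceiling_gbyte:.0%} of the ceiling")
```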
- 28 replies
-
Browsing the FreeNAS forum I found a reference to this: http://routerboard.com/CRS226-24G-2SplusRM And here is the Amazon link: http://www.amazon.com/MikroTik-CRS226-24G-2S-RM-Ethernet-manageable/dp/B00MDXAVWE Here's a less expensive version: http://www.amazon.com/gp/product/B00LYFJAYW/ref=pd_lpo_sbs_dp_ss_3?pf_rd_p=1944687762&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=B00MDXAVWE&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=0ZYTK9K92GF676FE5JM1 It also has two 10GbE ports; the difference seems to be a lower level of RouterOS license. The features for the license versions can be found here. This could be a really inexpensive way to get a 10GbE backbone into your network, with cables like these connecting a machine to the switch.
-
Hello, I want to use a 10G SFP+ Ethernet solution for my computers and my production servers. Here is the 10G SFP+ card I am looking at: http://www.ebay.com/itm/Mellanox-ConnectX-2-Single-Port-SFP-10GBE-Network-Card-MNPA19-XTR-/271934252546?pt=LH_DefaultDomain_0&hash=item3f508b6602 If I buy two of these, can I just connect the two together, plus some daisy-chaining, and have a 10GbE link to my server? I am not familiar with SFP+ and fiber networking yet. Is there anything else I would need other than a 10G switch? Please help! -Cam
- 6 replies
-
- Tagged with: 10 gigabit, 10gbe (and 1 more)
-
I'm looking at NICs for a high-speed interconnect between two machines; does anyone have experience with the X540 or I350 NICs? I'm primarily interested in consumer-grade motherboard compatibility and operating temperatures. Thanks in advance.
-
Hey guys, I have a Maximus VII Gene. I currently have both PCIe 3.0 x16 slots taken with video cards. I want to add a 10GbE card, however I only have a PCIe 2.0 x4 slot and a mini PCIe slot available. All the x4 10GbE cards I have been able to find are PCIe 2.1, and my research tells me that 2.1 is not backwards compatible with 2.0. If this is incorrect please let me know. If anyone has any suggestions for how I can reach 10GbE, or knows of any PCIe 2.0 x4 10GbE cards, please let me know. Thanks.
-
Anyone got their hands on one of these? I haven't seen much in the way of reviews. Are they any good? https://www.ubnt.com/edgemax/edgeswitch-16-xg/ https://www.servethehome.com/ubiquiti-edgeswitch-es-16-xg-review-quality-control-absent/
-
QNAP, the famous NAS manufacturer, has just revealed its first network switch, called the "QSW-1208-8C", a 12-port 10GbE unmanaged switch. It has 4 dedicated SFP+ 10GbE ports and 8 mixed ports, which consist of 8 SFP+ ports and 8 10GBase-T RJ45 ports; on those 8 ports you can use either the SFP+ or the RJ45 connector, but not both at the same time (remember, this is a 12-port switch: 8 mixed ports + 4 SFP+ ports!). It also comes with a low-noise design, a nicely crafted chassis, and a switching capacity of up to 240Gbps. Pricing and release date are still unknown at this moment, but QNAP is launching a public testing program for this switch in Taiwan, which you can apply for and buy the switch for NT$8800 (~$290) if you attend a 10GbE event held by QNAP in Taipei on 3rd May. At that event, QNAP will be showcasing their 10GbE solutions and accessories in different environments. Note that this public test only takes 100 people, and pricing may be higher than NT$8800 when the switch officially launches. No word on whether this public testing program will launch elsewhere in the world. The 10GbE event page is in Chinese, but a reddit user, "dunkurs1987", translated the portion about the "QSW-1208-8C" into English. QNAP has been pushing advanced features like VMs, 10GbE, and SSD caching for a long time, and making this 10GbE switch is a step towards completing their 10G product line. With 10G switches now getting cheaper and cheaper (for example, Buffalo recently released two 10GbE switches, which can also run 2.5G and 5G, for ~$880 (12 ports) and ~$570 (8 ports)), upgrading to 10G is getting more affordable. I also like the mixed-port design, with which you don't need to choose between going SFP/SFP+ or RJ45; it could potentially save money when upgrading network adapters and cables. On spec the switch looks great, and I hope their future switches follow this mixed-port design, with more and more ports. https://www.qnap.com/static/landing/2017/10gbe-ready-event-0503/index.html (in Chinese) *update on 26/04/2017: an iperf test of this switch was run and filmed (probably on a pre-production unit)... https://nas.world/43-全民 10gbe-來了-qsw-1208-8c-封測包.html (in Chinese)