Search the Community
Showing results for tags '10gbe'.
-
Hi, I've got an issue with driver installation for my HP NC523SFP+ 10GbE card. Since QLogic moved their website to Marvell, I am unable to find drivers for the QLE3242 chip. Could someone provide the extracted files for the driver installation, or any other solution? All of the executables that install the drivers automatically fail. I've also attached a screenshot of the message Windows gives me when I try automatic installation.
-
Hi, I have a server that is not on the domain and everything works fine. I just got a TP-Link TX401 10Gb card and was testing the speeds:
PC to shared drive: 400 MB/s - OK
Shared drive to PC: 600 MB/s - OK
PC to server (not on domain): 900 MB/s - OK
Server to PC: 2 MB/s - how is this possible? What could it be?
Everything has 10Gb network cards and the server has 2x 10Gb connections. The PC is on the domain; the shared drive is not. All devices are on static IP addresses. Thank you!
-
Hi all, I have a couple of questions about my Ubiquiti setup and about some SFP+ to RJ45 adapters. I wired my house for 10GbE (Cat6A), and for some of the devices I want to be able to utilize the full 10GbE connection. I recently bought a Ubiquiti USW-Aggregation, which has 8x 10Gb SFP+ ports on it. I also have 2 other switches, a USW-Enterprise-8-PoE and a USW-Pro-24. My router is a UDM Pro. I plan to use the Switch Aggregation to connect the switches together, as well as use 4 of the ports for computers I want to have 10G on, plus one port for a NAS. My original plan was to aggregate 2x 10G SFP+ ports together to connect each of the other two switches, which would use up 4 of the ports (2 10G links for each). The other 3 would be used for desktops around the house, and the last one, as I said, would be used for the NAS.

Is there any reason the Dream Machine Pro would need to be connected through 10G rather than just 1G? My internet connection is only 1G, so the only reason I could see the need for 10G would be for NVR access to the drive in the Dream Machine Pro, but I doubt you'd need 10G for a mechanical HDD. Let me know if I'm wrong.

My other question was about these 10Gb SFP+ to RJ45 adapters. I need these to connect a few desktops to my Ubiquiti Switch Aggregation. Is there any difference between them? Are any brands better than others? FS.com option 1, Amazon option 2, Amazon option 3, Amazon option 4. There's also this "Industrial" one from FS.com. Thanks for anyone's input!
-
Rampage VI Extreme gen 1, Asus 1080 Ti, 3 Samsung SSDs, Asus 10GbE NIC (in addition to the onboard one), 1600 AXi, Windows 10.

Just recently, both of my 10GbE cards started randomly reporting "Network cable unplugged". I currently only use the onboard one, to connect directly to my server (tested: all ports working fine). I am also connected to my LAN and a public Wi-Fi in addition to the P2P connection to my server, all of which are on different subnets. I tried cleaning out the ports, 3 different cables, uninstalling and reinstalling the drivers, disable/restart/enable/restart, ping tests, and the troubleshooter. It will work fine one minute and then randomly say the network cable is unplugged, even while I'm just watching a movie. I get no lights whatsoever from the ports or the card indicator when plugging and unplugging. All power management is off in the properties. I would think it was a faulty card, but the fact that the add-on card has the same issue points to a software issue. I checked the Asus driver page, nothing there either. All I get is "cable unplugged". The 1GbE and Wi-Fi are working.
-
Who makes the best 10GbE NIC for Windows 10? I've seen good things about these:
>ASUS XG-C100C
>TP-Link TX401
>10GTek X540-10G-2T-X8
>Intel X540-T2
>ipolex X540-10G-2T
Which have you guys had luck with? I want the best out-of-box experience.
-
I am looking at wiring my new house for Ethernet (the walls are open). For the regular Ethernet jacks around the house this is my plan: punch-down keystone wall plate > Cat6E cable > patch panel with pass-through couplers (not punch-down style) > Cat6 patch cable > switch. For WiFi access points this is my plan: AP > Cat6E cable > patch panel with pass-through couplers (not punch-down style) > Cat6 patch cable > PoE injector > Cat6 patch cable > switch. I am not sure if this is too complicated or if it will work fine.

My main reason for this forum post: does anyone know if the PoE injectors will work with those FS.com couplers in the blank patch panel? They are circuit-board based rather than individually wired internally. I contacted their support, but three different representatives told me no twice and yes once on PoE support. My other question: does anyone know if my current setup will be able to handle 10GbE in the future (other than the switch)? I can run Cat6a instead of the Cat6E if that's better for the future.
The parts are as follows:
>RJ45 Modular Ends: Cable Matters 100 Pack Pass Through RJ45 Modular Plugs for Solid or Stranded UTP Cable
>Punch-down Keystone Jacks: Cable Matters UL Listed 25-Pack Slim Profile 90 Degree Cat 6, Cat6 RJ45 Keystone Jack with Keystone Punch Down Stand in White
>Cat6E Cable: CAT6E Riser (CMR), 1000ft, UTP 24AWG, Solid Bare Copper, 600MHz, UL Certified, Easy to Pull (Reelex II) Box, Bulk Ethernet Cable in White
>Router: Ubiquiti Dream Machine Pro
>Switch: Ubiquiti Switch Pro 24
>Patch Panel: Cable Matters Rackmount or Wall Mount 24 Port Keystone Patch Panel (Blank Patch Panel for Keystone Jacks/Keystone Panel)
>Couplers for Blank Patch Panel: Cat6 Keystone RJ45 Coupler, Unshielded, Female to Female Insert Inline Coupler
>Patch Cables: 12ft (3.7m) Cat6 Snagless Unshielded (UTP) PVC CM Ethernet Network Patch Cable
>Access Points: Ubiquiti UniFi Access Point WiFi 6 Long-Range & Ubiquiti UniFi Access Point WiFi 6 Lite
>Wall Plates: Two Ports Keystone Single Gang Wall Plate
-
Hello guys, I need some help setting up my 10GbE P2P link. I got two NC523SFP cards off eBay; one is in my TrueNAS server and one is in my Win 10 desktop. They are connected with two 455883-B21 cables. The one in the NAS system showed up in the config with no prior driver installation - no problems. For the Windows side I got the official HP Windows Server driver, extracted the .exe, and installed the drivers manually - this apparently worked, as they got recognized as network adapters, no problems here either. At least I think so lol. The only thing is that they claim there's no network cable attached - however, might this be because there's no DHCP and no static IPs set? I did set static IPs and subnet and all on both sides, but this did not give me a connection or any visible change. Also note that TrueNAS shows the link state as "DOWN" - do I have to UP it manually? And is there any way I can tell the two ports apart, i.e. which one is ql0 and which is ql1? Also, the lights on the NICs are not blinking whatsoever - is this a sign that something went wrong with the drivers, or did they overheat? (Any way I can check their temperature?) So any help is appreciated; did I do something wrong or did I miss anything? Do I have to update the NICs' firmware? Cheers
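Once both sides have static IPs set and the TrueNAS link has been brought UP, a quick sanity check that traffic actually flows over the point-to-point link (before digging further into drivers) is a plain TCP connect test. A minimal Python sketch; the address and port below are placeholders, substitute your NAS's static IP and any port you know is listening there (e.g. 22 for SSH or 80/443 for the web UI):

```python
import socket

def reachable(host: str, port: int, timeout: float = 1.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical P2P addressing -- replace with your own static IP and a
# port that a service on the NAS is actually listening on.
print(reachable("10.0.0.2", 22))
```

If this returns True, the link and IP config are fine and the "cable unplugged" indication is cosmetic; if False, ping and ARP are the next things to check.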
-
I have a Cat6 cable that cannot be changed out that I need to push 10 gig through. The way it is currently set up would lend itself to being changed out on one end for an SFP+-to-Ethernet adapter plugged into my UniFi aggregation switch, and on the other end a 10 gig Ethernet switch. I don't know if going from an SFP+ port adapted to Ethernet into a 10 gig Ethernet port is compatible. If someone knows, can they please clear this up for me? I'm kind of a noob with fibre standards (all I've done up to this point has been very generic multimode fibre runs).
-
Thanks for clicking on this post, you're the best already. I have a 10Gbit router with (just upgraded to) 4Gbit symmetrical internet. The fiber router is a Nokia XS-250WX-A provided by the ISP/cabling provider. The router shows 10Gbit negotiation correctly and no visible errors. My issue is that when I connect at 10Gbit I get around 1.6Gbit down on speed tests and 400-500Mbit up; if I force negotiation to 1Gbit I get 1Gbit down and 1Gbit up. Moving from 1Gbit to 10Gbit networking cuts my upload in half. The issue occurs with different Ethernet cables (I have tried Cat6, Cat6a and Cat5e) and on 2 isolated computer setups.

Setup 1: 3950X, X570 Aorus Xtreme, Aquantia AQC107 10Gbit NIC, RTX 3090 (I know this isn't important, I just like telling people about my 3090 lol). Have attempted:
- Updating drivers for the NIC
- Updating firmware for the NIC
- Removing all other virtual NICs (e.g. Hyper-V, VMware Workstation)
- Playing with lots of settings on the NIC (e.g. jumbo packets)

Setup 2: 1800X, X370 Asus Crosshair VI Hero, USB-C Sabrent 5Gbit adaptor (Aquantia chipset), RX 480.

My current thought is that maybe I'm hitting an incompatibility between Aquantia chipsets and the Nokia router; I'm really grasping at straws now.
-
Hello dear community, we have a full 10GbE network in the office for our video production. Unfortunately I noticed that our systems with the Threadripper 1950X as well as the 2990WX only manage ~300 MB/s max in reading (writing is double, at ~700+). Our NAS is a Synology FS3400 with 18x 4TB SSDs, connected to a Netgear M4300 x24-f24 via 4x 10GbE LACP. Our Windows Server 2016 box with 4x 10GbE LACP is the only one that can nearly max out the full 10GbE in both directions in CrystalDiskMark. Our Threadripper 3990X and all Intel systems also manage ~700 MB/s here. Only the "older" Threadrippers do not. With iperf I don't get over 6 Gbit/s in both directions (more than one thread). The bottleneck when reading is about the same as the iperf performance with only one thread on the loopback. Could it have to do with the fact that the Aquantia AQC107, which is installed onboard or as a plug-in card in the form of an Asus XG-C100C, only has 4x PCIe Gen 2.0? All PCs use this card, but the TR 1st and 2nd gen have deficits with it. I have now ordered a Mellanox ConnectX-4 with an RJ45 transceiver for testing; that card has PCIe Gen 3. I have also tried all settings in the network card, as well as other PCIe slots and drivers. I have also found two forums on the net where users describe this problem in connection with Windows 10 Pro 2004. For me, however, the problem exists with 1909 as well as 2004 and Windows 10 Pro for Workstations. I am open to ideas. Thanks a lot!
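On the PCIe question in the post above: a back-of-the-envelope sketch of usable PCIe link bandwidth, using the standard per-lane rates and line-code efficiencies (this ignores TLP/protocol overhead, which shaves off a bit more in practice):

```python
# Usable PCIe bandwidth in Gbit/s: per-lane transfer rate times encoding
# efficiency (Gen1/2 use 8b/10b, Gen3 uses 128b/130b) times lane count.
def pcie_gbps(gen: int, lanes: int) -> float:
    raw_gts = {1: 2.5, 2: 5.0, 3: 8.0}[gen]  # GT/s per lane
    eff = 0.8 if gen <= 2 else 128 / 130     # line-code efficiency
    return raw_gts * eff * lanes

print(round(pcie_gbps(2, 4), 1))  # Gen2 x4 (AQC107) -> 16.0 Gbit/s
print(round(pcie_gbps(3, 4), 1))  # Gen3 x4          -> 31.5 Gbit/s
```

Even at Gen 2.0 x4 there is ~16 Gbit/s of link bandwidth available to a 10GbE NIC, so the lane count alone shouldn't explain a ~6 Gbit/s iperf ceiling; per-packet protocol overhead and platform quirks (e.g. the chipset the slot hangs off) can still matter.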
-
It's finally time to upgrade. I'm coming from an i5-3570K system with a GTX 1060. AMD's 5600X has tickled my fancy, so I'm finally upgrading. The 1060 is the only thing I'm keeping from my old build. I would love to build an ITX system this time that will last 8-10 years again. So I'm looking at the following: AMD 5600X, ROG Strix X570-I Gaming ITX mobo, 16-32 gigs of RAM, current 1060 (upgrade to RX 6800 down the road), 1TB WD SN750 NVMe drive. My dilemma is that I have a NAS and a 10G card I'd like to connect to this PC: a Mellanox MCX311A-XCAT ConnectX-3 EN SFP+ network card. It's PCIe x4. I'm thinking I can use the mobo's back M.2-2 connector, using an M.2-to-PCIe riser cable to plug in my SFP+ card, like this one. Has anyone done this? I'm thinking it should work... The only limitation I've read about is the M.2 slot's limited power, but that seems to be 7W usually, and the Mellanox card is listed at less than 5W for its single port. I've never used M.2 for anything and I'm unfamiliar with it; I thought it was just for storage, but I see now it's just another way to connect PCIe. Thoughts? Thank you. Country: Canada. Used for: games, DaVinci Resolve.
-
Hi all, thanks in advance for any help. I am trying to set up a 10GbE pfSense server on some Supermicro hardware that I bought on eBay. I am trying to figure out if I have enough PCIe lanes for both the 10GbE NIC and a Samsung 970 Evo in one of these enclosures. Any thoughts? Separately, does anyone have a good way to retain functionality of the front bays? I assume that once I install the 10GbE NIC, there is no way to keep the front slots active? This server doesn't seem to have any documentation from Supermicro, which makes researching this difficult.
-
Computer 1 (Client): Asus Rampage IV Extreme, i-7900K, 16GB RAM, 1080 Ti, 960 Evo SSD, Windows 10.
Computer 2 (Server): Zenith Extreme II, Threadripper 2950X, 64GB ECC RAM, 780 Ti, 2x raidz2 of 12x 8TB drives, 970 Evo 1TB SSD, Ubuntu 20.
Each computer has its own 1GbE connection to the internet, plus a P2P 10GbE link directly to each other. iperf shows an OK connection from PC1 to PC2:

Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.2.2.1, port 58668
[  5] local 10.2.2.3 port 5201 connected to 10.2.2.1 port 58670
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   956 MBytes  8.02 Gbits/sec
[  5]   1.00-2.00   sec   959 MBytes  8.04 Gbits/sec
[  5]   2.00-3.00   sec   962 MBytes  8.07 Gbits/sec
[  5]   3.00-4.00   sec   962 MBytes  8.07 Gbits/sec
[  5]   4.00-5.00   sec   979 MBytes  8.21 Gbits/sec
[  5]   5.00-6.00   sec   972 MBytes  8.16 Gbits/sec
[  5]   6.00-7.00   sec   974 MBytes  8.17 Gbits/sec
[  5]   7.00-8.00   sec   972 MBytes  8.16 Gbits/sec
[  5]   8.00-9.00   sec   962 MBytes  8.07 Gbits/sec
[  5]   9.00-10.00  sec   961 MBytes  8.06 Gbits/sec
[  5]  10.00-10.00  sec  0.00 Bytes   0.00 bits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.00  sec  0.00 Bytes   0.00 bits/sec   sender
[  5]   0.00-10.00  sec  9.43 GBytes  8.10 Gbits/sec  receiver

(although 10GbE should be ~9.4 Gbit/s unfettered)

Problems from PC2 to PC1:

Connecting to host 10.2.2.1, port 5201
[  4] local 10.2.2.3 port 55244 connected to 10.2.2.1 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   236 MBytes  1.98 Gbits/sec
[  4]   1.00-2.01   sec   104 MBytes   858 Mbits/sec
[  4]   2.01-3.01   sec  94.0 MBytes   794 Mbits/sec
[  4]   3.01-4.00   sec   138 MBytes  1.17 Gbits/sec
[  4]   4.00-5.00   sec  48.1 MBytes   403 Mbits/sec
[  4]   5.00-6.00   sec   204 MBytes  1.71 Gbits/sec
[  4]   6.00-7.00   sec  64.1 MBytes   540 Mbits/sec
[  4]   7.00-8.01   sec  8.75 MBytes  72.3 Mbits/sec
[  4]   8.01-9.01   sec  98.0 MBytes   824 Mbits/sec
[  4]   9.01-10.01  sec  31.9 MBytes   267 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.01  sec  1.00 GBytes   861 Mbits/sec  sender
[  4]   0.00-10.01  sec  1.00 GBytes   861 Mbits/sec  receiver

Why is this happening? Here is the Ubuntu ifconfig:

enp9s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.2.2.1  netmask 255.255.255.0  broadcast 10.2.2.255
        inet6 fe80::42b0:76ff:fed7:f029  prefixlen 64  scopeid 0x20<link>
        ether 40:b0:76:d7:f0:29  txqueuelen 10000  (Ethernet)
        RX packets 2798990  bytes 3486031645 (3.4 GB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7477093  bytes 10843502915 (10.8 GB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

The Windows machine has jumbo packets enabled. I tried disabling all offload features; that didn't help. I was worried about ZFS speeds, but from the internal SSD the ZFS speeds are great, so it seems to be a network issue. What is causing the limit into the server?
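On the "10GbE should be ~9.4 Gbit/s" point in the post above: that figure falls out of Ethernet/IP/TCP framing overhead. A quick sketch, assuming the standard 20-byte IPv4 and TCP headers with no options:

```python
ETH_OVERHEAD = 38  # preamble 8 + MAC header 14 + FCS 4 + inter-frame gap 12
IP_TCP = 40        # IPv4 header (20) + TCP header (20), no options

def tcp_goodput_gbps(mtu: int, line_rate_gbps: float = 10.0) -> float:
    """Best-case TCP payload rate on an Ethernet link with the given MTU."""
    return line_rate_gbps * (mtu - IP_TCP) / (mtu + ETH_OVERHEAD)

print(round(tcp_goodput_gbps(1500), 2))  # standard frames -> 9.49
print(round(tcp_goodput_gbps(9000), 2))  # jumbo frames    -> 9.91
```

So ~9.49 Gbit/s is the ceiling with standard 1500-byte frames, and jumbo frames only buy about 0.4 Gbit/s more; anything far below that (like the 861 Mbit/s direction here) is a host or driver problem, not framing overhead.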
-
I keep getting this damn transfer error while moving large video files to my server. The server is Ubuntu with 24x drives, 64GB of RAM, and a Threadripper processor, connected via a direct 10GbE connection. This connection was completely fine a month ago. The server log says:

Aug 6 11:54:33 GalacticLeyline smbd: pam_unix(samba:session): session closed for user brandon
Aug 6 11:54:33 GalacticLeyline smbd: pam_unix(samba:session): session opened for user brandon by (uid=0)
Aug 6 11:55:12 GalacticLeyline smbd: pam_unix(samba:session): session closed for user brandon
Aug 6 11:55:12 GalacticLeyline smbd: pam_unix(samba:session): session closed for user nobody
Aug 6 11:55:19 GalacticLeyline smbd: pam_unix(samba:session): session opened for user brandon by (uid=0)
Aug 6 11:55:40 GalacticLeyline smbd: pam_unix(samba:session): session closed for user brandon
Aug 6 11:55:47 GalacticLeyline smbd: pam_unix(samba:session): session opened for user brandon by (uid=0)
Aug 6 11:56:01 GalacticLeyline smbd: pam_unix(samba:session): session closed for user brandon
Aug 6 11:56:32 GalacticLeyline smbd: pam_unix(samba:session): session opened for user brandon by (uid=0)
Aug 6 11:56:43 GalacticLeyline smbd: pam_unix(samba:session): session closed for user brandon

Windows Event Viewer:

Application popup: Windows - Delayed Write Failed : Exception Processing Message 0xc000a080 - Unexpected parameters
{Delayed Write Failed} Windows was unable to save all the data for the file; the data has been lost. This error may be caused by network connectivity issues. Please try to save this file elsewhere.

The same network share over 1GbE works fine. Only over 10GbE are there issues.
-
Tagged with: samba, windows 10 (and 1 more)
-
I'm in the process of moving from a direct-attached array to a 10GbE Asustor NAS. All was going well until I realized that my cloud backup provider (Backblaze) doesn't support backups from network drives unless you upgrade your subscription to a metered one that would be 10x the cost for my current amount of data (~14TB). After reading about iSCSI and SANs, it seems to me like I could create a local logical drive in Windows using the NAS's volume as an iSCSI LUN and trick the Backblaze uploader into thinking it's a local drive. I'm a little worried about reliability though - does anyone know if there are things I should look out for? The Asustor AS4004T NAS is connected peer-to-peer via a 10GbE NIC on my editing machine. Am I crazy, or would this work and potentially save me $50USD/mo on a B2 Cloud subscription?
-
Hello, I'm looking for home storage options with these requirements:
Stream 4K to the TV
Video recorder (initially 2 HD cams)
RAID with 4 TB initial storage, expandable
Minimum energy consumption
Home Assistant docker
NVMe cache (write and read)
10 GbE PCIe
FreeNAS, Proxmox, Unraid? Hardware recommendations? A2SDi-8C+-HLN4F / X10SDV-2C-7TP4F / C3758D4I-4L / C3758D4U-2TP
-
Hello there, I'm having problems with 10GbE. I recently purchased a Synology NAS 1819+ with the dual-port 10GbE network card from Synology. I also purchased the Synology single-port 10GbE card, thinking it would work right off the bat in my computer; turns out it won't, even though it said it was Windows 10 supported. I worked around this by finding out that the chip this single-port card uses is an Aquantia, so I downloaded those drivers and my card was recognized. Now, my NAS has 4x 12TB Seagate IronWolf Pro in SHR with 1-disk fault tolerance, plus 1 read-cache SSD for this array and another RAID 0 of 3x Samsung SSDs. 2 computers are connected to the NAS directly with Cat 8 cable. When I run an iperf test, my speeds won't go over 5 Gbit/s on one computer and 3 Gbit/s on the other. Assuming the PCIe slot speed is in play, I know one is running at x4 and the other computer at x8, but it never reaches more than those 3 or 5 Gbit/s, and I have enabled jumbo frames and everything. These are my speed tests on write and read. Should I be looking at something better? Will I get better speeds if I change the network cards in the computers? I have seen people getting way higher speeds with spinning disks. Thanks.
SSD RAID 0 | 769.0 MB/s write / 607.8 MB/s read
SHR 32TB | 442.3 MB/s write / 308.3 MB/s read
6 replies - Tagged with: networking, 10gbe (and 3 more)
-
Hey, I need advice. My PC has a 10Gb Ethernet port and my NAS has two 1Gb ports and does link aggregation. I'm wondering which switch I should use for my home office and gaming setup. I've looked into the Netgear Nighthawk switches. I don't think I need a 10Gb switch, since only my PC has a 10Gb port and I'm really the only one using it, other than my girlfriend, who only uses the internet.
-
Hi guys, I have just bought a handful of Mellanox ConnectX-3 10GbE SFP+ cards on the cheap. I tried them all in a Linux server to test whether any of them were DOA, but happy days, all of them registered and show up beautifully in Unraid. But when I take the same cards and put them in a Windows 10 PC, they just don't show up in the BIOS or in Device Manager. At first I thought they might not be supported, but I can find drivers for my exact Windows 10 version on the Mellanox driver page. Any help, or the reason why I am an idiot who doesn't see the obvious problem, would be greatly appreciated.
3 replies - Tagged with: networking, sfp+ (and 2 more)
-
I run a production company. We currently have an 80TB RAID array (10x 8TB drives with a MegaRAID controller) shared across a 10GbE network... it had been working great until the last month (it was installed in April 2018). The 'server' has Windows 10 Pro installed, with a max of 4 network users editing or accessing content over the network (Netgear 10GbE switch) via 3 Macs / 1 PC.

I didn't build the system, and unfortunately the people who did apparently didn't install enough cooling: we had an issue last month where, under heavy usage, the HDD temps were rising north of 60°C. This caused severe performance issues; files weren't copying/verifying correctly and it was all a bit of a mess. I installed more fans and backed everything up, and all had been fine since then. However, today the server had a major meltdown: during a fairly basic copy of about 60GB of data from one of the networked Macs, the copy failed and the network interface on the server crashed (it just disconnected and all the lights on the card went out), taking a restart to get back up and running. This problem re-occurred every time the data was copied, yet copying to the local drive on the Mac was fine.

The question is: will I see better, more stable results by installing Windows Server 2019 rather than running Windows 10? And perhaps get more error reporting too? I have a feeling the latest networking error has come from using a teamed connection through the Intel 10Gb NIC I have installed, so I'm going to reverse this; however, I know teaming is a feature I could use from Windows Server, if we had it, rather than using the Intel utility in Windows 10. Thoughts appreciated.
-
Hi all, I own a small video production company which operates out of a vacant office within an existing office building, owned by a friend. This friend has allowed me to utilise their internal network for internet but, as one might expect, it only supports 1Gbps. I feel this will put a spanner in the works, as I am looking to purchase a 10GbE NAS and edit directly from it. How might I go about ensuring the devices I use can make use of the 10GbE connection to the NAS whilst still utilising the existing 1GbE connection for the internet? Upgrading the entire office to 10GbE isn't an option, as it's of no use to the other business that operates here.
12 replies
-
Hey together, I've been fighting with FreeNAS and 10GbE for a while now, and sometimes it gives me hope while mostly it's depressing as hell. In my signature you can see the Boss-NAS I'm using. Right now it's equipped with an Intel X540-T2 10GbE network card. One port goes to the Netgear XS708E 10GbE switch, while the other one goes (for testing purposes) to another Intel X540-T2 in my test PC, which has an i5 2500K and some MX500 SSD or whatever and runs Windows 7. The other port of this PC also goes to the same switch as the FreeNAS. Theoretically I'd have two separate 10GbE connections now, but in the real world it's nothing like that. Doing iperf (as well as CrystalDiskMark runs against the RAID0 SSDs or the NVMe SSD) gives me ~3.5Gbit on the read side and 8.5Gbit on the write side. That isn't reflected in real-work performance either, but I think the issue there is probably the SSD in the test PC. Here's the iperf I did: first one is PC client p2p, second PC client over switch, third PC server p2p, fourth PC server over switch. As you can see, it doesn't matter whether it's connected p2p or through the switch: reads always s*ck, while writes show there is actually enough bandwidth. I DID get a real 10GbE connection between the PCs once with a p2p SFP+ connection, where I could see my RAID0 SSDs going to a full 1GB/s in both reads and writes, but now that I've "upgraded" to a full 10GbE RJ45 environment everything seems to be collapsing. I DID also enable MTU 9000 on both ends, just not on the switch, as that apparently sets it automatically and I can't find it anywhere in the setup. Even worse is the fact that my other PC, a 2600K with an X540-T2 and Windows 10, doesn't even remotely reach that, but stays at 3Gbit in both directions, even just from PC to PC without the NAS (switch in between, obviously). So please, does anyone ( @Windows7ge) have an idea why everything is so terrible? lol.
EDIT: What I think I'll try next is connecting the FreeNAS with my SFP+ card to the single SFP+ port on the switch, and from there go with RJ45 to the X540-T2s on the test PC, as SFP+ once upon a time gave me 10GbE.
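One way to separate the TCP stack and disks from the NIC/cable/switch when chasing asymmetric numbers like the iperf results above is a loopback throughput test, which never touches the physical link. A minimal iperf-like sketch in Python (buffer and transfer sizes are arbitrary assumptions; single-threaded Python will read far below 10GbE-class tools, so treat it as a relative baseline, not a benchmark):

```python
import socket
import threading
import time

def loopback_throughput(total_mb: int = 128, chunk: int = 1 << 16) -> float:
    """Send total_mb of data over a loopback TCP socket; return Gbit/s."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port = srv.getsockname()[1]

    def sink() -> None:
        # Accept one connection and drain it until the sender closes.
        conn, _ = srv.accept()
        while conn.recv(1 << 20):
            pass
        conn.close()

    t = threading.Thread(target=sink, daemon=True)
    t.start()

    data = b"\x00" * chunk
    cli = socket.create_connection(("127.0.0.1", port))
    start = time.perf_counter()
    for _ in range(total_mb * 1024 * 1024 // chunk):
        cli.sendall(data)
    cli.close()
    t.join()
    elapsed = time.perf_counter() - start
    srv.close()
    return total_mb * 8 / 1000 / elapsed  # MB -> Mbit -> Gbit per second

print(f"{loopback_throughput():.1f} Gbit/s")
```

If loopback on the slow machine is also capped around 3Gbit, the problem is host-side (CPU, interrupt handling, driver settings) rather than the RJ45 link or the switch.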
-
Hey everyone, first off, hope you're having a wonderful day! I'm looking to set up a fiber backbone that will realistically be able to do 10Gb today and, if possible, move to 40GbE+ later down the line. The current infrastructure is 2 servers: one for VMs / a Plex provider (Unraid), the other acting as both a high-speed NAS and long-term HDD storage (Time Machine backups and the like) (FreeNAS). On the network we typically see 15 devices at the low point, with a maximum of around 40 - everything from phones to laptops to proper workstations, networked receivers, etc. Everything is currently running 1GbE RJ45, except for the workstations, which use a combination of 2 or 4 bonded 1GbE links.

A few things: the reason for a fiber backbone is that the total length of the space is 100' by 50' over 2 floors, not to mention that we'd be able to upgrade the receivers down the line if/when the need or opportunity arises for a network upgrade. My current thinking is to deploy Mikrotik switches because of their support for 4x 10Gb SFP+ on inexpensive switches (CRS305 and CRS309). For the router I was thinking of going with the RB4011, since it has fiber and Ethernet, and if I go Mikrotik for the APs, the 4011 is afforded 4x4 MU-MIMO on the 5GHz band and 3x3 on the 2.4 band. Thoughts? The proposed setup, from a top-down view: https://imgur.com/a/O5l8bKV. For APs I'm fairly open - was thinking either UniFi or Mikrotik.
-
I've got a few servers now, and I'm finally ready to get into the 10Gb game. I was browsing eBay and the NC550SFP seemed like a good bet for directly connecting two of my servers via SFP+. Has anyone had any bad experience with these cards, or is there anything I should know before I buy?