Search the Community
Showing results for tags '10gbe'.
-
Hey everyone, first off, hope you're having a wonderful day! I'm looking to set up a fiber backbone that can realistically do 10Gb today and, if possible, move to 40GbE+ later on down the line.

Current infrastructure is 2 servers: one for VMs / Plex (Unraid), the other acting as both a high-speed NAS and long-term HDD storage (Time Machine backups and the like) (FreeNAS). On the network we typically see 15 devices at the low point, with a maximum of around 40: everything from phones to laptops to proper workstations, networked receivers, etc. Everything is currently running 1GbE RJ45, except for the workstations, which use 2 or 4 bonded 1GbE links.

A few things: the reason for a fiber backbone is that the total footprint of the space is 100' by 50' over 2 floors, not to mention that we'd be able to upgrade the transceivers down the line if/when the need or opportunity arises for a network upgrade. My current thinking is to deploy MikroTik switches because of their support for 4x 10Gb SFP+ on inexpensive models (CRS305 and CRS309). For the router I was thinking of going with the RB4011, since it has both fiber and Ethernet, and if I go MikroTik for the APs, the RB4011 offers 4x4 MU-MIMO on 5GHz and 3x3 on the 2.4GHz band. Thoughts? Proposed setup, top-down view: https://imgur.com/a/O5l8bKV For APs I'm fairly open; was thinking either UniFi or MikroTik.
-
Hey everyone, I've been fighting with FreeNAS and 10GbE for a while now; sometimes it gives me hope, while mostly it's depressing as hell. In my signature you can see the Boss-NAS I'm using. Right now it's equipped with an Intel X540-T2 10GbE network card. One port goes to the Netgear XS708E 10GbE switch, while the other goes (for testing purposes) to another Intel X540-T2 in my test PC, which has an i5-2500K and some MX500 SSD or whatever and runs Windows 7. The other port of this PC also goes to the same switch as the FreeNAS box.

Theoretically I'd have two separate 10GbE connections now, but in the real world it's nothing like that. Doing iperf (as well as CrystalDiskMark runs against the RAID0 SSDs or the NVMe SSD) gives me ~3.5Gbit/s on the read side and 8.5Gbit/s on the write side. That isn't reflected in real-world performance either, but I think the issue there is probably the SSD in the test PC. Here's the iperf I did: first one is PC as client p2p, second PC as client over the switch, third PC as server p2p, fourth PC as server over the switch. As you can see, it doesn't matter whether it's connected p2p or through the switch: reads always s*ck, while writes show there is actually enough bandwidth.

I DID get a real 10GbE connection between the PCs once with a p2p SFP+ connection, where I could see my RAID0 SSDs hitting a full 1GB/s in both reads and writes, but now that I've "upgraded" to a full 10GbE RJ45 environment everything seems to be collapsing. I DID also enable MTU 9000 on both ends, just not on the switch, as that apparently sets it automatically and I can't find it anywhere in the setup. Even worse is the fact that my other PC, a 2600K with an X540-T2 and Windows 10, doesn't even remotely reach that but stays at 3Gbit/s in both directions, even just from PC to PC without the NAS (switch in between, obviously). So please, does anyone ( @Windows7ge) have an idea why everything is so terrible? lol.

EDIT: What I think I'll try next is connecting the FreeNAS with my SFP+ card to the single SFP+ port on the switch, and from there go RJ45 to the X540-T2s on the test PC, as SFP+ once upon a time gave me 10GbE.
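A useful way to split network problems from disk problems in a setup like this is to run iperf3 in both directions with no storage involved, and to confirm jumbo frames actually survive the whole path. A minimal sketch, assuming iperf3 is installed on both ends and 192.168.1.10 is a stand-in for the FreeNAS address:

# on the FreeNAS box
iperf3 -s

# on the Windows test PC: transmit test, then -R to reverse and test receive
iperf3 -c 192.168.1.10 -t 30
iperf3 -c 192.168.1.10 -t 30 -R

# a single TCP stream often can't fill 10GbE; try parallel streams
iperf3 -c 192.168.1.10 -t 30 -P 4

# jumbo-frame check from Windows: 8972 payload + 28 header bytes = MTU 9000.
# If the switch silently drops jumbo frames, this fails while normal pings work.
ping -f -l 8972 192.168.1.10

(The FreeBSD equivalent on the FreeNAS side is ping -D -s 8972.) If reads stay at ~3.5Gbit/s even in pure iperf3 with -R, the disks are off the hook and it's NIC, driver, or switch territory.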
-
Hey, I need advice. My PC has a 10Gb Ethernet port, and my NAS has 2x 1Gb ports and does link aggregation. I'm wondering which switch I should use for my home office and gaming setup. I've looked into the Netgear Nighthawk switches. I don't think I need a 10Gb switch, since only my PC has a 10Gb port and I'm the only one really using it, other than my girlfriend, who's only using the internet.
-
I've got a few servers now, and I'm finally ready to get into the 10Gb game. I was browsing eBay and the NC550SFP seemed like a good bet for directly connecting two of my servers over SFP+. Has anyone had any bad experiences with these cards, or is there anything I should know before I buy?
-
Hi, I'm looking at a replacement SAN for our ESXi hosts. We have about 20 VMs running between 2 hosts, currently using a 4Gb link direct to the SAN. However, our SAN is due to be replaced and repurposed, and I'm looking at upgrading to 10Gb Ethernet direct to the hosts, with at least 36TB of storage and the possibility to expand if needed. I've had some quotes back for just over 10k GBP and would like to see if I can do better. I'm not mega familiar with which SANs are out there and which ones to avoid, so I'm really looking for recommendations. It must ideally have quad 10Gb Ethernet and be at least 2U, but it can be larger if needed; that doesn't matter. It needs to be reliable and robust for 24/7 running; obviously the configuration of the actual array will be decided once we know the storage capacity and number of drives. This is for an educational environment, not for myself, so it MUST be reliable. Cheers guys!
-
Server: HP xw8600
2x Xeon X5430
28GB DDR2 ECC RAM
Dual-port 10GbE network card
LSI 16e 6Gbps SAS/SATA card (HBA)
Nvidia GT 710 (basically a headless server)
Boot drive: WD 250GB SSD
Storage drives: 7x 4TB WD/HGST in RAIDZ2 (estimated transfer speed 380MB/s)
(Looking to add 13 more drives in a full rackmount case, which should bring estimated writes to 1.0GB/s.)

I want to explore InfiniBand 40Gbps but know the limits are in my storage drives. I like getting second-hand equipment, so an all-new SSD array is not an option. Yes, yes, I know: why do you need to go that fast? Because it is fun. How do you break the HDD bottlenecks?
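If the goal is just to blunt the HDD bottleneck with second-hand gear, ZFS can take cheap SSDs as cache and log devices. This is only a sketch with hypothetical pool and device names, and the caveats matter: L2ARC helps repeated reads, a SLOG helps only synchronous writes, and neither is a general write buffer; more RAM and more vdevs usually buy more sequential speed than either.

# add an SSD as an L2ARC read cache (no redundancy needed; safe to remove later)
zpool add tank cache /dev/ada8

# add a mirrored SLOG; benefits sync-heavy workloads like NFS or iSCSI
zpool add tank log mirror /dev/ada9 /dev/ada10

# confirm the new devices are attached
zpool status tank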
- 8 replies
- Tagged with: infiniband, 10gbe (and 2 more)
-
Hi everyone! I need your opinion on this. We're proposing to upgrade our existing Gigabit Ethernet to 10GbE. The main purpose is transferring files to and from our NAS, as well as letting our editors edit without any lag. Most of our machines here are MacBook Pros and iMacs. Our plan is to get a Synology DS1819+ equipped with the 10GbE adapter. Bays 1 + 2 will be filled with SSDs for caching, while bays 3-8 will be full of 12TB Seagate drives. We're using Cat6a to connect to our future Netgear XS712v2 12-port switch, and from there we will use the same cable to the rest of our systems. Am I getting this right? I should note as well that we will be getting some Thunderbolt 3 to 10GbE adapters.
- 2 replies
- Tagged with: 10gbe, first time (and 2 more)
-
I am looking for an affordable ($100-300 USD) 8-port 10 GbE unmanaged switch from a reputable brand. I don't want a switch with a 10 GbE uplink and many 1 GbE interfaces; I want true 10 GbE on all ports, compatible with Cat 6a. Any ideas or recommendations? Thanks.
-
Hello LTT forum, I've been looking for a switch with either 24 or 48 1GbE ports plus 2x 10GbE non-SFP ports (I need the cable run to be longer than 10m; estimated 15-20m required). It doesn't need to be managed; doesn't matter either way. The cheaper the better. So far I've only found SFP+ switches. I have seen SFP+ to RJ45 adapters; however, I've read that they can decrease the speed to 1Gbps. Thank you for your time. -Olennex
-
Hi all, thanks in advance for any help. I am trying to set up a 10GbE pfSense server on some Supermicro hardware that I bought on eBay. I am trying to figure out if I have enough PCIe lanes for both the 10GbE NIC and a Samsung 970 Evo in one of these enclosures. Any thoughts? Separately, does anyone have a good way to retain functionality of the front bays? I assume that when I install the 10GbE NIC, there is no way to keep the front slots active? This server doesn't seem to have any documentation from Supermicro, which makes researching this difficult.
-
I'm in the process of moving from a direct-attached array to a 10GbE Asustor NAS. All was going well until I realized that my cloud backup provider (Backblaze) doesn't support backups from network drives unless you upgrade your subscription to a metered one, which would be 10x the cost for my current amount of data (~14TB). After reading about iSCSI and SANs, it seems to me like I could create a local logical drive in Windows using the NAS's volume as an iSCSI LUN and trick the Backblaze uploader into thinking it's a local drive. I'm a little worried about reliability, though; does anyone know if there are things I should look out for? The Asustor AS4004T NAS is connected peer-to-peer via a 10GbE NIC on my editing machine. Am I crazy, or would this work and potentially save me $50USD/mo on a B2 Cloud subscription?
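For what it's worth, the Windows side of that iSCSI setup is only a few commands. A rough sketch using the built-in iSCSI initiator cmdlets, assuming the LUN is already created on the Asustor and 10.0.0.2 is a hypothetical address for it on the peer-to-peer link:

# point the initiator at the NAS and make the connection survive reboots
Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic
New-IscsiTargetPortal -TargetPortalAddress 10.0.0.2
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# the LUN now shows up as a raw local disk; initialize and format it once
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS

The big reliability caveat: exactly one machine may ever mount that LUN. NTFS over iSCSI is not a shared filesystem, and a second initiator writing to it will corrupt the volume; also note that most NAS-side tooling will see only an opaque block device, not your files.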
-
Computer 1 (client): Asus Rampage IV Extreme, i-7900K, 16GB RAM, 1080 Ti, 960 Evo SSD. Windows 10.
Computer 2 (server): Zenith Extreme II, Threadripper 2950X, 64GB ECC RAM, 780 Ti, 2x RAIDZ2 vdevs of 12x 8TB drives, 970 Evo 1TB SSD. Ubuntu 20.

Each computer has its own 1GbE connection to the internet, and they're connected P2P to each other directly over 10GbE. iperf shows an OK connection from PC1 to PC2:

Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.2.2.1, port 58668
[  5] local 10.2.2.3 port 5201 connected to 10.2.2.1 port 58670
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-1.00   sec   956 MBytes  8.02 Gbits/sec
[  5]   1.00-2.00   sec   959 MBytes  8.04 Gbits/sec
[  5]   2.00-3.00   sec   962 MBytes  8.07 Gbits/sec
[  5]   3.00-4.00   sec   962 MBytes  8.07 Gbits/sec
[  5]   4.00-5.00   sec   979 MBytes  8.21 Gbits/sec
[  5]   5.00-6.00   sec   972 MBytes  8.16 Gbits/sec
[  5]   6.00-7.00   sec   974 MBytes  8.17 Gbits/sec
[  5]   7.00-8.00   sec   972 MBytes  8.16 Gbits/sec
[  5]   8.00-9.00   sec   962 MBytes  8.07 Gbits/sec
[  5]   9.00-10.00  sec   961 MBytes  8.06 Gbits/sec
[  5]  10.00-10.00  sec  0.00 Bytes   0.00 bits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  5]   0.00-10.00  sec  0.00 Bytes   0.00 bits/sec   sender
[  5]   0.00-10.00  sec  9.43 GBytes  8.10 Gbits/sec  receiver

...although 10GbE should be ~9.4Gbit/s unfettered. Problems from PC2 to PC1:

Connecting to host 10.2.2.1, port 5201
[  4] local 10.2.2.3 port 55244 connected to 10.2.2.1 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   236 MBytes  1.98 Gbits/sec
[  4]   1.00-2.01   sec   104 MBytes   858 Mbits/sec
[  4]   2.01-3.01   sec  94.0 MBytes   794 Mbits/sec
[  4]   3.01-4.00   sec   138 MBytes  1.17 Gbits/sec
[  4]   4.00-5.00   sec  48.1 MBytes   403 Mbits/sec
[  4]   5.00-6.00   sec   204 MBytes  1.71 Gbits/sec
[  4]   6.00-7.00   sec  64.1 MBytes   540 Mbits/sec
[  4]   7.00-8.01   sec  8.75 MBytes  72.3 Mbits/sec
[  4]   8.01-9.01   sec  98.0 MBytes   824 Mbits/sec
[  4]   9.01-10.01  sec  31.9 MBytes   267 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.01  sec  1.00 GBytes   861 Mbits/sec  sender
[  4]   0.00-10.01  sec  1.00 GBytes   861 Mbits/sec  receiver

Why is this happening? Here is the Ubuntu ifconfig:

enp9s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.2.2.1  netmask 255.255.255.0  broadcast 10.2.2.255
        inet6 fe80::42b0:76ff:fed7:f029  prefixlen 64  scopeid 0x20<link>
        ether 40:b0:76:d7:f0:29  txqueuelen 10000  (Ethernet)
        RX packets 2798990  bytes 3486031645 (3.4 GB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7477093  bytes 10843502915 (10.8 GB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

The Windows machine has jumbo packets enabled. I tried disabling all offload features; that didn't help. I was worried about ZFS speeds, but internal SSD-to-pool speeds are great, so it seems to be a network issue. What is the issue? What causes the limit going into the server?
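Worth noting from that ifconfig: enp9s0 is at mtu 1500 while the Windows side has jumbo packets enabled, and an MTU mismatch on a point-to-point link can produce exactly this kind of asymmetric throughput. A hedged sketch of things to try on the Ubuntu end (interface name taken from the post; buffer values are illustrative, not tuned for this hardware):

# match MTUs on both ends (either set 9000 here or disable jumbo on Windows)
sudo ip link set enp9s0 mtu 9000

# raise TCP buffer ceilings; small buffers can throttle a single 10GbE stream
sudo sysctl -w net.core.rmem_max=67108864
sudo sysctl -w net.core.wmem_max=67108864
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"

# re-test both directions from one side; -R reverses the data flow
iperf3 -c 10.2.2.1 -t 30
iperf3 -c 10.2.2.1 -t 30 -R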
-
I keep getting this damn transfer error while moving large video files to my server. The server is Ubuntu with 24x drives, 64GB of RAM, and a Threadripper processor, connected via a direct 10GbE link. This connection was completely fine a month ago. The server log says:

Aug 6 11:54:33 GalacticLeyline smbd: pam_unix(samba:session): session closed for user brandon
Aug 6 11:54:33 GalacticLeyline smbd: pam_unix(samba:session): session opened for user brandon by (uid=0)
Aug 6 11:55:12 GalacticLeyline smbd: pam_unix(samba:session): session closed for user brandon
Aug 6 11:55:12 GalacticLeyline smbd: pam_unix(samba:session): session closed for user nobody
Aug 6 11:55:19 GalacticLeyline smbd: pam_unix(samba:session): session opened for user brandon by (uid=0)
Aug 6 11:55:40 GalacticLeyline smbd: pam_unix(samba:session): session closed for user brandon
Aug 6 11:55:47 GalacticLeyline smbd: pam_unix(samba:session): session opened for user brandon by (uid=0)
Aug 6 11:56:01 GalacticLeyline smbd: pam_unix(samba:session): session closed for user brandon
Aug 6 11:56:32 GalacticLeyline smbd: pam_unix(samba:session): session opened for user brandon by (uid=0)
Aug 6 11:56:43 GalacticLeyline smbd: pam_unix(samba:session): session closed for user brandon

Windows Event Viewer:

Application popup: Windows - Delayed Write Failed : Exception Processing Message 0xc000a080 - Unexpected parameters
{Delayed Write Failed} Windows was unable to save all the data for the file; the data has been lost. This error may be caused by network connectivity issues. Please try to save this file elsewhere.

The same network share over 1GbE works fine; the issues only appear over 10GbE.
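Those paired session opened/closed lines suggest the SMB session itself is dropping and reconnecting mid-copy, which is consistent with the Delayed Write Failed popup. As a hedged first step, it may be worth ruling out Samba's idle-disconnect behaviour and watching the log live during a failing copy; deadtime, keepalive, and socket options are real smb.conf parameters, but the values here are only illustrative:

# /etc/samba/smb.conf, in the [global] section:
# keep idle sessions alive instead of disconnecting them
deadtime = 0
keepalive = 60
socket options = TCP_NODELAY

# restart Samba, then watch the log while repeating the failing transfer
sudo systemctl restart smbd
sudo tail -f /var/log/samba/log.smbd

Since the same share is fine over 1GbE, a marginal cable or NIC that only misbehaves at 10Gb speeds is also worth checking (ethtool -S on the server's 10GbE interface will show error counters climbing).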
- Tagged with: samba, windows 10 (and 1 more)
-
System 1: Windows 10, 7900X, Rampage VI Extreme, 16GB RAM, networked into the LAN on a 1Gb port, with a 10Gb port plugged directly into the server.
Server: Ubuntu 18.04, dual Xeon X5430, 16GB RAM, ASUS Aeorion 10Gb card direct to System 1, 1Gb port plugged into the LAN.

The 1GbE ports are set to DHCP on my router, in the IP range 192.168.1.1-100. The 10GbE ports are set manually, one being 10.1.20.1 and the other 10.1.20.2, with default gateways pointed at each other, metric set to 10 on the 10Gb connection and 20 on the 1Gb.

The connection lasts almost exactly 5 seconds and then repeatedly tries to identify the network again. When both cards are enabled on the Windows machine, both cards stop working. I tried the 10Gb port on my LAN and it works that way (but obviously only at 1Gb). I am straight up pulling my hair out. I have tried different cards, reinstalled drivers and OSes, all cables are Cat 6a, and I've disabled all firewalls. If I try to ping the other machine I get "destination host unreachable". Tracert only gets a ping back from itself.
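A likely culprit in that description is the pair of default gateways pointed at each other: a point-to-point link between two hosts shouldn't have a gateway at all, and Windows in particular gets confused deciding which NIC owns the default route (hence the repeated "identifying network" loop). A hedged sketch of a cleaner config using the addresses from the post (the netplan filename and interface names are assumptions):

# Ubuntu side: /etc/netplan/10-p2p.yaml -- static address, no gateway entry
network:
  version: 2
  ethernets:
    enp1s0:
      addresses: [10.1.20.1/24]
# apply with: sudo netplan apply

# Windows side (PowerShell as admin): static address, again no default gateway
New-NetIPAddress -InterfaceAlias "Ethernet 2" -IPAddress 10.1.20.2 -PrefixLength 24

Internet traffic then keeps using the 1GbE DHCP route, while the 10.1.20.0/24 subnet is reachable only over the direct link; no gateways or metric juggling needed.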
-
Hi, I am making a NAS (again), so this is what I could think of for my needs: a QNAP TS-963X. (Q1: can I upgrade the RAM, and is it easy to do so? Like reaching 8GB or 16GB?) My idea is to use the SSD bays for fast-access data, while the 3.5" bays will hold archive hard disks to store big data cheaply, for archiving purposes and backup only.
https://www.amazon.com/dp/B07CVLSCSJ/?coliid=I3HGAOFVNHOVJG&colid=Q0W9KCFV92QE&psc=0&ref_=lv_ov_lig_dp_it

Now the network card to put in my computer: the QXG-10G1T. (I'm not 100% sure whether it's 10GbE Ethernet or SFP+, so I want to double-check with you guys on this forum.) No one ask why 10GbE, please; come on, I have 18TB full right now that I need to back up, so I need at least 32TB so I can use it as a backup and then have another 18TB to start storing more stuff. The speed of regular 1GbE is going to take forever.
https://www.amazon.com/dp/B07CW2C2J1/?coliid=I2VR7LKODWQCLF&colid=Q0W9KCFV92QE&psc=0&ref_=lv_ov_lig_dp_it

The switch: the cheapest 10GbE switch I could find that isn't from some random brand I've never heard of is the Asus XG- (2 ports 10GbE), which I think should be good, because right now I need more regular RJ45 ports for the many devices hooked to the router, having run out of ports: Nintendo Switch, TV, Nvidia Shield, 3x PCs (2nd router), PS4 (only 1 PC needs 10GbE, the work/gaming PC). So I am thinking of removing the 2nd router and just using 1 router with 1 switch, making a closet with all the networking stuff in it, or getting a small 6U rack.
https://www.amazon.com/dp/B01LZMM7ZO/?coliid=IZV5JRLP15G2K&colid=Q0W9KCFV92QE&psc=0&ref_=lv_ov_lig_dp_it

Now my questions: Is it all compatible? Will I get full 10GbE speed with that NAS, or is the CPU too slow / RAM too low? And is it cheaper to go the SFP+ route, or are there big benefits to SFP+ that I should consider for future-proofing? I tried to cut costs by choosing 10GbE over copper; I have Cat6A wiring in the walls (hopefully good quality) and many cables + cable molds on the walls I could use if I have to. Network layout attached: Network Layout (1).pdf
- 14 replies
- Tagged with: nas, network switch (and 2 more)
-
Hi! I am moving into my new home, and I plan to build two computers equipped with 10GbE Ethernet cards: one is the gaming PC, the other a home server which serves as a backup unit and HTPC. The idea is to connect the two PCs so they appear on each other's network and can access each other's data. My questions: 1. Is it possible to do this without a switch, i.e. using just an Ethernet cable like a Cat7 one? 2. Will using a 10GbE switch give me any extra benefits from a consumer's perspective? Thanks!
-
I would like to set up a 10GbE link from my NAS to the rest of my network. I am trying to find a way to take 10GbE (SFP+ or RJ45) from my NAS into a switch (behind the router) and have gigabit to the rest of my network. I may eventually want to add 10GbE to my desktop as well. Is there a switch that has a couple of 10GbE ports while the rest are gigabit? Also, I don't have any sort of rack setup, so a non-rack-sized switch is preferred. Any suggestions would be greatly appreciated.
-
Hi, just wondering if anyone else out there has had this idea: I was thinking of using either an X99 or C612 board for pfSense with my 10GbE network, probably something like an i7-6800K because of its stupid-fast clock speeds, to drive routing and possibly OpenVPN. I have an Ivy Bridge E3 server running pfSense right now, and my iperf speeds have been underwhelming to say the least (tops out around 4Gbps). I could probably change some tunables, but I don't ever see it getting much faster due to hardware limitations. Is anyone else out there running a setup like that for pfSense? Perhaps in ESXi, with an HTPC GPU passthrough? Heh. Lots of ideas...
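Before replacing the hardware, those tunables may be worth a shot after all, since 10GbE routing on pfSense is often queue- and buffer-bound rather than purely clock-bound. A hedged sketch of FreeBSD knobs people commonly raise (values illustrative, not tuned for this box; set via System > Advanced > System Tunables or the files below):

# /boot/loader.conf.local (requires reboot): more mbuf clusters for 10GbE NICs
kern.ipc.nmbclusters="1000000"

# runtime sysctls: raise socket buffer ceilings so single streams can stretch
kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216

It's also worth confirming the 4Gbps figure was measured through the router (iperf between two hosts on opposite sides) rather than with pfSense itself as the iperf endpoint; a firewall makes a poor iperf server, and that test tends to understate actual forwarding performance.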
-
So I finally have a 10Gb connection from my gaming rig to my server, but now I am hitting a bottleneck because of hard drive speed. Is there any Linux software I can use to create an SSD buffer that takes the data first and automatically transfers it to the zpool I have?
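ZFS has no built-in SSD write-back tier (a SLOG only accelerates synchronous writes, and SMB copies are mostly async), so the usual workaround is a small SSD pool that receives transfers and drains to the HDD pool on a schedule. A minimal sketch, where the pool names fast and tank and the paths are assumptions:

#!/bin/sh
# drain-staging.sh: move files that landed on the SSD pool over to the HDD pool
rsync -a --remove-source-files /fast/incoming/ /tank/media/
# clean up the empty directories rsync leaves behind
find /fast/incoming -mindepth 1 -type d -empty -delete

Share /fast/incoming over the network as the landing zone and let cron drain it, e.g. */10 * * * * /usr/local/bin/drain-staging.sh. The drawback versus a true tiering layer is that files are only visible in one place at a time while the move runs.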
-
(Just want to indicate that this is a repost from the FreeNAS forums, where I initially put it. I didn't get any replies there, and besides, I prefer the LTT forums. Also, the indentation and line breaks might look a little weird; line wrap wasn't working for whatever reason. And the links are huge.)

Hi FreeNAS users,

So I've owned a FreeNAS NAS for a few months now, with the following specs:
i5-760
8GB RAM
1x, yes, you heard me right, just one WD Caviar SE 750GB HDD
Biostar TH55HD
Integrated gigabit LAN on my home network

As you can probably imagine from these specs, I am itching for an upgrade, since I'm tired of transferring files at less than 5 Mb/s. I've decided to look into major makeovers for my NAS setup, but I have a lot of questions that I haven't gotten satisfactory answers from Google for. The major upgrades I'm looking at are a move to 10GbE networking, a switch to compact rackmount servers, and a storage upgrade (though I'm considering doing that later; I just want to focus on the networking for the time being). I've been planning my network setup a little, and you can see how I'm envisioning it here:

So here are my questions (don't feel obliged to answer all of them; just reply if you have anything to add).

1. For 10GbE networking, is it better to use CX4 or SFP+ gear? I'm finding 10GbE CX4 switches to be a lot cheaper per port than SFP+ ones (http://www.ebay.com/itm/202146182153 vs http://www.ebay.com/itm/323032634858), but I'm also finding that CX4 ones typically lack normal RJ45 ports alongside them. While I have a gigabit switch right now, being able to manage all my networking in one switch would be nice. The other thing I'm noticing is that SFP+/SFP NICs and cables are a lot cheaper than their CX4 counterparts. So which one is probably better?

2. Also on the topic of switches: I was initially going to go with a 2-port card hooked up P2P with my 2 workstations in the diagram, but I decided to go with a switch for scalability. My initial concern was the price, since I had heard they were exorbitantly expensive, so I didn't even check. But I'm finding switches like the ones I linked for under $100. Is there a catch, or are these legit switches? Also, can my network be laid out like it would be if I went P2P, with my 10GbE stuff on a different subnet communicating on its own, or will it interfere with the gigabit internet stuff?

3. Last question about switches: do I need uplinks? Are they necessary?

4. My current NAS is in a Micro ATX box that is really better suited to an HTPC than a NAS, since it only has two drive bays. I've decided to move to rackmount gear for greater storage expansion and more compactness (though noise might be an issue for me). I don't want to buy a prebuilt server, since I'm a little disappointed with the prices and options. Instead, I'm thinking of getting a Chenbro or similar 1U or 2U case with 4 or more drive bays, buying a used Supermicro LGA1156 motherboard, and hooking up an LSI HBA to the backplane, since that would suit my needs a lot better (most rackmount servers aren't designed for NAS use from what I see, and the ones that are sit way outside my budget). Is this a good idea, or should I buy a prebuilt server?

5. When I do replace my 750GB HDD with a bigger array, how should I safely transfer the ZFS datasets over? Does FreeNAS have a good way to do this?

6. On most servers, what's the purpose of the management port? How does one make use of it? Do you need specialized management consoles? People on the internet seem to suggest that you do. How does usage differ between RS232 and Ethernet management ports? Is IPMI related, and how do you take advantage of it? For Ethernet management ports, normal and IPMI, do you just hook them up to a switch? Will that work?

7. Is SMB3 required to do 10GbE? Is throughput reduced when SMB3 is not used? Is there a way within FreeNAS for my non-SMB3 clients to connect, even if their speed isn't great, while still not limiting the throughput of modern clients?

8. Building on the SMB3 thing: in the diagram, you might see a server called the "iSCSI to SMB bridge server". This is an idea I came up with to address the SMB3 thing, if it even is an issue, but I'm realizing it might not be as good an idea as I initially thought. The plan was simply to set up iSCSI targets on the main NAS and mount those iSCSI shares on a separate server. Then I would share the iSCSI volumes over SMB with my retro machines, using P2P gigabit connections and dirt-cheap NICs. But after doing some research on iSCSI, it seems it wouldn't be possible to use my previously existing ZFS datasets (shares) as iSCSI targets without overwriting them. That obviously can't work; I need the same data available over SMB and iSCSI, and I need the retro and modern clients to be able to access and write to the same stuff. Besides, it seemed a little overcomplicated and adds extra cost. Does anyone know of a different way to do this?

I think that's everything. I still have yet to budget everything out, since I'm not buying until later this summer. Everything should fall within my max restriction of about $1000 for everything, excluding the hard drives, if I buy used gear on eBay. I might edit the post if I come up with other questions. Thanks in advance!
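On question 5: ZFS replication is the standard answer, and FreeNAS exposes it in the GUI as a replication task. A hedged sketch of the underlying commands, assuming the old pool is named tank and the new array's pool is named bigtank (both names hypothetical):

# snapshot everything recursively, then send the whole tree to the new pool
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F bigtank

# verify the datasets arrived before destroying anything on the old pool
zfs list -r bigtank

Because send/receive works at the dataset level, all properties and child datasets come across with -R, and the copy can be verified before the old single-disk pool is retired.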
-
I did this upgrade a while ago but never posted anything. I have more pictures if anyone cares to see them.

2x ThinkServer RS140s (1 for pfSense and 1 for fileserver/Mumble/Minecraft/etc.)
A custom-built 2U Rosewill server with an FX-8320E and 16GB RAM (Plex server)
Unmanaged TP-Link 24-port switch
Managed TP-Link 24-port switch with 4x 10Gbps SFP+
2x 24-port keystone panels
1 UniFi AC Lite
HTPC on top
50U Tripp Lite rack with caster kit

I have another full-depth shelf to mount my Lian Li PC-D600 build above the HTPC. Any suggestions on what you would do differently or things you would add, please let me know!
- 12 replies
- Tagged with: server rack, pfsense (and 4 more)
-
Can anyone here recommend or know of any adapter that converts USB 3.1 to 10GbE? Many adapters from USB 3.0 to Gigabit already exist, and USB 3.1 Gen 2 supports 10Gbit data transfer, so 10Gbit Ethernet adapters should at least be close to possible. It should also be as cheap as possible without sacrificing anything said above. Thx
-
Does anyone know if it is possible to build a high-speed DAS/NAS using Thunderbolt and GbE? I'm looking to build a storage solution for editing workflows (4K video, Lightroom, backup) for my Thunderbolt 3 iMac. It would also act as a gigabit NAS for any other devices on the network, like a gaming PC library. Would it be possible to utilize a Thunderbolt connection from the ITX board to the iMac, or any other Thunderbolt-enabled computers? Possible build part list: https://ca.pcpartpicker.com/list/LFsLr6

A theoretical breakdown, if possible: NVMe cache > 4x 2.5" SSDs in RAID 4/5/0/10 > backs up to > 2x 3.5" 8TB in RAID 1 > backs up nightly over the network. The iMac connects via Thunderbolt or USB 3.1 to the server (10Gb to the Mac would be great without using the x16 PCIe slot). Other devices on the network will connect via GbE. The two spare 2.5" external cages will be used for hot-swapping OSes and physically moving large files to other locations. The plan is to run Windows in a VM via KVM on Fedora, plus a file server; the Windows VM for living-room TV gaming at 1080p 30-60fps is more than enough. This "high-speed" box would then back up each night to a much slower main redundancy server elsewhere on the network.
-
Two servers can't transfer files between each other on 10GbE
jkirkcaldy posted a topic in Networking
So I am having issues when trying to transfer files between my two servers. One server is a Dell R710 and the other is a whitebox build. Both are equipped with Mellanox ConnectX-2 10GbE adaptors (the only NIC the server can see; all others are dedicated to Hyper-V), both are running Windows Server 2016, and both are plugged into a Quanta LB4M switch (2x 10Gb SFP+, 48x 1Gb RJ45). Transfer speeds from any other PC to either one of the servers are fine and seem to be working as expected. The issue is transferring files between the two servers themselves. I also now can't set up an iSCSI connection between them using the 10GbE connections, but I have no problem connecting to other machines using the same connections. It's almost like, while the servers can see each other on the network, they can't talk to each other, or something is stopping them from transmitting too much data. I am sure it's something in the configuration that I have messed up somewhere, or a limitation of the switch (which would be annoying), but I have no idea where to start looking. Any help would be greatly appreciated.
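A hedged starting point for narrowing this down on Server 2016: first check whether raw TCP between the two servers is also broken (iperf3 between them, bypassing SMB), then look at what SMB itself negotiated. Since both machines have RDMA-capable ConnectX-2 cards, a half-working SMB Direct negotiation between the two servers (which would never trigger against ordinary PCs) is one plausible suspect:

# on either server, see what SMB negotiated for the other server's sessions
Get-SmbConnection
Get-SmbMultichannelConnection

# check whether Windows considers the NICs RDMA-capable
Get-NetAdapterRdma

# as a test only: disable SMB Direct so transfers fall back to plain TCP
Disable-NetAdapterRdma -Name "*"

If server-to-server transfers recover with RDMA disabled, the problem is the SMB Direct path (firmware/driver or RoCE config) rather than the LB4M switch.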
- 5 replies
- Tagged with: 10gbe, server 2016 (and 4 more)