Search the Community
Showing results for tags '10gbe'.
-
Hello guys, I need some help setting up my 10GbE P2P link. I got two NC523SFP cards off eBay; one is in my TrueNAS server and one in my Win 10 desktop. They are connected with two 455883-B21s. The one in the NAS showed up in the config with no prior driver installation - no problems there. For the Windows side I got the official HP Windows Server driver, extracted the .exe and installed the drivers manually - this apparently worked, as they got recognized as network adapters. No problems here either, at least I think so, lol.

The only thing is, they claim there's no network cable attached - could that be because there's no DHCP and no static IPs set? I did set static IPs and subnet masks on both sides, but that didn't give me a connection or any visible change. Also note that TrueNAS shows the link state as DOWN - do I have to bring it UP manually? And is there any way to tell the two ports apart, i.e. which one is ql0 and which is ql1? Also, the lights on the NICs aren't blinking at all - is that a sign that something went wrong with the drivers, or did they overheat? (Any way I can check their temperature?)

So any help is appreciated - did I do something wrong or miss anything? Do I have to update the NICs' firmware? Cheers
-
I have a Cat6 cable that cannot be changed out that I need to push 10 gig through. The way it is currently set up would lend itself to swapping one end for an SFP+ to RJ45 Ethernet adapter plugged into my UniFi aggregation switch, and on the other end a 10 gig Ethernet switch. I don't know if going from an SFP+ port adapted to Ethernet into a 10 gig Ethernet port is compatible. If someone knows, can they please clear this up for me? I'm kind of a noob with fiber standards (all I've done up to this point has been very generic multimode fiber runs).
-
Thanks for clicking on this post, you're the best already. I have a 10gbit router with (just upgraded to) 4gbit symmetrical internet. The fiber router is a Nokia XS-250WX-A provided by the ISP/cabling provider. The router shows 10gbit negotiation correctly and no visible errors.

My issue is that when I connect at 10gbit I get around 1.6gbit down on speed tests and 400-500mbit up; if I force negotiation to 1gbit I get 1gbit down and 1gbit up. Moving from 1gbit to 10gbit networking cuts my upload in half. The issue occurs with different Ethernet cables (I have tried Cat6, Cat6a and Cat5e) and on two isolated computer setups.

Setup 1:
3950X
X570 Aorus Xtreme
Aquantia AQC107 10gbit NIC
RTX 3090 (I know this isn't important, I just like telling people about my 3090 lol)

Have attempted:
- Updating drivers for the NIC
- Updating firmware for the NIC
- Removing all other virtual NICs (e.g. Hyper-V, VMware Workstation)
- Playing with lots of settings on the NIC (e.g. jumbo packets)

Setup 2:
1800X
X370 Asus Crosshair VI Hero
USB-C Sabrent 5gbit adapter (Aquantia chipset)
RX 480

My current thought is that maybe I'm hitting an incompatibility between Aquantia chipsets and the Nokia router - I'm really grasping at straws now.
-
Who makes the best 10GbE NIC for Windows 10? I've seen good things about these:
>ASUS XG-C100C
>TP-Link TX401
>10Gtek X540-10G-2T-X8
>Intel X540-T2
>ipolex X540-10G-2T
Which have you guys had luck with? I want the best out-of-box experience.
-
I am looking at wiring my new house for Ethernet (the walls are open). For regular Ethernet jacks around the house this is my plan:

Punch-down keystone wall plate > Cat6E cable > patch panel with pass-through couplers (not punch-down style) > Cat6 patch cable > switch

For WiFi access points this is my plan:

AP > Cat6E cable > patch panel with pass-through couplers (not punch-down style) > Cat6 patch cable > PoE injector > Cat6 patch cable > switch

I am not sure if this is too complicated or if it will work fine. My main reason for this forum post: does anyone know if the PoE injectors will work with those FS.com couplers in the blank patch panel? They are circuit-board based rather than individually wired internally. I contacted their support, but across three different representatives I got no twice and yes once on PoE support. The other question is whether my current setup will be able to handle 10GbE in the future (other than the switch). I can run Cat6a instead of the Cat6E if that's better for the future.
The parts are as follows:
>RJ45 modular ends: Cable Matters 100-Pack Pass Through RJ45 Modular Plugs for Solid or Stranded UTP Cable
>Punch-down keystone jacks: Cable Matters UL Listed 25-Pack Slim Profile 90 Degree Cat6 RJ45 Keystone Jack with Keystone Punch Down Stand in White
>Cat6E cable: CAT6E Riser (CMR), 1000ft, UTP 24AWG, Solid Bare Copper, 600MHz, UL Certified, Easy to Pull (Reelex II) Box, Bulk Ethernet Cable in White
>Router: Ubiquiti Dream Machine Pro
>Switch: Ubiquiti Switch Pro 24
>Patch panel: Cable Matters Rackmount or Wall Mount 24 Port Keystone Patch Panel (Blank Patch Panel for Keystone Jacks/Keystone Panel)
>Couplers for blank patch panel: Cat6 Keystone RJ45 Coupler, Unshielded, Female to Female Insert Inline Coupler
>Patch cables: 12ft (3.7m) Cat6 Snagless Unshielded (UTP) PVC CM Ethernet Network Patch Cable
>Access points: Ubiquiti UniFi Access Point WiFi 6 Long-Range & Ubiquiti UniFi Access Point WiFi 6 Lite
>Wall plates: Two-Port Keystone Single Gang Wall Plate
-
Hello dear community, we have a full 10GbE network in the office for our video production. Unfortunately I noticed that our systems with Threadripper 1950X as well as 2990WX only manage ~300 MB/s max when reading (writing is double, at ~700+). Our NAS is a Synology FS3400 with 18x 4TB SSDs, connected to a Netgear M4300-24X24F via 4x 10GbE LACP.

Our Windows Server 2016 box with 4x 10GbE LACP is the only one that can nearly max out the full 10GbE in both directions in CrystalDiskMark. Our Threadripper 3990X and all Intel systems also manage ~700 MB/s here. Only the "older" Threadrippers do not. With iperf I don't get over 6 Gbit/s in either direction (with more than one thread). The read bottleneck is about the same as the iperf result with only one thread on loopback.

Could it have to do with the fact that the Aquantia AQC107, which is installed onboard or as a plug-in card in the form of an Asus XG-C100C, only has 4x PCIe Gen 2.0? All PCs use this card, but the TR 1st/2nd gen systems have deficits with it. I have now ordered a Mellanox ConnectX-4 with RJ45 transceiver for testing; that card has PCIe Gen 3. I have also tried all settings on the network card, as well as other PCIe slots and drivers. I have also found two forums where users describe this problem in connection with Windows 10 Pro 2004; for me, however, the problem exists on 1909 as well as 2004 and Windows 10 Pro for Workstations. I am open to ideas. Thanks a lot!
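For what it's worth, the PCIe question above can be sanity-checked quickly. This is only a back-of-the-envelope sketch using the standard per-lane PCIe rates; real-world DMA/TLP overhead will shave a chunk off these numbers:

```python
# Back-of-the-envelope usable bandwidth for a NIC in a PCIe slot.
# Per-lane rates and line codes are the standard PCIe figures.

def pcie_usable_gbits(gen: int, lanes: int) -> float:
    # (raw GT/s per lane, line-code efficiency)
    per_lane = {1: (2.5, 8 / 10), 2: (5.0, 8 / 10), 3: (8.0, 128 / 130)}
    rate, efficiency = per_lane[gen]
    return rate * efficiency * lanes  # Gbit/s before packet/DMA overhead

print(f"PCIe 2.0 x4: {pcie_usable_gbits(2, 4):.1f} Gbit/s")  # 16.0
print(f"PCIe 3.0 x4: {pcie_usable_gbits(3, 4):.1f} Gbit/s")  # 31.5
```

Even Gen 2.0 x4 clears 10 Gbit/s on paper, so the slot alone shouldn't explain a 6 Gbit/s iperf ceiling - but the Gen 3 ConnectX-4 test is still worthwhile, since it roughly doubles the headroom.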
-
It's finally time to upgrade. I'm coming from an i5-3570K system with a GTX 1060. AMD's 5600X has tickled my fancy, so I'm finally upgrading. The 1060 is the only thing I'm keeping from my old build. I would love to build an ITX system this time that will last 8-10 years again. So I'm looking at the following:

AMD 5600X
ROG Strix X570-I Gaming ITX mobo
16-32 GB of RAM
current 1060 (upgrade to RX 6800 down the road)
1TB WD SN750 NVMe drive

My dilemma: I have a NAS and a 10G card I'd like to connect to this PC, a Mellanox MCX311A-XCAT ConnectX-3 EN SFP+ network card. It's PCIe x4. I'm thinking I can use the mobo's rear M.2 connector with an M.2-to-PCIe riser cable and plug in my SFP+ card - like this one. Has anyone done this? I'm thinking it should work... The only limitation I've read about is the M.2 slot's limited power, but that seems to be 7W usually and the Mellanox card is listed as less than 5W for its single port. I've never used M.2 for anything and I'm unfamiliar with it - I thought it was just for storage, but I see now it's just another way to connect PCIe. Thoughts? Thank you.

Country: Canada
Used for: games, DaVinci Resolve
-
Hi, I have a server that is not on the domain, and everything works fine. I just got a TP-Link TX401 10Gb card and was testing the speeds:

PC to shared drive: 400 MB/s - OK
Shared drive to PC: 600 MB/s - OK
PC to server (not on domain): 900 MB/s - OK
Server to PC: 2 MB/s - how is this possible? What can it be?

Everything has 10Gb network cards; the server has 2x 10Gb connections and the PC is on the domain. All devices are on static IP addresses. The shared drive is not on the domain. Thank you!
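One way to narrow this down is to take SMB out of the picture and measure raw TCP in each direction separately (iperf3 with and without its reverse mode does exactly this). As a stand-in, here's a minimal Python sketch - the loopback demo at the bottom is only illustrative; for the real test, run the accepting side on one machine, the sending side on the other, then swap roles:

```python
import socket
import threading
import time

CHUNK = 64 * 1024  # 64 KiB per send

def drain(srv: socket.socket, out: list) -> None:
    """Accept one connection on srv and count the bytes it delivers."""
    conn, _ = srv.accept()
    with conn:
        total = 0
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            total += len(data)
    out.append(total)

def send_bytes(host: str, port: int, total: int) -> float:
    """Push `total` bytes over one TCP connection; return MB/s."""
    payload = b"\x00" * CHUNK
    start = time.monotonic()
    with socket.create_connection((host, port)) as conn:
        sent = 0
        while sent < total:
            conn.sendall(payload)
            sent += len(payload)
    return sent / (time.monotonic() - start) / 1e6

# Demo over loopback; on a real network, run the accepting side on one
# machine and the sending side on the other, then swap the roles to
# measure each direction independently.
with socket.create_server(("127.0.0.1", 0)) as srv:
    port = srv.getsockname()[1]
    received: list = []
    t = threading.Thread(target=drain, args=(srv, received))
    t.start()
    rate = send_bytes("127.0.0.1", port, 32 * 1024 * 1024)  # 32 MiB
    t.join()

print(f"{rate:.0f} MB/s, {received[0]} bytes received")
```

If raw TCP is symmetric but SMB isn't, the culprit is more likely SMB signing, antivirus filtering, or NIC offload settings than the link itself.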
-
Hi all, I have a couple of questions about my Ubiquiti setup and about some SFP+ to RJ45 adapters. I wired my house for 10GbE (Cat6A), and for some of the devices I want to be able to utilize the full 10GbE connection. I recently bought a Ubiquiti USW-Aggregation, which has 8x 10Gb SFP+ ports. I also have 2 other switches, a USW-Enterprise-8-PoE and a USW-Pro-24. My router is a UDM Pro.

I plan to use the Switch Aggregation to connect the switches together, as well as use 4 of the ports for computers I want to have 10G on, plus one port for a NAS. My original plan was to aggregate 2x 10G SFP+ ports together to connect each of the other two switches, which would use up 4 of the ports - two 10G links for each. The other 3 would be used for desktops around the house, and the last one, as I said, for the NAS. Is there any reason the Dream Machine Pro would need to be connected through 10G rather than just 1G? My internet connection is only 1G, so the only reason I could see for 10G would be NVR access to the drive in the Dream Machine Pro, but I doubt you'd need 10G for a mechanical HDD. Let me know if I'm wrong.

My other question was about these 10Gb SFP+ to RJ45 adapters, which I need to connect a few desktops to my Switch Aggregation. Is there any difference between them? Are any brands better than others? FS.com option 1, Amazon option 2, Amazon option 3, Amazon option 4. There's also this "Industrial" one from FS.com. Thanks for anyone's input!
-
Rampage VI Extreme gen 1
Asus 1080 Ti
3 Samsung SSDs
Asus 10GbE NIC (in addition to the onboard one)
1600 AXi PSU
Windows 10

Just recently, both of my 10GbE cards started randomly reporting "network cable unplugged". I currently only use the onboard one, to connect directly to my server (tested, all ports working fine). I am also connected to my LAN and a public WiFi in addition to the P2P connection to my server - all of which are on different subnets. I tried cleaning out the ports, 3 different cables, uninstalling and reinstalling the drivers, disable/restart/enable/restart, ping tests, the troubleshooter. It will work fine one minute and then randomly say the network cable is unplugged, even while I'm just watching a movie. I get no lights whatsoever out of the ports or the card indicator during plug and unplug. All power management is off in the properties. I would think it was a faulty card, but the fact that the add-on card has the same issue points to a software problem. I checked Asus's driver page - nothing there either. All I get is "cable unplugged". The 1GbE and WiFi are working.
-
Hi, I've got an issue with driver installation for my HP NC523SFP 10GbE card. Since QLogic moved their website to Marvell, I am unable to find drivers for the QLE3242 chip. Could someone provide the extracted files for the driver installation, or any other solution? All of the executables that install the drivers automatically fail. I've attached a screenshot of the message Windows gives me when I try the automatic installation.
-
Hello, I'm looking for home storage options with these requirements:
Stream 4K to TV
Video recorder (initially 2 HD cams)
RAID with 4TB initial storage, expandable
Minimum energy consumption
Home Assistant docker
NVMe cache (write and read)
10 GbE PCIe
FreeNAS, Proxmox, Unraid? Hardware recommendations? A2SDi-8C+-HLN4F / X10SDV-2C-7TP4F / C3758D4I-4L / C3758D4U-2TP
-
Hello there, I'm having problems with 10GbE. I recently purchased a Synology DS1819+ NAS with Synology's dual-port 10GbE network card, and I also purchased the Synology single-port 10GbE card for the computer, thinking it would work out of the box. Turns out it won't, even though it said it was Windows 10 supported. I worked around this by finding out the chip this single-port card uses is an Aquantia, so I downloaded those drivers and my card was recognized.

My NAS has 4x 12TB Seagate IronWolf Pro in SHR with 1-disk fault tolerance, plus 1 read-cache SSD for that array and a separate RAID 0 of 3x Samsung SSDs. Two computers are connected to the NAS directly with Cat 8 cable. When I run an iperf test, my speeds won't go over 5 Gbit/s on one computer and 3 Gbit/s on the other. I assume the PCIe slot speed is in play - I know one is running at x4 and the other at x8 - but it never reaches more than those 3 or 5 Gbit/s, and I have enabled jumbo frames and everything. Below are my speed tests on write and read. Should I be looking at something better? Will I get better speeds if I change the network cards in the computers? I have seen people getting way higher speeds with spinning disks. Thanks

SSD RAID 0: 769.0 MB/s write / 607.8 MB/s read
SHR 32TB: 442.3 MB/s write / 308.3 MB/s read
-
Hey guys, I'm connecting two of my servers with dedicated 10GbE SFP+ connections. Both NICs are seen by the OS and are transferring data successfully over the link at near "max speed". However, the reported connection speed is not what it should be. I've enabled jumbo packets at max size; picture below.
-
I'm sorry if this question has been asked before or if this is the wrong forum. 1. Is there an at home way to get a 10GbE internet connection? 2. Is it possible to do so for WiFi?
-
Hi all, I've been trying to reach 1GB/s file transfers with my NAS. First I'll start with what I don't need:
- What network card to get (cheap or expensive)
- Questions about why I want to do what I want to do; it's pretty obvious below
- People trying to tell me 1GbE will be enough, or that teaming 1GbE would do the job
- People telling me I need to install FreeNAS on bare metal and not in a VM (people get these results in VMs, and bare metal is impossible here - I've tried; something to do with my motherboard, for which I opened a help ticket)
- Haters

OK, now on to my hardware.

Server: latest UNRAID running FreeNAS Corral in a VM with HBA passthrough and Intel NIC passthrough
Asus Z10PE-D16 WS
Intel 2683 14-core Xeon in slot 1 (nothing in slot 2)
64GB ECC RAM
LSI 9300 HBA
Intel X540-T2 (dual-port 10GbE network card)
1x 120GB SSD with Windows 2012 R2 installed (used in a VM)
2x 120GB SSD in a storage array on UNRAID
5x 6TB WD Black connected via HBA

Client:
Windows 10
X79 motherboard
Intel 3930K
32GB RAM
Several SSDs
Intel X540-T1 (single-port 10GbE network card)

The setup: on the server I have UNRAID installed, and I'm running FreeNAS Corral (latest build and updates) with the HBA passed through (works perfectly) plus the dual 10GbE network ports passed through as well. I have a RAIDZ2 with the five 6TB drives connected to the HBA. I've enabled SMB shares and made sure SMB 3 is chosen. I've set up static IP addresses on both 10GbE connections with different IPs and subnets (more on this in a second). In addition there is a connection provided by the VM creation - a virtual connection to my knowledge, only capable of 1GbE. Two clients connect via 10GbE to the two ports of the FreeNAS box. I've tested this setup under Windows and it works like a dream with 1GB/s transfers. What's stumping me is getting the same performance in FreeNAS.

I understand that I won't get the full 1GB/s unless I have SSDs, but I know what I can get via RAID 6 with these drives (650MB/s), and I'm not even getting that - and especially not consistently. Right now I'm able to get one Windows client to map a drive to the FreeNAS box, but I'm unable to get the other to map a drive: it lets me enter credentials, but with no luck actually mapping the shared folder. As for file-transfer performance, I'll get a burst of 500MB/s for about 8 seconds or so, but then it drops to 25MB/s for the remainder. This is far slower than I got with my single 1GbE connection.

What I'm asking of the community: anyone with good networking knowledge, especially with FreeNAS or FreeBSD, help me or point me to a guide to reach those magic numbers. I've searched the FreeNAS forums and several others (not forgetting Google) for a solution, but no one seems to have published the answer. I imagine I can direct-connect the two clients for the time being, and then move to a switch (hopefully next year, if they drop a little more in price) to supply 10GbE throughout my home.

What I'm going to use this for: video editing and file backup will be the main uses for this NAS. I hope to edit directly off the NAS so that all my files are in a central place; this will also let me grow my storage as needed. I intend to start freelancing more, mostly stock footage, and I'll need quite a bit of fast storage for editing. I know the hardware is capable (tested Windows 10 file server to Windows 10 client on the same hardware), but I want better data integrity and a user-friendly warning system when there's a problem, rather than the LSI RAID card I've been using. I will also be running Windows Server 2012 R2 and learning how to use it for administration. This is a side project, but UNRAID has a powerful VM function that I want to use to put my powerful hardware to work. Any help would be greatly appreciated. Right now it's in a state where I can use it as a lab, so I can do any tooling around you need for troubleshooting. Thanks!

CONCLUSION/SOLUTION: I've been testing this for a long time and never reached the speeds I was looking for. I tried Corral, and then downgraded to FreeNAS 9.1 (Corral is no longer supported). First, I still have not been able to install FreeNAS on bare metal (still bewilders me), but I'm able to use UNRAID to run FreeNAS in a virtual machine with PCI passthrough (using an HBA card). Around the end of May 2017, FreeNAS shipped an update that improved Samba, which in turn improves SMB (the Windows file-sharing protocol). So I gave it a whirl. First I followed these tweaks from 45 Drives to do some tuning - very simple stuff: http://45drives.blogspot.com/2016/05/how-to-tune-nas-for-direct-from-server.html

Next I tested my transfer speeds using a RAMPerfect RAM drive plus some SSDs and HDDs in the FreeNAS build. I got around 250MB/s on both the SSDs and the HDDs - so something was wrong, especially with the SSDs in a stripe and the HDDs in RAIDZ2. I applied the painless update (about 5-8 minutes, reboot included). When I reran the exact same tests I got a flying 1GB/s on my SSD stripe, and about 500-600MB/s on my 5-drive RAIDZ2. Needless to say, I'm relieved to have one hurdle out of the way; the next challenge is to see whether this new update will install on my bare hardware. I hope this provides some clarity for anyone who hasn't applied the update and is looking for 10-gigabit speeds with FreeNAS.
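One extra data point on those RAIDZ2 numbers: a RAIDZ2 vdev only delivers data bandwidth from N-2 of its drives, so the post-update result is about what simple math predicts. This is only a rule-of-thumb sketch, and the ~180 MB/s per-drive sequential rate is an assumed figure for 6TB 7200rpm disks:

```python
def raidz_estimate(drives: int, parity: int, per_drive_mbs: float) -> float:
    # Sequential large-block estimate: parity drives add no data bandwidth.
    return (drives - parity) * per_drive_mbs

# 5 drives in RAIDZ2 (double parity), assuming ~180 MB/s per drive:
print(raidz_estimate(5, 2, 180.0))  # 540.0 MB/s
```

That 540 MB/s estimate lands right in the 500-600 MB/s range seen after the Samba update, which suggests the pool itself was already performing as expected.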
-
Hi guys, I have a project for a high-end home NAS, and I'm doing my research to see how I'll proceed.

My usage scenario: on a daily basis, considerable amounts of data (about 1.5-3TB of large files total) will be going in and out of this NAS. No internet, no VMs, no Plex or anything else running on it; it'll be serving 3 workstations total. However, should one day's work overlap another, I'll be in BIG trouble - speed is of the essence, as are reliability under load and ease of maintenance (regardless of possible RAID rebuild times). I will have a standalone computer with one mission: backing up this NAS's raw data. Should I lose the RAID, I will have all the data at hand, pretty fast too, even if not updated to the very last minute of use. Note that this NAS will be on an all-10GbE network. My main concern here is built vs. bought, with no rack mounting for either.

As far as bought is concerned, I've been considering the QNAP TVS-1282T, which comes with a built-in X540-T2 dual 10GbE NIC and Thunderbolt 2 for volume expansion. Say I get this, fill it with RAM, Noctua the exhaust and CPU fans, and change the PSU to something more ear-friendly; I'd begin with 6 HDDs, either 8 or 10TB drives, in RAID 1+0. After some research I found that this NAS uses what seems to be an IBM ServeRAID M1015 for connecting all the drives. QNAP's QTS IS Linux-based, but I'm still trying to see how reliable this setup would be. I did look at Synology NASes, but it would appear QNAP builds are generally higher quality - fixable noisy cases and more stable...?

On the other hand, there's me building a NAS. Regardless of case, mobo, RAM and 10GbE NICs, I've been looking at RAID cards - preferably non-ROC ones, or simply dual SAS cards with direct access to the HDDs; I guess a 4-6 core Xeon would be better than a ROC...?

I'm also looking at OS options, and a few things worry me. Primarily, I don't know a single Linux thing - never used the OS - but also some of the most common OSes, like FreeNAS, seem to have trouble playing nice with this or that controller, be it onboard or PCIe. I'm considering Windows Server to try to avoid those kinds of issues with good drivers, and also since I already have a license. I do have some time to build this NAS, but once it's up and running I can't have it failing on me too regularly. I am a tinkerer, and a bought NAS would be a lot easier to maintain, but I really want reliability. The same HDDs would go into whichever NAS I end up with, up to a total of 12 or 16 HDDs eventually.

Looking at core options, I see that ZFS is a Holy Grail to a sizable chunk of home NAS builders, but there's no proof of the actual exclusivity of its features and reliability. Then again, I can't figure out how an eventual RAIDZ2 or Z3 would perform with 6, 12 or 18 drives. On another note, I seem to understand that 128-256GB of RAM is barely adequate for ZFS if I don't have a cache SSD? An NVMe SSD or two is very probable for me, though not necessary - I mean, where am I going to cache 850GB if that happens in a single copy... I've seen some NVMe issues mentioned with QNAP's QTS, which seem to be a Linux thing, not just QTS - any thoughts on this?

On caching: note that "hot" data being cached on an SSD for frequent use is a terrible idea in my case, especially since the amount of data that could be labeled equally hot can be enormous and all of it changes drastically in a very short time frame. What's far more useful for me is a write cache - the data being written goes to SSD first, then to the actual physical share, and that's it. But again, not necessary if troublesome.

I know I haven't really ventured much into NAS before, but in this case 28+ TB of usable space is a must. Any people out there with Windows Server experience with this? Because I think this thread is going to be swarming with Linux-based defenders, and for good reasons too, I guess... Internet cloud storage is ABSOLUTELY out of the question, so please don't even think about it. Thanks for your time and attention.
-
Help - I need to build a low-power server for a NAS. It needs at least 3 SAS connections for the hard drive backplane, plus one spare PCIe slot to later upgrade to 10GbE. I need ideas on what CPU to use, mainly one that will accommodate 10GbE while still being low power - under 150-200W total.
-
I have two issues.

Issue 1: First, I had to reinstall Windows 10 on my "server" setup. It auto-installed drivers for both my motherboard NIC and my Intel X540-T1 NIC. I noticed the driver configuration didn't look like the original install, so I installed the latest driver from Intel for the X540. Upon installing I got this message: "There is an issue with Microsoft Windows 10 that prevents the Intel(R) Advanced Network Services (Intel(R) ANS) features from working correctly. You may install the feature although you will be unable to create Intel(R) ANS Teams and VLANs. Do you wish to install the feature?" I hit yes, but found that the Advanced tab disappeared. I couldn't find any workaround - I tried manually installing the driver, reinstalling, downloading prior drivers, etc. The only solution I came up with was uninstalling, rebooting and letting Windows assign a driver to it; that's when the Advanced tab came back. So my issue is that all the features I had from the late-2015 install are no longer available. The main difference I noticed is that you are limited to an RSS queue count of 8 rather than 16 - and I need an RSS queue of 16 for my 12 logical cores.

Issue 2: My second question is about Windows 10 Pro performance tuning. I have both my client and "server" running Windows 10 to support SMB3 shares and 10GbE throughput. I did some testing with RAM disks on both systems over the 10GbE connection. I copied 2 large files (15GB worth) from the server to the PC and got 850MB/s. When I copied from the client to the server I got 1.15GB/s!!! AWESOME! So I would like to know if there are any security features, like packet signing, that matter for performance tuning. I know Windows Server 2012 R2 suffered on transfer speeds when certain group policies were enabled. I went down that rabbit hole and never got the speeds I'm seeing now, but I would like to reach the full potential I'm seeing on copies from my client PC to my server. How do I get the full 1.15GB/s going the opposite way (from my server to my PC)? Fun note: if anyone is wondering why I hit a max of 1.15GB/s, it's probably due to overhead.

Here's what I've tried to improve these numbers. I've disabled security measures like antivirus and real-time scanning, including Windows Defender. I saw slightly better results, 950MB/s, from my client to the server; the server to the PC transferred at a consistent 1.15GB/s. The only thing that might make a change is a driver adjustment on my client PC, as its driver version is 3.10.162.1 with a release date of 4/24/2015; the server driver is 4.0.215.0 with a release date of 8/30/2016. I'm a little afraid of updating this due to the driver installation issue I had - plus I would lose the advanced feature of 16 queues. I haven't read any white papers lately, so I don't know if that's a thing of the past and 8 queues is the most efficient - another question to be answered. Any help would be appreciated!
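On the "probably due to overhead" note: the 1.15GB/s ceiling is indeed close to what frame and header overhead leave of a 10 Gbit/s link. A rough calculation, assuming a standard 1500-byte MTU (no jumbo frames) and plain IPv4/TCP headers with no options:

```python
# Per-frame byte budget for Ethernet with a standard 1500-byte MTU.
preamble_and_gap = 8 + 12          # preamble/SFD + inter-frame gap
eth_header_fcs = 14 + 4            # Ethernet header + frame check sequence
ip_tcp_headers = 20 + 20           # IPv4 + TCP, no options
payload = 1500 - ip_tcp_headers    # 1460 bytes of file data per frame

wire_bytes = preamble_and_gap + eth_header_fcs + 1500  # 1538 bytes on the wire
efficiency = payload / wire_bytes                      # ~0.949

goodput_gbs = 10.0 / 8 * efficiency  # GB/s of file data on a 10 Gbit/s link
print(f"~{goodput_gbs:.2f} GB/s maximum file throughput")
```

That lands at roughly 1.19GB/s of theoretical goodput, so a sustained 1.15GB/s is already close to wire speed; the remaining gap is TCP acks, SMB framing, and the RAM-disk ends of the copy.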
-
I have a long series of questions that could fill a whole LinusTechTips episode. What is the purpose of Fibre Channel HBAs? There are now generation 5 and generation 6 HBAs with 8/16/32 Gb/s transfer speeds, using optical fiber for communication rather than copper wire. What is the purpose of FC today? They are marketed for use with hard drives, but I see no INTERNAL connectors on them suited for internal hard drives, such as SAS or SATA connectors. So what is the purpose of them? It looks like they are made for external communication only. But why use FC - why not Ethernet, where you have 1GbE, 10GbE, 40GbE and beyond for any taste and preference, both copper and optical? So why would one use FC instead of high-speed Ethernet?

To be more frank, what protocols and drivers are involved with FC? Do you use TCP/IP, or what? When using the controller, will other devices connected to it show up as PCI devices, SCSI/SATA devices, or what? Will it act as a hard drive controller or an Ethernet controller? What is the target hardware - some special FC devices, another computer with an FC HBA, or an FC switch? If I were to build a Beowulf cluster spanning multiple computers, what would be the best way to use FC connections between the nodes? Are there any substantial benefits over regular Ethernet?

To tell the truth, why are we using all these four (five if we count SATA) interfaces interchangeably? Is it expected that these interfaces will merge into one some day? I guess then we should throw Thunderbolt into the mix, and while we're at it why not HDMI and DP or... USB3?!? Even more confusing when you have Fibre Channel over Ethernet (FCoE)!!! Source information comes from Broadcom/LSI, Mellanox, etc.

Glossary: HBA = Host Bus Adapter (used in distinction to "I/O controller", where the I/O controller mostly means the chip that handles the interface, whereas an HBA is a complete solution involving the I/O chip, PCIe connector and other I/O connectors, i.e. a controller card). FC = Fibre Channel
-
Many of us have outgrown the capabilities of 1GbE networks but don't have the budget for the ridiculously priced enterprise 10GbE solutions. ASUS is attempting to change that: after releasing their 10GbE switch for a "measly" 250 USD, they now have a PCIe add-in card for 99 USD. Finally the value proposition for 10GbE is reasonable, and for those of us running home/Plex servers with 4K films this will be a very welcome upgrade. Another problem for people looking to do a whole-house upgrade was the need to re-run all their cabling with Cat6a to meet the requirements of full-speed 10GBASE-T. Because the card also supports the 2.5GbE and 5GbE standards, shorter runs of less than 40-50 meters on existing cable will still be able to run at higher speeds (albeit not 10Gbps). Edit: forgot to add the source http://www.tomshardware.com/news/asus-xg-c100c-10gbase-t-nic,34844.html
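To put those intermediate speeds in context, here's a quick sketch of ideal transfer times for a large file at each NBASE-T rate - the 60GB file size is an arbitrary example, and this ignores protocol overhead and disk speed:

```python
def transfer_minutes(size_gb: float, link_gbit: float) -> float:
    # Ideal line rate, decimal units, no protocol overhead.
    return size_gb * 8 / link_gbit / 60

# Time to move a 60GB 4K film at each NBASE-T rate:
for rate in (1, 2.5, 5, 10):
    print(f"{rate:>4} Gbit/s: {transfer_minutes(60, rate):.1f} min")
```

So even the 2.5GbE fallback on old cable cuts an 8-minute gigabit transfer to about 3 minutes, which is much of the practical benefit for home use.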
-
I work for a small video production house and we're trying to build a NAS for our workflow. We currently have 4 Windows 10 workstations and an improvised "NAS" running Windows 7 file shares over a 1-gigabit network. Some lighter projects can be edited off the Windows 7 file server, but our main projects require no dropped frames during playback and can only be edited off our local hard drives. We plan to build a FreeNAS server that all 4 workstations can edit off, with little to no dropped frames in Premiere, for the least amount of money possible. I saw Linus feature the Asus XG-U2008, a $249 8-port gigabit switch with two 10GbE ports. If we were to build a 10GbE-capable NAS, connect it to one of the XG-U2008's 10GbE ports, then connect our workstations to the gigabit ports, could we edit 4K footage off the FreeNAS server? Or do we need to buy a 10GbE switch and 10GbE NICs for each of the workstations?
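A rough way to frame this question: each gigabit port tops out around 117MB/s after overhead, so whether that's enough comes down to the codec's bitrate. The per-stream figures below are ballpark assumptions for common 4K codecs, not measurements - check the actual bitrate of your footage:

```python
GIGABIT_MBS = 1e9 / 8 / 1e6   # 125 MB/s line rate on a 1 Gbit/s port

# Assumed ballpark per-stream bitrates in MB/s (verify against your footage):
codecs = {
    "4K H.264 camera files (~100 Mbit/s)": 12.5,
    "ProRes 422 UHD (~24fps)": 47.0,
    "ProRes 422 HQ UHD (~24fps)": 71.0,
}

for name, mbs in codecs.items():
    streams = int(GIGABIT_MBS * 0.94 // mbs)  # ~6% protocol overhead assumed
    print(f"{name}: ~{streams} stream(s) per gigabit client")
```

By this math, gigabit per workstation is fine for compressed camera originals but marginal for high-bitrate intermediates like ProRes HQ, where a no-dropped-frames guarantee (especially multicam) would want 10GbE to each editor.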
-
Hi all, I own a small video production company which operates out of a vacant office within an existing office building, owned by a friend. This friend has allowed me to utilise their internal network for internet but, as one might expect, it only supports 1Gbps. I feel this will put a spanner in the works, as I am looking to purchase a 10GbE NAS and edit directly from it. How might I go about ensuring the devices I use can make use of the 10GbE connection to the NAS whilst still utilising the existing 1GbE connection for the internet? Upgrading the entire office to 10GbE isn't an option as it's of no use to the other business that operates here.
-
I run a production company. We currently have an 80TB RAID array (10x 8TB drives with a MegaRAID controller) shared across a 10Gb Ethernet network. It had been working great since it was installed in April 2018, until the last month. The "server" has Windows 10 Pro installed, with a max of 4 network users editing or accessing content over the network (Netgear 10GbE switch) via 3 Macs and 1 PC.

I didn't build the system, and unfortunately the people who did apparently didn't install enough cooling: we had an issue last month where, under heavy usage, the HDD temps were rising north of 60°C. This caused severe performance issues - files weren't copying/verifying correctly and it was all a bit of a mess. I installed more fans and backed everything up, and all was fine since then. However, today the server had a major meltdown: during a fairly basic copy of about 60GB of data from one of the networked Macs, the copy failed and the network interface on the server crashed (it just disconnected and all the lights on the card went out), taking a restart to get back up and running. The problem re-occurred every time the data was copied, yet copying to the local drive on the Mac was fine.

The question is: will I see better, more stable results by installing Windows Server 2019 rather than running Windows 10? And perhaps get more error reporting too? I have a feeling the latest networking error comes from using a teamed connection through the Intel 10Gb NIC I have installed, so I'm going to reverse this; however, I know teaming is a feature I could use natively in Windows Server, rather than through the Intel utility in Windows 10. Thoughts appreciated.