
An Introduction to SAN and Storage Networking


SAN, NAS, networking, oh my!

 

I see lots of talk about NAS setups in the storage section. Much of it revolves around increasing storage performance through caching and complex RAID arrays, but there's less discussion about the networks they're connected to. If your storage is high-performance, then your network is usually the next bottleneck in the system, especially when multiple users are connected to a NAS.

There are a couple of concepts that I'll mention, starting with the idea of a Storage Area Network:
 
A SAN is a network which provides access to storage. Traditional implementations in the enterprise environment will use a dedicated network, and the storage is consolidated into a SAN appliance. Depending on the implementation, a SAN might be sharing individual hard drives, or volumes spread across multiple hard drives.
 
The underlying network technology can vary, but it does require hardware and software support on both ends. Two of the more common options are Fibre Channel and iSCSI. Fibre Channel runs over its own dedicated fabric (typically fibre-optic cabling) and generally offers lower latency, while iSCSI runs over ordinary Ethernet, whether that's RJ45 copper or SFP+ links. In either case, data is transferred in block form rather than file form, and the storage appears to the client as a local device.
 
Can you tell which of these are physical hard drives and which ones are SAN storage?

post-653-0-91039000-1392406085.png

Answer:

Here, the Media drive and the VRAID drive are iSCSI targets, served by two separate SAN appliances in our lab.

This allows for all sorts of neat functionality. Firstly, because the target appears as a local disk, Windows Backup will back up to it over the network even on editions (like Windows 7 Home Premium) that won't back up to a network share. It also lets you install programs to remote storage, and if that target is part of a RAID-protected volume, it will be much safer than a single hard drive. And since the data is stored in block form, you can format an iSCSI target with whatever file system your OS supports.
 
Next up: networking protocols. I won't talk about Fibre Channel, since setting up the infrastructure is very expensive and specialized, and really only useful in the enterprise environment. I will, however, discuss iSCSI, which has risen in popularity over the years and is usable in consumer environments.

 
SCSI is an interface for attaching storage devices directly to servers, where they appear as local storage. iSCSI takes it one step further by encapsulating SCSI commands in TCP/IP, so the protocol talks over a network rather than a dedicated cable. You can use iSCSI on almost any computer in existence: Microsoft provides an iSCSI initiator for Windows (built in since Vista, and a free download for Windows 2000/XP), Linux has the open-iscsi initiator, FreeBSD has iscontrol, and third-party initiators are available for OS X (it doesn't ship with one).
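On Linux, for example, the open-iscsi initiator is driven from the command line with iscsiadm. As a rough sketch of the usual discover-then-login workflow (the portal address below is made up, and error handling is omitted), a thin Python wrapper might look like this:

```python
import subprocess

def discover_targets(portal: str) -> list[str]:
    """Ask the SAN portal which targets it offers (SendTargets discovery)."""
    out = subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal],
        capture_output=True, text=True, check=True,
    ).stdout
    # Each line looks like: "192.168.10.2:3260,1 iqn.2014-02.lab.example:media"
    return [line.split()[1] for line in out.splitlines() if line.strip()]

def login(portal: str, target_iqn: str) -> None:
    """Log in to one target; the kernel then exposes it as a local block device."""
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", target_iqn, "-p", portal, "--login"],
        check=True,
    )

if __name__ == "__main__":
    portal = "192.168.10.2"  # hypothetical SAN appliance on the storage subnet
    for iqn in discover_targets(portal):
        print("found target:", iqn)
```

On Windows you'd do the same thing through the built-in iSCSI Initiator control panel instead.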

 

post-653-0-26552200-1392406444.png

 

If you want to learn more about the details of iSCSI and/or want to configure it, here is a good guide to setting up iSCSI on a FreeNAS system, which also goes over the concepts around iSCSI like IQNs, CHAP, extents, etc.

 

For the purposes of this discussion, we'll assume a storage system that can serve iSCSI targets, such as a FreeNAS build or a prebuilt NAS box from Synology, Netgear, QNAP, etc.

 

Here is some terminology:

  • SAN appliance - This is where all of your storage lives: a Synology/Netgear/QNAP prebuilt NAS, a FreeNAS system, etc. It holds the hard drives you want to store data on, connected to your computer over your local network.
  • Initiator - The software on your computer that connects to the storage and makes it visible to your file system.
  • Target - What you connect to with your iSCSI initiator, and what appears in your local devices list. It could be a physical hard drive on the SAN appliance or a portion of a volume that spans multiple hard drives.

Let's say you do end up using iSCSI over your local Ethernet connection, because it's relatively fast and it's there. Let's consider several problems you might encounter:

 

1) Your SAN appliance has a single ethernet port. You set up an iSCSI target on the SAN Appliance, and use it in place of local drives, because you want your data to be safe. Now you start installing your programs on that target. 

 

Result: All the data that would normally be written to a local disk now travels over your network. Anyone else trying to use the SAN will see a significant drop in performance, because its single Ethernet port is getting hammered by your traffic.

 
Solution: We can get around some of these problems by using a SAN with multiple Ethernet ports that support link aggregation, so that one system can't max out the networking capabilities of the SAN. But now we have another bottleneck:

 

2) Your SAN's storage isn't high-performance and, while still faster than a single gigabit link (~125 MB/s theoretical), is limited to around 150 MB/s.

 
Result: Even if the Ethernet ports don't get maxed out, you can still slow down people trying to access data on the SAN.

 

Solution: Have higher performance storage. This can be done with RAID arrays and caching. But we still have a bottleneck:

 

3) The Ethernet port on our own computer is only gigabit, and we install programs and access other data on the target.

 
Result: In large file transfers, many hard drives can saturate a gigabit port, which means accessing our remote storage can be slower than using a local hard drive. In most cases it won't matter, since most day-to-day operations don't generate that much throughput, and the redundancy benefits of an iSCSI target over gigabit outweigh the occasional slowdown compared to a single local drive. If you're used to SSD performance, however, you'll hit the network ceiling much more often: for small random operations gigabit isn't the limit, but for medium to large transfers it is.
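To put rough numbers on where the ceiling sits, here's a back-of-the-envelope comparison; the drive figures are illustrative assumptions, not measurements:

```python
# The slowest link in the chain sets the ceiling.
GBE    = 1_000 / 8 * 0.94    # ~117 MB/s usable on gigabit after Ethernet/TCP overhead
TENGBE = 10_000 / 8 * 0.94   # ~1175 MB/s usable on 10GbE

drives = {                   # rough, assumed throughput figures in MB/s
    "HDD sequential": 180,
    "HDD 4K random": 1,
    "SATA SSD sequential": 550,
    "SATA SSD 4K random": 25,
}

for name, speed in drives.items():
    for link, cap in (("GbE", GBE), ("10GbE", TENGBE)):
        limit = min(speed, cap)
        bound = "network-bound" if cap < speed else "drive-bound"
        print(f"{name:22s} over {link:5s}: ~{limit:6.0f} MB/s ({bound})")
```

Note that link aggregation doesn't change this picture for a single client: a 4x1GbE bond lets four machines hit the SAN at once, but one TCP stream still tops out around a single gigabit link.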

 

Another problem is that the iSCSI traffic itself uses up valuable bandwidth, which can cause lag for other things on the network, like games or file downloads.

 

Solution: Get a better network connection, usually by buying a better NIC. You can set up link aggregation on a NIC with multiple gigabit ports, or upgrade to 10 GbE networking to increase throughput, which will help to alleviate some issues with performance and also reduce the impact on other programs.

 

Here is an example of a network bottleneck with iSCSI targets:

 

Although these volumes are not the same ones I used in the picture earlier, they are hosted by the same respective machines. The E: drive is hosted on an all-SSD SAN appliance with 24 SSDs running in RAID 10, so I know that the storage is capable. The SAN also has a higher-than-gigabit link to the network, and I am the only one using it. The only bottleneck is the Ethernet port at the back of my computer.

 

I first benchmarked it in CrystalDiskMark; the results are shown below. Please take these with a grain of salt, as I am using an enterprise SAN appliance that happens to be optimized for writing data. I'm also on my company's public network, which has tons of traffic routed through the networking core and sometimes produces very inconsistent speeds. Our networking core is also very different from what would be in your house. Your results will differ.

 

post-653-0-08632500-1392408149.png

 

Sequential transfers, 512K transfers and 4K transfers at QD32 are all bottlenecked by the network in this scenario. The 4K random performance isn't as good as a local SSD could be (around 20 MB/s), but it's still way faster than a local hard drive would be in 4K transfers.

 

F: is hosted on an identical SAN appliance, this time running 24 10K SAS drives in RAID 10. Same scenario as before:

 

post-653-0-37506400-1392408154.png

 

Again, sequential performance is bottlenecked by the network. The 4K read numbers seem appropriate for a mechanical disk RAID, but the 4K write speeds seem kind of weird, don't they? Don't worry: that's the storage appliance doing all sorts of caching and write optimization. The 512K and 4K QD32 read speeds are also indicative of a mechanical RAID.

 

Now compare the write speeds between the two targets: they are pretty much identical, even though the SSD RAID is light years faster than the mechanical RAID. That tells us the storage on the other end is not the bottleneck for these operations; the network clearly is.

 

I threw in a benchmark of one of my local drives as well:

 

post-653-0-15151600-1392408161.png

 

This illustrates a couple of things. Firstly, our sequential performance is absolutely bottlenecked by the network. Secondly, there are advantages to using an iSCSI target on a dedicated appliance: with caching, our iSCSI target performs as well as or better than the local disk in 512K writes, 4K and 4K QD32 operations. If we didn't have a network bottleneck, those numbers for the iSCSI target would probably be higher.

 

4) Another issue with having data served over iSCSI is that someone else doing something intensive can drag your performance down too. If your parents start a backup to the SAN while you're trying to play a game, you might see long loading times, or lag in-game whenever the game needs to hit files on your storage. Either way, it's an inconvenience.

 

Solution: This is the most common complaint with iSCSI, and the answer is to route storage traffic over a dedicated network, so that when your parents back up their computer it travels over a different connection and doesn't compete with yours. (If the storage itself isn't fast enough, you can still be affected.) A dedicated storage network isn't really doable with most prebuilt NAS boxes, but it's standard practice in enterprise environments, and you can build one yourself by using multiple network cards to split traffic between the public network and the SAN network.
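Splitting traffic like this mostly comes down to addressing: put the SAN and each client's second NIC on their own subnet, and storage traffic will take that path while everything else stays on the public network. As a tiny illustration (all addresses here are hypothetical), a client can even pin a connection to its storage NIC explicitly:

```python
import socket

STORAGE_NIC_IP = "10.0.10.5"          # hypothetical: the client's second NIC, SAN-only subnet
SAN_PORTAL     = ("10.0.10.2", 3260)  # hypothetical iSCSI portal on the storage network

# Binding the source address forces this connection out through the storage NIC,
# so bulk traffic to the SAN never competes with the "public" NIC used for
# internet, gaming, and so on.
sock = socket.create_connection(SAN_PORTAL, timeout=5,
                                source_address=(STORAGE_NIC_IP, 0))
print("talking to the SAN via", sock.getsockname())
sock.close()
```

In practice the OS routing table does this for you once the subnets are separate; a real initiator just needs to be pointed at the portal address on the storage subnet.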

 

Now let's talk about NAS. A NAS serves storage much like a SAN appliance, but instead of appearing as a local device it appears as a networked device: it's what you see when you map a network drive in Windows. The advantage is that a data store can be shared between multiple users, rather than being dedicated to a single machine. In addition, a NAS usually sits on the existing network of computers (sometimes called the 'client' network) rather than on a separate storage network.

 

post-653-0-22001100-1392584763.jpg

 

A NAS shares data over file-level protocols like CIFS (SMB) and NFS. Performance is usually limited by gigabit Ethernet, because the client networks they sit on are made up of regular PCs. Just like with a SAN, the NAS can have a multi-gigabit backbone so that multiple users can each connect at gigabit speeds, and 10 gigabit NAS units are starting to come to market with QNAP's solution, found here.
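If you ever want to poke at a CIFS share from a script rather than from Explorer, the third-party pysmb library can do it. A minimal sketch, with the server name, address and credentials as placeholders for your own NAS:

```python
from smb.SMBConnection import SMBConnection  # pip install pysmb

# Placeholder credentials and names; substitute your own NAS details.
conn = SMBConnection("user", "password", "my-desktop", "NAS01",
                     use_ntlm_v2=True, is_direct_tcp=True)

if conn.connect("192.168.1.50", 445):        # SMB directly over TCP port 445
    for share in conn.listShares():
        print("share:", share.name)
    conn.close()
```

For day-to-day use you'd obviously just map the share as a drive letter; this is only useful if you want to script against it.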

 

I imagine that CIFS shares are what most people are using when connecting to their NAS, so that's what we'll benchmark. I'm using the same two SAN appliances that I used for the iSCSI targets, but now I've added NAS controllers to them so I can create CIFS shares.

 

For this case I created a single 3TB container and mapped a CIFS share to it. The NAS controllers have a 4 Gbit connection to the SAN appliance and have their own cache. This changes the benchmarks compared to the iSCSI tests, where we were connecting to the SAN appliance directly, so take these results with an even bigger grain of salt than before. The main focus here is how the networking protocol affects the data transfer.

 

Here is the SSD storage with the NAS controllers:

post-653-0-05625500-1392901966.png

 

Notice that some aspects of performance went down, most notably the 4K random writes and 4K QD32 performance. Sequential performance trades read speed for write speed compared to the iSCSI target, so it's probably a wash. In this instance it's the cache on the NAS controllers that is altering those numbers, and there is also some networking overhead between the NAS controllers and the SAN appliance that introduces latency. The 4K random read performance, however, is pretty similar.

 

Here is the HDD storage with the NAS controllers.

post-653-0-48639900-1392901973.png

 

Notice that the performance numbers are almost identical to the SSD NAS system. The cache on the NAS controllers is masking the behavior of the storage on the other side. Normal users wouldn't have this exact setup, but it does illustrate that having a good cache for your storage system can increase the performance of mechanical storage. I'm sure if we had a longer benchmark and a faster connection we could blow through the cache and hit the storage more directly, but I didn't have time to test this.   

 

For my setups, it seems that performance is limited in some respect by the gigabit Ethernet port on the back of my machine, but doesn't seem to vary much depending on network protocol.
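If you want a quick sanity check of your own setup without CrystalDiskMark, a crude sequential write test only needs the standard library. Point the path at a file on the mapped share or iSCSI volume you want to test (the drive letter below is a placeholder):

```python
import os
import time

TEST_PATH = r"Z:\throughput_test.bin"   # placeholder: a mapped share or iSCSI volume
SIZE_MB   = 1024                        # write 1 GiB so caches matter less
CHUNK     = b"\0" * (1024 * 1024)       # 1 MiB per write

start = time.perf_counter()
with open(TEST_PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())                # push it past the local OS write cache
elapsed = time.perf_counter() - start

print(f"sequential write: {SIZE_MB / elapsed:.1f} MB/s")
os.remove(TEST_PATH)
```

It's nowhere near as thorough as a real benchmark, but it's usually enough to tell whether you're looking at a ~110 MB/s gigabit ceiling or something else.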

 

I didn't grab any screenshots of this, but I did notice one disadvantage of CIFS shares: CPU usage. It's not that CIFS itself increases CPU utilization much, but rather that it suffers when the CPU is busy. When I first ran these benchmarks, performance was significantly lower, around 70 MB/s sequential and 4.5 MB/s 4K random. It turned out my CPU was sitting at around 80%, and when I stopped the offending program, performance shot up to the numbers presented here. Not a huge problem, but if you're running video editing software or some other intensive program, you might notice a difference.
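One way to catch that kind of interaction is to watch CPU load while a transfer runs. A small sketch using the third-party psutil package (the file paths are placeholders):

```python
import shutil
import threading
import psutil  # pip install psutil

samples = []
done = threading.Event()

def monitor():
    # Sample total CPU utilization once per second while the copy runs.
    while not done.is_set():
        samples.append(psutil.cpu_percent(interval=1))

t = threading.Thread(target=monitor)
t.start()
shutil.copy(r"C:\big_video.mkv", r"\\NAS01\media\big_video.mkv")  # placeholder paths
done.set()
t.join()

print(f"peak CPU during the copy: {max(samples, default=0.0):.0f}%")
```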

 

Example Storage Network

 

Here is an example that I think is relatively practical for the situation outlined below, especially if you're willing to invest in high-end hardware. For this we're going to focus on two concepts: private data and public data.

 

  • Private Data is data that is accessible to only one user on one computer.
  • Public Data is data that is accessible to any number of users on any number of computers.

In this case, public data could be represented as your shared music library over the network, while private data behaves like a local hard drive inside your computer.

 

We would also like this storage network to be repurposable, if possible: should your needs change, it's desirable to be able to reuse the hardware you already have.

 

The situation:

  • A household with multiple devices.
  • There may or may not be kids with their own PCs.
  • Laptops, desktops, phones, tablets and other devices.

The needs:

  • Data redundancy, as complete as possible.
  • Ability to keep data public and/or private.
  • Ability to easily share public data with new devices.

Other considerations:

  • Support for game streaming, whether Steam, Nvidia or otherwise.
  • Programs can be installed on remote storage.
  • Systems can be run with only a boot drive, or possibly diskless.

 

post-653-0-88826000-1393596518_thumb.png

 

It's vastly simplified, but here are the features of the two parts of the network.

On the iSCSI (private) side:

 

  • iSCSI over 10GbE will appear like a local disk to a PC. Can store data on it and install programs on it.
  • Fast RAID array and caching will make the iSCSI target very fast, like an SSD. Slightly worse performance than a local SSD, but better than over a 1GbE network.
  • Because an iSCSI target is part of a redundant volume, all data stored on it is safer than a single local disk.
  • If PC crashes, only data on the boot drive is lost.
  • With server-grade NICs and specialized software, can run a diskless machine (Windows Deployment Services)
  • Can use GbE to save money instead.

 

On the NAS (public) side:

  • If the backbone to the SAN appliance is a 4x1Gb NIC, multiple machines can access data quickly at the same time.
  • Because data is shared NAS-style, anyone with valid credentials can access it.
  • Shares are part of redundant volumes, so data is safe.

 

There are other parts that are intermingled with both networks:

 

  • If games installed on My PC are stored on an iSCSI target, then the installation data is safe.
  • If My PC has lots of graphics horsepower, then it can be the backbone for game streaming.
  • Game streaming is done over the GbE/wireless network.
  • Kids with their own desktop PCs can have a similar setup and stream to their own devices.
  • With Steam family sharing coming out, users can play each other's games.
  • An Nvidia Shield can pull graphics horsepower from a PC over the GbE/wireless network.

 

Let's check if we met our needs:

 

Data redundancy, as complete as possible? With the exception of local PC data, yes.

  • iSCSI targets keep private data safe (and potentially responsive)
  • CIFS/Samba shares keep public data safe

For local PC data, do proper backups to CIFS shares or iSCSI targets (see the sketch after this list):

  • Laptops can back up to CIFS shares over wireless, iSCSI if they are plugged into a hard line.
  • Local PCs can do either, though backing up to an iSCSI target gets around Windows Network Backup restrictions to Professional or Ultimate versions.
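As a minimal sketch of a scripted backup to a mapped share or iSCSI-backed drive letter (the paths are placeholders, and this is not a substitute for proper backup software with versioning):

```python
import shutil
from datetime import date
from pathlib import Path

SOURCE = Path(r"C:\Users\Me\Documents")                               # placeholder local folder
DEST   = Path(r"Z:\backups") / f"documents-{date.today():%Y-%m-%d}"   # placeholder target volume

# copytree preserves the folder structure; dirs_exist_ok lets a rerun on the
# same day top up the snapshot instead of failing (Python 3.8+).
shutil.copytree(SOURCE, DEST, dirs_exist_ok=True)
print("backed up", SOURCE, "to", DEST)
```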

 

The ability to keep data public and/or private? Yes.

  • iSCSI keeps private data private.
  • CIFS Shares allow for shared data.

 

The ability to easily share public data with new devices? Yes.

  • Browse to CIFS share to access data.
  • Fully public shares can be set to read-only, others can be set to read/write depending on login.

 

How about our other considerations?

 

Programs installed on remote storage? Yes.

  • iSCSI targets provide safe storage. Depending on SAN appliance, very fast storage.

 



 

Game streaming? Yes.

  •  Games stored on safe iSCSI target, accessed from local PC, Steam Client and other devices.

 

Boot-drive-only or diskless systems? Maybe. You'll need software like Windows Deployment Services and specialized NICs to boot from an iSCSI target.

 

Also, you know you want to see this when you open up "My Computer":

 

post-653-0-46047400-1393598788.png

 

Games and Game Footage are both redundant iSCSI targets on the same SAN appliances as were outlined earlier on.

 

This is obviously only one combination. For instance, if you had a Steam Machine you could set it up such that those program files were on their own iSCSI target rather than doing in-home streaming to clients. But hopefully you can see that there are ways to get beefy home storage solutions with good networking equipment and a strong storage server.

 

That's all I have for now. Next will probably be some sort of DIY home storage guide, which I will tie this in to somewhat, including the kinds of hardware you'd need to make this possible.

 

If you have any constructive criticism, please let me know. I want this to be as accurate and informative as possible.



Very informative, thanks for writing this to inform network storage newbies like me.


 Updated with some info about CIFS. I'll do more complete work when I'm at work and have access to better resources.



Oh c'mon! We all know SAN is just NAS backwards. There's no difference.

 

 

;):P

 

Thanks for the nice write up.

 

What would you consider an external drive expander?

15" MBP TB

AMD 5800X | Gigabyte Aorus Master | EVGA 2060 KO Ultra | Define 7 || Blade Server: Intel 3570k | GD65 | Corsair C70 | 13TB


That's meant to be used with a RAID card that has some ports on the outside of the PCI-E bracket.

 

Like this. SAS can be chained for 100+ drives if you really need lots of storage.



Yeah, but would it be called a drive expander? Or is there a proper name for it?

15" MBP TB

AMD 5800X | Gigabyte Aorus Master | EVGA 2060 KO Ultra | Define 7 || Blade Server: Intel 3570k | GD65 | Corsair C70 | 13TB


Added NAS benchmarks.




Finished for now, I added an example storage network along with some requirements, and outlined how the network meets those requirements.

 

As always, constructive criticism is appreciated.

 

Yeah, but would it be called a drive expander? Or is there a proper name for it?

I believe it's just called a SAS expander.



Let's say I will only have one computer on my network other than the storage server. Should I go for more than 1 Gbps? Maybe look at the quad-port NICs? I do like the idea of installing programs on the NAS and accessing them quickly. Also, would I need two quad NICs to get that speed, one in the host computer and one in the actual server? Thanks, I'm looking for some feedback. I'm just now setting up a home server so I can have access to my documents, pictures and videos on the go. It will also double as an HTPC, since there will be 4x 4TB and 3x 1TB drives in there. Thanks.


Maybe? It really depends. Local storage is still good for a single user; these setups are used when lots of computers need to connect to them, or you have a really small computer that also needs access to tons of storage.

 

If you decide to do this, I'd just go with GbE; you'll only see reduced speed in sequential transfers.

 

If you want full performance, then I wouldn't recommend having two 4-port NICs, because you can't just connect them together directly. You'd need a managed switch to allow them to aggregate. I would recommend directly connecting those two with a 10GbE NIC like this in both computers. They're about as expensive as a 4x1GbE NIC.
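If you do go the direct 10GbE route, it's worth verifying the raw link speed before blaming the storage. iperf is the usual tool, but a bare-bones equivalent in Python (the port number is arbitrary) looks like this; run it with "server" on one machine and "client <server-ip>" on the other:

```python
import socket
import sys
import time

PORT  = 50505                  # arbitrary test port
CHUNK = b"\0" * (1024 * 1024)  # 1 MiB per send
TOTAL = 2048                   # push 2 GiB through the link

if sys.argv[1] == "server":
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(1 << 20):   # drain everything the client sends
                pass
else:
    start = time.perf_counter()
    with socket.create_connection((sys.argv[2], PORT)) as s:
        for _ in range(TOTAL):
            s.sendall(CHUNK)
    print(f"~{TOTAL / (time.perf_counter() - start):.0f} MB/s over the link")
```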



Thanks for the suggestion, I won't get the quad NIC. I wasn't going to, but I just wanted a second opinion. Thanks a lot man. Any experience with Plex? That was the biggest thing I was going to use the NAS for. NAS and an HTPC is a good combo, but many people suggest I should separate them. Last question: I'm using maybe two old drives from a previous computer, and one of them burned out on me (during the summer, I think because of the heat). Should I still use those drives? I was just going to put everything in RAID, the 4TBs and the 1TBs, to make one gigantic pool. Maybe I should separate the two: RAID 0 with the older ones and another RAID 0 with the newer 4TB ones.


Is there anything a little bit cheaper than the X540? Also, is there a specific speed I need in order to install programs on the NAS?


Maybe on Ebay, but nothing that I know of.

 

You could go with a two-port gigabit NIC. Here's one from Amazon that's on a huge sale.




Thanks, I will go with the 2-port NIC.


This is a good one. How much detail do you want to go into? I saw a few things that might be worth mentioning:

  • 10GbE has higher throughput than 1GbE but slightly higher latency, and IOPS will still be reduced compared to a local SSD
  • SFP+ can be run point to point
  • The difference between block-level and file-level sharing and what it means for the user
  • Samba is single-threaded per connection (a bottleneck for file-level transfers)
  • InfiniBand, Fibre Channel, etc. as a comparison, extending beyond 1GBASE-T, 10GBASE-T and SFP+ (fibre or DAC over Ethernet), and maybe even the high power consumption of 10GBASE-T and the noise the switches generate because of it

I call for a sticky. Thanks a lot for the work you put into this guide; it really can serve as a future reference.



I thought about a lot of those things, but decided to hold off (they'll probably come in the future). I also wanted this to be more practical than theoretical, since very few people on here would consider using InfiniBand or Fibre Channel for storage.

 

Linus has (briefly) brought up SFP+ before (a video unboxing of his 10GbE network cards), maybe that'll come too. Also, the minimum latency for 10GBASE-T is only 1 microsecond longer than 1000BASE-T, but the maximum is far less (source). For any packet above 512B, the increased throughput of 10GbE will make overall transfer rate faster.

 

I call out for a sticky, Thanks a lot for the work you put into this guide. This really can serve as a future reference

Unfortunately, it's not useful to enough people to warrant a sticky (at least in my opinion). I do appreciate that though :)


I am talking about the technical side, not the theory. In my experience, while 1GbE latency is under 1 ms on real networks, 10GbE tends to be around 4-5 ms and up to 14-15 ms (unless you dig deep, or even deeper, into your pocket).
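Rather than argue about datasheet figures, it's easy to measure on your own gear. A bare-bones TCP echo probe (same server/client pattern as the throughput sketch earlier, port is arbitrary) gives a decent feel for round-trip latency through your NICs and switch:

```python
import socket
import statistics
import sys
import time

PORT = 50506  # arbitrary; run "server" on one box, "client <server-ip>" on the other

if sys.argv[1] == "server":
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            for data in iter(lambda: conn.recv(64), b""):
                conn.sendall(data)      # echo everything straight back
else:
    rtts = []
    with socket.create_connection((sys.argv[2], PORT)) as s:
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # send tiny packets immediately
        for _ in range(200):
            t0 = time.perf_counter()
            s.sendall(b"x" * 64)
            s.recv(64)
            rtts.append((time.perf_counter() - t0) * 1000)
    print(f"min RTT {min(rtts):.3f} ms, median {statistics.median(rtts):.3f} ms")
```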



Unfortunately, it's not useful to enough people to warrant a sticky (at least in my opinion). I do appreciate that though :)

Oh, and I disagree! The only thing you would have to do is provide more general information and not focus on SAN for 70% of the article. You can extend the NAS part a little bit, and I can step in as well with a hardware section to help with picking hardware according to the needs.

 

I see potential in this




I see that in your simplified diagram you have the storage server AND the backup machine using the switch. With me being on a one-user network, should I still use a switch, or just use the gigabit ports on my router?


Also, I thought about it, and the only things transferring to my desktop will be documents. The server is what really needs the network speed, because it's also going to run Plex, and I have full 40 GB Blu-ray rips that I would like to stream in other rooms. It will rarely be to my actual desktop; mostly it will be to a phone with AC wireless and a Chromecast.

