
Difference between SAN and NAS

KraftDinner

I'm pretty lost. After doing a bunch of reading on the subject I can't seem to figure out how these two differ from each other. In my eyes both SAN and NAS are just storage attached to a network, so can someone please break down the difference between the two in simple terms? Thanks!


SAN compared to NAS

Network-attached storage (NAS) was designed before the emergence of SAN as a solution to the limitations of the traditionally used direct-attached storage (DAS), in which individual storage devices such as disk drives are connected directly to each individual computer and not shared. In both a NAS and SAN solution the various computers in a network, such as individual users' desktop computers and dedicated servers running applications ("application servers"), can share a more centralized collection of storage devices via a network connection through the LAN.

Concentrating the storage on one or more NAS servers or in a SAN instead of placing storage devices on each application server allows application server configurations to be optimized for running their applications instead of also storing all the related data and moves the storage management task to the NAS or SAN system. Both NAS and SAN have the potential to reduce the amount of excess storage that must be purchased and provisioned as spare space. In a DAS-only architecture, each computer must be provisioned with enough excess storage to ensure that the computer does not run out of space at an untimely moment. In a DAS architecture the spare storage on one computer cannot be utilized by another. With a NAS or SAN architecture, where storage is shared across the needs of multiple computers, one normally provisions a pool of shared spare storage that will serve the peak needs of the connected computers, which typically is less than the total amount of spare storage that would be needed if individual storage devices were dedicated to each computer.
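To put hypothetical numbers on that pooling argument (a back-of-the-envelope sketch; the figures below are made up for illustration, not from the article):

```python
# Hypothetical figures: 10 application servers, each needing up to 200 GB of
# spare space at its own peak, but with peaks that rarely coincide.
servers = 10
spare_per_server_gb = 200     # spare each DAS box must hold for its own worst case
pooled_peak_spare_gb = 600    # assumed combined worst case when storage is shared

das_total_spare = servers * spare_per_server_gb    # 2000 GB stranded across boxes
savings = das_total_spare - pooled_peak_spare_gb   # 1400 GB less to provision

print(f"DAS spare: {das_total_spare} GB, shared pool: {pooled_peak_spare_gb} GB, "
      f"saved: {savings} GB")
```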

In a NAS solution the storage devices are directly connected to a "NAS server" that makes the storage available at a file level to the other computers across the LAN. In a SAN solution the storage is made available via a server or other dedicated piece of hardware at a lower "block level", leaving file system concerns to the "client" side. SAN protocols include Fibre Channel, iSCSI, ATA over Ethernet (AoE) and HyperSCSI. One way to loosely conceptualize the difference between a NAS and a SAN is that NAS appears to the client OS (operating system) as a file server (the client can map network drives to shares on that server) whereas a disk available through a SAN still appears to the client OS as a disk, visible in disk and volume management utilities (along with the client's local disks), and available to be formatted with a file system and mounted.
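A minimal sketch of that difference from the client's point of view; the mount point and device name below are hypothetical placeholders:

```python
from pathlib import Path

# NAS: the client sees files on a share exported by the NAS server
# (the path below is a placeholder for an NFS or SMB mount point).
nas_file = Path("/mnt/nas_share/report.txt")
print(nas_file.read_text())

# SAN: the client sees a raw block device (placeholder device name); the
# client itself must partition, format and mount it, just like a local disk.
with open("/dev/sdb", "rb") as lun:     # LUN presented over iSCSI/FC
    first_sector = lun.read(512)        # raw bytes, no file-system semantics
print(first_sector[:16].hex())
```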

One drawback to both the NAS and SAN architectures is that the connections between the various CPUs and the storage units are no longer dedicated high-speed buses tailored to the needs of storage access. Instead the CPUs use the LAN to communicate, potentially creating bandwidth bottlenecks.

While it is possible to use the NAS or SAN approach to eliminate all storage at user or application computers, typically those computers still have some local direct-attached storage for the operating system, various program files, and related temporary files used for a variety of purposes, including caching content locally.

To understand their differences, a comparison of DAS, NAS and SAN architectures[2] may be helpful.

 

Source: https://en.wikipedia.org/wiki/Storage_area_network


Network Attached Storage (NAS) is file-level storage over a network medium; a Storage Area Network (SAN) is block-level storage over a network medium. That's the general, highly simplified version: block-level devices appear as physical disks to the server OS.

 

NAS protocols would be NFS, SMB, AFP, etc.

SAN protocols would be iSCSI, FC, FCoE, iFCP, SAS (when using SAS switches), etc.
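As a rough sketch of how those two paths look on a Linux client (the address and export path are placeholders; assumes open-iscsi and the NFS client tools are installed and this runs as root):

```python
import subprocess

PORTAL = "192.0.2.10"  # placeholder storage address (TEST-NET-1 range)

# SAN path (iSCSI): discover targets on the portal, then log in; after login
# the LUN appears to the OS as a local block device (e.g. /dev/sdX).
subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
               check=True)
subprocess.run(["iscsiadm", "-m", "node", "--login"], check=True)

# NAS path (NFS): mount an exported file system; the client only ever sees
# files and directories, never the underlying disks.
subprocess.run(["mount", "-t", "nfs", f"{PORTAL}:/export/share", "/mnt/nas"],
               check=True)
```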

 

There has been a large shift, mostly driven by NetApp, to move VMware environments from block-level storage and VMFS to NFS datastores. NetApp is what we have at work: we use NFS for our VMware servers, iSCSI for SQL/Exchange, and SMB/NFS for file shares.


Thank you, this helped clarify a lot!


How do you deal with HA on file-level storage?

I used a NetApp back in 2009 and it didn't support multiple active paths or seamless failover at the file level, whereas block storage has multipathing.
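For reference, on a Linux initiator with device-mapper multipath configured, you can see those redundant block paths directly (a minimal sketch; assumes multipath-tools is installed and runs as root):

```python
import subprocess

# Each SAN LUN is reachable over several independent paths (HBAs, switches,
# controllers); dm-multipath merges them into one device and fails over
# between them. 'multipath -ll' lists the paths and their current states.
subprocess.run(["multipath", "-ll"], check=True)
```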



The production SAN is a 4-controller-node 8040 in cluster mode on each site. There are 3 SVMs: ESX, NAS and SQL. ESX runs on 2 nodes and NAS/SQL on 1 node each; if there is a failure the SVM will move to another node. Each node has an active-passive 10Gb team, and LIF groups are used for network redundancy. This protects us from node and network failures, and the 2 IP addresses for the NFS datastores can seamlessly move between nodes if the preferred owner goes offline, whether from a network fault or a node fault. The same goes for any of the other SVMs.

 

 

The backup SAN is a 4-controller-node 3220 in cluster mode across 2 sites, with only 1 SVM, which runs on all 4 nodes; LIF groups are used again, with 4 IP addresses. It is used for CommVault disk backup via SMB shares.

 

Edit: You also need to install all the NetApp VMware plugins for VAAI, etc.


Interesting, thanks for that.

The last time I used a NetApp it was a 2050; sounds like things have moved on.


