SAN Implementations Roundup?

leasoncre

Does LTT/someone have a roundup of SAN implementations? (i.e.: if this, then that)

There are a large number of NAS/SAN implementations and software packages to run them on.

Can someone let me (/the community) know which features/software are good for which environment?

... there's Unraid, FreeNAS, ReadyNAS, and many more...

What would be the use case for each?

When shouldn't you use a particular NAS vendor/software?

And is there a tutorial/link we can use to learn how to set up/configure the software?

 

With so many options, it's hard to determine what works best with which implementation.

 

I'm looking to set up a local SAN to support my ESXi hosts, but I'm finding it hard to determine what I should run on the SAN boxes.

And I'm sure most folks are just as confused about NAS implementations as well (less enterprise, more consumer-network level).

 

2 ESXi hosts, 2 SAN hosts, 10G local SAN connections. iSCSI connections planned, with multipath. I'm not sure if multiple ESXi hosts can access the same target simultaneously; I had it working briefly on a test machine before I found that NFS was all I needed for reference files. But I want a proper redundant HA/FT setup for the VMs.
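For reference, here's roughly what I expect the per-host iSCSI/multipath setup to look like; the adapter name, VMkernel ports, and addresses below are placeholders, not my actual values:

    # enable the software iSCSI initiator
    esxcli iscsi software set --enabled=true
    # bind two VMkernel ports to the initiator for multipathing (names are placeholders)
    esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2
    # point dynamic discovery at the SAN box (address is a placeholder)
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.10.10:3260
    # set round-robin pathing on the discovered LUN (device ID is a placeholder)
    esxcli storage nmp device set --device=naa.600000000000000000000001 --psp=VMW_PSP_RR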

Thanks for the help. The internet is a mess, and after an hour or two of Googling/forum searching, I couldn't find any specific info on this.

 

Surprised LTT doesn't have a SAN video. They have a NAS one or two about their video/file storage, but they don't do much virtualization...


Why iSCSI? I'd personally go NFS here, but that's me; it should work fine.
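For comparison, mounting an NFS export as a datastore is a one-liner per host; the NAS address and export path below are placeholders:

    # mount an NFS export as a datastore (address and path are placeholders)
    esxcli storage nfs add --host=192.168.10.20 --share=/mnt/tank/vmstore --volume-name=nfs-vmstore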

 

Or run vSAN, skip the SAN entirely, and go hyperconverged.

 

I'm guessing LTT doesn't do SAN content because even power users don't normally go into the SAN world, and their goal is a larger viewership.

 

How much storage? Budget? Are you going SSD-only or a combo?

 

FreeNAS is pretty common here for a home SAN/VM storage box. You can also roll your own Linux/BSD setup. There are also community/cheap editions or trials of lots of enterprise software like Nutanix, Storage Spaces Direct, vSAN, Ceph, and others.
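If you do roll your own on Linux, the in-kernel LIO target is driven with targetcli; a minimal sketch, assuming a placeholder spare disk and made-up IQNs:

    # back the LUN with a spare block device (path is a placeholder)
    targetcli /backstores/block create name=vmlun0 dev=/dev/sdb
    # create the iSCSI target (IQN is made up)
    targetcli /iscsi create iqn.2024-01.lab.san1:vmstore
    # export the LUN and whitelist a placeholder ESXi initiator IQN
    targetcli /iscsi/iqn.2024-01.lab.san1:vmstore/tpg1/luns create /backstores/block/vmlun0
    targetcli /iscsi/iqn.2024-01.lab.san1:vmstore/tpg1/acls create iqn.1998-01.com.vmware:esxi1
    # persist the config across reboots
    targetcli saveconfig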

 

 


Have a look at the ESOS project; it's all about providing block-level storage on ordinary server hardware.

The problem with real SANs is that you are basically in 'just buy a textbook' territory. Even cheap used enterprise gear is thousands of dollars, and modern production SANs are inoperable without support contracts, which further starves the secondary market. At homelab scale, most people are better off just running a Proxmox cluster to achieve HA.
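For what it's worth, clustering Proxmox nodes only takes a few commands; the cluster name and address below are placeholders:

    # on the first node: create the cluster (name is a placeholder)
    pvecm create homelab
    # on each additional node: join, pointing at the first node's address (placeholder)
    pvecm add 192.168.1.10
    # verify quorum and membership
    pvecm status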



SAN is generally used in larger, more complex environments. Its advantage is low-latency, high-speed block-level disk performance.

SAN is not a good fit for every environment; often you need a NAS instead, e.g. when multiple Windows clients need to access the same share.

 

As for ESXi: SAN is a good choice. You present a LUN to multiple servers, and ESXi handles simultaneous access to the same LUN (VMFS is a clustered filesystem with on-disk locking, so shared access is by design).
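As a rough sketch: one host formats the LUN as VMFS and the others simply rescan. The device ID below is a placeholder, and this assumes a partition has already been created on the LUN:

    # format the LUN as VMFS6 from one host (device ID is a placeholder)
    vmkfstools -C vmfs6 -S shared-vmfs /vmfs/devices/disks/naa.600000000000000000000001:1
    # on the remaining hosts, rescan so the new datastore appears
    esxcli storage core adapter rescan --all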

 

For most home users and small offices, SAN is not a realistic choice, as it adds complexity and requires knowledge to handle it; that's often the reason you won't find much 'home use' info on SANs.

 

Another reason: SAN storage is generally more expensive than NAS solutions, since it is 'bigger', more robust, and actually built for considerably higher performance and scalability. Again, something that is overkill for SOHO.

 

However, if you want to 'just try', you can buy cheap FC adapters and build your own 'storage' on Linux using SCST. That's actually what I've done at home, as 10G Ethernet is currently still too expensive and 1G is a bit slow.
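For a flavour of how little configuration SCST needs, a minimal /etc/scst.conf exporting one disk over FC looks roughly like this; the QLogic target driver, device path, and WWN below are assumptions/placeholders:

    # minimal sketch of /etc/scst.conf (placeholder values throughout)
    HANDLER vdisk_blockio {
        DEVICE disk0 {
            filename /dev/sdb
        }
    }
    TARGET_DRIVER qla2x00t {
        TARGET 50:01:43:80:12:34:56:78 {
            enabled 1
            LUN 0 disk0
        }
    }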


5 hours ago, leasoncre said:

ReadyNAS

 

5 hours ago, leasoncre said:

What would be the use case for each?

ReadyNAS: never. I had way too many firmware issues and bugs with those. Not directly relevant, but they have burned me enough that I won't miss the opportunity to have a dig at them.

 

5 hours ago, leasoncre said:

I'm looking to set up a local SAN to support my ESXi hosts, but I'm finding it hard to determine what I should run on the SAN boxes.

And I'm sure most folks are just as confused about NAS implementations as well (less enterprise, more consumer-network level).

NAS and SAN are pretty well intermixed terms nowadays; most systems can present any type of storage protocol, so it's largely meaningless to try to distinguish between them. Some platforms do NFS and SMB better than they do iSCSI or FC, but that doesn't mean it's best to call such a system a NAS, for example.

 

5 hours ago, leasoncre said:

2 ESXi hosts, 2 SAN hosts, 10G local SAN connections. iSCSI connections planned, with multipath. I'm not sure if multiple ESXi hosts can access the same target simultaneously; I had it working briefly on a test machine before I found that NFS was all I needed for reference files. But I want a proper redundant HA/FT setup for the VMs.

Thanks for the help. The internet is a mess, and after an hour or two of Googling/forum searching, I couldn't find any specific info on this.

My advice would be to purchase a VMUG subscription ($200 USD/yr), scrap the SAN hosts, and go with a 3-server VMware vSAN setup, which will give you fully scalable and fault-tolerant storage for VM hosting.
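Roughly, the per-host CLI bootstrap for vSAN looks like the following (normally you'd drive it all from vCenter instead); the VMkernel interface and cluster UUID below are placeholders:

    # tag a VMkernel interface for vSAN traffic (vmk1 is a placeholder)
    esxcli vsan network ip add -i vmk1
    # the first host creates the cluster...
    esxcli vsan cluster new
    # ...and the remaining hosts join it (UUID is a placeholder)
    esxcli vsan cluster join -u 52xxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx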

 

5 hours ago, leasoncre said:

Surprised LTT doesn't have a SAN video. They have a NAS one or two about their video/file storage, but they don't do much virtualization...

Because LTT doesn't have the skill set and experience to do such videos, and doesn't have the editorial confidence to stand by advice they may give or demonstrate. The current server videos are heavily prefaced as, or implied to be, "not best practice".


16 hours ago, leadeater said:

Because LTT doesn't have the skill set and experience to do such videos, and doesn't have the editorial confidence to stand by advice they may give or demonstrate. The current server videos are heavily prefaced as, or implied to be, "not best practice".

Also, those of us who do use SANs don't take advice from LTT :P Additionally, SANs start to get too technical and irrelevant for 99.9% of LTT's audience.

I cringe at much of the server setup they already do; pretty sure they don't have much redundancy against downtime in most of their servers, like chassis with redundant PSUs, multipathed storage, or network LAGs. But they're a small private company that can continue to function and have continuity with servers offline, and their videos are aimed at 'prosumers'; they aren't having to deal with affecting other companies and thousands/millions of customers if something goes down.

 

We deal with vendors and their best and supported practices for implementing their solutions. We, for example, use 3PAR, EMC, NetApp & Nimble across our customers.

SANs really are enterprise solutions for datacenters and/or deployments spanning multiple locations. NAS is a much simpler solution for those with just a rack or a few hosts, and these days NAS boxes support iSCSI, NFS, CIFS, and all the protocols you could need.

 

 



9 hours ago, Jarsky said:

I cringe at much of the server setup they already do

Yeah, it's often a demonstration of what not to do; fun to watch though.

 

9 hours ago, Jarsky said:

We, for example, use 3PAR, EMC, NetApp & Nimble across our customers.

Personally I really like NetApp myself. It integrates so well with Commvault, and together with SnapVault/SnapMirror you get wickedly fast, space-efficient backups where it's really easy to define where copies are kept and for how long, without any difficult configuration. There's a trade-off argument to be had about giving up Commvault's global deduplication database (DDB) in such a solution, but for us it's actually worked out significantly better in every way: we're keeping retention on disk longer while using less storage capacity than previously, you don't need a super high-end SSD array or NVMe to host a DDB, and you don't need to set up a DDB sealing rotation since you're not using a DDB at all. You can do this without Commvault in the mix, but setup, monitoring, and management are a lot easier if you use it, plus we're still doing DDB backups for other, non-NetApp platforms.

 

What we didn't manage to save money on was the media agents, as we had already purchased them before making the change. So we have six DL380s with four NVMe drives each and quad 25Gb networking, handling about a tenth of the DDB load they were originally spec'd for, and most data movement is out-of-band, directly between NetApp clusters.

 

A lot goes into picking storage platforms, more than just figuring out whether you need block and/or file-based storage, and the big vendors will talk far more about data management capabilities than about protocols, raw performance, or capacity. Aiding business workflows is far more valuable than saving a few percent on the purchase price: for example, being able to create point-in-time clones of SQL databases and mount them anywhere you want in seconds, using no extra storage, versus hours if the database is a few TB, plus the overhead if you need multiple copies or multiple points in time. Without thin provisioning, compression, FlexClones, etc. we'd be using something like 44 times the storage we actually do.
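As a concrete illustration of that last point, a FlexClone of a database volume mounts in seconds from the ONTAP CLI; the vserver, volume, and snapshot names below are placeholders:

    # clone a SQL volume from an existing snapshot (all names are placeholders);
    # the clone consumes no extra space until blocks diverge
    volume clone create -vserver vs1 -flexclone sqldb_clone -parent-volume sqldb_vol -parent-snapshot nightly.0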

