Which Server OS? I'm not staying with FreeNAS

Hello my friends. I currently have a server running FreeNAS with Plex and Nextcloud on it, and it has been working great. The problem is that I found out how difficult it is to add HDDs to my one big share, as well as the lack of software support for FreeNAS. I want to run BOINC/Folding@home, Plex, and openHAB-type stuff, and creating a VM to do this doesn't solve the fact that I can't really add HDDs to my ZFS pool in FreeNAS. I would like to use ESXi and either a Linux server or Windows Server 2016, creating one big storage share available to the network. Could anyone with insight suggest which way to go with the operating system?


If you want to use ESXi and only run a single server, you've only got two options: hardware RAID, or HBA passthrough to a VM that handles the storage and presents it back to the ESXi host over either NFS or iSCSI.

 

Just don't put the storage VM itself on the datastore created by that VM; power it down and you'll never be able to turn it back on. Use a local disk for that VM.
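To make the loopback concrete, here is a rough sketch of presenting a storage VM's NFS export back to the ESXi host with real `esxcli` commands. The hostname, share path, and datastore name are placeholders, not details from this thread, and this assumes the VM is already exporting the path over NFS.

```shell
# Mount the storage VM's NFS export on the ESXi host as a datastore.
# "stor01.lan", "/mnt/tank", and "tank-ds" are example names only.
esxcli storage nfs add --host stor01.lan --share /mnt/tank --volume-name tank-ds

# Confirm the datastore is mounted:
esxcli storage nfs list
```

VMs placed on `tank-ds` then live on storage served by a VM on the same host, which is exactly why the storage VM itself must sit on a local disk instead.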

 

What about Proxmox with mdadm, Ceph, or Gluster?


I found ESXi to be subpar at configuring VMs, in that the core assignment was wonky. Granted, I don't have much experience with it. I'm under the impression that adding a disk to something other than JBOD can only be done in Windows Storage Spaces. Any kind of RAID probably requires an entire array rebuild, which is a pain in any OS. JBOD and Storage Spaces are kind of the same thing, in that it's just disks clumped together.

 

I am running Windows Server 2012 R2 with a RAID 5, but all 6 of my drive bays are full, so I don't have to worry about adding disks.



Sorry, I should have been more detailed. It's an HP ProLiant DL380 G5 with a P400 RAID card and 4x 2TB drives in mirrors, so I only see two. I know FreeNAS loads from my USB stick into RAM; if the other OS doesn't do the same, I'll install an SSD for the OS and leave the storage free.


12 minutes ago, Psittac said:

JBOD and Storage Spaces are kind of the same thing, in that it's just disks clumped together.

That's not how Storage Spaces works. You have all the same options as ZFS (mirror, parity, stripe), while also being able to add disks to an existing pool and volume; the data will redistribute across the new disks, leveling it out, which is something ZFS doesn't do.
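As a sketch of what that expansion looks like in practice, the PowerShell below uses the real Storage Spaces cmdlets; the pool name "Pool1" is a placeholder, and the rebalance cmdlet is, to my knowledge, available on Windows Server 2016 and later.

```powershell
# Find disks that are eligible to join a pool and add them to an
# existing pool named "Pool1" (example name only):
$new = Get-PhysicalDisk -CanPool $true
Add-PhysicalDisk -StoragePoolFriendlyName "Pool1" -PhysicalDisks $new

# Redistribute existing data across all disks in the pool,
# including the newly added ones (Server 2016+):
Optimize-StoragePool -FriendlyName "Pool1"
```

No array rebuild is required; the pool simply grows and the data is leveled out across the new disks.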

 

12 minutes ago, Psittac said:

I found ESXi to be subpar at configuring VMs, in that the core assignment was wonky.

How so? ESXi is basically the benchmark for VM hosting/hypervisors. You just pick how many vCPUs you want for a VM and it'll handle the rest for you. Want 3? Pick 3; just actually have that many real CPU cores. You can also share real cores between VMs, which is good when no single VM will use the full resources of its assigned cores.

 

I'd say hypervisors like unRAID are actually the ones doing it in a wrong/confusing way, but the way they do it fits well with the people who want to use it. Having to select which physical cores a VM uses doesn't scale when you have 30 VMs on a single server; admin-wise, picking like that would be a real pain.


1 minute ago, leadeater said:

That's not how Storage Spaces works. You have all the same options as ZFS (mirror, parity, stripe), while also being able to add disks to an existing pool and volume; the data will redistribute across the new disks, leveling it out, which is something ZFS doesn't do.

 

How so? ESXi is basically the benchmark for VM hosting/hypervisors. You just pick how many vCPUs you want for a VM and it'll handle the rest for you. Want 3? Pick 3; just actually have that many real CPU cores. You can also share real cores between VMs, which is good when no single VM will use the full resources of its assigned cores.

 

I'd say hypervisors like unRAID are actually the ones doing it in a wrong/confusing way, but the way they do it fits well with the people who want to use it. Having to select which physical cores a VM uses doesn't scale when you have 30 VMs on a single server; admin-wise, picking like that would be a real pain.

When I first got my server I installed Dell's ESXi image, and for some reason I couldn't properly assign cores. I'm sure it was just a confusing interface; all I wanted to do was assign as many cores/threads as possible and install Windows 10 to run Cinebench. I just couldn't get it sorted out and decided to ditch the whole VM idea. I'm happy with my choice, but I'm sure some patience and research would have let me configure it properly.



11 minutes ago, Benjamin_ONeal said:

Sorry, I should have been more detailed. It's an HP ProLiant DL380 G5 with a P400 RAID card and 4x 2TB drives in mirrors, so I only see two. I know FreeNAS loads from my USB stick into RAM; if the other OS doesn't do the same, I'll install an SSD for the OS and leave the storage free.

Then ESXi should work well as it is. Just create two datastores, one for each of those mirrors, then pick whichever OS you want for the VM that hosts the file shares.

 

Install ESXi on a USB drive as well; that's the most common way it's done. Newer servers have dedicated SD card slots for that instead of USB, but it's the same deal.


So Proxmox is very similar to ESXi but with different and/or added features. I'm open to it; I'm really using ESXi so I can view and manage the system remotely without SSHing into the OS. I just don't know which OS makes it easier to create network shares of my storage space.


4 minutes ago, Benjamin_ONeal said:

So Proxmox is very similar to ESXi but with different and/or added features. I'm open to it; I'm really using ESXi so I can view and manage the system remotely without SSHing into the OS. I just don't know which OS makes it easier to create network shares of my storage space.

If you have a Windows license and your goal is to have some SMB network shares, then I would use that. Windows does a good job, and compatibility-wise with other Windows systems it's the best you'll get. I've never been a fan of Samba under Linux for hosting SMB shares myself; being reverse engineered from the Windows implementation and lagging in features and in the stability of new features, I just find it's not worth the hassle when you do have Windows as an option.


Yes, this one has a USB port inside on the motherboard that I'm using; the next generation got the SD card slot.


OK, well, it's a pirated copy, I'm too poor to pay, but if Windows will let me put one big share on the network while using minimal system resources, then that's what I want. Then I can expand the mirror when the time comes and have program compatibility. Windows Server also has that ReFS file system; how do you guys feel about that?


7 minutes ago, Benjamin_ONeal said:

OK, well, it's a pirated copy, I'm too poor to pay, but if Windows will let me put one big share on the network while using minimal system resources, then that's what I want. Then I can expand the mirror when the time comes and have program compatibility. Windows Server also has that ReFS file system; how do you guys feel about that?

Well, technically we can't help out when it's pirated software, so just don't say it is. By the way, if you have a school/college email address you can get a free legitimate license from Microsoft under the Microsoft Imagine program.


The license also covers Windows Server? I will look into that then. I haven't gone and used the images yet; I'm waiting to hear what you guys say. So should I use this Proxmox instead of ESXi? Either way I'll figure out how to use it.


6 minutes ago, Benjamin_ONeal said:

The license also covers Windows Server? I will look into that then. I haven't gone and used the images yet; I'm waiting to hear what you guys say. So should I use this Proxmox instead of ESXi? Either way I'll figure out how to use it.

Yeah, Windows Server.

 

Not sure which you should use; I don't use Proxmox myself, so I can't give much insight into it. Both will work, though Proxmox may let you skip creating a VM for the SMB shares.


Oh, so Proxmox might be able to make the shares available with no VM installed. Thank you very much for your help, leadeater. I might use VirtualBox to have a play around with these. Take care.


48 minutes ago, leadeater said:

Yeah, Windows Server.

 

Not sure which you should use; I don't use Proxmox myself, so I can't give much insight into it. Both will work, though Proxmox may let you skip creating a VM for the SMB shares.

You can make CIFS shares in Proxmox using Samba, like in any other Linux distro, from the command line.

 

In 5.2, which just came out, they added support for doing this in the web GUI.
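For the command-line route, a share is just a few lines in `smb.conf`. The sketch below is a minimal example; the share name, path, and group are placeholders, and it assumes the `samba` package is already installed on the Proxmox host.

```ini
; /etc/samba/smb.conf -- minimal share definition (example names only)
[media]
   path = /tank/media
   browseable = yes
   read only = no
   valid users = @smbusers
```

After editing the file, restart the `smbd` service so the share is picked up, and add Samba users with `smbpasswd -a <user>`.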

 


I suggest ESXi, or Linux + KVM (either libvirt or oVirt) + Gluster and/or Ceph. I work on Gluster:

 

https://www.gluster.org/

 

So my opinion may be skewed :P 

Our hyperconverged oVirt + Gluster setup works really well these days:

 

https://www.ovirt.org/documentation/gluster-hyperconverged/chap-Deploying_Hyperconverged/     <-- storage & compute on same nodes

https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/  <-- blog on deploying

 

From there you have a ton of flexibility. You can create a single namespace or multiple ones and share out to Windows / Linux / Mac / whatever. Gluster can serve GlusterFS (the native protocol), NFS v3 & v4, SMB, object, and block. If you wanted to go with Ceph, it has similar functionality; my rule of thumb is Gluster for file and Ceph for block. Adding disks / nodes / clients is easy, and all the components are free / open source.
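To give a feel for how easy growing a volume is, here is a sketch using the standard `gluster` CLI; the node hostnames and brick paths are placeholders, and this assumes the nodes are already peered into a trusted pool.

```shell
# Create a 2-way replicated volume across two nodes and start it.
# Hostnames and brick paths are example names only.
gluster volume create vol1 replica 2 node1:/bricks/b1 node2:/bricks/b1
gluster volume start vol1

# Later, grow the volume by adding another replicated brick pair,
# then rebalance existing data onto the new bricks:
gluster volume add-brick vol1 replica 2 node3:/bricks/b1 node4:/bricks/b1
gluster volume rebalance vol1 start
```

Clients mounting the volume keep working while bricks are added and the rebalance runs in the background.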

 

Happy to help if you have any questions!

 

-b


11 hours ago, bennyturns said:

If you wanted to go with Ceph, it has similar functionality; my rule of thumb is Gluster for file and Ceph for block. Adding disks / nodes / clients is easy, and all the components are free / open source.

Plus Gluster is a lot simpler than Ceph for newcomers.

