
Qayse

Member
  • Posts

    7
  1. If you do, would you mind sending me all the partitions besides the file partition? Or rather the whole operating system. We have one of these and managed to completely f*ck up the operating system, and now the damn thing won't do anything but spin its fans. (The operating system should be the first ~4 GB on all 4 disks, i.e. every partition besides the RAID file partition.) I googled for literally ages and found nothing, and Buffalo's support team won't help either. Thanks so much! Expect some cookies as a reward, mate! Some pics of the damn thing
  2. You could also use SAS SSDs, but NVMe just has a whole lot more throughput. (AFAIK Seagate has 12 Gb/s SAS SSDs with up to 4 TB per SSD.) I understand the idea (using NVMe for a video production server), but unless your workstations are all on 10G Ethernet and the storage server is connected with something like 8 links in load-balance mode, you won't be able to measure any difference, because at that point the network is the bottleneck: a single 10 GbE link tops out around 1.25 GB/s, which one NVMe drive can already saturate on its own (rough numbers in the sketch after this list).
  3. You could indeed connect the whole blob of storage to a Windows server using iSCSI and then host an SMB server from that Windows server. You could also just install Samba 4 and host SMB3 shares natively on Linux (see the Samba sketch after this list).
  4. I know. I have connected the iSCSI devices directly to the individual VMs instead of using them as storage for the VHDX files. So most of them are formatted as ext4 and mounted as / on the guest system itself, and not mounted at all on the host system; this gets you way closer to full HDD/SSD performance (initiator sketch after this list). You could, of course, mount the iSCSI device on the Windows Server host and format it as NTFS.
  5. Well, he talked about that Supermicro server, which indeed has 24 NVMe connectors, and then stated what a pain in the a** software RAIDs are and how he had no idea how to actually use the SSDs (can't RAID 'em).
  6. Sorry for not posting this in NAS, but since it's about the server and the storage itself, and not about SAN or NAS attachment, I think it fits better in this subforum.
  7. I just installed something exactly like Linus's planned NVMe storage server (SAS though, not NVMe, but it would work for NVMe) for a live-migration Hyper-V cluster (talked about in the WAN Show, November 25, 2016): use FreeBSD, put the drives in a ZPool in RAID-Z2 mode (ZFS's RAID 6 equivalent), and make a bunch of ZVols (remember, ONLY ONE CLIENT MAY WRITE AT A TIME, or data = potato salad). Connect them as iSCSI drives over dual 10G in load balancing to your individual stations. (SMB shares would work too, with no data corruption on multiple-user access, but iSCSI is faster and needs WAY less CPU.) ZFS is copy-on-write with snapshots, so you can never really lose data (as long as you have disk space); even if your whole system gets CryptoLocked, you go back to the snapshot (snapshots are read-only, so impossible to encrypt) and you're golden. For remote backup: rsync as a cronjob. Easy as pie. A rough command-line sketch of the whole setup follows after this list.
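
On the network-bottleneck point in post 2, a quick back-of-the-envelope comparison; the figures are rough, typical numbers, not measurements from any particular setup:

```sh
# Line rate only, ignoring protocol overhead (assumed, round numbers).
echo "one 10 GbE link : $((10000 / 8)) MB/s at best"   # ~1250 MB/s
# A single PCIe 3.0 x4 NVMe drive typically reads ~2500-3500 MB/s sequentially,
# so one drive can already saturate one link; a 12 Gb/s SAS SSD lands around
# ~1200-1500 MB/s, i.e. roughly the same ballpark as the link itself.
echo "one NVMe drive  : ~3000 MB/s typical sequential read"
```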
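Regarding post 3, a minimal sketch of the "Samba 4 natively on Linux" route. This assumes a Debian/Ubuntu-style box with the iSCSI storage already mounted at /mnt/storage; the share name, path, group and user here are placeholders:

```sh
# Install Samba (4.x on any current distro; it speaks SMB3 to modern clients by default).
apt install samba

# Append a minimal share definition to smb.conf.
cat >> /etc/samba/smb.conf <<'EOF'
[projects]
    path = /mnt/storage/projects
    read only = no
    valid users = @editors
EOF

# Samba keeps its own password database, so existing Unix users still need this once.
smbpasswd -a alice

systemctl restart smbd
```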
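For post 4, a sketch of what attaching an iSCSI LUN directly inside a Linux guest looks like with open-iscsi. The target IP, IQN, and device name are placeholders for whatever your storage box actually exports:

```sh
# Inside the Linux guest, not on the Hyper-V host.
apt install open-iscsi

# Find and log in to the target exported by the storage server.
iscsiadm -m discovery -t sendtargets -p 10.0.0.10
iscsiadm -m node -T iqn.2016-11.local.storage:vm01 -p 10.0.0.10 --login

# The LUN now appears as an ordinary block device (check dmesg / lsblk for the name).
mkfs.ext4 /dev/sdb
mount /dev/sdb /mnt    # or install the guest OS onto it and use it as /
```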
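And for post 7, a rough sketch of the FreeBSD side under the assumptions above (six SAS drives da0-da5, a pool called tank, one 2 TB zvol per client; the IQN, sizes and backup path are placeholders):

```sh
# Double-parity pool: RAID-Z2 is ZFS's RAID 6 equivalent.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# One zvol per client machine; each gets exported as its own iSCSI LUN.
zfs create -V 2T tank/vm01

# Export the zvol with ctld, FreeBSD's native iSCSI target.
cat >> /etc/ctl.conf <<'EOF'
portal-group pg0 {
    discovery-auth-group no-authentication
    listen 0.0.0.0
}

target iqn.2016-11.local.storage:vm01 {
    auth-group no-authentication
    portal-group pg0
    lun 0 {
        path /dev/zvol/tank/vm01
    }
}
EOF
sysrc ctld_enable=YES
service ctld start

# Read-only snapshots are the CryptoLocker insurance; take one per day.
zfs snapshot tank/vm01@$(date +%F)

# Remote backup as a cronjob: rsync works for file-level datasets,
# while zvols are better shipped with zfs send | zfs recv.
# 0 3 * * * root rsync -a /tank/shares/ backuphost:/srv/backups/
```

Each workstation or VM then logs in to its own target, which also makes the "only one client writes at a time" rule easy to keep.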