
Strikermed

Member · 137 posts


Strikermed's Achievements

  1. While I'm on the topic of FreeNAS and performance, I have an off-topic question to ask. Is there any way I could use the 10gig connection on my FreeNAS box to do a full system backup to a Synology DS1817, which also has a 10gig connection? Right now my only solution is SyncBackPro running on a Windows VM with a backup task for each share, but it runs at 1 gigabit and I have upwards of 15TB of data to copy over. One possible route is sketched below.
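For what it's worth, one option (a sketch, not tested here) is rsync over SSH between the two boxes, assuming SSH and rsync are enabled on the DS1817 and both 10gig interfaces are direct-connected; the addresses and paths below are placeholders:

# Push one share from FreeNAS to the Synology over the 10G link.
# -a preserves permissions/ownership/timestamps, -H keeps hard links,
# --info=progress2 shows overall throughput; add --delete only if a
# true mirror is wanted.
rsync -aH --info=progress2 /mnt/tank/share1/ admin@192.168.2.50:/volume1/backup/share1/

Run from the FreeNAS shell, this never touches the 1GbE network, so it should move data at whatever the pools and the 10G link can sustain.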
  2. Hmm, interesting. I've been looking into ESXi as well. I'll have a test rig ready soon to play around with it. I need to come up with a plan to quickly roll all my VMs (which are technically installed on SSDs separate from the array, because they were bare-metal installs before becoming VMs) over to ESXi without much downtime... As for switching speed, I'm using Unifi hardware, and jumbo frames are already set up. A quick way to verify they work end-to-end is below.
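To confirm jumbo frames actually survive the whole path (addresses are placeholders), a don't-fragment ping at just under 9000 MTU works from either end:

# From Windows: -f sets don't-fragment, -l is the payload size.
# 8972 = 9000 MTU minus 20 bytes IP header and 8 bytes ICMP header.
ping -f -l 8972 192.168.2.2

# From the FreeBSD/FreeNAS side the equivalent is:
ping -D -s 8972 192.168.2.1

If these fail while a plain ping succeeds, something in the path is still at 1500 MTU.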
  3. All my pools say that compression is turned off. I did, however, reach 600-700MB/s transfers now. I was apparently using a network interface that wasn't reaching its full potential (still troubleshooting this), but the one I configured correctly is delivering reasonable speeds. On the working connection the iperf result is 7.78 Gbits/sec. To sum up my results: with FreeNAS running in a virtual machine I get 7.78 Gbits/sec, and with FreeNAS on a bare-metal install I get 8.25 Gbits/sec. Slightly better results on bare metal, but I'm curious what other virtualization platforms deliver. Right now I'm using UNRAID to virtualize, but I've started considering converting the system to FreeNAS and virtualizing with bhyve after I do some testing on the bare-metal build. (A quick shell check of the compression setting is below.)
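For anyone who wants to double-check the compression setting from the shell rather than the GUI (the pool name is a placeholder), the property is per-dataset and inherited:

# List compression (and the achieved ratio) for the pool and all children.
zfs get -r compression,compressratio tank

Anything still set to lz4 or gzip would inflate apparent write speeds when benchmarking with zeros from /dev/zero.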
  4. I finally got around to running this... My results were:
51200+0 records in
51200+0 records out
107374182400 bytes transferred in 70.260040 secs (1528239696 bytes/sec)
Any other suggestions? If this is accurate, it's saying I'm getting about 1500MB/s.
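Checking the arithmetic on that output: 107,374,182,400 bytes / 70.26 s ≈ 1,528,239,696 bytes/s, which is about 1,528 MB/s (roughly 1,457 MiB/s), so the ~1500MB/s reading holds up. One caveat, assuming the test wrote zeros from /dev/zero: that's only a fair sequential-write number because compression is off; with lz4 enabled the zeros would compress to almost nothing and wildly overstate throughput.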
  5. I attempted to run that command, but I got permission denied. Since I'm new at this, I want to be clear about what I did. In the GUI shell I navigated to my Test dataset (ignore the "RAIDz2" in the pool name, it was a mistake I caught too late): cd /mnt/RAIDz2/Test. I then ran the /dev/zero part of the command from above and got this message: "bash: /dev/zero: Permission denied". Any thoughts?
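That "Permission denied" usually means /dev/zero was executed as a command by itself instead of being handed to dd as the input file. A sketch of the intended write test (same dataset path as above; the block size and count are chosen to match the ~107GB run in the results posted above):

cd /mnt/RAIDz2/Test
# Stream ~100 GiB of zeros into a file on the dataset and time it.
dd if=/dev/zero of=ddtest.bin bs=2M count=51200
# Clean up afterwards:
rm ddtest.bin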
  6. I don't have time to try this tonight... Mostly because I'm not familiar with these commands, so I'll need to do my homework before I proceed.
  7. OK, so I've done a fresh install of FreeNAS due to some lagging issues. I also figured a clean install with fresh settings would remove any unknowns. This made no change to the speed. I then turned off atime for the whole pool, and this improved my speeds to 350-400MB/s on a large single-file transfer. Surprisingly, writes have jumped to the rates I'm getting in iperf tests: 750-820MB/s. I created a Test dataset and did some large file copies of about 30-60GB. With sync disabled and compression disabled, I got 420-500MB/s transfers on average. I also followed the tunables from 45 Drives: http://45drives.blogspot.com/2016/05/how-to-tune-nas-for-direct-from-server.html... So far these tunables have made absolutely no change. (The atime and sync changes are sketched below for reference.)
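The atime and sync tweaks described above look like this from the shell (pool and dataset names are placeholders; both can also be set in the GUI):

# Stop updating access times on every read; child datasets inherit it.
zfs set atime=off tank
# Disable synchronous writes on the test dataset only. Faster, but unsafe
# for real data: a crash can lose the last few seconds of writes.
zfs set sync=disabled tank/Test
# Verify what's actually in effect:
zfs get -r atime,sync tank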
  8. This same configuration allowed over 600MB/s with fewer drives. The only changes since then have been to the network, plus the additional drives.
  9. 1. Trying to determine this. I didn't enable it, if it's something you need to enable.
2. I'm using all 7200 RPM HDDs.
3. I am not using a ZFS SLOG or ZIL drive.
4. I don't believe so. I never configured multi-channel, and from what I've read it only applies to multiple aggregated connections.
I'm also trying to figure out where the debug file gets written, but I'm having trouble locating it. I can get into the shell and run freenas-debug -c and various other commands, but I just don't know where to find the actual reports.
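On the report location: if memory serves (worth verifying on your FreeNAS version), freenas-debug drops its output under /var/tmp/fndebug, one subdirectory per module, so after a run you can poke around with:

ls -lR /var/tmp/fndebug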
  10. I'm reading from and writing to a 5-SSD RAID 5 array. Testing with a RAM disk I can reach 1500MB/s reads and 900MB/s writes.
  11. That is correct, all SMB shares. Do you know of a way I could try an NFS share with Windows? Should I create that on the PC side? Or can I create a RAM disk on FreeNAS? If so, how do I do that? (A sketch is below.)
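On the FreeNAS side, a RAM disk can be created from the shell with mdconfig (a sketch; the size, unit number, and mount point are arbitrary, and everything on it disappears at reboot):

# Create an 8 GiB swap-backed memory disk as /dev/md1.
mdconfig -a -t swap -s 8g -u 1
# Put a filesystem on it and mount it.
newfs -U /dev/md1
mkdir -p /mnt/ramdisk
mount /dev/md1 /mnt/ramdisk

Sharing /mnt/ramdisk over SMB then gives a target with no disks involved at all, which separates network/SMB throughput from pool throughput.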
  12. I'm here with another FreeNAS quest for reaching that 1GB/s (10-gigabit) holy grail transfer, and an unusual issue has come up. I've expanded (well, destroyed and recreated) my FreeNAS drive setup from 6 disks to 10 disks, and I've changed it from RAIDZ2 to RAIDZ3 (actually by accident, and I transferred data before I caught it). Now here's the interesting question: why am I getting better writes than reads? And why am I not getting faster results with 10 disks?
My reads top out at about 230-300MB/s real-world (copying a 30gig video file to the 5-SSD RAID 5 array in my local PC) over 10 gigabit. I've iperfed the connection and I get 9.12 Gbits/sec. I've run CrystalDiskMark and I get 637MB/s sequential reads and 571MB/s sequential writes. But when it comes to real-world tests I'm getting terrible reads (230MB/s to 300MB/s) while my writes hold steady just above 400MB/s. This just doesn't make any sense to me.
I'm using an Intel X540-T1 in my PC and an Intel X540-T2 in my FreeNAS box, and they are direct-connected at the moment; I had to RMA my 10G switch. The same issue was happening with the 10G switch as well. Currently the IPs are 192.168.2.1 and 192.168.2.2.
On the PC side I have maxed out the send and receive buffers, disabled interrupt moderation, turned on jumbo frames, and set 16 RSS queues (in my testing there is no change between 8 and 16). I'm running a six-core 3930K CPU, and the Intel card and RAID card are both in x16 slots running at full speed (both cards run at x8 just due to their individual architectures).
On the FreeNAS side I've only assigned it an IP and set MTU 9014 to match the PC NIC. Under tunables I followed the 45 Drives article (all of type sysctl):
kern.ipc.maxsockbuf = 16777216
net.inet.ip.intr_queue_maxlen = 2048
net.inet.tcp.recvbuf_inc = 524288
net.inet.tcp.recvbuf_max = 16777216
net.inet.tcp.recvspace = 4194304
net.inet.tcp.sendbuf_inc = 32768
net.inet.tcp.sendbuf_max = 16777216
net.inet.tcp.sendspace = 2097152
net.route.netisr_maxqlen = 2048
I've also bound SMB to the 10gig network interface. I just don't get why these read speeds are so terrible. (By the way, I'm only accessing with one client, on Windows 10 Pro.) One way to narrow it down is sketched below.
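One way to split the problem in half (a sketch; the path is a placeholder for any large file already on the pool) is to time a raw read on the FreeNAS box itself, taking SMB and the network out of the picture:

# Read a big file back and throw it away; note the MB/s dd reports.
dd if=/mnt/tank/Videos/somebigfile.mkv of=/dev/null bs=1M

If that's fast, the pool reads fine and the bottleneck is the SMB/network path; if it's slow, the pool layout itself is the problem. One caveat: a second run may be served from ARC (RAM), so use a file bigger than RAM or a freshly written one.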
  13. Hey guys, I want to get some feedback on where I can improve my home server closet setup. I'll break down how I have my servers configured, what my future plans are, and how everything functions, and then I would like to see what you guys think of the whole arrangement.
First, the hardware. I have a single PC, a handful of laptops and handheld devices, IoT devices, 3 servers, and some SOHO networking gear.
Server 1 (UnraidServer) - This server runs Unraid, mainly for virtualization. I have a virtualized FreeNAS machine where I pass through 2 HBA cards so that FreeNAS has direct control of the hard drives. In addition I run a Windows 10 VM which runs my home automation system, sync tasks via SyncBackPro, and my Plex server. I also have a Windows 2012 R2 Essentials VM that I haven't had time to play with quite yet. This server is underutilized right now, but I'm not sure what else to add to it.
Case: Norco RPC-3216 3U (https://www.amazon.com/gp/product/B00BQY38CG/ref=oh_aui_search_detailpage?ie=UTF8&psc=1), soon to be replaced with a Norco RPC-4220 4U (https://www.newegg.com/Product/Product.aspx?Item=N82E16811219033)
Motherboard: Asus Z10PE-D16 dual-socket workstation motherboard (https://www.amazon.com/gp/product/B00QC5DZEU/ref=oh_aui_search_detailpage?ie=UTF8&psc=1)
CPU: 2x E5-2683 v3 14-core Xeon processors (https://ark.intel.com/products/81055/Intel-Xeon-Processor-E5-2683-v3-35M-Cache-2-00-GHz-)
RAM: 128GB via 2x 64GB quad-channel registered DDR4 kits (https://www.newegg.com/Product/Product.aspx?Item=N82E16820242019)
SAS Card: LSI 9300-8i (https://www.newegg.com/Product/Product.aspx?Item=9SIAD5G5JG6403)
Drives: 5x 6TB WD Black (RAIDZ2 with 16TB usable), 2x SSD in RAID 1 to run the VMs and various other applications, 1x SSD that runs my Windows 10 VM (I migrated it over from a bare-metal install). I have another 5x 6TB that I'm planning to expand the RAIDZ2 volume with soon; I'll explain below.
Network Card: Intel 10GbE X540-T2
Server 2 (Unraid2) - This system runs Unraid and contains roughly 30TB of usable space. I use it to do a master backup of FreeNAS and my Synology box below. It's a "WTF, just in case" backup. It's also used for my expansion plan for FreeNAS; I'll explain further below.
Case: Rosewill 4U rackmount, stock bays swapped for 3x iStarUSA tool-less bays (5 each)
Motherboard: P9X79 Pro
CPU: Intel i7 3930K six-core
RAM: 32GB DDR3
SAS Card: LSI 9300-8i + Intel expander card
Network Card: Intel 10GbE X540-T1
Video Card: GTX 580 Super Clocked (left over from an upgrade)
Server 3 (Synology DS413) - This is used as a remote-access NAS. It runs various apps to allow access to photos, audio, and other files. Several users back up their photos to it, and this box has essentially turned into a portal for that. I'll explain further below.
Drives: 4x 4TB WD Red
PC - This is my workstation of 6 years, with only a video card upgrade since then. I've added a RAID card that I was no longer using on the server side.
Case: Corsair Obsidian 750D
Motherboard: Asus Sabertooth X79
CPU: Intel i7 3930K six-core
RAM: 32GB DDR3
Network Card: Intel 10GbE X540-T1
RAID Card: LSI MegaRAID 9266-8i
Video Card: GTX 1080 Super Clocked
Drives: 1TB Samsung 850 Pro, 4x 250GB SSD in RAID 0 (not really used for much, and nothing critical)
Networking Gear:
Ubiquiti USG
Ubiquiti 150W 24-port switch
Netgear 24-port smart switch
Ubiquiti AC Pro access point
24-port Cat6a patch panel
Cat6a wiring throughout the home
Explanation/Setup: First, I'll explain my use case beyond just being an enthusiast.
I'm a video editor, photographer, and tech nerd, so I need to store large files, and I would like them accessible at very fast speeds. I also back up photos for a few family members. I had some scares in the past on one of my servers running Windows with a RAID card, using RAID 6 with consumer drives (it was backing up my Synology DS413). Drives kept falling out of the array because consumer drives lack TLER (Time-Limited Error Recovery), so a drive's deep error-recovery cycles would hang long enough for the controller to time it out and drop it from the array. I was lucky enough to get all my data backed up before the array became unusable. Thus the reason for so many servers. I'll explain their use cases below.
First, I'll start with the Synology DS413. This was my original NAS and local storage. In the last year I upgraded the drives from 4x3TB consumer to 4x4TB NAS drives, and I've kept it in my workflow, with this device being the first point of contact for anyone but myself for file management. It contains folders for photos and music which can be accessed by users, and I can control access permissions pretty easily. It's a good little server box with plenty of features, so I kept it in the workflow. I think of it as a first point of screw-up: if someone screws something up, it can be fixed.
Second, we have UnraidServer, which is a 16-bay (soon to be 20-bay) beast. The hardware was sourced new and used via eBay (mainly the processors), and it's mainly my virtualization machine. I have a FreeNAS VM (because for some reason FreeNAS wouldn't install on the bare-metal hardware) with the HBA cards passed through so FreeNAS can function without any issues. I've run this for over a year flawlessly. FreeNAS is my local first point of contact. I use it primarily for the fast speeds over directly connected 10GbE; the RAIDZ2 volume gives me speeds similar to RAID 6. In addition to FreeNAS, I have a Windows 10 VM that runs several applications: Blue Iris for the security cameras throughout my home, Homeseer for home automation, Plex, and SyncBackPro, which I use to do all my backups. It's a really handy program that can use mapped drives and some cloud services to do verifiable backups.
I mention UnraidServer second because it's second in line in terms of backups. Using SyncBackPro I back up all the folders from the Synology DS413 to folders on FreeNAS. I have it set up to never delete files unless I run the backup manually, so I can review weekly what files will be deleted from FreeNAS (thus the Synology can be screwed up, but it can be fixed if need be). FreeNAS also houses my video editing projects folder and Plex library, which are too big for the Synology NAS to manage. I run regularly scheduled snapshots so even worst-case scenarios can be fixed.
Third, Unraid2 is my master backup. It backs up everything that was synced to FreeNAS, plus the projects folder and Plex library that live on FreeNAS. Currently it's the largest of my arrays and is easily expandable. I use mostly 10TB drives, which leaves plenty of room for expansion down the road. Currently only 7 of its 15 bays are filled (4x10TB, 3x2TB). This is my master on-site backup: I back everything up from FreeNAS, creating a complete duplicate. I mentioned that this is also my expansion plan for FreeNAS; I needed somewhere to dump all the data when I wanted to expand my RAIDZ2 array.
To do that efficiently I need to have all my data backed up, destroy the RAIDZ2 dataset, and then recreate it with the additional drives. This way, instead of only gaining another 15TB, I gain the full 30TB from the 5x6TB drives and just add it to my current config. Thus the master backup solution; I won't lose any of my project data, Plex library, or the already-duplicated user data. Beyond that I haven't done anything with this server. I could potentially run a VM or two just to mess around with various operating systems, but why do that when I can do it on UnraidServer?
Backup plan: So I've explained how everything follows a chain of backups: Synology NAS -> FreeNAS -> Unraid2... This is all great for local backup, but what about offsite, you may ask. Well, originally I had a great solution! When I purchased my first Intel X540-T2 and X540-T1 network cards on Amazon, I got an offer for their cloud service. At the time it was unlimited backup of everything, which meant the 6TB I had on hand could be backed up remotely in an easily accessible location. It took me a year on a 15-megabit upload connection to get those 6TB onto the cloud, but not a month later Amazon announced they were going to charge for that once-free space. Needless to say, I was furious. So as of right now I don't have an offsite solution EXCEPT for the essential photos and videos that I back up for family members. Luckily the photo side of all that is free, and the videos fall under the 1TB plan, which is Amazon's cheapest.
So, with all this explained, does anyone see where I can improve? I know a few things I would like to do. For instance, I'm moving my UnraidServer hardware into a bigger case with better airflow, since it has a lot of hot components. I would like a 10GbE network switch, but currently only 2 clients direct-connect to that particular server, so it's not worth the $600 for a cheap one. I wouldn't mind cutting the electric bill a bit, and I had thought about making the Unraid2 server a more green-friendly build. Any suggestions on hardware for that? CPU, motherboard? Keep in mind it's replacing a hot-running Intel 3930K CPU. I wouldn't mind getting the video card out of that system as well and moving to something with onboard graphics; it would open up a PCI Express slot. Let me know what you guys think.
  14. Hey guys, my work was getting rid of some interesting, pretty old hardware, and of course I bought all I could to play around with whatever still works. One thing I thought was very interesting is that they were getting rid of a Fibre Channel switch, to be exact a QLogic SANbox 5600. I got the thing full of full-duplex fibre SFP modules for $5. I've been doing some research, and I wanted to get some ideas on what I could do with this thing in a home network scenario. Currently I have several servers running, some using direct-connect 10GbE and the rest on a 1GbE network. I wanted to know if there is any way for these 4Gb/s modules to be upgraded to 10Gb/s. If so, that would be an amazing leap in the direction I want to go with my local storage solution. Also, with this being a Fibre Channel SAN switch, could I use any type of SFP module? For instance, they make SFP-to-Ethernet modules. Could I theoretically add this switch to my current network? Or could I use existing 10GbE cards to connect to the SAN? Thanks again guys! Obviously I don't want to go down the rabbit hole of buying fibre optic cables and all, but I want to see if this could be a useful piece of gear in my server cabinet.