
Downloadski

Member
  • Posts: 13
  • Joined
  • Last visited

Awards

This user doesn't have any awards

Downloadski's Achievements

  1. Fully agree, I had an issue with the 4 Intel gigabit Ethernet ports on my mainboard: even when not using them you can still run out of memory buffers (mbufs). I used this thread on the pfSense forum as a build example: https://forum.pfsense.org/index.php?topic=94399.0 and see about 40% load at 546 Mbps download speed (with Snort and Squid). There is a loader.conf sketch for the buffer tuning after this list.
  2. We need a part 3 with the pfSense setup, and the motivation for why each extra package gets installed, I think. Snort, Squid and SquidGuard for me.
  3. Edit: 1 gigabit Ethernet should be about 112 MByte/second (1000 Mbit/s divided by 8 is 125 MB/s raw, minus protocol overhead). That is not 1 gigabyte, which is a measure for storage, not network speed. English is not my native language, and if I respond late I am tired, which is a bit of a bad excuse.
  4. Let's hope Linus gets the full gigabit through this with packages loaded. I have most of the hardware in and pfSense runs; I will test with one PC first to be safe.
  5. No, I do not want to be in the list; I like to read this topic for nice servers. My Norco case originally had 5x 80 mm fans, very loud, so I changed them to 3x 120 mm fans (Noctua). In server 3 I took out the fan plate and put in 3x 140 mm fans (Noctua) without a plate. Noise is now better with server 3, so updates will follow for servers 1 and 2; perhaps the Norco cases will even be changed to Ri-vier cases. I had DOA issues with 2 backplanes for the HDDs in the Norco cases. They were swapped under warranty, but I still have 4 HDDs with a lot of cable errors.
  6. Took the photos offline. The anti-piracy heat is getting worse here.
  7. Nice thread. I run 4 NAS servers with FreeBSD/ZFSguru:
     1: 10x 4 TB and 10x 3 TB (2 raid-z2 vdevs)
     2: 10x 4 TB and 10x 3 TB (2 raid-z2 vdevs)
     3: 20x 4 TB (2 raid-z2 vdevs)
     4: 5x 4 TB (raid-z1)
     The 4 TB drives are HGST 7K4000, the 3 TB drives are Toshiba DT01ACA300 (the HGST 7K3000). I split it over multiple servers and pools as I do not want everything online all the time. 10GbE via X520-DA2 cards with DAC copper cables. A pool layout sketch is after this list.
  8. Some nice tips with the buffers, got to try that. I have Intel X520-DA2 cards with DAC cables into a D-Link 28-port switch with 4x SFP+ slots. I get better send than receive speed, so perhaps I will use some of the settings shown in the video; there is a sysctl sketch after this list.
  9. Interesting, I am building a pfSense box myself at the moment, hopefully with fewer hardware issues: an M350 case with an Atom C2758 mini-ITX board and a 12 volt power brick as PSU. This is perhaps less redundant than dual redundant PSUs, but how high is the availability of the single ISP line? Would a dual firewall with CARP and a 2nd line not be better (2nd line via another provider/technique)? A CARP sketch is after this list.
  10. Another thing I found out late was the placement of HDDs in the case (in my case a Norco 4220). I initially placed 4 drives of each pool on the lowest backplane, 4 on the 2nd and then 2 on the fifth. Since Norco is not known for backplane stability, I rearranged it to 2 drives of each pool per backplane, so a backplane failure does not bring a pool down. No idea how this is in your Storinator case; I see numbering from 1-1 to 1-24 and then 2-1 to 2-21. But it might help to think about how to place drives in relation to the backplanes and the SFF-8087 connections to the controllers.
  11. Also good to know with ZFS: if one of the vdevs under a pool fails (a 3-disk loss in this scenario), the whole pool is gone, including the other 2 vdevs. This could happen if one or 2 disks in a vdev fail and you have to resilver; the remaining disks are under high load then, and there is a risk that one more disk from the same batch fails. Another ZFS disadvantage is that you cannot fill a pool up to 100% of its size, as the system gets very slow there; see the zpool checks after this list.
  12. You can run a scrub on your pool and that gives an idea of what the max read and write speed will be. I run 10-disk raid-z2 pools (1 vdev) and get about 829 MB/sec with zfs send and receive via mbuffer over a single 10GbE link (this is also the scrub speed). When I run a test with dd via mbuffer I get about 1124 MB/sec over the 10GbE link, so the network stack is not the issue with FreeBSD. I tested this with ZFSguru 10.1, so FreeBSD 10.1, with Intel X520-DA2 cards and all settings at default (so MTU 1500). Hard disks are 3 TB Toshiba, 4 TB Seagate Desktop 15 and HGST 7K4000. All pools with 7200 rpm disks are around that 880 MB/sec; the pool with the Seagates is a bit slower. From a Windows PC I am also seeing speeds of 350 MB/sec from a RAID-5 volume; a 4 GB file from a ramdisk was about 425 MB/sec. But it is a 7 year old PC, so it might have I/O bottlenecks (PCIe and PCI-X slots and an i7-920). The send/receive pipeline is sketched after this list.
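
For post 1: a minimal sketch of the mbuf tuning I mean, assuming a FreeBSD/pfSense box where the onboard Intel igb(4) ports eat up the default buffer pool; the values are only examples, not the numbers from the linked thread, so size them to your own RAM.

    # /boot/loader.conf.local (pfSense keeps user tunables here) - example values
    kern.ipc.nmbclusters="131072"   # raise the global mbuf cluster limit
    hw.igb.num_queues="2"           # fewer queues per igb port = fewer buffers claimed
    hw.igb.rxd="1024"               # receive descriptors per queue
    hw.igb.txd="1024"               # transmit descriptors per queue

You can check whether the box really runs out with netstat -m (look at the mbuf clusters in use and the denied counters).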
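For post 7: a minimal sketch of how one of these pools could be built by hand on FreeBSD/ZFSguru, here the 20x 4 TB layout of server 3 as two 10-disk raid-z2 vdevs. The pool name and the da0..da19 device names are placeholders; ZFSguru normally does this through its web GUI with GPT labels.

    # one pool out of two 10-disk raid-z2 vdevs (hypothetical device names)
    zpool create tank \
      raidz2 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 \
      raidz2 da10 da11 da12 da13 da14 da15 da16 da17 da18 da19
    zpool status tank   # shows both vdevs under the pool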
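For post 8: the receive side on 10GbE usually reacts to the FreeBSD socket buffer sysctls below; the sysctls themselves are standard, but the values are only examples, not the ones from the video.

    # /etc/sysctl.conf - example values
    kern.ipc.maxsockbuf=16777216        # allow bigger socket buffers overall
    net.inet.tcp.recvbuf_max=16777216   # let the receive buffer auto-tune up to 16 MB
    net.inet.tcp.sendbuf_max=16777216   # same for the send side
    net.inet.tcp.recvspace=262144       # larger initial receive window
    net.inet.tcp.sendspace=262144       # larger initial send window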
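For post 9: pfSense sets up CARP through its GUI, but underneath it is plain FreeBSD carp(4). A rough sketch of two boxes sharing one WAN address, with a made-up interface, password and documentation IP, would look like this.

    # /etc/rc.conf on the primary box (hypothetical interface and address)
    ifconfig_igb0_alias0="inet vhid 1 advskew 0 pass examplepass alias 203.0.113.10/32"
    # same line on the backup box, but with a higher advskew so it only takes over on failure
    ifconfig_igb0_alias0="inet vhid 1 advskew 100 pass examplepass alias 203.0.113.10/32"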
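For post 11: two quick checks that cover both points, with tank again as a placeholder pool name; the usual rule of thumb is to keep a pool under roughly 80% full.

    zpool list tank     # the CAP column shows how full the pool is
    zpool status tank   # shows the vdev layout; if any one vdev is lost, the whole pool is lost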
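For post 12: the zfs send/receive via mbuffer setup is basically the standard pattern below; the host name, dataset and snapshot names are placeholders, and the mbuffer block/buffer sizes are just starting values.

    # receiving box: listen on a TCP port and feed the stream into zfs receive
    mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank2/data
    # sending box: stream a snapshot through mbuffer to the receiver
    zfs send tank/data@backup1 | mbuffer -s 128k -m 1G -O receiver-host:9090
    # raw network test without the disks: dd through the same pipe (receiver writes to /dev/null)
    dd if=/dev/zero bs=128k count=80000 | mbuffer -s 128k -m 1G -O receiver-host:9090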