
iJarda

Member
  • Posts

    69

Reputation Activity

  1. Like
    iJarda got a reaction from BSpendlove in CCNA certification.   
    I've passed 200-125 (the old exam) and I strongly agree: hardware isn't needed at all.
    Although I've been working in the networking field for 7+ years, I started with non-Cisco gear (now I stick mostly to Cisco). I would say certifications are about 50-75% experience, 20-40% theory, and 5-10% old stuff that you will probably never see again in the real world (ATM...). (I know the percentages don't add up to 100, because they're ranges... it depends on your mix of questions.)
     
    Lab, lab, lab. Practice and try new things; hardware will not save you. You can buy the newest gear, but that won't fix not understanding the basics of L1-L3 networking. Try to get some troubleshooting labs - they are the most valuable thing you can get. Let someone break a lab for you, then try to fix it. We learn by making and fixing mistakes.
     
    Newer certifications will not make your old knowledge obsolete; they usually just cover a wider range of topics in other fields (e.g. the last refresh in February mostly added network automation, programmability, etc.). It will not change what a VLAN or an IP address is.
     
    If you really want to spend some money and buy hardware, old gear isn't really worth it. If you can mess with your home network, great - buy for real-world use, not just for the lab: replace your home network. I would recommend an ISR 1921 / 2901 or something like that, plus an (E)HWIC switch module or a small dedicated switch. (If you care about power consumption, avoid large L3 switches - they can be really power hungry. Start with whatever fits you best, e.g. an L2 2960C or CG, or the L3 variant of those small compact switches, the 3560C or CG.) And I also agree: eBay instead of Amazon...
     
    For setups where you want to lab with multiple routers / multiple switches, just use Packet Tracer (it is pretty much all you need for CCNA and most CCNP topics), or for more real-world simulation use GNS3 or VIRL/VIRL2 (if you can access it). (Remember: routing only! L2/L3 switch ASICs can't be simulated the way GNS3/VIRL emulate routers -> back to PT for switching.)
  2. Agree
    iJarda reacted to beersykins in CCNA certification.   
    You can get ISR 2900 series stuff pretty cheap these days, although depending on the license some of the feature sets vary (whereas on the 2800 you can run basically every feature in the image).
     
    I'd probably roll ebay instead of Amazon, you can find better deals (I picked up 2x 2960S-24p for $20/ea, for example).
     
    If you swap one of your switches for an L3 switch, you can also use that device as another routed segment.
  3. Agree
    iJarda reacted to Lurick in CCNA certification.   
    For CCNA level stuff and even CCNP these days I'd suggest VIRL, GNS3, or something similar instead.
  4. Like
    iJarda reacted to Lurick in Network layout showoff   
    Yah, I think I made a typo and didn't make it clear the uplinks are fixed. It's the 9200L-48PXG
  5. Like
    iJarda got a reaction from Lurick in Network layout showoff   
    Are you using some pre-sale models? Is that C9k really the C9200-48PXG? I couldn't find that exact model in the datasheet (with modular uplinks - just the fixed 9200L-48PXG).
  6. Informative
    iJarda reacted to Lurick in CCNP Help   
    If they are still on v2 (the 300 series) of the R&S track, then this book should do nicely for a start:
    https://smile.amazon.com/Routing-Switching-Official-Guide-Library/dp/1587206633
     
    If that doesn't cover most of it, then these two will:
    https://smile.amazon.com/Routing-Switching-SWITCH-300-115-Official/dp/1587205602/
    https://smile.amazon.com/Routing-Switching-ROUTE-300-101-Official/dp/1587205599/
     
    Not sure about TSHOOT, though.
  7. Like
    iJarda reacted to gcs8 in LTT Storage Rankings   
    My old 2011 build still has me in 23rd place; let's see where my current build gets me. Let me know if I missed anything you'd want me to cover - this is just kind of everyday life for me, so if you want to know more, just ask.
    Hardware
    CASE: Supermicro 4U rackmount w/ redundant 1280W Platinum PSU
    MB: Supermicro X10DRi-T4+ (Socket 2011-3, dual E5-2600 v3/v4)
    CPU: 2x Intel Xeon E5-2630 v4, 2.20GHz deca-core (10-core), 25MB Intel Smart Cache, Socket 2011-v3 (FC-LGA14A)
    RAM: 768GB DDR4-2133 (24x 32GB Samsung LRDIMM, CL15, M386A4G40DM0-CPB)
    HDDs: 24x HGST HUH728080AL5200, 8TB SAS 12Gb/s 7200RPM 3.5", 128MB buffer
    HBA: LSI 9300-8i (H5-25573-00), 8-port 12Gb/s SAS, PCIe 3.0
    FC HBA: QLogic QLE2562, dual-port 8Gbps Fibre Channel (brown box)
    HS: 2x Supermicro SNK-P0050AP4 4U active CPU heatsinks
    NVMe: 2x ASUS Hyper M.2 x16 PCIe expansion cards
    NIC: Chelsio T62100-SO-CR
    Software and Configuration:
    FreeNAS

    Usage:
    NAS and SAN usage. My desktop boots over Fibre Channel from the FreeNAS box, so I can have snapshots every 5 min on my OS drive, and I keep my main OS drive small enough to cache it in RAM on FreeNAS. It hosts a lot of VMs and jails that do stuff and things; I try to keep my digital hoarding inside the rack.
     
    Photos:

     

  8. Agree
    iJarda got a reaction from Razor Blade in Dell R720 12 bay backplane power harness question   
    These backplanes, especially with SAS drives, do that - the drives don't power up until the controller tells them to spin up. On some systems the drives are spun up one by one to avoid current peaks.
  9. Informative
    iJarda got a reaction from Razor Blade in Dell R720 12 bay backplane power harness question   
    I have a Dell R7910 (equivalent to the R730), so I can measure its 2.5" backplane if that would help, but I think the whole backplane should just need power.
     
    The sense pin is also used on the 8-pin connector for PCIe cards: it carries 3.3V on a pin that other PSUs / 8-pin graphics cards have internally connected to ground, so plugging the device in just "shorts" the 3.3V sense line to ground, and the system then knows whether that piece of hardware is present.
    (For example, my R7910 knew about my backplane and threw an error when I hadn't connected the SAS cables - and even with the backplane control cable unplugged, it still threw an error that those cables were missing. That was based on the power cable with the sense pin => the sense pin matters to the system board, not to the backplane itself.)
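    The sense-pin behavior described above can be modeled as a trivial truth table - a minimal sketch (the function name and threshold are illustrative, not from any Dell spec):

```python
def device_present(sense_pin_voltage: float) -> bool:
    """System-board view of the 3.3V sense line: an attached device
    shorts the line to ground, so a low reading means 'present'."""
    return sense_pin_voltage < 0.5  # pulled low by the plugged-in device

# Backplane or GPU plugged in: the device pulls the sense line to ground.
print(device_present(0.0))   # True
# Nothing attached: the line stays at its 3.3V pull-up.
print(device_present(3.3))   # False
```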
  10. Informative
    iJarda got a reaction from dalekphalm in Dell R720 12 bay backplane power harness question   
    I have a Dell R7910 (equivalent to the R730), so I can measure its 2.5" backplane if that would help, but I think the whole backplane should just need power.
     
    The sense pin is also used on the 8-pin connector for PCIe cards: it carries 3.3V on a pin that other PSUs / 8-pin graphics cards have internally connected to ground, so plugging the device in just "shorts" the 3.3V sense line to ground, and the system then knows whether that piece of hardware is present.
    (For example, my R7910 knew about my backplane and threw an error when I hadn't connected the SAS cables - and even with the backplane control cable unplugged, it still threw an error that those cables were missing. That was based on the power cable with the sense pin => the sense pin matters to the system board, not to the backplane itself.)
  11. Agree
    iJarda reacted to dalekphalm in Need workstations for 50 people   
    I'm going to echo the others here.
     
    By doing a "combined station" for 8-10 users, you're creating a massive headache if something goes wrong. If a system component goes down, you now have 8 or 10 people unable to work instead of one.
     
    Furthermore, if you need to RMA a dead part, your entire system (and all 8-10 staff) is also down, potentially for weeks, until a replacement arrives.
     
    Go Dell or HPE. Their warranties are top notch. Next day parts w/ an on-site tech to install them. A single source for all warranty issues.
     
    I would contact a local IT Contracting company and get 2 quotes:
     
    1. For 50x workstations w/ the GPU included (you can ask them to source the cheaper of the Titan Xp and the 1080 Ti)
    2. For 50x workstations w/ no GPU.
     
    Then look online at the price for 50x GPUs and see if they are cheaper to source yourself.
     
    Additionally, I would contact both Dell and HPE directly, get yourself an account manager, and get quotes directly from each company.
     
    Frankly, for a business, doing it the way you want is a terrible idea.
  12. Funny
    iJarda reacted to sapage in Network layout showoff   
    Hopefully in the spirit of fun for this thread.
     
    https://imgur.com/a/XTorV
     
    I should really cable it up or something. 
  13. Like
    iJarda got a reaction from leadeater in Active Directory Domain Services - Reverse Lookup Zone   
    Yes, they make excellent SOHO and WISP gear, and even their CAPsMAN is great, but not for big deployments that require reliability. I don't know how much better the EdgeRouters are (they are just an even cleaner Debian-based Linux with a more "friendly" GUI), but as I have long experience with their UniFi products, it is one big piece of sh.. They made excellent WISP products for a long time, then they decided to join the "enterprise" market by creating UniFi. We have been growing since 2013 from 40 APs to now almost 90 APs, and I really wish to get rid of it. It is basically standalone OpenWRT-style SOHO APs with orchestration (Ansible/Puppet-like) written in Java, plus central collection of statistics with multi-minute delays... so, also not recommended for big deployments (I can explain more -> PM). The one thing that justifies that crap is that Robert Pera (Ubiquiti founder and CEO) calls UniFi an "enterprise-like" system. I wish their marketing department thought so too.
  14. Like
    iJarda reacted to leadeater in Active Directory Domain Services - Reverse Lookup Zone   
    Yeah, we actually use Linux servers for DNS even though we are a heavily Microsoft server shop; same with DHCP.
  15. Like
    iJarda reacted to leadeater in SSD for VM server   
    Yeah, it'll be fine - it's the same SSD as in our Nutanix servers that are running ESXi. We have a good 40 or so, so I know it's good.
  16. Like
    iJarda reacted to H3llscr3am in LTT Storage Rankings   
    I wanna play
     
    Hardware
    CASE: U-NAS NSC-810A
    DRIVE CADDY: iStarUSA BPU-350SATA 5 x 3.5" Drive Cage
    PSU: SeaSonic 350M1U
    MB: ASRock Rack E3C224D4I-14S
    CPU: Intel Xeon E3-1275L v3
    HS: Noctua NH-L9i
    RAM: 32GB Crucial (4 x 8GB) 240-Pin DDR3 SDRAM ECC Unbuffered DDR3L 1600 (PC3L 12800)
    RAID CARD 1: LSI 2308, Flashed with "IT" firmware
    SSD: Patriot PSK128GS25SSDR Spark Series 128GB SSD (L2ARC)
    HDDs: 13x HGST Ultrastar 7K4000 3TB, 64MB cache, 7200RPM SATA 3 (one was a spare drive, but I swapped it in for the SSD, so it's lying around in bubble armor)
    Software and Configuration:
    OS: FreeNAS
    In Jails, I run Transmission, Plex, Headphones, and some other applications for media acquisition, management, and local streaming.
    Usage:
    I use the storage for movies, shows, and music mainly. But it also stores some educational/certification materials, and other junk.
    Photos:
    Drives in the FreeNAS UI:

    Server:


  17. Like
    iJarda reacted to unijab in LTT Storage Rankings   
    So...
     
    There is a really cool (IMO) change coming to my storage server soon... (aka it's in the mail)
    I figured out that you can get an FC HBA into target mode using Fedora...
     
    So guess who got some really cheap 4Gb FC HBAs... and with Linux multipathd... really nice speeds!!! (not that my disk arrays will max anything out)
     
    So no more iSCSI for me.
     
    Also, been playing with ZFS (on linux)... so there's that also.
     

  18. Agree
    iJarda reacted to Mikensan in Home NAS - Best OS & File Permissions   
    English may not be your first language, but you organized your thoughts and wrote more clearly than a lot of native English speakers. Kudos!
     
    You will need some form of account management, but it isn't a big deal not to have Active Directory - almost any OS you choose will have local account management. The only advantage of Active Directory is single sign-on / Kerberos, so that when you access a share you do not have to type a password.
     
    OS:
    unRAID is probably the simplest and works great if you only need to support a gigabit connection. I'm not personally a fan, but it's very much loved by those who use it. I feel like there should be a lot of people using it, yet I never come across many - I think half the battle is that you have to pay for it.
    Windows Server is semi-easy; however, to pool/RAID the drives together you'll need either a RAID controller, Storage Spaces, or FlexRAID (people seem to love FlexRAID). If you're interested in FlexRAID, there are quite a few people on these forums who use it.
    FreeNAS is very popular but maybe takes a little more forethought to get going, and it is a memory hog (it uses RAM as a cache to speed up write speeds due to CoW / sync writes). It will work with less RAM, but in certain scenarios (NFS) it may underperform. If you're interested in FreeNAS I'd be happy to help, and there are a lot of people here using it too.
     
    Media Streaming:
    Plex is the most popular and most widely supported media streaming option. Transcoding can tax the processor, but it's not really all that bad.
     
    Permissions:
    Use CIFS/SMB shares no matter your OS - it's the most widely supported protocol and fairly stable.
    Windows: just create local accounts as needed, create your share, and then limit the folders/files within the share under the Security tab.
    BSD/Unix/Linux: same thing - FreeNAS/unRAID/Linux all have user management where you can create users. Once you create an SMB share, just assign the permissions to the folders.
    In all cases you'll be prompted for credentials as soon as you access the share.
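    To make the Unix side concrete, a minimal Samba share definition might look like the sketch below. The share name, path, and group are made-up examples, not taken from the post - adjust them to your own layout.

```ini
; smb.conf - hypothetical example share
[media]
   path = /mnt/tank/media      ; example path on the NAS
   valid users = @mediausers   ; example local group
   read only = no
   browseable = yes
```

    After adding users with `smbpasswd -a <user>`, clients get the credential prompt described above when they open the share.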
     
  19. Like
    iJarda got a reaction from Lurick in Where to buy Open Compute hardware   
    Yes, I have found that post, and yes, in most of Europe we have 230V outlets, so that is no problem... and the cardboard also isn't a problem 
  20. Agree
    iJarda got a reaction from bsodmike in FreeNAS Server Build   
    You are right! :-) I bought an X99-E WS about half a year ago and I'm very happy with it. I had decided to go with X99 but couldn't choose between the X99-E WS and the X99-E-10G WS. I was very excited when the 10G version was introduced, but then I was disappointed: a complete lack of Thunderbolt (even the onboard Type-C is just USB 3.1), fewer ports on the rear I/O, and finally ASUS (and their distributors) told me (in January 2017) that the product wasn't destined to be available in the Czech Republic (the center of Europe...). When it finally became available around May 2017, I already had my X99-E WS, and the X99-E-10G WS cost more than buying an extra 10G card from eBay... (I already have a couple of them.)
  21. Like
    iJarda reacted to Lurick in Server-Showoff   
    I work for them, and we somehow ended up with a couple extra after ordering a few for a lab build-out, so I was allowed to bring one of the extras home. We also recently ordered 4 packs of 10 of the 3802 APs by mistake, instead of just 4 single APs, so I have two of them powering the wireless in my house.
  22. Like
    iJarda got a reaction from a7mddiaa in WWDC   
    Some random stuff (I didn't pay much attention to it), but the iMac Pro looks greeeeat!!  (except the price, though I think it is justifiable for prosumers) - I think it is X299-based.
    Aaaand the new (updated) iMacs aren't "broken" as much as the new $h!tBooksPro - just as Apple showed, that was too early, too far (changing all I/O at once to a few USB-C ports). The new iMacs are great because in terms of I/O they changed just TB2 -> TB3, and the other USB-A ports remained.
  23. Like
    iJarda got a reaction from leadeater in 10GbE on FreeNAS - Reaching 1GB/s   
    I don't operate at such high speeds, but I can share something similar. I have a theory about it...
    My specs are:
    ASRock Rack E3C224D4I-14S
    Core i5-4590S
    some amount of memory (currently 24GB of ECC DDR3)
    VMware ESXi 6.0 as the hypervisor
    and mainly two WD Red 3TB [WD30EFRX] drives in RAID 1 (next to a full case of other drives...  )
    Intel X520-DA1 (I bought it because I thought these drives could utilize more than 1 Gbps of bandwidth)
     
    The motherboard has an LSI 2308 HBA/RAID controller onboard, so I use VT-d passthrough to hand it to an Xpenology VM (an open-source bootloader for Synology DSM). Even when I use the X520 there are performance issues, but focus for now on the onboard Intel I210 (there's no performance difference)... When I try to copy/write from/to the NAS VM (sequentially - one big file) it is sloooow... about 90 MB/s at peak, falling down to 60, 80, 60, etc. Ridiculous...
     
    According to storagereview.com, even though these drives are "IntelliPower" (~5400 RPM), they are capable of about 140-150 MB/s, so what is wrong?
    I'm sure the networking is OK. I use all three NICs (2 onboard + the X520) for the vSphere vSwitch, and for the VMs I use VMXNET3 (which is 10G-capable) - I have confirmed with iperf that traffic to another CentOS 7 VM passes at more than 9.5 Gbps with jumbo frames (MTU 9000)...
     
    So my theory is that it is caused by the HBA passthrough itself, or that the i5 "isn't" a Xeon and so doesn't have enough power for this technology... But seeing that even your Xeon E5-2683 has issues, I don't think a more powerful processor or a different platform (C224 vs C612) will help...
     
    My next steps are to try an SSD, and if that doesn't work, to run it bare-metal without a hypervisor. (I tried another machine a few years ago with a less powerful i5 (Sandy Bridge) and it worked like a charm.)
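    A quick sanity check on the numbers in the post above - a sketch using raw decimal unit conversions (no protocol overhead; the 145 MB/s figure is the rough sequential rating cited from storagereview):

```python
def gbps_to_mb_per_s(gbps: float) -> float:
    """Convert a link rate in Gbit/s to MB/s (decimal units, no overhead)."""
    return gbps * 1000 / 8

observed = 90           # MB/s peak seen when copying to the NAS VM
disk_rating = 145       # MB/s, approximate sequential rating of the WD Red 3TB
gig_link = gbps_to_mb_per_s(1)      # 1 GbE raw ceiling: 125 MB/s
ten_gig_link = gbps_to_mb_per_s(10) # 10 GbE raw ceiling: 1250 MB/s

# The observed rate sits below the drives' own sequential rating and even
# below a single-gigabit link, so the network is unlikely to be the limit.
print(observed < disk_rating)   # True
print(observed < gig_link)      # True
print(ten_gig_link / observed)  # ~13.9x headroom left on the X520
```

    This matches the poster's conclusion: swapping NICs changes nothing because the bottleneck is in the storage path (HBA passthrough or the disks), not the network.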
  24. Like
    iJarda reacted to leadeater in 10GbE on FreeNAS - Reaching 1GB/s   
    A large-scale FreeNAS server will outperform a hardware RAID card, but at very small scale - around 8 HDDs and 16GB of system RAM - a hardware RAID card will outperform FreeNAS.
     
    FreeNAS does have read/write caching, and different types of it too: ARC and L2ARC.
     
    ARC:
    In basic terms, ARC is a read and write cache stored in system memory; it always exists on a ZFS system. ARC is the main reason many people recommend large amounts of RAM for ZFS, but in actuality it totally depends on your performance needs.
     
    L2ARC:
    The L2ARC is a second-level read cache placed on fast storage, i.e. an SSD, and it can be as big as you like. It is important to note that L2ARC is not a write cache, so from a performance perspective it only helps read operations. (Integrity for writes not yet committed to the pool is the job of the ZIL, optionally placed on a dedicated SLOG device.)
     
    See here for more details https://en.wikipedia.org/wiki/ZFS#Caching_mechanisms:_ARC_.28L1.29.2C_L2ARC.2C_Transaction_groups.2C_SLOG_.28ZIL.29
     
    Another factor in ZFS performance - and the reason it can perform so well in very large configurations - is that within a storage pool, data is striped across vdevs. For example, if you have 32 HDDs and you put them into 8-disk RAIDZ2 vdevs, you will have a stripe across 4 logical blocks of storage, which increases performance, mainly in IOPS, which can be very important.
     
    Once a vdev goes above a certain number of disks, the performance gain per extra disk starts to drop off, as it does with hardware RAID. So if you know you are going to have many disks, set an upper limit on the number of disks in a single vdev and stick to it.
     
    It is also important to realize that since vdevs are striped in a pool, every one you add increases the risk of failure, because it only takes one vdev failing for the entire pool to fail.
     
    The hard part, which I covered earlier, is that you can't add disks to an existing vdev, so it is often best to buy as many disks as you can, within reason, and never expand the pool. When you finally do run out of space, either create a new pool with new disks and migrate the data, or double the number of disks in the existing pool by adding another vdev of the same configuration.
     
    I'm not a FreeNAS expert, or even a user of FreeNAS/ZFS, so I can't give you specific advice on how to tune and debug it properly, but I do have a strong background in enterprise storage, and all the concepts are the same.
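    The 32-disk example above can be put into numbers - a minimal sketch (back-of-the-envelope arithmetic only, ignoring ZFS metadata/slop overhead; not a real ZFS tool):

```python
def pool_layout(total_disks: int, disks_per_vdev: int, parity: int, disk_tb: float):
    """Return (vdev count, data disks, usable TB) for a pool striped across
    equal RAIDZ vdevs. Ignores ZFS metadata and slop-space overhead."""
    vdevs = total_disks // disks_per_vdev
    data_disks = vdevs * (disks_per_vdev - parity)
    return vdevs, data_disks, data_disks * disk_tb

# 32 disks as 4x 8-disk RAIDZ2 vdevs (the example from the post), assuming 3TB drives
vdevs, data_disks, usable_tb = pool_layout(32, 8, parity=2, disk_tb=3.0)
print(vdevs)       # 4 vdevs -> a 4-wide stripe, so roughly 4x one vdev's IOPS
print(data_disks)  # 24 data disks (8 disks consumed by RAIDZ2 parity)
print(usable_tb)   # 72.0 TB of raw data capacity

# Striping also multiplies risk: losing any one vdev loses the whole pool,
# though each RAIDZ2 vdev itself tolerates up to 2 failed disks.
```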