Falconevo

Member
  • Content Count: 904
  • Joined
  • Last visited

Reputation Activity

  1. Agree
    Falconevo got a reaction from kelvinhall05 in Can't convert MBR disk into GPT   
    Won't make any difference whatsoever; you do not have a drive larger than 2TB, so there is no benefit to a GPT partition table.

    Just because MBR is older does not make it worse. However, if you really want to change it (which I personally wouldn't), you will need to make sure your boot configuration is set to UEFI and not legacy mode. Booting in legacy mode will not allow the Windows UEFI boot loader to function correctly.
     
    It will make absolutely ZERO difference to your system; there is absolutely no point in changing it.
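    If you do decide to go ahead anyway, Windows 10 (1703 and later) ships with the mbr2gpt tool, which converts the disk in place. A rough sketch, assuming the system disk is disk 0 (check yours with diskpart first), run from an elevated command prompt:
     
    mbr2gpt /validate /disk:0 /allowFullOS
    mbr2gpt /convert /disk:0 /allowFullOS
     
    Only run the convert step if validate passes, and remember to switch the firmware from legacy/CSM to UEFI straight afterwards or it won't boot.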
  2. Funny
    Falconevo reacted to aezakmi in Intel Comet Lake Packs Up to 10 Cores (Updated)   
    Looks like this masterpiece will never be outdated

     
  3. Agree
    Falconevo reacted to Electronics Wizardy in Port disk image into a VM?   
    core count won't matter.
     
    The big problem will be getting the bootloader and drivers to work right. Normally the premade tools should work, but there may be errors depending on your exact config.
  4. Agree
    Falconevo got a reaction from dalekphalm in Port disk image into a VM?   
    The process you are looking for is called a P2V, physical to virtual.

    If you want to use Hyper-V, you can use the Microsoft Virtual Machine Converter tool to accomplish it relatively easily:
    https://www.microsoft.com/en-us/download/details.aspx?id=42497

    If you want to use VMware ESXi, you can use the VMware Converter tool for the P2V (requires you to create a free account):
    https://www.vmware.com/products/converter.html
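    Once the converter has produced a VHD/VHDX of the physical disk, attaching it to a new Hyper-V VM is only a couple of PowerShell cmdlets. A rough sketch with placeholder names and paths, Generation 1 assumed because the source machine boots from a BIOS/MBR disk:
     
    New-VM -Name "P2V-Converted" -MemoryStartupBytes 4GB -Generation 1 -VHDPath "D:\VMs\P2V-Converted.vhdx"
    Start-VM -Name "P2V-Converted"
     
    Boot it once, let Windows re-detect the hardware, then install the Hyper-V integration services/drivers if it's an older guest OS.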
  5. Like
    Falconevo got a reaction from leadeater in 10GBase-T switches, which one to choose?   
    Haha, yeah, I kinda wanted to ask whether you want to future-proof beyond 12 ports, but I was half asleep.

    Either way, here are some better options depending on port requirements, but I would advise looking at the second-hand market, as these are big £££/$$$ brand new.

    In no particular order, here are some 10GBase-T switches with 36 ports or fewer; some also have SFP+ capability and/or combo ports.
     
    Arista 7050T-36 
    Cisco SG350XG-2F10
    Cisco SG350XG-24T
    D-Link DXS-1210-12TC
    D-Link DXS-1210-16TC
    Netgear XS724EM
  6. Agree
    Falconevo reacted to UrbanFreestyle in Building a FreeNAS rackmounted server. Need an appropriate chassis for home guest viewing.   
    http://www.norcotek.com/product/rpc-431/  ??
  7. Like
    Falconevo reacted to jooroth18 in Pfsense: WTF is going on?!?!?!?!   
    The ARP hint was extremely useful to me for another issue: I couldn't find my UniFi access points, as they wouldn't show up in the DHCP leases or in the Ubiquiti discovery tool. I saw them listed in the ARP table and was able to enter the correct IPs into the controller. I can finally control my access points again!
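    For anyone else chasing "missing" devices, pfSense exposes the same information under Diagnostics > ARP Table in the web UI, or from a shell:
     
    arp -a
     
    Each entry lists the IP, MAC address and interface, so you can match the access points by their MAC prefix.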
  8. Like
    Falconevo got a reaction from Fesoj in Download speed is slower than Upload   
    Run the test again with Task Manager open and see if you are hitting 100% CPU during the download portion of the test. That will skew your result and make it look worse than it really is. A 1Gbit test on Speedtest.net is quite CPU intensive because the site spins up a large number of download threads over TCP port 8080.
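    If you want a number rather than eyeballing Task Manager, a quick PowerShell sketch that samples total CPU once a second for 30 seconds while the test runs (the counter name assumes an English-language Windows install):
     
    Get-Counter '\Processor(_Total)\% Processor Time' -SampleInterval 1 -MaxSamples 30
     
    If the samples sit near 100% during the download phase, the CPU is the bottleneck rather than the connection.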
  9. Agree
    Falconevo reacted to BuckGup in PIAs prices are going up, what's a good alternative?   
    Making your own. It's cheaper, more secure, and a lot more flexible. You have control of everything, so you can install and tweak it how you want. Right now I pay $20/year for a 1TB/month data cap, a 1Gbit/s connection, and 20GB of SSD storage, and it's powerful enough to run a Pi-hole plus a VPN on top with 14 concurrent users.
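    The post doesn't name a VPN package, but as one example, a minimal WireGuard server config on a cheap VPS looks roughly like this (keys are placeholders you would generate with wg genkey, addresses are just examples):
     
    # /etc/wireguard/wg0.conf on the VPS
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>
     
    [Peer]
    # one block per client device
    PublicKey = <client-public-key>
    AllowedIPs = 10.8.0.2/32
     
    Bring it up with wg-quick up wg0, then point each client's DNS at the Pi-hole running on the same box.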
  10. Funny
    Falconevo reacted to IBAE in Post Linus Memes Here! << -Original thread has returned   
    Linus the destroyer
     
  11. Agree
    Falconevo got a reaction from myselfolli in ERR_NAME_NOT_RESOLVED and DNS_PROBE_FINISHED_NXDOMAIN error cant get it to work HELP!   
    If something is changing your HOSTS file, you have malware on your laptop.

    You can set the hosts file to read-only after clearing it to prevent any further changes, but that won't remove whatever is infecting your laptop.
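    For reference, once the file is cleaned, making it read-only from an elevated command prompt is just:
     
    attrib +r %SystemRoot%\System32\drivers\etc\hosts
     
    (attrib -r undoes it later.) Treat that as a band-aid, though; scan for and remove the malware first.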
  12. Agree
    Falconevo got a reaction from Lurick in Cisco ASA 5512-X - Throughput   
    Use the 5512-X; if you are simply using it for WAN<>LAN filtering with a minimal NAT configuration, it will do around 650-700Mbit/s.

    It can do more; it just depends on the ruleset you have. The heavier the rules, the lower the performance is going to be. Turn off the IPS if you plan to run that on another device.
  13. Like
    Falconevo got a reaction from dalekphalm in Still need help with RAID card   
    The T410 can use an H200 RAID controller, which can be flashed to IT mode for use with any ZFS-based appliance such as FreeNAS. The H200 supports disks over 3TB and SATA3 speeds per port (it has 8 internal 6Gbit/s ports).
     
    You can pick up a pre-flashed H200 from places like eBay; the URL below is an example:
    https://www.ebay.co.uk/itm/Dell-H200-6Gbps-SAS-HBA-LSI-9210-8i-9211-8i-P20-IT-Mode-ZFS-FreeNAS-unRAID/163142320696
     
    You will need SAS 8087 to SATA breakout cables; each SAS port supports up to 4 SATA devices. You have the T410 with the internal bays rather than the front-loading hot-plug version. The drives can still be hot-plugged, but you have to take the chassis side panel off.
    The cable needed is a Mini-SAS 8087 to 4x SATA breakout cable, and they are cheap on eBay.

    With the above you get IT (HBA) mode disks, which work fine for ZFS, hot-plug capability, and sufficient bandwidth for all disks plus future expansion if needed.
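    FreeNAS builds the pool through its GUI, but for illustration, the underlying ZFS command for, say, eight disks on an IT-mode H200 would be something along these lines (device names are placeholders and RAID-Z2 is just an example layout):
     
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7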
  14. Agree
    Falconevo got a reaction from dalekphalm in RAID for 8 1TB Hard Drives   
    If you are using the RAID creation tool in Disk Manager, not only does it create the RAID5 array, it also writes zeros to every drive in the array.

    With eight 1TB SATA drives, which are slow anyway, this process will take a large number of hours.

    If you open a command prompt as admin, type the following:
    diskperf -y
     
    Open Task Manager (if you had it open previously you will need to re-open it)
    Go to the Performance tab
    You should see all 8 disks under exactly the same load (assuming the process hasn't finished yet)
     
    Also, I would NOT recommend using the built-in RAID creation utility; it's prone to serious problems, which is part of why Storage Spaces exists.

    Kill off the RAID5 you have created and use Storage Spaces to create a parity storage pool instead. Eight SATA disks in 'RAID5' are going to be absolutely abysmal for write performance, so do not expect any significant IOPS out of the array. Sequential read speeds will be good and sequential write speeds 'acceptable', but any random disk IO is going to make it crawl with no RAID cache present.
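    For reference, building the parity pool from PowerShell is only a few cmdlets. A rough sketch with placeholder names (run as admin):
     
    # grab every disk that is eligible for pooling
    $disks = Get-PhysicalDisk -CanPool $true
     
    # create the pool and a parity (RAID5-like) virtual disk using all available space
    New-StoragePool -FriendlyName "DataPool" -StorageSubSystemFriendlyName "*Windows Storage*" -PhysicalDisks $disks
    New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "ParityDisk" -ResiliencySettingName Parity -UseMaximumSize -ProvisioningType Fixed
     
    # bring the new virtual disk online and format it
    Get-VirtualDisk -FriendlyName "ParityDisk" | Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS
     
    The write-performance warning above still applies; parity is parity whichever tool creates it.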
  15. Agree
    Falconevo reacted to leadeater in These Servers are TOO EXPENSIVE   
    And stop using Intel 750s; they aren't actually that good.
  16. Informative
    Falconevo got a reaction from DezGalbie in How many cores can a Plex server utilise simultaneously for one stream?   
    The most I have ever seen a stream use is 8 cores for processing; I haven't ever seen a transcode process use more than that.
  17. Agree
    Falconevo reacted to vrod in NIC Bonding with 2 different Networks   
    pfSense would be able to do what you need, both load balancing and failover.
  18. Informative
    Falconevo got a reaction from Kalm_Traveler in Ryzen 7 2700x server build - DHCP and DNS on Windows Server 2016 or PFSense?   
    lol sorry, that was me being retarded
     
    pfSense has a better DHCP interface than Windows. Use pfSense for DHCP and Windows for DNS (if you are using a domain).
    Windows 10 Pro clients can join a domain.
  19. Agree
    Falconevo got a reaction from oasismin in File structure Hierarchy   
    Yeah, Mikensan is bang on. I would start by separating each job/project into its own ID number so it can be easily located, referenced, and archived if no longer required.
     
     
  20. Agree
    Falconevo reacted to Mikensan in File structure Hierarchy   
    You definitely want to automate the archival process, or else nobody will ever do it. The only gotcha is deciding how and why things get archived.
     
    The engineering company I worked for kept it pretty simple: Department > Active Projects > Project ID. The engineers did drawings (CAD) to apply the company's products to the customer's building (scaffolding / formwork / automatic climbing system). The coolest thing was their ACS, which freed up cranes typically used for formwork.
     
    From inside the Project ID folder you would break it out into whatever you needed. Otherwise you end up going in and out of other folders to pull resources for a single project.
    The only problem this created was that the engineers put way too much information into their folder names, and back then Windows really did not like deep folders plus long names.
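    As a rough illustration of that layout (names made up):
     
    Engineering\
        Active Projects\
            PRJ-10421\
                Drawings\
                Calculations\
                Correspondence\
        Archived Projects\
            PRJ-09877\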
  21. Informative
    Falconevo got a reaction from Kalm_Traveler in Ryzen 7 2700x server build - DHCP and DNS on Windows Server 2016 or PFSense?   
    pfSense has a far better DHCP interface than Windows does, so I would recommend using pfSense for DHCP.

    Are you planning on creating a Windows domain? If so, you will need to set the DNS servers in the DHCP pool to the domain controller(s).
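    Once the pool is handing out the DC as DNS, a quick PowerShell sanity check from a client (the domain name and server address here are placeholders):
     
    Get-DnsClientServerAddress -AddressFamily IPv4
    Resolve-DnsName yourdomain.local -Server 192.168.1.10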
  22. Informative
    Falconevo got a reaction from TopHatProductions115 in NVMe RAID on ESXi ??   
    Yeah, I've been testing the tri-mode LSI controllers; they have some serious performance and support NVMe RAID using U.2 drives.

    It's just a pain how expensive the chassis that support this stuff are at the moment.

    I also have an Intel Ruler eval system in, if you want any info on that beast.
  23. Informative
    Falconevo got a reaction from TopHatProductions115 in NVMe RAID on ESXi ??   
    You will require vROC (Virtual RAID on CPU) to be supported by the motherboard to use NVMe drives in RAID.

    vROC only arrived on the latest eval units; the R730, as good a chassis as it is, doesn't have support for vROC. In addition, vROC needs to be licensed for your requirement, which means a premium license that costs £££. AMD's equivalent is license free, so keep that in mind.
     
    I'm also unsure about vROC driver support on ESXi. I haven't tested vROC on ESXi myself, as I've only tested Server 2016, Server 2019, CentOS and Red Hat.
    Here's vROC running in Windows using RSTe (screenshot):
     

     
    vROC also has some limitations with dual-socket configurations; for example, you can't create a bootable VMD (Volume Management Device) volume that spans multiple CPU sockets. Because the PCIe devices are bound to a specific CPU for their PCIe lanes, it doesn't allow for it. You can create a VMD volume which spans multiple CPUs, but it cannot be used to boot, and some OSes have problems seeing the device correctly due to the early driver implementation.
     
    One caveat to using a VMD: currently nothing other than Intel's own software can correctly read the S.M.A.R.T. data of the NVMe drives behind the VMD. This is something I have raised with Intel recently for their team to review, and I'm sure they will sort it out.
     
    Hope this helps a bit. Give me a shout if you want any more specific info, as I still have a vROC-capable NVMe unit in for storage eval testing.
  24. Like
    Falconevo got a reaction from leadeater in NVMe RAID on ESXi ??   
    The reason for the performance drop is quite simple: the PCIe LSI card sits on x8 or x16 PCIe 3.1 lanes. It can present up to 24 NVMe drives via a PCIe PLX switch, but 24 x4 drives add up to 96 lanes' worth of bandwidth behind that x8/x16 uplink, so they cannot all communicate at full speed at the same time and each drive is effectively reduced to around x1 of bandwidth. Some performance is also lost when you bring hardware RAID into the NVMe stack, as the SSD has its local DRAM cache disabled and the RAID controller's cache is used instead; most tri-mode adapters have a 4-8GB cache, which is large by RAID controller standards of past years.
     
    This is why vROC exists: it eliminates that reduction in performance for the NVMe drives and allows each of them up to x4 PCIe lanes, depending on the configuration, the system layout, and whether it is single or dual socket. vROC has some pitfalls, such as not being able to do complex RAID configurations, but for the most part it covers the common requirements: RAID0, RAID1, RAID5 and RAID10. You can also enable the DRAM cache on the NVMe SSDs behind vROC while in RAID, but this comes with a significant warning about potential data loss.
     
    Software-defined storage does a much better job of getting performance out of NVMe drives that are not behind a RAID controller. As you are using ESXi, I would look at using vSAN, with a PCIe HHHL NVMe drive for the cache tier and commodity mechanical HDDs for the back-end storage.
  25. Agree
    Falconevo got a reaction from leadeater in NVMe RAID on ESXi ??   
    Also, just a note: you would be better off running NVMe without hardware RAID and using software technologies for redundant storage, such as vSAN if you are using ESXi, Storage Spaces Direct if you are on Windows, or something like Ceph on Linux.
     
    Don't get me wrong, I'm in the process of testing NVMe RAID because it comes up in conversation a lot when discussing redundancy, as most people are scared to death of 'software'-based RAID tech even though that is the future.
     
    If you have 3 or more ESXi nodes, I would advise going with vSAN, getting a single NVMe drive (PCI Express HHHL) for the caching disk, and using commodity disks for the underlying storage, something with better £/$ per GB.
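    For the Storage Spaces Direct route mentioned above, once the Windows nodes are clustered it is a single cmdlet to enable; a rough sketch with placeholder node/volume names (run Test-Cluster first):
     
    New-Cluster -Name S2D-Cluster -Node Node1,Node2,Node3 -NoStorage
    Enable-ClusterStorageSpacesDirect
    New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMData" -FileSystem CSVFS_ReFS -Size 2TB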