
mynameisjuan

Member
  • Content Count

    3,969
  • Joined

  • Last visited

Reputation Activity

  1. Informative
    mynameisjuan got a reaction from Radium_Angel in Gaming on STARLINK!!   
    1. Many people here are posting their latency, but it's dependent on the location of the endpoint and the distance to it. 30 ms on average to sites or game servers is not bad; 30 ms to your ISP's gateway is on the higher side. It varies based on where you are connecting to.
     
    2. DSL (ADSL/VDSL) actually has lower latency than fiber: the signal propagates faster through copper than through fiber, and ATM cells lacking inter-frame gaps means less latency. This is of course minimal, especially in the real world. The problem is that copper is trash, which means DSL (and coax to an extent) requires interleaving, and that introduces latency to account for RS/FEC corrections. The worse the SNR, the larger the interleaving window and hence the more latency.
    If you have DSL and can talk your ISP into switching from interleaved to fast mode, you can drop your latency down to fiber levels, at the cost that a bad copper line will give you some packet loss.
     
    Ex-DSL engineer here, and I will never go back.
  2. Like
    mynameisjuan got a reaction from Lurick in setting up a simple LAN   
    If they are connected via a 10G switch left at its defaults, they will be in the same L2 domain and can communicate directly with each other. Otherwise, if the PCs are directly connected and their second ports are connected to the 1-gig switch, you need to configure a separate subnet so Windows knows which interface to send traffic out of.
     
    Also, regarding your first post: the "LAN" ports on consumer routers are bound to a switch chip, which means that as long as the devices are within the same subnet there is no performance hit on the router, because the router's CPU is not handling the traffic; the switch ASIC is.
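    The separate-subnet logic above can be sketched with Python's ipaddress module. The addresses here are made up for illustration; the point is that the OS picks the egress interface by checking which directly connected network contains the destination:

```python
import ipaddress

# Hypothetical addressing: 10G NICs on one subnet, 1G NICs on another.
ten_gig_a = ipaddress.ip_interface("192.168.10.1/24")  # PC A, 10G port
ten_gig_b = ipaddress.ip_interface("192.168.10.2/24")  # PC B, 10G port
one_gig_a = ipaddress.ip_interface("192.168.1.10/24")  # PC A, 1G port

# Traffic to PC B's 10G address leaves the 10G interface, because the
# destination falls inside that interface's connected network.
dest = ipaddress.ip_address("192.168.10.2")
print(dest in ten_gig_a.network)  # True  -> routed out the 10G NIC
print(dest in one_gig_a.network)  # False -> not via the 1G NIC
```

    If both ports sat in the same subnet, both checks would be true and Windows could pick either interface, which is exactly the ambiguity the separate subnet avoids.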
  3. Like
    mynameisjuan got a reaction from Electronics Wizardy in setting up a simple LAN   
    If they are connected via a 10G switch left at its defaults, they will be in the same L2 domain and can communicate directly with each other. Otherwise, if the PCs are directly connected and their second ports are connected to the 1-gig switch, you need to configure a separate subnet so Windows knows which interface to send traffic out of.
     
    Also, regarding your first post: the "LAN" ports on consumer routers are bound to a switch chip, which means that as long as the devices are within the same subnet there is no performance hit on the router, because the router's CPU is not handling the traffic; the switch ASIC is.
  4. Like
    mynameisjuan got a reaction from GhostRoadieBL in setting up a simple LAN   
    If they are connected via a 10G switch left at its defaults, they will be in the same L2 domain and can communicate directly with each other. Otherwise, if the PCs are directly connected and their second ports are connected to the 1-gig switch, you need to configure a separate subnet so Windows knows which interface to send traffic out of.
     
    Also, regarding your first post: the "LAN" ports on consumer routers are bound to a switch chip, which means that as long as the devices are within the same subnet there is no performance hit on the router, because the router's CPU is not handling the traffic; the switch ASIC is.
  5. Like
    mynameisjuan got a reaction from dalekphalm in Looking for new Networking Equipment - Gateway/Firewall + AP   
    It's confusing because RouterOS gives you low-level configuration, no different from any other enterprise CLI hierarchy, and if you could see under the hood of Meraki and UniFi, the configuration would look similar. It's just all hidden from you, and the hidden boxes that make for a smoother experience get checked automatically. Personally, I'd rather know exactly what my hardware is doing, but that doesn't mean it cannot be a pain point. Also, a controller is not necessary.
     
    Most issues almost always revolve around the country band being set incorrectly. For example, if you are in the US it needs to be set to "united states 3", not "united states", which causes clients to constantly hop frequencies, leading to lost airtime (hence a reboot fixing it for a period of time). Another factor is using the CAPsMAN tunnel (which is how the wiki tutorial sets it up) for forwarding traffic: if the device handling it does not have the resources, it's not going to be a fun time.
     
    The only downsides of going with MikroTik APs are 802.11k (roaming) and passive PoE. The former is not as terrible as it sounds, as additional config will get you nearly the same real-world experience (VoIP aside, where the transition is smoothest); the latter needs an injector, or, if you have another MikroTik, most tend to have at least one passive PoE out. Also, don't expect Wi-Fi 6 for some time.
     
    Probably the best setups at the moment for your situation:
    - RB4011
    - hAP ac (set up as an AP), which can be powered by the 4011
     
    An additional point in its favor: with the beta you can run WireGuard directly on the 4011, and thanks to its hardware you can expect almost full gigabit (excluding overhead), plus basic DDNS support. For another VPN service or an IPS you are going to have to go the Untangle/OPNsense route, but I cannot justify the extra power, cost, and troubleshooting time.
     
    Personally I'd avoid UniFi completely. It's riddled with software and GUI bugs, the GUI is terribly disorganized IMO, the controller is unnecessary, their "IPS" is useless, and now they are destroying their customer base. The APs are solid, but the need for a controller is asinine.
     
    So it comes down to how much you need in terms of configuration and depth.
     
  6. Agree
    mynameisjuan got a reaction from LAwLz in Looking for recommendations - 8+ port 10gb switch   
    Multigig is the standard now for most deployments. The extra cost for 10G switches/routers as well as NICs is not worth it, with multigig now trickling into built-in NICs. Also, re-running cabling beyond Cat5e (which can do 10G up to ~30 m) is not cheap depending on the building layout, or may not be allowed depending on the lease. Unless you see regular saturation at a gig, it's a waste to upgrade for "future proofing" or "why not".
     
    LTT is the definition of the type of customer that needs high throughput to the host. Even today, most businesses barely use more than a few megs, and realistically, $1000+ can be spent elsewhere in the network where it will provide a larger benefit.
     
    1. The switch/NICs will not last 20 years. You should be looking at your needs at this point in time, as well as your expectations for the next 2 years.
    2. What is your current utilization? Are you seeing the desktops constantly hit a gig to the server? What type of traffic is this server handling?
    3. What do you have for a firewall/router?
     
    You mentioned 4,000 customers on consumer gear, which most likely means it's a flat network. I would spend the money on outsourcing an engineer to properly set up the network, and on a proper firewall with proper policies and IPS (specifically anti-malware). There is no point in spending money on a 10G switch when, not if, one of the PCs gets ransomware and compromises your entire business, or unconstrained malware gets full access to your network and your customer information is leaked.
     
    Unless your business is being crippled by a gig, please don't upgrade because "why not?"; think of the other weak points where the money would be better spent.
     
  7. Agree
    mynameisjuan got a reaction from NZLaurence in We Got 100 Gigabit Networking!... HOLY $H!T   
    This is the reasoning behind my point about why SM is becoming the primary and only option for many enterprises. Even for inter-rack connections, the less cabling that needs to be replaced (instead, optics can just be swapped), the easier the migration, especially at scale. When it comes to intra-rack connections this becomes especially important, so fiber doesn't have to be re-run.
     
    There are many cases where moving to SM only makes much more sense from a management, cost, and inventory perspective, regardless of whether MM/DAC just work.
    - Inventory: you need extra spares of SM/MM patches in multiple lengths, plus 2-3x the count of replacement optics in case of a failure. Don't underestimate coordination and inventory during migrations.
    - Upgrades: upgrading optics, moving to DWDM, optic compatibility, etc. is seamless, as SM is typically the first implementation and does not require different fiber.
    - Cable management: SM is the cleanest way to do cross connects, inter- and intra-rack, due to the thickness of the jacket. Even against MM, it's a substantial difference. With inter-bay panels becoming a huge component over the past few years, being able to run 144-count SM fiber, equivalent in thickness to two DAC cables, between racks adds to the flexibility.
    - Cost: SM optic and fiber costs have narrowed to the point where it becomes cheaper in the long run to just implement SM from the start. This ties into inventory, as mentioned, which means holding additional stock; labor is not cheap, and re-running fiber and cable management are a cost to the company. Over time the costs add up and SM will be cheaper.
     
    This doesn't apply to every situation, but in ISP/DC/enterprise environments, SM-only is becoming commonplace.
  8. Agree
    mynameisjuan reacted to LAwLz in We Got 100 Gigabit Networking!... HOLY $H!T   
    I was hesitant to say that because I don't know how sensitive 100Gbps optics are. I only have experience using Cisco's own AOC cables for 100Gbps, and those obviously work just fine.
    For 10Gbps though, yeah, totally agree. Not as sensitive as some people think. Just clean them if they don't work and they should work just fine. It's only a problem if you've got a really long run with multiple ODFs between the ends. But then we're talking about distances that aren't possible with copper anyway, so it's not like you have a choice.
     
     
    Seems like my info is a bit outdated. Last time I checked, Cisco did not offer 10Gbps copper SFPs because they thought they were not possible to make without going outside the SFP+ spec. Third parties still sold them, but using them was strongly recommended against. That's why I said they "weren't really a thing". Sure, they existed, but there was a high chance they did not conform to the spec, and they had massive drawbacks (like heat and range, as you said) and therefore were not recommended.
     
    Looking at Cisco's website now, it does seem like they have some 10Gbps copper modules though. Not sure if efficiency has improved so that it is within spec, or if they just said "fuck it".
     
     
    Probably just made it up on the spot to sound knowledgeable. Linus tends to do that.
     
     
    Yes, DAC has higher latency.
    The rule of thumb is that each DAC link introduces around 300 ns of latency, while a SM/MM SFP+ link introduces 100 ns.
    It's not a massive difference, but let's say you have 6 links between two servers. With only DAC cables you would be adding about 1.8 µs of latency, and with only MM fiber you would be adding 0.6 µs.
    The propagation speed doesn't matter, and I am not even sure the numbers you are quoting are correct. What I learned is that you can just assume 2/3 the speed of light for both mediums.
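    A quick back-of-the-envelope sketch of the rule of thumb above (the per-link figures are the quoted 300 ns and 100 ns, not measured values):

```python
# Rough per-link latency, in nanoseconds (rule-of-thumb figures quoted above).
DAC_NS = 300
FIBER_NS = 100

def path_latency_us(links: int, per_link_ns: int) -> float:
    """Added latency in microseconds for a path with `links` hops."""
    return links * per_link_ns / 1000

# 6 links between two servers:
print(path_latency_us(6, DAC_NS))    # 1.8 (us, all-DAC path)
print(path_latency_us(6, FIBER_NS))  # 0.6 (us, all-fiber path)
```

    So the difference over a 6-hop path is on the order of a microsecond, which only matters for latency-critical workloads.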
  9. Agree
    mynameisjuan got a reaction from LAwLz in We Got 100 Gigabit Networking!... HOLY $H!T   
    This is not a concern, and I do not know why Linus brought it up. The only processing is EC and signalling, and that happens within the ASIC tied to the port, before the switching ASIC. That processing is not tied to the switch ASIC or the CPU, and I would like to know where Linus got that information. The two are not related at all.
     
    The actual reason it's not copper is that 10-gig copper runs very hot, even more so with copper SFP+. 24/48-port enterprise 10-gig-and-up switches will be SFP-only purely from a heat and power standpoint, and even then they have a limit on how many copper SFPs can be used in the switch.
     
    Fiber is more robust than people are making it out to be. It takes a fair bit of dust before errors build up, and even then a few clicks with the cleaner and that's it. In most issues I run into with customers, DACs are almost always part of the problem.
     
    Copper SFP+ modules are most definitely a thing. They run hot, and the supported length is cut down to a few meters because of it.
  10. Like
    mynameisjuan got a reaction from Lurick in We Got 100 Gigabit Networking!... HOLY $H!T   
    Fiber is not as fragile as it is made out to be. No 90s, don't pinch, and don't pull from the collar are pretty much the only rules, and I have seen some pretty mangled patches that work flawlessly when hit with the OTDR. But people hear "glass" and refuse to touch it.
  11. Agree
    mynameisjuan got a reaction from leadeater in We Got 100 Gigabit Networking!... HOLY $H!T   
    Fiber is not as fragile as it is made out to be. No 90s, don't pinch, and don't pull from the collar are pretty much the only rules, and I have seen some pretty mangled patches that work flawlessly when hit with the OTDR. But people hear "glass" and refuse to touch it.
  12. Like
    mynameisjuan got a reaction from dogwitch in We Got 100 Gigabit Networking!... HOLY $H!T   
    Fiber is not as fragile as it is made out to be. No 90s, don't pinch, and don't pull from the collar are pretty much the only rules, and I have seen some pretty mangled patches that work flawlessly when hit with the OTDR. But people hear "glass" and refuse to touch it.
  13. Agree
    mynameisjuan got a reaction from leadeater in We Got 100 Gigabit Networking!... HOLY $H!T   
    There should be a termination for whoever made that decision. Working with and around even a handful of DACs is a nightmare, and from a support standpoint, given their failure rates, even more so.
     
    There is a reason SM optics are becoming the only option in enterprise: companies are ditching MM and DAC as the SM cost difference diminishes. No bulk, no replacing an entire DAC, no replacing MM runs when upgrading past what MM can do, patch cables are cheap if a kink occurs, less heat, less power... Yeah, I can back up your hatred of DACs, but I'm throwing MM in the same pile.
  14. Informative
    mynameisjuan got a reaction from scottyseng in We Got 100 Gigabit Networking!... HOLY $H!T   
    There should be a termination for whoever made that decision. Working with and around even a handful of DACs is a nightmare, and from a support standpoint, given their failure rates, even more so.
     
    There is a reason SM optics are becoming the only option in enterprise: companies are ditching MM and DAC as the SM cost difference diminishes. No bulk, no replacing an entire DAC, no replacing MM runs when upgrading past what MM can do, patch cables are cheap if a kink occurs, less heat, less power... Yeah, I can back up your hatred of DACs, but I'm throwing MM in the same pile.
  15. Informative
    mynameisjuan got a reaction from Donut417 in How to fix bufferbloat?   
    That is still all on the ISP's end.
     
    All bufferbloat is, is an output queue getting maxed out on a switch or router. This is all upstream of customers. Buying a router to help with bufferbloat is nothing but smoke and mirrors.
     
    Nothing a home router can do can prevent it. Sure, you can slow down TCP, but that just helps maintain the connection. Speeds and latency are still going to be shit.
     
    Take, for example, one of our backbones. We have about 700 customers off a DSL ring where the equipment is limited to 1 gig. That's it: no LAG, no bonding, no more. It will finally be replaced by the end of the month, but in the meantime the problem is that 1.6 gigs of traffic is trying to get through at peak hours. The SWITCH is suffering from full buffers, not the customers.
     
    No router with any sort of QoS feature will help. Sure, your connection will still be active, but it's going to be shit service. Even if the link were just cresting a gig, the amount of assistance you would get would be unnoticeable. Nothing you do will affect latency.
     
    Sure, some UK telcos under-subscribe their lines, but that is definitely not the case in the US and other parts of the world. The ideal is 6:1 downstream compared to the total bandwidth an access device can utilize, and many ISPs follow this rule. Access platforms are just starting to release 40-gig access devices, so oversubscription will soon be gone. But until then, it's your ISP's responsibility to maintain their devices and upgrade when necessary.
     
    Bufferbloat is not a thing a router can fix.
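    The backbone example above can be illustrated with a toy drop-tail queue model (per-second granularity, a big simplification of real switch buffers; the 1 Gb/s egress and 1.6 Gb/s peak ingress are the figures quoted, the 0.1 Gbit buffer is a made-up size):

```python
def simulate_bufferbloat(ingress_gbps, egress_gbps, buffer_gbits, seconds):
    """Toy drop-tail queue: track standing queue and tail drops per second."""
    queued = 0.0   # bits waiting in the buffer, in Gbit
    dropped = 0.0  # total bits tail-dropped, in Gbit
    for _ in range(seconds):
        queued += ingress_gbps           # traffic arriving this second
        queued -= min(queued, egress_gbps)  # egress drains at line rate
        if queued > buffer_gbits:        # buffer full -> tail drops
            dropped += queued - buffer_gbits
            queued = buffer_gbits
    delay_ms = queued / egress_gbps * 1000  # standing-queue delay
    return delay_ms, dropped

delay, drops = simulate_bufferbloat(1.6, 1.0, 0.1, 10)
print(f"standing queue delay ~{delay:.0f} ms, ~{drops:.1f} Gbit dropped")
```

    The buffer sits permanently full because ingress exceeds egress, so every packet eats the full queuing delay and the excess is dropped at the switch. No QoS box downstream of that queue can change either number, which is the point.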
  16. Agree
    mynameisjuan got a reaction from DrKondziu in Analysis of the USB-C dongle/headset mess. Bring back the 3.5 mm jack!   
    USB-C is a great port but a shit show when it comes to standards and compatibility.
  17. Agree
    mynameisjuan got a reaction from InterstellarShield in Possibility of 3rd part app "hacking" through wifi   
    This is no different from when TeamViewer was breached and people began logging in remotely to other people's machines. Do we blame the PCs?
     
    This isn't an IoT problem, this is a security problem. Sure, there might be more pressing issues, like Ring being breached and letting people creepily watch you, but IoT devices are the least of your worries. They tend to run locked-down Linux distros or limited custom OSes, generally extremely limited in commands. Even if breached, they can usually only perform their few daily functions.
     
    You know what's worse? PCs and phones. Devices that have full functionality to execute, install, download, and discover... they can be used to infect other devices, act as a proxy for data gathering, and listen in and watch just as much as a smart speaker. IoT devices don't generally have this level of execution. PCs and phones are much more dangerous in the scope of things.
     
    In the end it's about security, and companies are just now starting to take it somewhat seriously.
  18. Agree
    mynameisjuan got a reaction from beat_g in Mac? PC? You don’t have to choose..   
    One day LTT will realize Windows has more use than just gaming
  19. Agree
    mynameisjuan reacted to Lurick in Building a router for future isp?   
    I don't see 10Gb rolling out to the average consumer for at least another 5 to 10 years. Sure, there are sparse pockets, but that's the exception and not the norm. If you're not getting 1Gb for another 2+ years, then I wouldn't worry about 10Gb for a while.
  20. Like
    mynameisjuan got a reaction from leadeater in TCP vs UDP   
    Confused as to why there's a post about this, but yes, you are correct.
     
    But the current movement is actually slowly shifting more heavily towards UDP, for the exact reason of overhead. This has already been a thing with protocols such as RTP and RTSP, which assist with guaranteed data delivery. For example, Google's QUIC uses UDP instead of TCP and handles packet reordering at L7, which gives you the benefits of UDP with few drawbacks.
     
    Protocols like QUIC are being developed and pushed towards UDP because TCP just cannot keep up: re-transmitted traffic (traffic lost due to drops or just latency) makes up single-digit percentages of total bandwidth on ISP backbones, which is too much. Cloudflare is probably using something similar; I have not looked into it yet.
     
    But all in all, UDP is NOT more resistant to drops without assistance. When Linus said "built on a protocol that is more resistant to drop-outs", he was referring to my point about QUIC and how it uses assistance to guarantee delivery.
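    A minimal loopback sketch of that point: raw UDP gives the application no ordering or delivery guarantees, so a QUIC-like protocol has to carry its own sequence numbers and detect gaps/reordering at the application layer. This is a toy, not QUIC; the 4-byte sequence header is made up for illustration:

```python
import socket

# Receiver: plain UDP socket on an ephemeral loopback port.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))
recv.settimeout(2)
addr = recv.getsockname()

# Sender: deliberately emit sequence 3 before 2 to mimic reordering.
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in (0, 1, 3, 2):
    send.sendto(seq.to_bytes(4, "big") + b"payload", addr)

# The kernel hands datagrams up as-is; sequencing is the app's job.
expected, out_of_order = 0, []
for _ in range(4):
    data, _ = recv.recvfrom(1500)
    seq = int.from_bytes(data[:4], "big")
    if seq != expected:
        out_of_order.append(seq)  # app layer must reorder or request resend
    expected = seq + 1

print("out-of-order sequence numbers seen:", out_of_order)
recv.close()
send.close()
```

    TCP would have hidden this entirely (at the cost of head-of-line blocking); QUIC does the equivalent bookkeeping per stream in userspace over UDP.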
  21. Informative
    mynameisjuan got a reaction from imcaspar in Managed Switch to use 2 ISPs on multiple Sub-Networks?   
    To preface: this might be out of your reach, and even out of your price range, for this situation. Nothing against you; real load balancing across multiple WANs, with a proper router, firewall zones to avoid asymmetric routing problems, etc., requires experience, and if you want to simplify it, SD-WAN will cost you big time. It becomes overwhelming pretty quickly.
     
    If you are brave and on a budget, your best bet here is a used Juniper SRX240, or, if you're willing to spend a bit extra for modern supported hardware, an SRX320. They will support all your requested features, as these models have 6 ports in the base configuration, and the 320 has a PoE version. There are 2 slots where you can add additional ports if you need them, and even a SIM card slot.
     
    They are solid firewalls: they can route a gig no problem, offer almost full switching functionality, and have an IPS (intrusion prevention system), roughly an anti-virus for the network, if needed, for a cost.
     
    This is quite a task, and Juniper is not something you just jump right into. Nothing else comes right to mind other than a pfSense box or Ubiquiti, but those are not routing and switching in one box and don't support load balancing on the WAN.
     
    If you decide to go this route, update us here and I can give you some templates to use if need be!
  22. Like
    mynameisjuan got a reaction from mtz_federico in How are IPv6 addresses allocated?   
    Boy, that is a rabbit hole.
     
    People need to know and understand this, because it's actually a large reason why IPv6 is not being deployed: basically, fear. People who do not understand IPv6 and the job of a firewall can't wrap their heads around their devices having public IPs.
     
    NAT is not security, but NAT, more specifically PAT, has security-LIKE side effects. It's not security, nor a replacement for a firewall.
     
    Also, to contribute on the amount of IP space: within our current global unicast allocation of 2000::/3 there are 2^61 /64s, enough to give every human on earth right now not one /64 but nearly 300 million /64s each.
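    The arithmetic behind that, sketched with Python's ipaddress module (the 8 billion population figure is a round-number assumption):

```python
import ipaddress

global_unicast = ipaddress.ip_network("2000::/3")
# Number of /64 subnets inside 2000::/3: one per combination of the
# remaining 64 - 3 = 61 prefix bits.
subnets_64 = 2 ** (64 - global_unicast.prefixlen)
population = 8_000_000_000  # round-number assumption

per_person = subnets_64 // population
print(f"{subnets_64:,} /64s total -> {per_person:,} per person")
```

    And every one of those /64s is itself 2^64 addresses, which is the real mind-bender of IPv6 sizing.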
  23. Like
    mynameisjuan got a reaction from Lurick in Managed Switch to use 2 ISPs on multiple Sub-Networks?   
    To preface, this statement might put this out of your reach and even price range for this situation. Nothing against you, real load balancing across multiple WANs with proper router, firewall zones to avoid asymmetrical routing problems, etc... require experience and if you want to simplify it, SDWAN which will cost you big time. It becomes overwhelming pretty quick.
     
    If you are brave and on a budget  your best bet here is a used Juniper SRX240, or willing to spend a bit extra for modern supported hardware, SRX320. They will be able to support all your requested fields as these modules base have 6 ports and the 320 has a PoE version. There are 2 slots where you can add additional ports if you need them, even a SIM card slot if you need.
     
    They are solid firewalls, can route a gig no problem, almost full switching functionality and has IPS (intrustion prevention system) aka like an anti-virus if needed for a cost.
     
    This is quite a task and Juniper is not something you just jump right into. There is nothing else that comes right to mind other than a PFsense box of Ubiquiti but they are not routing and switching in one box and dont support load balancing on the WAN. 
     
    If you decide to go this route you can update us here and I can give you some templates to use if need be!
  24. Agree
    mynameisjuan reacted to Lurick in How to change NAT type ? Done few things but still.   
    So because 99.something percent of all users don't have the issue, the entire industry should cater to a small fraction of a percent of people and give them public IPv4 addresses so they can game?
    CGNAT is proper addressing. It might not be great for some people, but for the rest of the population it works just fine.
  25. Agree
    mynameisjuan got a reaction from leadeater in Packet loss suddenly 126% !?   
    Testing packet loss with a speed test will always show packet loss. It's how TCP and its congestion algorithms work. Test with that site and others like fast.com before jumping to conclusions.
     
    And no, calling them over packet loss in a speed-test app is not the go-to answer. Sure, they can tell you when something is wrong, but when it comes to packet loss, the reason why is not always cut and dried.