Everything posted by mynameisjuan

  1. Would you mind adding the config in a code snippet instead? I tend to prefer not to download any files. A diagram would be good if possible, but at a minimum, a few examples of source/destination IPs would be needed.
  2. If they said the .230 is "usable", then that will be your IP, with .229 as your GW (quick sketch below):
     IP Address: 62.xx.xxx.230
     Subnet Mask: 255.255.255.252
     Default Gateway: 62.xx.xxx.229
     .231 would be the BC (broadcast) address and that would not be assigned.
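     As a minimal sketch of what that looks like applied on a Linux box (using the documentation range 192.0.2.228/30 as a stand-in for the real 62.xx.xxx block, and eth0 as a placeholder interface):
        ip addr add 192.0.2.230/30 dev eth0      # .230 = the "usable" address they gave you
        ip route add default via 192.0.2.229     # .229 = the ISP's gateway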
  3. The SRX configuration (sanitized) and a very simple diagram with example IPs would help.
  4. There will always be overhead at L1/L2. Sure, when you're dealing with sub-linerate services, provisioning to compensate for overhead is not the end of the world and can be a cost measure. However, if you're paying for 300/50 and you run a speedtest and get ~285/45, you are still getting the speeds you paid for (rough math below).
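     Rough math at a 1500-byte MTU, assuming plain Ethernet framing with no VLAN tag or TCP options, so treat these as ballpark figures:
        # payload per frame = 1500 - 40 (IP + TCP headers) = 1460 bytes
        # bytes on the wire = 1500 + 38 (preamble/SFD, Ethernet header, FCS, inter-frame gap) = 1538 bytes
        echo "300 * 1460 / 1538" | bc -l    # ~285 -> what you see on a 300 plan
        echo "50 * 1460 / 1538"  | bc -l    # ~47.5 on a 50 plan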
  5. The video feels nearly the same as every other LTT video discussing anything to do with networking. It would have been better to demonstrate real-world tests with various simulated connections, using shaping/policing or forwarding through a Linux VM and using tc to adjust loss/latency/jitter (rough example below). I have been saying it for years: the majority of people overestimate how much throughput they actually use outside of raw downloads/uploads, and they would be surprised how low you can go before you begin to notice the impact.

     My career has been entirely in the SP field, and at every provider I have worked at we performed audits multiple times a year to observe traffic patterns. Not much has actually changed in recent years. Focusing on the customers with 1G plans, 99.5th-percentile peaks still sit around 125 Mbps, with the 99.9th percentile around 500 Mbps. And those are just peak rates; the mean sits anywhere from as low as 5 Mbps to around 15 Mbps. It hasn't really changed, because the goal is a better experience, and a service is of course going to optimize by reducing the latency and throughput of its traffic. I would recommend LTT change up this repeated format and focus more on real-world demos with side-by-side comparisons of bandwidth.

     The primary reason is billing and the variety of hardware deployed. Yes, it sucks, but accounting is already a nightmare and there is an administrative limit to the number of plans to offer, for both the SP and the customer. Also, 900/180 on a 1G/200 plan is expected with overhead.
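     For anyone wanting to try that kind of demo, a Linux VM in the forwarding path only needs a couple of commands; a rough sketch (run as root, with the interface name and values purely as examples):
        tc qdisc replace dev eth0 root netem delay 40ms 10ms loss 0.5%   # add latency, jitter and loss
        tc qdisc show dev eth0                                           # confirm what is applied
        tc qdisc del dev eth0 root                                       # remove the impairment when done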
  6. The irony in that statement: Cisco is considered vendor-locked whereas Ubiquiti isn't. Learning IOS-* paves the way for an easy transition to a large majority of other vendors.

     Even the most regarded GUIs are meh at best, and it becomes clear once you get familiar with any NOS's CLI. You will never get the responsiveness, the verbose/condensed output, the more fluid configuration, or the multitude of ways to interface with the device from a GUI the way you do from the CLI. CLI is still king and will be the go-to for the foreseeable future. Don't fall into the trap that GUIs are easier, because in most instances they are not (there are exceptions, of course, for specific configs). Some GUIs can make things more convoluted or tedious, requiring a dozen or more steps across various unrelated directories where the same config would be a few lines at worst. Unifi is notorious for this, and it's one reason why I consider it among the worst GUIs on the market. Pretty != good/easy. GUIs have their use cases, but each one is easily replaced by almost any NMS where possible.

     show | compare
     commit check
     commit confirmed
     commit and-quit
     FTFY

     RFCs are not standards; they are strongly recommended guidelines to abide by.
  7. There are multiple levels and scopes of resiliency and redundancy, and an increase in cost and complexity for each level introduced. Just because the redundancy is not geo-diverse and geo-redundant does not mean it's not redundant. Redundant circuits from the same SP are still redundancy, just at a more basic level, and do provide benefits on their own. Depending on the SP's topology/design, this may provide decent redundancy, excluding the potential SPOF of a shared fiber bundle. At a minimum, LTT should have a second circuit even if the only option is with the same provider. At the very least it will cover instances when their FW craps out or they need to perform work during business hours. While the cost of downtime depends on other factors, their business relies on revenue that requires connectivity.
  8. I mean, yes, but that is steering away from the underlying concept and purpose of the OSI and TCP/IP models. Again, they are models, not strict guidelines, in the same vein that RFCs are not standards. The intent is really to associate functions with the layers they are involved in on the wire or in the forwarding domain. This is what I was saying, and it leads into my next statement.

     I said "almost every protocol", not every protocol, which as you mentioned would leave only Eth, IP/ISO, TCP/UDP (debatable) and a handful of others. I mentioned L2/L3 headers because the model really lives in the forwarding domain. Once you ignore that and ignore encapsulation, since the headers are stripped off and the final payload, the protocol in question, is processed by the software (this is the entire concept of networking fundamentals), almost every protocol is now L7. Every control or routing protocol is processed in software, and if you ignore where it exists in the forwarding domain and focus purely on the protocol itself, literally anything that does not consist of only an Eth and/or IP header cannot be anything other than L7 by this concept. STP, ARP, IGMP, etc.? They just have an Eth header, which is stripped off, and the software processes the protocol, so now it's L7. ICMP/ICMPv6? Now it's L7. And it goes on and on, hence why I said this argument always leads to the same conclusion. You cannot make that claim without applying it everywhere else.

     That is why the model has to be viewed from the scope of the forwarding domain, or it would be essentially useless. And that's my point. A router/switch consists of a control plane and a forwarding plane. The FP is a paperweight without the control plane, which is the software that manages it. Any protocol the device is running would also have to be classified as L7 by this argument, because even down at L2, STP is still software communicating across the network.
  9. BGP and RIP/RIPv2 are the only routing protocols that use TCP and UDP respectively. OSPF, OSPFv3, EIGRP and even intermediate-system to intermediate-system (I don't know if the acronym would get flagged like the discord) do not use TCP/UDP, just L2/L3 plus the protocol's own headers, and thus are a "true" protocol.

     I very much disagree. This has been argued before, and it always leads to: if that is the case, then almost every protocol must also be considered L7. This is where the debate comes in, as there are valid counterarguments that blur the layer a protocol should fit into and focus on soft-state protocols (ones that just use messages for keepalives, acknowledgements, etc. and rely on timers) vs hard-state protocols (ones that use reliable communication such as TCP). Protocols like BGP and LDP use TCP for reliable transport of the adjacency, and it's valid to argue that the transport does not really determine what layer the protocol falls into. Because it uses TCP for ACKs, for efficiency, instead of re-announcements like a soft-state protocol, it now falls into a completely different layer? A valid objection, IMO. So yes, there are indeed arguments to be made, but the majority settled on placing it at L7 while also considering it adjacent to the other routing protocols at L3.
  10. Not the forum I expected to see this sort of thread on. This has been argued to death over the past two decades, enough that the IETF has an RFC stating that not everything will fall neatly into the OSI model, and the same goes for TCP/IP. BGP is a routing protocol and an application, and it is considered to be L7. Because it leads to semantic arguments, the majority don't care whether it's called L3 or L7. Technically correct if the argument is that it sits at the application layer: BGP uses the TCP/IP model, not OSI, and in that model the application layer falls at L4.
  11. The opening remarks are my honest reaction as well. If this is not stopped dead in its tracks now, it's going to spiral into a disaster. You need to push back and ask the simple question: "What are they doing that is wrong, and what has been attempted to fix it?" I can guarantee the response will be along the lines of not liking a process that is BCP, out of ignorance. That's completely understandable, because that's not their role and it's why their IT is outsourced. However, until there is a list of valid reasons why they are "doing a bad job", going in blind means that at a minimum the same issues get repeated, and at worst much larger issues will arise. You're a Marketing Exec and this should never have been a conversation. Push back on finding what the issues actually are and/or explore other options such as hiring in-house IT or changing MSPs.
  12. This tool is one of the most valuable tools I have in my bag whenever I need to head to the CO/colo. And yes, it works for removing SFPs: the upper overbite lip that presses on the tabs also catches the latch so you can pull the SFP out.
  13. @Sparseic Try bypassing Windows' DNS resolver with the following:
      Open CMD
      Type nslookup and hit enter
      Enter the following in order and record the output:
         dns.google
         one.one.one.one
         server 8.8.8.8
         dns.google
         one.one.one.one
         server 1.1.1.1
         dns.google
         one.one.one.one
      This will give you a better idea of where the root cause may lie. nslookup first defaults to the DNS server(s) provided by DHCP, so the first queries test requests against your router. After each "server" command you are querying an explicitly defined server and looking for a response.
  14. It's actually relevant and almost always overlooked. Most speed tests hit the disk, depending on memory and throughput, and write performance can actually impact the results along with other factors such as CPU. The test is essentially downloading many files of random data that get cached until it completes; look at Task Manager while running a test and you'll see it. If you want to remove as many variables as possible and limit things to CPU/memory/NIC and raw throughput, try again with iPerf3 between the PCs (example below); it runs in memory and will give you more realistic results for network performance.
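      A minimal run looks something like this (the address is just a placeholder for whatever the first PC actually has):
         iperf3 -s                    # on PC A, start the server
         iperf3 -c 192.168.1.10       # on PC B, test toward PC A's address
         iperf3 -c 192.168.1.10 -R    # same test reversed, to check the other direction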
  15. As you know, anything on the host OS side is outside my expertise. That said, I know that Safe Mode with Networking does load the network drivers, so I would have to assume a service/program is causing this. I know Windows is terrible when it comes to troubleshooting services/processes, but is there anything that stands out in Task Manager/Resource Monitor while performing the test?
  16. If you want to satisfy all the requirements, there is far more involved than just installing something on the DC. It would require aggregating data from multiple sources, and you would have to have policies to ensure hosts are not able to bypass any of the data points. It can be simplified with a proper NGFW. These requirements are better served with policies/applications on each host in the domain. I am not in the enterprise space so I cannot speak to specifics, but there are applications that can do what they need.
  17. 16 cores is not the bottleneck so much as the memory is. If polling were expanded to a proper NMS, then CPU core count would be a factor. However, this device is pure marketing and makes little sense to exist other than to convince people to buy a rebranded Dell server with glaring limitations. Enterprises/MSPs at the scale this is targeting (ignoring the fact that at that scale, Unifi is rarely the option) will 100% already have server infrastructure in place to host the controller in a VM. Not only is that easier to scale up/down, it also allows for proper redundancy. If there is a need to install additional CPU/RAM in this, then your device count has already reached a point where you have put too much faith in a single point of failure.
  18. It falls under the J-series, which is very old, and I would agree: just pick up a cheap unmanaged gigabit switch, as the SRX1xx is fast-ethernet only.

      Also @Tarl72, if you do not have a Junos image backed up somewhere, you are in for a rough time. Junos is FreeBSD-based and does not handle sudden power loss well at all. If your SRX110 loses power and does not have a backup partition that is being kept up to date, 99.9% of the time it will be corrupted. If this happens and you do not have a backup image, the device is a brick. And no, you cannot obtain these images without a valid Juniper support contract.

      I don't believe this statement was even introduced in 12.x (the latest supported revision), and it is enabled by default. Just configure `interfaces fe-0/0/x unit 0 family ethernet-switching vlan members [ x ]` (rough set-style example below). That just bypasses security features for traffic being switched, but you can still have some security in place via firewall filters. On the current branch SRX3xx, you can even utilize most security features with the use of secure-wire (a transparent L2 firewall without putting the entire device into transparent mode).

      Not stupid at all, and very common on the modern branch series. Junos even has it enabled by default, and it's best to keep it enabled. You do not lose anything and can still utilize all security features with the use of IRB (or VLAN on older Junos) interfaces and/or secure-wire. For this use case, though, I would just get a cheap gigabit unmanaged switch, because the SRX110 is fast-ethernet and uses a bit more power.
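      For reference, the full switching config on 12.x looks roughly like this in set form (the VLAN name, VLAN ID and port numbers here are just examples):
         set vlans LAN vlan-id 10
         set interfaces fe-0/0/2 unit 0 family ethernet-switching vlan members LAN
         set interfaces fe-0/0/3 unit 0 family ethernet-switching vlan members LAN
      Traffic switched between those two ports stays in the forwarding plane, and firewall filters can still be applied on the interfaces if some filtering is wanted.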
  19. Networking-wise, their content/knowledge is novice to advanced beginner, and homelab/SMB at best. There are much better networking channels for getting into the fundamentals through mid-level designs. That said, the OP's project seems to already be in progress, so less time should be spent on the design itself and more on looking for consultants. These scenarios almost never end well.
  20. Completely agree, and counterarguments to that point, along with solutions for it, are scarce. Like I said, there are valid points on both sides. The good news is they are breaking down every scenario imaginable, because they know that if they are not thorough, they have the potential of wiping 70% of sites off the internet overnight.
  21. Bandwidth is actually the lowest cost compared to every other cost of running an ISP. Sure, we're paying six figures a month on peering/transit across dozens of peers/transit providers, but bandwidth at the edge is a fraction of the cost of the bandwidth you're paying for. It's the access/core infrastructure. But even that does not compare to the total infrastructure cost. Last year's audit put our maintenance cost at $32/customer. That only includes the average MRC of electricity, pole licensing, vendor hardware/software licensing for access platforms, state fees, database fees, etc. In other words, this is the bare minimum cost to keep fiber on the poles and an ONT operational. That's it. It excludes bandwidth (peering/transit), core infrastructure (hardware maintenance/software licensing/upgrades/expansion), operational cost (NOC/support/engineering), OSP infrastructure, installation, maintenance and repair (the largest cost) and standard business practices (AKA running a business). So yes, there is indeed a significant cost to just having equipment "exist", and it's far greater than people realize.

      There was a discussion about this in the very early 2000s, and in the last 1.5 years the heavily debated topic has come back up in the various standards bodies. In the ATM/PSTN world, long-distance calls were paid for by the caller. Makes sense: why should the callee pay for answering the call? The same concept is being debated for internet services. In the years following the pandemic, as streaming services exploded and throughput increased at exponential rates, combined with hardware shortages, providers took a step back and said "hey, why are we essentially required to pour millions into infrastructure while these services use our infrastructure for free and profit off it?" I am on the fence in that discussion, but it has valid points. The consumer pays for a higher bandwidth package in order to consume a service (which they also pay for). The provider is then essentially required to continuously upgrade infrastructure to handle the additional throughput, and that cost is passed down to the consumer in some fashion. The service pays essentially nothing for transit to the consumer. Summary: the service profits from the consumer, some of those profits pay for its own infrastructure/peering costs, and all infrastructure costs for the intermediate providers down to the consumer are left to those providers to pay for.

      It's an interesting but hot discussion and would (if providers are kept honest) lead to lower-cost consumer bandwidth plans, as sites would be paying an MRC to transit providers based on bandwidth usage. If YT/Netflix consume over 50% of all traffic, they are going to have to contribute to the cost that is currently forced onto the providers. And yes, before the argument that the big 5 providers "make too much money": they are only a small portion of the infrastructure in the grand scheme of things. The major valid counterargument is that this would of course just shift cost; streaming platforms would drastically increase prices and each service would essentially require its own subscription. The end of the "free internet". But this would essentially be a direct solution to the OP's problem: everyone has a minimum base cost, and you pay for your services, which charge you in addition to their throughput fees.
  22. Your ISP's CPE (the switch in the cabinet) is to your benefit as well. Sure, it provides the basic delivery of your service, but it also provides an additional segment for monitoring and troubleshooting, which from an SP perspective is incredibly valuable. If you call in an issue, they can verify whether an outage is power-related via dying gasp, or whether degraded performance is on their end or yours via RFC 2544/Y.1564/RFC 6349 tests to check frame loss or indicators of damaged fiber. Without a CPE your MTTR goes up drastically, as dispatches cost money and time, for the customer as well.

      Attenuators are not really needed and sometimes introduce more problems. Most optics these days (even cheap ones) can easily handle a back-to-back run, up to 20 km optics and even some 40 km optics, as their Rx thresholds are typically greater than their Tx power. Unless you're dealing with 40/80 km+ optics or amps, the risk of burning out an optic is microscopic.

      You spent so much time determining which strand was which when the simple solution was right in front of you: prepackaged duplex fiber has color-coded strain reliefs on each strand, so Tx/Rx is easy to keep straight. That, and using left/right didn't work because they are already flipped at one end. The effort Jake put into cable management made my stomach turn. Glad to see more networking vids regardless, but I really hope you got permission or notified someone prior to this, because getting into a shared NID or conduit has its own legal liabilities.
  23. Technically, yes. However, you should always be in contact with the facility's IT/networking team before plugging equipment into their network. The common best practice is to harden the access layer so that users cannot just plug in other networking equipment, because it too often leads to issues/outages. Even if you have a scheduled event at a location, you should never just plug equipment in without authorization from their IT/networking team; they should always be informed of a BYOD being connected. They could be running 802.1x for security reasons, in which case no matter what you connect, an unauthorized device will never work.

      The problem is not managed vs unmanaged; it's not understanding/knowing what is running on a managed switch and how the network you are connecting to is configured. BPDU guard is very common, and hence an unmanaged switch will tend to work, because it is not running STP. But there are many other port-security features that are also common where managed or unmanaged will not work regardless.

      So the final points are:
      - Pick a switch that works for you
      - Never just plug a network device into a network that is not managed by you and expect it to work
      - Always contact the IT/networking team of the facility/company/organization hosting the event, prior to the event
      - Get authorization to connect your equipment and confirm that you will actually be able to do so
      - Confirm any additional configuration that may be needed on your equipment for it to work
      - Provide details of everything being connected, all devices including the switch

      Failing to do any of the above can result in their IT denying any support, prohibiting your equipment and putting a halt to your event, regardless of your agreement/conversation with other members beforehand. Worst case, you introduce a problem, word spreads, and you are suddenly barred from other venues for hosting events.
  24. Outside of 802.1x or general port-security mechanisms, I am almost certain their IT is referring to BPDU guard, and it has nothing to do with their FW. Nearly all managed switches run STP by default, and it's best practice to run BPDU guard on all access switches (example config below). If they plugged in a managed switch and STP was enabled, it would shut down the interface to keep a user from causing a loop.

      First thing: what is the actual requirement behind needing a minimum switching capacity of 120 Gbps? Second thing: why is your IT not handling this? Clearly they have control over the network and should be responsible for installing and managing all equipment. It's never advisable to have a user select and use their own equipment without it being reviewed and approved, nor to have no visibility (unmanaged) into what sounds like a critical device on the network.
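      For anyone curious what that access-layer hardening tends to look like, a typical port config in Cisco IOS syntax (shown only as one example; other vendors have equivalents, and the port number is arbitrary):
         interface GigabitEthernet1/0/10
          switchport mode access
          spanning-tree portfast
          spanning-tree bpduguard enable
      With that applied, the port goes err-disabled the moment a BPDU arrives from a downstream switch.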
  25. Passive/active DACs have their limits, 7 m and 15 m respectively, and pushing either near its max reduces reliability. AOC isn't worth mentioning because it has no benefits and more problems than simply going fiber. A DAC's reasonable reach should be within the rack and no more.

      Less power consumption and heat: yes, those are legitimate and undeniable benefits. The only one I would add is cost, but that gap is shrinking. Lower latency ("faster" as a result of lower latency), not so much. With passive DACs you're talking double to low triple digit picoseconds of propagation/serialization latency savings (rough numbers below). Unless you're in the HFT field, or a DC where every bit of propagation latency matters due to poorly coded applications and large 7+ stage Clos designs, the latency savings/performance improvements are essentially moot.

      So while you may save a small bit of cost, heat and power, you now have to deal with the other issues that DACs bring:
      - Large, heavy and stiff cables with a large bend radius; it only gets worse going to 5 m+
      - Can introduce a lot of strain on the SFP cage
      - Doesn't take many before airflow can be impacted
      - Because the stiffness rules out proper cable management, replacing intermediate equipment can require removing obstructing DACs; a pain to work with in general
      - Many report interop issues between vendors and difficulty getting properly coded ends
      - P-DACs (the only ones worth it) can be extremely sensitive to movement and environment
      - Upgrades require replacing the entire runs
      - Reduces any benefit of interbay panels

      Overall, it's better just to stick with fiber (better yet SM for seamless upgrades) if the budget allows. When you're dealing with 100G and the aggregate cost of all equipment is well into six figures and beyond, is it really worth it to save $500-2000 in the end?
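      To put the latency point in perspective, a back-of-the-envelope propagation comparison per metre (velocity factors are approximate: twinax around 0.70c, fiber around 0.68c):
         echo "(1/(0.68*299792458) - 1/(0.70*299792458)) * 10^12" | bc -l   # ~140 ps of difference per metre of cable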