
mynameisjuan

Member
  • Posts

    4,012
  • Joined

  • Last visited

Awards

This user doesn't have any awards

Profile Information

  • Interests
    ISP Network Engineer
  • Occupation
    ISP Network Engineer

Recent Profile Visitors

2,642 profile views
  1. Would you mind adding the config in a code snippet instead? I tend to prefer not to download any files. A diagram would be good if possible, but at a minimum, a few examples of source/destination IPs would be needed.
  2. If they said the .230 is "usable", then that will be your IP with .229 as your gateway:
     IP Address: 62.xx.xxx.230
     Subnet Mask: 255.255.255.252
     Default Gateway: 62.xx.xxx.229
     .231 would be the broadcast address, and that would not be assigned.
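A quick way to sanity-check a /30 like this is Python's ipaddress module. Since the real prefix is masked in the post, the snippet below uses the 203.0.113.0/24 documentation range as a hypothetical stand-in:

```python
import ipaddress

# Hypothetical stand-in for the masked 62.xx.xxx.228/30 block
# (203.0.113.0/24 is the TEST-NET-3 documentation range).
net = ipaddress.ip_network("203.0.113.228/30")

print("Network:  ", net.network_address)            # .228, not assignable
print("Usable:   ", [str(h) for h in net.hosts()])  # .229 (GW) and .230 (host)
print("Broadcast:", net.broadcast_address)          # .231, not assignable
print("Mask:     ", net.netmask)
```

A /30 always yields exactly two usable addresses, which is why point-to-point WAN handoffs are commonly carved this way.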
  3. The SRX configuration (sanitized) and a very simple diagram would help with example IPs.
  4. There will always be overhead at L1/L2. Sure, when you're dealing with sub-line-rate services, provisioning extra to compensate for overhead is not the end of the world and can be a cost measure. However, if you're paying for 300/50, run a speed test, and get ~285/45, you are still getting the speeds you paid for.
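A back-of-envelope sketch of where those numbers come from, assuming standard Ethernet framing (8-byte preamble/SFD, 14-byte header, 4-byte FCS, 12-byte inter-frame gap) and a 1500-byte MTU carrying plain IPv4+TCP headers with no options:

```python
MTU = 1500              # IP packet size per frame
L1_L2 = 8 + 14 + 4 + 12  # preamble/SFD + Eth header + FCS + inter-frame gap
TCP_IP = 20 + 20         # IPv4 + TCP headers, no options

wire_bytes = MTU + L1_L2                 # 1538 bytes consumed on the wire
goodput_ratio = (MTU - TCP_IP) / wire_bytes  # payload fraction of line rate

for plan in (300, 50, 1000):
    print(f"{plan} Mbps plan -> ~{plan * goodput_ratio:.0f} Mbps TCP goodput")
```

That works out to roughly 95% of line rate, so ~285 on a 300 plan is exactly what full utilization looks like; VLAN tags, PPPoE, or IPv6 shave a little more.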
  5. The video feels nearly the same as every other LTT video discussing anything to do with networking. It would have been better to demonstrate real-world tests with various simulated connections, using shaping/policing or forwarding through a Linux VM and using tc to adjust loss/latency/jitter. I have been saying it for years: a majority of people overestimate how much throughput they actually use outside of raw downloads/uploads, and they would be surprised how low you can go before you begin to notice the impact.

     My career has been entirely in the SP field, and at all the providers I've worked at, we perform audits multiple times a year to observe traffic patterns. Not much has actually changed in recent years. Focusing on the customers with 1G plans, the 99.5th-percentile peaks still sit ~125Mbps, with the 99.9th at ~500Mbps. And those are just peak rates; the mean sits anywhere from as low as 5Mbps to around 15Mbps. It hasn't really changed because the goal is a better experience, and a service will of course optimize by reducing the latency and throughput of its own traffic. I would recommend LTT change up this repeated format and focus more on real-world demos with side-by-side comparisons of bandwidth.

     The primary reason is billing and the variety of hardware deployed. Yes, it sucks, but accounting is already a nightmare and there is an administrative limit to the number of plans to offer, for both the SP and the customer. Also, 900/180 on a 1G/200 plan is expected with overhead.
  6. The irony in that statement: Cisco is considered vendor-locked whereas Ubiquiti isn't. Learning IOS-* paves the way for an easy transition to a large majority of other vendors. Even the most regarded GUIs are meh at best, and it becomes clear once you get familiar with any NOS's CLI. You will never get the responsiveness, verbose/condensed output, fluid configuration, or multitude of methods to interface with a device from a GUI the way you do with the CLI. The CLI is still king and will be the go-to for the foreseeable future. Don't fall into the trap of thinking GUIs are easier, as in most instances they are not (there are exceptions, of course, for specific configs). Some GUIs make things more convoluted or tedious, requiring a dozen or more steps across various unrelated directories where the same config would be a few lines at worst. Unifi is notorious for this, and it's one reason I consider it among the worst GUIs on the market. Pretty != good/easy. GUIs have their use cases, but each one is easily replaced by almost any NMS where possible.

     show | compare
     commit check
     commit confirmed
     commit and-quit

     FTFY

     RFCs are not standards; they are strongly recommended guidelines to abide by.
  7. There are multiple levels and scopes to resiliency and redundancy, and an increase in cost and complexity for each level introduced. Just because the redundancy is not geo-diverse and geo-redundant does not mean it's not redundant. Redundant circuits from the same SP are still redundancy, just at a more basic level, and do provide benefits on their own. Depending on the SP's topology/design, this may provide decent redundancy, excluding the potential SPOF of a shared fiber bundle. At minimum, LTT should have a second circuit even if the only option is with the same provider. At least it will cover instances where their FW craps out or they need to perform maintenance during business hours. While the cost of downtime depends on other factors, their business relies on revenue, which requires connectivity.
  8. I mean yes, but that is steering away from the underlying concept and purpose of the OSI and TCP/IP models. Again, they are models, not strict guidelines, in the same vein that RFCs are not standards. The intent is really to associate functions with the layers they are involved in on the wire or in the forwarding domain. This is what I was saying, and it leads into my next statement.

     I said "almost every protocol", not every protocol, which as you mentioned would leave only Eth, IP/ISO, TCP/UDP (debatable) and a handful of others. I mentioned L2/L3 headers because the model really lives in the forwarding domain. Once you ignore that and ignore encapsulation (the headers are stripped off, with the final payload being the protocol in question processed by software, which is the entire concept of networking fundamentals), almost every protocol is now L7. Every control or routing protocol is processed in software; ignoring where it exists in the forwarding domain and focusing purely on the protocol itself, literally anything that does not consist of only an Eth and/or IP header cannot be anything other than L7 by this logic. STP, ARP, IGMP, etc. just have an Eth header, which is stripped off before the software processes the protocol, so now they're L7. ICMP/ICMPv6? Now it's L7. And it goes on and on, hence why I said this argument always leads to the same conclusion. You cannot make that claim without applying it elsewhere.

     That is why the model has to be viewed from the scope of the forwarding domain, else it would be essentially useless. And that's my point. A router/switch consists of a control plane and a forwarding plane. The FP is a paperweight without the control plane, which is the software that manages it. Any protocol the device is running would also have to be classified as L7 by this argument, because even down at L2, STP is still software communicating across the network.
  9. BGP and RIP/RIPv2 are the only routing protocols that use TCP and UDP, respectively. OSPF, OSPFv3, EIGRP, and even intermediate-system to intermediate-system (I don't know if the acronym would get flagged like on the Discord) do not use TCP/UDP, just L2/L3 and the protocol's own headers, and thus are "true" L3 protocols.

     I very much disagree. This has been argued before, and it always leads to the conclusion that, if that is the case, then almost every protocol must also be considered L7. This is where the debate comes in, as there are valid counterarguments that blur the layer where a protocol should fit, focusing on soft-state protocols (those that just use messages for keepalives, acknowledgements, etc. and rely on timers) vs hard-state protocols (those that use reliable transport such as TCP). Protocols like BGP and LDP use TCP for reliable transport of the adjacency, and it's valid to argue that the transport does not really contribute to what layer the protocol falls into. Because it uses TCP ACKs for efficiency instead of re-announcements like a soft-state protocol, it now falls into a completely different layer? A valid counterpoint, IMO. So yes, there are indeed arguments to be made, but the majority settled on placing BGP at L7 while also considering it adjacent to the other routing protocols at L3.
  10. Not the forum I expected to see this sort of thread on. This has been argued to death over the past two decades, enough that the IETF has an RFC stating that not everything will fall neatly into the OSI model, and the same goes for TCP/IP. BGP is both a routing protocol and an application, and is considered L7. Because it leads to semantic arguments, a majority don't care whether it's called L3 vs L7. Technically correct, if the argument is that it sits at the application layer: BGP uses the TCP/IP model, not OSI, and in that model the application layer falls at L4.
  11. The opening remarks are my honest reaction as well. If this is not stopped dead in its tracks now, it's going to spiral into a disaster. You need to push back and ask the simple question: "What are they doing that is wrong, and what has been attempted to fix it?" I can guarantee the response will be along the lines of them not liking a process that is BCP, out of ignorance. That's completely understandable, because that's not their position and it's why their IT is outsourced. However, until there is a list of valid reasons why they are "doing a bad job", going in blind means there is a fair chance that, at minimum, the same issues are repeated and, at worst, much larger issues arise. You're a Marketing Exec, and this should never have been a conversation. Push back on finding what the issues actually are and/or explore other options, such as hiring in-house IT or changing MSPs.
  12. This tool is one of the most valuable tools I have in my bag whenever I need to head to the CO/COLO. And yes, it works for removing SFPs. The upper overbite lip that presses on the tabs is used to catch the latch and pull the SFP out.
  13. @Sparseic Try bypassing Windows' IP stack with the following:
      1. Open CMD
      2. Type nslookup and hit enter
      3. Enter the following in order and record the output:
         dns.google
         one.one.one.one
         server 8.8.8.8
         dns.google
         one.one.one.one
         server 1.1.1.1
         dns.google
         one.one.one.one
      This will give you a better idea of where the root cause may lie. nslookup defaults to the DNS server(s) provided by DHCP, so the first queries test requests to your router. From there you are querying the defined servers and looking for a response.
  14. It's actually relevant and almost always overlooked. Depending on available memory and throughput, most speed tests touch disk, and write performance, along with other factors such as CPU, can actually impact the results. A speed test is essentially downloading many files of random data, cached until complete. Look at Task Manager while running a test and you'll see. If you want to remove as many variables as possible, limiting the test to CPU/memory/NIC and raw throughput, try again with iPerf3 between the PCs; it runs in memory and will give you more realistic results of network performance.
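As a toy illustration of the memory-to-memory idea (not a substitute for iPerf3), here is a minimal loopback TCP throughput test in Python; the figure it prints reflects only CPU and network-stack overhead, since no disk or physical NIC is involved:

```python
import socket
import threading
import time

def recv_all(conn, total):
    # Drain `total` bytes from the connection, discarding them (no disk I/O).
    buf = bytearray(64 * 1024)
    remaining = total
    while remaining > 0:
        n = conn.recv_into(buf)
        if n == 0:       # peer closed early
            break
        remaining -= n

def loopback_throughput(payload_mb=64):
    """Send payload_mb of zeros over loopback TCP and return Mbit/s."""
    total = payload_mb * 1024 * 1024
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))            # grab a free ephemeral port
    server.listen(1)
    port = server.getsockname()[1]

    def serve():
        conn, _ = server.accept()
        with conn:
            recv_all(conn, total)

    t = threading.Thread(target=serve)
    t.start()

    chunk = b"\x00" * (64 * 1024)
    sent = 0
    start = time.perf_counter()
    with socket.create_connection(("127.0.0.1", port)) as client:
        while sent < total:
            client.sendall(chunk)
            sent += len(chunk)
    t.join()                                  # wait for the receiver to drain
    server.close()
    elapsed = time.perf_counter() - start
    return (sent * 8) / elapsed / 1e6         # megabits per second

print(f"loopback TCP: {loopback_throughput():,.0f} Mbps")
```

Because everything stays in memory, this will usually report far more than any physical link can carry, which is exactly the point: it shows what the hosts themselves can push once disk is out of the picture.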
  15. As you know, anything on the host OS side is outside my expertise. That said, I know that Safe Mode w/ Networking does load the network drivers, so I would have to assume a service/program is causing this. I know Windows is terrible when it comes to troubleshooting services/processes, but is there anything that stands out in Task Manager/Resource Monitor while performing the test?