
Wombo

Member
  • Posts

    657
  • Joined

  • Last visited

Reputation Activity

  1. Like
    Wombo got a reaction from leadeater in Once I Turn On PC Everything Else Loses Internet   
    I've seen issues in the past with some of the ASUS suite of tools. If you installed any of these, I would try removing them for the time being. Specifically, the issue I saw was related to AI Suite, which was causing the router/AIO to reboot whenever the PC was on.
     
    An easy test is booting the PC into Safe Mode with Networking. If everything works fine there, it's likely an application or process causing the issue.
  2. Agree
    Wombo got a reaction from Lurick in BGP on a Cisco 7200   
    I would not recommend BGP for this. BGP is not meant to be an IGP and really shouldn't be looked at as a protocol that supplies any kind of reliable failover. BGP is for route replication across the entire internet; by design it is VERY slow to converge because of this.
     
    If your network is segmented into spans no greater than 8 devices in a chain you should be good to stay with layer 2 protocols such as STP. If you branch out to anything larger I would recommend going to a routed layer 3 design with a good IGP such as OSPF, IS-IS or even EIGRP if you find enough platforms that support it.
     
    If complexity is low you should stay with layer 2. As complexity increases, move to a layer 3 design that bridges the layer 2 segments together and handles the routing/redundancy. As spans increase, implement optics (layer 1). As you start to get into larger network designs, all 3 elements will have to be considered. The most typical modern service provider designs will involve layer 2 for transport and layer 3 for routing/redundancy, with optics spanning the long haul.
     
    As an example, modern service provider MPLS networks involve layer 2 transport, typically Ethernet, with routing choices for these layer 2 frames performed via higher-level protocols such as OSPF at layer 3 for finding the best path between nodes. One way I like to think about it is that every time you step closer in the network view you go deeper by 1 layer. At a high level you have the layer 3 routed design showing the paths between nodes and peers (ex. router-router). As you go deeper you see the layer 2 paths between the network elements, typically the MPLS paths for forwarding traffic between layer 3 devices. Going deeper yet you find the layer 1 paths between your devices; these would be the physical optical links, or in this case, wireless shots.
     
    BGP, as its name implies, should really only be used at the network border for route advertisement and replication to the greater internet. BGP is not an IGP.
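    For contrast, here is a minimal sketch of what the IGP alternative could look like on Cisco IOS. The router ID, interface name, and 10.0.0.0/24 link addressing are hypothetical, and the aggressive hello/dead timers are just one illustrative way to speed up failure detection:

```
router ospf 1
 router-id 1.1.1.1
 network 10.0.0.0 0.0.0.255 area 0
!
interface GigabitEthernet0/0
 ! tighter timers = faster failover detection (both ends must match)
 ip ospf hello-interval 1
 ip ospf dead-interval 3
```

    With timers like these, a dead neighbor is detected in seconds rather than the minutes BGP can take, which is the point being made above.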
  3. Agree
    Wombo reacted to leadeater in BGP on a Cisco 7200   
    @Wombo Advice?
     
    You don't have to use BGP there are other routing protocols that you could use, OSPF/IS-IS etc, but picking the correct one leads to many detailed network design questions.
     
    If you're going for a routed network design then yes, each end of each link will need a routing-capable device. Also be careful: since we are talking about wireless links, which from a networking standpoint are unreliable, you could end up with many unnecessary link down/route convergence events, which is a bad thing.
     
    I would actually suggest taking this to the Ubnt forums; keep us in the loop as I'd like to hear how this goes.
  4. Informative
    Wombo got a reaction from Kookieman in BGP on a Cisco 7200   
  5. Like
    Wombo got a reaction from leadeater in BGP on a Cisco 7200   
  6. Informative
    Wombo got a reaction from kirashi in Canada (as some of you know it, great white north) Internet.   
    Despite being called "Optik" the services provided by Telus under that branding are not guaranteed to be transported over fibre optics and Telus will tell you that no such correlation is being implied despite the naming convention. Got to love marketing...
     
    Telus infrastructure for residential (and even enterprise sadly) is mostly old copper telephone lines. While it is true these do give you a slightly higher degree of separation from other users they are also a lot more susceptible to issues caused by the legacy technologies used for transport over these lines, not to mention the fact the lines are extremely dated and are prone to physical issues themselves. Additionally, your traffic is almost guaranteed to be aggregated with others past what is called the HLU, so the separation doesn't go too far.
     
    Cable technologies, or DOCSIS, have come a long way over the years. It is true the bandwidth is shared, however the number of channels available on DOCSIS 3.1 systems makes this somewhat of a moot point. You're dealing with bandwidths in the tens of Gbps on a single multi-channel bond. Don't look for this being offered anytime soon, but DOCSIS 3.1 is boasting throughput of over 10Gbps to individual subscribers over short/medium distances.
     
    As a bit of information I've gathered over the years, all of Shaw's cable nodes are FTTN, this is often called hybrid fibre/coax. Each node then has redundant fibre uplinks to a larger cable chassis that routes traffic over a larger fibre backbone, and ultimately to the internet. The individual coaxial cable runs themselves are not that long.
     
    To go back to the OP's point about the offerings, if we want to compare Shaw to Telus, the two major ISPs in Western Canada, Telus can barely offer 150 to 15% of their users. Shaw is reporting the service is available on over 98% of their network, with the total number of nodes showing signs of saturation at 0.03%.
     
    Enjoy
     
  7. Like
    Wombo got a reaction from leadeater in Double NAT solutions   
    Sadly, that's the nature of CGN. Sure there are some options, but really it comes down to what your ISP is willing to do for you. The easiest thing to do is unfortunately just to pay the $10. Perhaps if you explain the nature of your needs to your ISP and explain their service doesn't support you in the right ways, they may be willing to give you a discount.
  8. Informative
    Wombo reacted to leadeater in Nginx - Can I do this?   
    I had to be vague as your diagram/post doesn't fully show all the details needed to make a proper detailed response, it's a little confusing as @WaxyMaxy pointed out.
     
    The simple answer is what @WaxyMaxy showed you, but remember load balancers like nginx alone are not the answer to all problems. Getting the reliability and uptime of someone like Squarespace is both more complex and more simple than you might expect, depending on what resources you have.
     
    Normally you have a redundant/resilient border to your network, which can be done a few ways or a mix of more than one:
      • Round robin DNS to multiple IP addresses of different border entry paths
      • HA or VRRP router/firewall on each border entry path
      • BGP peering, so if a path goes down the route tables are updated and traffic comes in via a new path
    Remember some of the above is usually done by a hosting provider, so if this is the case you don't even have to worry about it.
     
    After you have created a resilient border you then start looking at load balancing web sites/applications. What you can do is have multiple nginx load balancers and set the public DNS records to their IP addresses; a DNS record can have more than one IP. If a load balancer goes down you will want that IP address to then become live on another nginx server. The reason for this is DNS TTL; even with a low value, people will still experience moderate downtime.
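    As a concrete sketch of that load-balancer layer, here is roughly what an nginx reverse-proxy config for the setup described could look like. The backend IPs, domain name, and failure thresholds are all hypothetical placeholders; a second nginx box would carry the same config:

```nginx
# Pool of backend web servers; nginx stops sending traffic to a
# server after 3 failed attempts and retries it after 30 seconds.
upstream app_backend {
    server 192.0.2.10:80 max_fails=3 fail_timeout=30s;
    server 192.0.2.11:80 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://app_backend;
    }
}
```
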
     
    What I outlined above is fairly similar to what we run at work. We have 3 data centers each with multiple 10Gb/40Gb connections with resilient routers and firewalls and we have multiple /16 public IP spaces which we control with BGP. There is also backend connectivity between each data center so if a public entry point goes down traffic can enter from another data center and travel down the backend. For load balancers we use virtual Citrix NetScalers.
     
    I am by no means a networking expert, I'm in the systems engineering team so my work stops at firewall configuration. If you want someone who is much more of a master at this then @Wombo is the person you want.
  9. Agree
    Wombo reacted to leadeater in Nginx - Can I do this?   
    Unfortunately I haven't used WHMCS so can't answer with certainty but automation is a big part of WHMCS so I would say yes, you could practically do anything you want so long as you can write the script/code to do it and call that as part of the provisioning processes in WHMCS.
     
    You could point the filter servers to a downstream reverse proxy type server, yes, but the logic behind it seems not that ideal. If you picture a configuration with 1 front end load balancer which hands off to many filter servers, which then hand off to 1 backend server, you have two single points of failure which would affect all customers. Also the load that these front and backend servers would be under may be an issue: if you need to have multiple filter servers, even if they do more of the work, could single servers alone handle this raw traffic?
     
    Personally I would make the front end server as basic as possible and then combine the filtering and hand-off to customer web servers on the same logical layer; this would also scale out much more simply if you need it to. Spin up a new filter server, add it to the load balancer configuration, then update the WHMCS configuration workflow to account for the new server.
     
    I don't know how big of a service you are going to be offering but one thing I always try and avoid are single points of failure.
  10. Agree
    Wombo got a reaction from DSD27 in quick question about speeds   
    The 200Mbps refers to the total switching capacity of the switch, often called the "backplane". This is the bandwidth of the internal bus that moves the packets between each interface. So while it has five 10/100 ports, meaning you could theoretically put 1Gbps through it (500Mbps in each direction at full duplex), the switch can only actually handle "switching" up to 200Mbps of packets.
     
    Another way to think about the 200Mbps of this product is that each interface can do 100Mbps in and 100Mbps out, that's already your 200Mbps. This switch can't even handle two interfaces communicating with each other at full throughput.
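    The arithmetic above is easy to sanity-check; a tiny Python sketch using only the figures from this post:

```python
ports = 5
port_speed_mbps = 100          # 10/100 ports running at 100 Mbps
backplane_mbps = 200           # advertised switching capacity

# Worst case: every port sending and receiving at line rate.
worst_case_mbps = ports * port_speed_mbps * 2
print(worst_case_mbps)         # 1000

# A single interface at full duplex (100 in + 100 out) already
# accounts for the entire 200 Mbps backplane.
one_port_full_duplex_mbps = 2 * port_speed_mbps
print(one_port_full_duplex_mbps)  # 200
```
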
     
    The switch does look very subpar, get a gigabit switch. The only thing I'm noticing is that there is no power cable shown; if the device is passively powered or PoE powered, then this device does look kind of neat. However a gigabit switch will get you further.
  11. Like
    Wombo got a reaction from ilyas001 in network and host bits in a ip   
    There aren't really "private" IPs in IPv6.
     
    You have Link Local which are used for local communication and will be automatically assigned by the host even if there is no subnet assigned or DHCP available. This allows for easy plug-and-play when it comes to IPv6, as far as local networks are concerned anyway. These addresses are always used for communication over the local subnet and will always be in the FE80::/10 range.
     
    Site Local addresses are what most people refer to as the private addresses of IPv6, but these are highly contested and are by no means necessary, or even required. These addresses have been added and removed from the standard many times, and the debate over their validity within the standard is still open. Essentially, when coming from IPv4 to IPv6 the notion that we wouldn't use NAT was scary to some. This forced the adoption of a quasi-private address space in IPv6, however unless you go out of your way to use these addresses and implement NAT on top of them they can basically be ignored. They serve no inherent purpose within the standard aside from fulfilling some people's want/need for NAT that does not need to exist in the IPv6 world. These addresses will always be within the FD00::/8 range, which is part of the FC00::/7 subnet space currently allocated for site local, or "unique local", addresses. The only currently assigned block (for private use) from this space is FD00::/8 however.
     
    Lastly, your Global Unicast addresses are your real, routable, IPv6 addresses, with Link Local and Site Local being non-routable. In other words, Global Unicast addresses are the only addresses that are routable on the open internet and allow for public inter-network communication. The current allocated space for Global Unicast addresses is 2000::/3. Addresses within this space are allocated via regional IP authorities, just like public IPv4 addresses. Allocation of these addresses is restricted to minimum allocations of /48. Anything smaller must come from a local LEC, carrier, or other institution who will "lend" you some of their assigned address space, much like how your current ISP lends you one of their IPs so you can reach the internet.
     
    It is important to note that while you can subnet IPv6 addresses smaller than /64 (e.g. /123 for WAN links), this is not recommended for larger network segments, as extending the prefix beyond the 64th bit will cause EUI-64, and the variants thereof, to stop functioning, requiring manual assignment of IPv6 addresses or the use of DHCPv6 for IP assignment.
     
    The main reason we use NAT in IPv4 is address preservation. There are far too many devices that need IP addresses and nowhere near enough IPv4 addresses to go around. This led to the widespread adoption of NAT, which up until that point was more-or-less only used in specific situations requiring the obfuscation of IPs and a few other fringe use cases. In the IPv6 world we have more than enough addresses for everyone to have millions, so we just don't need NAT in the IPv6 world.
     
    NAT in IPv6 does serve an important function in interoperability with IPv4; however, in the future when everything is IPv6, this will no longer be a factor.
     
    As for the network and host bits of an address, they are just used to denote which parts belong to the network (assigned by regional authorities and controlled) and which are the host bits, which are entirely user controlled. Combined with the subnet mask, any computer knows instantly if the address it is trying to reach is part of its own local subnet (network bits matching) or if it is outside its local subnet and requires its default gateway to reach it. Without subnet masks, network bits and host bits, our computers wouldn't have a way to know where they need to send their network traffic.
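    To make the ranges above concrete, here is a small Python sketch using the standard ipaddress module to bucket addresses into the three categories discussed. The sample addresses are arbitrary:

```python
import ipaddress

# The three IPv6 ranges discussed above.
LINK_LOCAL = ipaddress.ip_network("fe80::/10")
UNIQUE_LOCAL = ipaddress.ip_network("fc00::/7")   # FD00::/8 is the assigned half
GLOBAL_UNICAST = ipaddress.ip_network("2000::/3")

def classify(addr: str) -> str:
    """Return which IPv6 range an address falls into."""
    ip = ipaddress.ip_address(addr)
    if ip in LINK_LOCAL:
        return "link-local"
    if ip in UNIQUE_LOCAL:
        return "unique local"
    if ip in GLOBAL_UNICAST:
        return "global unicast"
    return "other"

print(classify("fe80::1"))      # link-local
print(classify("fd00::1234"))   # unique local
print(classify("2001:db8::1"))  # global unicast
```
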
  12. Informative
    Wombo got a reaction from Kilovice in Network Teaming   
    It won't help you at all with your WAN connection; if anything it might get worse. It could theoretically increase your connectivity over the LAN in certain applications, such as accessing/being accessed by multiple hosts. The issue is though, you will still only have the same single connection coming in from your ISP. This is your real bottleneck.
     
    So to answer simply, no, your throughput will not increase on your WAN connection. However your throughput may increase slightly, depending on workload, over the LAN.
  13. Agree
    Wombo got a reaction from GoodBytes in High Download Low upload   
    This is entirely because of delivery methods such as xDSL and DOCSIS. There is only a limited amount of bandwidth available for signalling. Most consumers care far more about the consumption (notice the similarity) of information on the web than about the production or supply of said information. Thus the bandwidth is divided into a much bigger portion for download and a smaller portion for upload. Symmetric throughput is available on these types of delivery methods, however it is typically reserved for more fringe cases such as businesses that host some type of content, even for internal use, or VoIP between offices.
     
    Additionally, when you give a larger frequency band to one flow of data you can actually push even more information, as you can increase the size and frequency of the waveform. Thus if your service was symmetrical, you'd be lucky to get 40/40 as the waveforms for symmetric throughput would be more constricted.
  14. Agree
    Wombo got a reaction from Donut417 in High Download Low upload   
  15. Like
    Wombo got a reaction from TigerBoy in Best Network Latency Analogy   
    When they inevitably ask you a question about latency, wait 10-15 seconds before answering, just stand there awkwardly.... waiting.
     
    One part informative, one part comedy.
  16. Like
    Wombo got a reaction from Unhelpful in Best Network Latency Analogy   
  17. Like
    Wombo got a reaction from Lurick in Help! QoS problem   
    QoS itself is just a tagging method to let QoS-aware networking devices know which packets are more important than others. If you are trying to use a QoS feature on your home all-in-one device, I would suggest turning it off as your ISP doesn't care if your traffic is tagged; they mark it as "best effort" when it enters their network regardless of any tagging the traffic already has (unless you pay them thousands of dollars for guaranteed services). This means it will only help with congestion internal to your network. With that said though, consumer grade devices lack the ASIC hardware to do QoS at a hardware level; this causes higher latency and really should only be used in situations where the bandwidth is being fully saturated at all times.
     
    Just turn QoS off, the increased latency on your traffic is not worth your time. It is possible it may cause the device to more accurately drop non-priority traffic at times of congestion, however it will do nothing for your traffic once it leaves your network and will serve little to no purpose unless you are completely saturating your link nearly all the time and would benefit from certain types of traffic being dropped, in favor of others being transmitted.
     
    Edit: If you are looking for bandwidth limiting based on traffic types/QoS markings, you most likely need to configure a policer, which consumer gear is almost guaranteed not to have.
  18. Agree
    Wombo got a reaction from leadeater in Bridged Layer 2 VPN and DHCP Servers   
    I would just create a layer 3 VPN, it will be the easiest and most flexible. If you want a security layer for the VPN just use regular IPsec, by default it doesn't allow multicast/broadcast traffic to pass through. If you want to allow some multicast/broadcast traffic but not others, I'd use GRE, and IPsec if you want the security layer.
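    As a rough illustration of the GRE option on Cisco IOS: all addresses, interface names, and the IPsec profile name below are hypothetical, and the crypto profile itself would need to be defined separately. GRE carries multicast/broadcast, which is what lets routing protocols and similar traffic cross the tunnel:

```
interface Tunnel0
 ip address 172.16.0.1 255.255.255.252
 tunnel source GigabitEthernet0/0
 tunnel destination 198.51.100.2
 ! optional security layer: encrypt the GRE tunnel with IPsec
 tunnel protection ipsec profile VPN-PROFILE
```
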
  19. Informative
    Wombo got a reaction from Speedbird in Bridged Layer 2 VPN and DHCP Servers   
  20. Like
    Wombo got a reaction from leadeater in Subnetting   
    Simplest way is to:
     
    1. Find your block (subnet) size
    2. Assign a subnet mask
    3. Determine your network and broadcast addresses.
    (2 and 3 are interchangeable really, you could do any one first)
     
    As an example; Let's split a /24 into 4 subnets.
    A /24, or 255.255.255.0, has 256 addresses, so splitting it into 4 you get 64.
    256/4=64
     
    So now we know we have 64 total addresses per subnet. Now we need to calculate a subnet mask.
    If you're not confident in this, I highly recommend breaking it into binary, write up a cheat-sheet if you need one to help with your learning. Don't forget that there are 4 octets! (groups of 8)
     
      0     0     0     0    0    0    0   0
    128  64   32   16   8    4    2   1
     
    So if we need blocks of 64 addresses, we need 6 host bits (2^6 = 64), which are the bits below the 64's column. Everything from the 64's bit up stays a network bit (a 1), giving us 11000000. Adding this up (128 + 64) we get 192. This brings our mask to 255.255.255.192. (A handy shortcut: 256 minus the block size, 256 - 64 = 192.)
    Note: Standard subnet masks can ONLY contain the following values: 255, 254, 252, 248, 240, 224, 192, 128, 0. If you see any other values, the mask is wrong.
     
    Now we have 192.168.0.0 255.255.255.192
    You can also get the CIDR or "slash" notation by counting the 1 bits in the subnet mask (from left to right, up to the first 0). Which is /26 in this case.
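    That bit-counting step is easy to verify in a couple of lines of Python, using the mask from this example:

```python
mask = "255.255.255.192"

# CIDR prefix length = total number of 1 bits across all four octets.
prefix = sum(bin(int(octet)).count("1") for octet in mask.split("."))
print(prefix)  # 26
```
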
     
    Now the easy part. We know our mask, we know our block (subnet) size, now it's just a matter of writing out the 4 subnets.  In this case, we add 64 each time.
    Note: Subnet addresses are always even and broadcast addresses are always odd.
     
    The first subnet starts at 192.168.0.0
    The second subnet starts at 192.168.0.64 (64 higher than the previous subnet)
    The third subnet starts at 192.168.0.128 (another +64)
    The fourth subnet starts at 192.168.0.192 (adding the final +64)
    All four of these subnets have the mask of 255.255.255.192.
     
    These are all our subnet addresses, now we can determine the broadcast addresses, by just going back one from the next address space.
     
    The first broadcast address is always one less than the block size, so 64-1 here.
    192.168.0.63 (192.168.0.64 minus one).
    To determine the rest we can just add the block size, in this case 64.
    192.168.0.127
    192.168.0.191
    192.168.0.255
    Alternatively, you can just subtract one from the next subnet network address.
     
    Putting it all together we get our four subnet ranges,
     
    192.168.0.0 - 192.168.0.63 (/26 or 255.255.255.192)
    192.168.0.64 - 192.168.0.127 (/26 or 255.255.255.192)
    192.168.0.128 - 192.168.0.191 (/26 or 255.255.255.192)
    192.168.0.192 - 192.168.0.255 (/26 or 255.255.255.192)
     
    I know it seems long, but working through it is the best way to learn and once you have learned you can start using all the shortcuts that are out there. Everyone will also develop their own way of solving subnet maths, it just comes with practice. And once you are good at it, it becomes almost second nature and you can perform the whole of the calculations in seconds.
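    If you want to check your manual work, Python's standard ipaddress module will do the same split; a quick sketch using the exact numbers from this example:

```python
import ipaddress

net = ipaddress.ip_network("192.168.0.0/24")

# Split the /24 into four /26 blocks of 64 addresses each.
for subnet in net.subnets(new_prefix=26):
    print(subnet.network_address, "-", subnet.broadcast_address,
          "mask", subnet.netmask)
```
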
  21. Informative
    Wombo got a reaction from as96 in Subnetting   
  22. Informative
    Wombo got a reaction from as96 in Subnetting   
    Close, just remember that network addresses are always even and broadcast addresses are always odd.
     
    When adding on the block size to get the next subnet, you add the entire block, including the network and broadcast addresses. So while it is true we only have 62 "usable" addresses in the example above, the network and broadcast addresses do still exist within the range.
     
    Just look at it very simply, and it will come to you easily. Start at the beginning of the very first subnet, so 0 in this case, and just add the block size. Every addition of the block size is the network address of the next subnet. Then just subtract 1 from the next subnet, and you have your ranges. Add your mask and you're done.
     
    0 - 63
    64 - 127
    128 - 191
    192 - 255 (alternatively you can remember the broadcast will always be 255 if it is the last subnet in an octet)
    256 (doesn't really exist, would roll over to 0 and flip the next high-order bit, only present for demonstration)
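    The add-the-block-size routine above is mechanical enough to express directly; a minimal Python sketch of the same arithmetic:

```python
block = 64

# Every multiple of the block size is the next network address;
# the broadcast is always one below the following network address.
networks = list(range(0, 256, block))           # [0, 64, 128, 192]
ranges = [(n, n + block - 1) for n in networks]
print(ranges)  # [(0, 63), (64, 127), (128, 191), (192, 255)]
```
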
     
  23. Like
    Wombo got a reaction from schizznick in Almost 400 ping when it is raining!?!?!?!?!   
    If you have DSL, it is wet cable. This happens all too often and would be the most likely culprit.
  24. Agree
    Wombo got a reaction from m0k in ISP giving wrong geolocation   
    Geolocation is handled by the IP owner, and typically most ISPs just match their geolocation data to their ARIN (for North America) WHOIS information. Generally speaking, geolocation based on IP is mostly superfluous and should have little to no effect on latency, leaning far more towards the latter.
  25. Like
    Wombo got a reaction from Unhelpful in Ethernet Switch creating a bottleneck?   
    This is true of hubs, but not unmanaged switches. There are plenty of unmanaged switches that contain enough backplane to feed all ports at maximum bandwidth. Even with that said, the division of bandwidth doesn't work like that outside of hubs (or "dumb switches" as they are occasionally called).
     
    However if the device in question is a relatively cheap device, it is most likely the device does not contain enough backplane to feed all the ports. With that said though, keep in mind the vast majority of networking devices, even enterprise and carrier grade ones, do not contain enough backplane to feed all ports at maximum bandwidth while fully utilized.
     
    Generally speaking a modern 4 or 8 port switch, even unmanaged, should contain enough backplane to feed at least 50% of the combined total, more than enough for the vast majority of consumers. There are always outliers however.
     
    A brief google search of the device listed shows a multitude of complaints regarding poor service while utilizing the device. Your recommendation looks good, replace the device with something better.
     
    Edit: The Linksys device looks to be heavily bottlenecked based on a large amount of CPU based forwarding, among other limitations. Looks to be a very poorly engineered device.
     
     