DMVPN - Basic lab and theory

BSpendlove

DMVPN is mentioned in the official CCNA guide and also in the CCNP (specifically Routing and Switching), but it isn't really listed as something to configure in the exam topics for CCNP ROUTE. The exam blueprint states you only need to 'Describe' it, but if you've ever attempted a Cisco exam before you'll know that doesn't mean you won't get a question related to the configuration side. We are going to look at a simple lab with some theory behind DMVPN, without the encryption, but first a basic explanation of what DMVPN is:

 

DMVPN (Dynamic Multipoint VPN) isn't a protocol in itself, but is built from several protocols working together to achieve what DMVPN does. It allows us to create a hub-and-spoke style topology in which spokes can dynamically form a VPN to other remote spokes as well as to the Hub. The protocols that make up DMVPN:

 

-Multipoint GRE

-NHRP

-A dynamic routing protocol (common: EIGRP or OSPF)

 

IPSec is also commonly used, but it isn't actually a requirement (although it is preferred, since running plain GRE over the internet isn't the best idea...). Technically you don't need to run a dynamic routing protocol either and could use static routes, but again it is very common to see a dynamic routing protocol. Before moving on to a basic introduction to the configuration and design: DMVPN can scale very large (thousands of remote sites), it allows spokes with dynamic IP addresses to participate in the design, and the configuration is far more efficient than creating static tunnels for loads of remote sites.

 

The single hub topology design

[Image: single hub DMVPN topology diagram]

 

This topology will use the internet as the underlay to transport our packets, while we create an 'overlay' using multipoint GRE to carry our site traffic (10.x.x.x) using EIGRP. In DMVPN we use the terms 'underlay' and 'overlay' a bit like GRE over IPSec, where IPSec is the protocol used to transport GRE (otherwise we would have no protection). GRE is normally used to transport different types of traffic, since IPSec itself can only carry unicast; if you want to take advantage of multicast and other types of traffic, you can encapsulate them with GRE and then send the result over the IPSec tunnel as a unicast packet. In our case we could even use IPSec without GRE and just define the neighbors in our routing protocol so our updates, hellos, etc. are sent via unicast instead of multicast, but that bypasses the learning and fun we'll see in this post!

 

Multipoint GRE
 

Why not use typical GRE point-to-point tunnels? Firstly, this defeats the whole purpose of what DMVPN achieves: managing our design with ease and dynamically forming tunnels with remote spokes and with the HUB. With a static tunnel configuration, think about it: we need X amount of tunnel interfaces configured on the HUB depending on how many spokes are in our design, then a tunnel from each spoke to the HUB, and finally a tunnel from each spoke to every single other spoke that exists if you need spoke-to-spoke communication without traffic traversing the HUB.

 

Multipoint GRE allows a single tunnel interface to dynamically form tunnels without the need for loads of 'interface tunnel x' entries in the configuration. It takes the configuration of that single interface and then uses NHRP to dynamically build tunnels to other routers. The sketch below shows the difference.
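
Just to illustrate the difference (this snippet isn't part of the lab, and the 172.16.1.0/30 addressing is purely made up for the example), a classic point-to-point GRE tunnel hard-codes a destination per remote site, whereas the multipoint version has no fixed destination at all:

!Classic point-to-point GRE - one interface like this per remote site
interface Tunnel1
 ip address 172.16.1.1 255.255.255.252
 tunnel source GigabitEthernet0/0
 tunnel destination 1.0.0.1
!
!Multipoint GRE - one interface, no fixed destination, NHRP finds the peers
interface Tunnel0
 ip address 192.168.254.1 255.255.255.0
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint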

 

NHRP

 

Next Hop Resolution Protocol is the protocol in DMVPN which makes it possible for spokes to register their public IP address against their tunnel interface IP address, whether the public-facing interface is static or dynamic. Everyone explains NHRP as being like ARP, but over the internet instead of within a local LAN. The protocol works on a server-client model where clients point to a server to register their address (more specifically their NBMA address, aka Non-Broadcast Multi-Access). We will look at NHRP in more detail, not only with configuration but also verification commands and more theory, when we actually see outputs.
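
For reference, the main verification commands are along these lines (shown here without output; 'show dmvpn' is the one we'll use throughout this post):

!View the NHRP cache - the static and dynamically learned NBMA-to-tunnel mappings
show ip nhrp
!Per-peer summary of the DMVPN tunnels
show dmvpn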

 

Dynamic Routing Protocol


As I've mentioned, a routing protocol isn't actually a requirement for DMVPN, although as you may know, a dynamic routing protocol makes routing far more scalable when working with a large number of subnets/networks. We will be using EIGRP in this example.

 

IPSec

 

There are many design guides and generic guides on the web which show different methods, such as using an IPSec profile directly in IOS, or even having a firewall offload the resources for the IPSec tunnels while a router performs the GRE/NHRP etc. In our example I won't be using IPSec, since the IPSec configuration is straightforward to lab and very easy to set up using pre-shared keys; it gets more interesting when you begin to introduce a PKI server for certificates and IPSec enrolment instead of using keys/shared secrets...
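
For completeness, a pre-shared-key IPSec profile protecting the mGRE tunnel looks roughly like the sketch below. None of this is applied in this lab, and the key, names and cipher choices are just examples:

!IKEv1/ISAKMP policy and a wildcard pre-shared key (sketch only)
crypto isakmp policy 10
 encryption aes 256
 hash sha256
 authentication pre-share
 group 14
crypto isakmp key EXAMPLE_KEY address 0.0.0.0 0.0.0.0
!
!Transform set and IPSec profile, then protect the mGRE tunnel with it
crypto ipsec transform-set DMVPN-TS esp-aes 256 esp-sha256-hmac
 mode transport
crypto ipsec profile DMVPN-PROFILE
 set transform-set DMVPN-TS
!
interface Tunnel0
 tunnel protection ipsec profile DMVPN-PROFILE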

 

Basic configuration

Starting with the basic configuration of all the routers so you can follow along:


!HUB
interface Loopback0
 ip address 10.0.0.1 255.255.255.0
!
interface Loopback1
 ip address 10.0.1.1 255.255.255.0
!
interface Loopback2
 ip address 10.0.2.1 255.255.255.0
!
interface Loopback3
 ip address 10.0.3.1 255.255.255.0
!
interface GigabitEthernet0/0
 ip address 20.0.0.1 255.255.255.252
!
ip route 0.0.0.0 0.0.0.0 20.0.0.2

!ISP
interface GigabitEthernet0/0
 ip address 20.0.0.2 255.255.255.252
!
interface GigabitEthernet0/1
 ip address 1.0.0.2 255.255.255.252
!
interface GigabitEthernet0/2
 ip address 2.0.0.2 255.255.255.252
!
interface GigabitEthernet0/3
 ip address 3.0.0.2 255.255.255.252

!Spoke-1
interface Loopback0
 ip address 10.10.1.1 255.255.255.0
!
interface GigabitEthernet0/0
 ip address 1.0.0.1 255.255.255.252
!
ip route 0.0.0.0 0.0.0.0 1.0.0.2

!Spoke-2
interface Loopback0
 ip address 10.10.2.1 255.255.255.0
!
interface GigabitEthernet0/0
 ip address 2.0.0.1 255.255.255.252
!
ip route 0.0.0.0 0.0.0.0 2.0.0.2

!Spoke-3
interface Loopback0
 ip address 10.10.3.1 255.255.255.0
!
interface GigabitEthernet0/0
 ip address 3.0.0.1 255.255.255.252
!
ip route 0.0.0.0 0.0.0.0 3.0.0.2

 

Starting with a basic check, we can ping each spoke from the HUB:

HUB#ping 1.0.0.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 1.0.0.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 5/5/6 ms
HUB#ping 2.0.0.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 2.0.0.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 4/5/6 ms
HUB#ping 3.0.0.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 3.0.0.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 5/5/6 ms

Firstly, let's start with some basic tunnel configuration. We need to configure an overlay which will use the 192.168.254.0/24 network for the tunnels to communicate. Let's go ahead and configure some other important commands on our HUB, which will also act as the Next Hop Server (NHS) for NHRP.

 

HUB Configuration (Phase 1)

interface Tunnel0
 ip address 192.168.254.1 255.255.255.0
 no ip redirects
 ip nhrp map multicast dynamic
 ip nhrp network-id 10
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel key 1

ip nhrp map multicast dynamic

On the hub, this command maps multicast traffic to the spoke mappings that are dynamically created within the NHRP database.
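
The spoke side of this is a static multicast mapping pointing at the hub's NBMA address, which we'll add further down once EIGRP needs it; if memory serves, the resulting mappings can be checked with 'show ip nhrp multicast':

!Spoke - replicate multicast (routing protocol hellos etc.) only towards the hub
interface Tunnel0
 ip nhrp map multicast 20.0.0.1
!
!Verify which NBMA addresses multicast is replicated to
show ip nhrp multicast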

ip nhrp network-id 10

This is similar to the tunnel key command in that it identifies a specific NHRP network, and it is required to enable NHRP on the interface. Strictly speaking the network-id is only locally significant, but it is good practice to keep it the same on all routers in the same DMVPN cloud, as we do here.

tunnel key 1

The tunnel key command in tunnel configuration mode lets the router identify which tunnel specific GRE packets belong to. This is important when we have multiple tunnels sharing the same source interface, and as a best practice I like to specify it even with a single tunnel configuration.

 

Spoke Configuration (Phase 1)

interface tunnel 0
 ip address 192.168.254.(x) 255.255.255.0 !Spoke-1 .10, Spoke-2 .20 and Spoke-3 as .30
 no ip redirects
 ip nhrp map 192.168.254.1 20.0.0.1
 ip nhrp network-id 10
 ip nhrp nhs 192.168.254.1
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel key 1

 

Let's capture some packets! If I shut down the tunnel interface on Spoke-1 and bring it back up, this is the exchange that happens relating to NHRP, which also reflects the configuration we have done.

[Image: packet capture of the NHRP registration exchange when Spoke-1's tunnel comes up]

Let's look into the NHRP packet itself and then see what conversation is going on. We'll look into the interesting stuff without getting into too much depth:

[Image: NHRP Registration Request packet details]

 

Firstly, Spoke-1 sends an NHRP Registration Request (to 20.0.0.1, which is the HUB). You can see this request holds the information which will build the NHRP database we will see shortly: Spoke-1 announces its own NBMA address and its protocol address (in our case the tunnel address 192.168.254.10, with a destination of 192.168.254.1, the tunnel interface on the HUB). These registration requests are sent every one third of the hold timer, which by default is 7200 seconds (found under the 'Client Information Entry'). The client expects a reply and, if it doesn't get one, keeps retransmitting the request, doubling the interval each time (1, 2, 4... up to 32. That is the theory for those CCNP exam takers!)
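
If you want to experiment with those timers in the lab, they can be tuned on the tunnel interface; the values below are only examples and are not used anywhere in this post:

interface Tunnel0
 !Advertised hold time - registrations then go out roughly every third of this
 ip nhrp holdtime 600
 !Optionally send registrations on a fixed interval instead
 ip nhrp registration timeout 100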

 

Next, we receive a reply from 20.0.0.1 (HUB), which looks like:

[Image: NHRP Registration Reply packet details]

 

If we take a quick look at RFC 2332, it states that Code 0 is indeed a successful registration with the NHS. The next two packets were actually a repeated request/reply which we won't dive into, because they look the same as the two NHRP packets above.

 

With all the spokes configured, this process happens fairly quickly in our lab environment and we can now see a populated NHRP database which can be found using:

HUB#show dmvpn

Interface: Tunnel0, IPv4 NHRP Details 
Type:Hub, NHRP Peers:3, 

 # Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb
 ----- --------------- --------------- ----- -------- -----
     1 1.0.0.1          192.168.254.10    UP 00:16:59     D
     1 2.0.0.1          192.168.254.20    UP 00:15:08     D
     1 3.0.0.1          192.168.254.30    UP 00:14:54     D
HUB#ping 192.168.254.10
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.254.10, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 6/6/8 ms
HUB#ping 192.168.254.20
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.254.20, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 6/6/8 ms
HUB#ping 192.168.254.30
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.254.30, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 6/6/7 ms

Do you think we would be able to ping Spoke-1 (192.168.254.10) from Spoke-2?

Spoke-2#ping 192.168.254.10
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.254.10, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 6/12/25 ms

The answer is yes, although something happens behind the scenes. How could Spoke-2 possibly know how to get to 192.168.254.10? What happened was that Spoke-2 actually sent an NHRP Resolution Request to its NHS (192.168.254.1). Because we have mapped the public IP address 20.0.0.1 to reach the HUB/NHS, we can immediately send a request for 192.168.254.10.

 

[Image: NHRP Resolution Request sent from Spoke-2]

 

You can see above that we sent our NBMA and tunnel addresses, but the destination is 192.168.254.10. We are practically asking: what is the NBMA address for 192.168.254.10? Now this is the part where NHRP gets interesting; see if something looks different below:

[Image: NHRP Resolution Reply]

 

As a quick overview: we send an NHRP Resolution Request for 192.168.254.10 to 20.0.0.1 (which is our NHS). When the request hits the NHS, it actually forwards it to the NBMA address registered in the NHRP database (1.0.0.1), and Spoke-1 (1.0.0.1) replies with its own information (its NBMA address and tunnel address 192.168.254.10). If we do a traceroute from Spoke-2 just after its NHRP table has been cleared, have a look at the results that prove this:

 

Spoke-2#traceroute 192.168.254.10
  1 192.168.254.1 9 msec
    192.168.254.10 7 msec 6 msec

Spoke-2#show dmvpn               

Interface: Tunnel0, IPv4 NHRP Details 
Type:Spoke, NHRP Peers:2, 

 # Ent  Peer NBMA Addr Peer Tunnel Add State  UpDn Tm Attrb
 ----- --------------- --------------- ----- -------- -----
     1 20.0.0.1          192.168.254.1    UP 00:27:00     S
     1 1.0.0.1          192.168.254.10    UP 00:00:23     D

Spoke-2#traceroute 192.168.254.10
  1 192.168.254.10 8 msec 7 msec * 

If the entry is not in our NHRP database, the first few packets will traverse the HUB until we receive the reply with the NBMA address of Spoke-1. This is the dynamic part of DMVPN already in action: we learn the address to send traffic to directly if we want to communicate with that spoke.
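
If you want to repeat that test, the dynamically learned entries can be flushed from the spoke (the static mapping to the NHS stays), for example:

!Remove dynamically learned NHRP entries so the next ping resolves via the NHS again
Spoke-2#clear ip nhrp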

 

When we start advertising our networks from the spokes this will change, and then we can start talking about the different phases that alter the flow of traffic and how routes are propagated throughout this DMVPN design. We are going to configure EIGRP to set up a relationship with each neighbor and also advertise the loopbacks into EIGRP.

router eigrp 1
 network 10.0.0.0 0.255.255.255
 network 192.168.254.0 0.0.0.255

We could use more granular network statements to choose exactly what participates in EIGRP, but let's keep it simple and sweet. We'll then look at the phases in DMVPN which can change our traffic flow and how we learn routes. Before moving on: we can run into an issue with EIGRP neighbors flapping over the tunnels, so we must include a command in the tunnel configuration on each spoke which maps multicast traffic to the NBMA address of the Hub.

interface tunnel 0
 ip nhrp map multicast 20.0.0.1

Confirming EIGRP neighbors on the HUB:

HUB#sh ip eigrp ne
EIGRP-IPv4 Neighbors for AS(1)
H   Address                 Interface              Hold Uptime   SRTT   RTO  Q  Seq
                                                   (sec)         (ms)       Cnt Num
2   192.168.254.30          Tu0                      14 00:02:02   12  1506  0  5
1   192.168.254.20          Tu0                      13 00:02:07  624  3744  0  5
0   192.168.254.10          Tu0                      11 00:02:16    9  1506  0  6

EIGRP issues

If we have a look at the routes that the HUB has dynamically learned via EIGRP:

HUB#sh ip route eigrp
      10.0.0.0/8 is variably subnetted, 11 subnets, 2 masks
D        10.10.1.0/24 [90/27008000] via 192.168.254.10, 00:05:46, Tunnel0
D        10.10.2.0/24 [90/27008000] via 192.168.254.20, 00:05:38, Tunnel0
D        10.10.3.0/24 [90/27008000] via 192.168.254.30, 00:05:30, Tunnel0

There is an issue that occurs because of the default behaviour of EIGRP. If we take a look at the routing table of Spoke-3:

Spoke-3#show ip route eigrp
      10.0.0.0/8 is variably subnetted, 6 subnets, 2 masks
D        10.0.0.0/24 [90/27008000] via 192.168.254.1, 00:06:29, Tunnel0
D        10.0.1.0/24 [90/27008000] via 192.168.254.1, 00:06:29, Tunnel0
D        10.0.2.0/24 [90/27008000] via 192.168.254.1, 00:06:29, Tunnel0
D        10.0.3.0/24 [90/27008000] via 192.168.254.1, 00:06:29, Tunnel0

We can see routes behind the HUB (e.g. the loopbacks) that can successfully be reached via the tunnel interface; the issue is with the routes from the other spokes, which are missing. The default behaviour of EIGRP is to not advertise a route out of the interface it was received on (e.g. Tunnel0). This is a very good example of split horizon, which is also part of how RIP works. We can simply solve this with an interface command on the HUB:

interface tunnel 0
 no ip split-horizon eigrp 1

Looking back at the routing table for Spoke-3:

Spoke-3#show ip route eigrp
      10.0.0.0/8 is variably subnetted, 8 subnets, 2 masks
D        10.0.0.0/24 [90/27008000] via 192.168.254.1, 00:09:07, Tunnel0
D        10.0.1.0/24 [90/27008000] via 192.168.254.1, 00:09:07, Tunnel0
D        10.0.2.0/24 [90/27008000] via 192.168.254.1, 00:09:07, Tunnel0
D        10.0.3.0/24 [90/27008000] via 192.168.254.1, 00:09:07, Tunnel0
D        10.10.1.0/24 [90/28288000] via 192.168.254.1, 00:00:12, Tunnel0
D        10.10.2.0/24 [90/28288000] via 192.168.254.1, 00:00:12, Tunnel0

 

DMVPN Phases

The phases are essentially the different modes a DMVPN design can operate in:

Phase 1) Only Hub-Spoke traffic

Phase 2) Spokes can then dynamically form tunnels with other spokes, with no need to go through the HUB (initial traffic will still go through the HUB while the NHRP resolution takes place)

Phase 3) Spokes can dynamically reply to an NHRP request themselves, and spokes can build tunnels between each other without the HUB having to relay the resolution. A rough summary of the knobs involved is sketched after this list.
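
As a quick cheat sheet for this post (EIGRP-specific and only a sketch of the commands we apply below):

!Phase 1 - hub-and-spoke only: hub keeps itself as the next hop (default behaviour)
!Phase 2 - on the hub, stop rewriting the next hop:
! no ip next-hop-self eigrp 1
!Phase 3 - NHRP redirects and shortcuts; on the hub:
! ip nhrp redirect
!and on the spokes:
! ip nhrp shortcut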

 

Phase 1

During Phase 1, our traffic will ALWAYS go through the HUB because, although we have turned off split horizon, the HUB still advertises the routes from other spokes with itself as the next hop. The next-hop IP address in the routing table will show the HUB's IP address, as shown below (notice all routes are reachable via 192.168.254.1):

 

Spoke-1#show ip route eigrp
      10.0.0.0/8 is variably subnetted, 8 subnets, 2 masks
D        10.0.0.0/24 [90/27008000] via 192.168.254.1, 00:49:16, Tunnel0
D        10.0.1.0/24 [90/27008000] via 192.168.254.1, 00:49:16, Tunnel0
D        10.0.2.0/24 [90/27008000] via 192.168.254.1, 00:49:16, Tunnel0
D        10.0.3.0/24 [90/27008000] via 192.168.254.1, 00:49:16, Tunnel0
D        10.10.2.0/24 [90/28288000] via 192.168.254.1, 00:40:05, Tunnel0
D        10.10.3.0/24 [90/28288000] via 192.168.254.1, 00:40:05, Tunnel0

If we use one more command on the HUB, the routes can be pushed out without the HUB adding itself as the next hop to reach the network. This moves the DMVPN into Phase 2, where direct communication between spokes doesn't need to traverse the HUB all the time.

interface Tunnel0
 no ip next-hop-self eigrp 1

With that applied, let's take another look at the routing table:

Spoke-1#show ip route eigrp
      10.0.0.0/8 is variably subnetted, 8 subnets, 2 masks
D        10.0.0.0/24 [90/27008000] via 192.168.254.1, 00:00:21, Tunnel0
D        10.0.1.0/24 [90/27008000] via 192.168.254.1, 00:00:21, Tunnel0
D        10.0.2.0/24 [90/27008000] via 192.168.254.1, 00:00:21, Tunnel0
D        10.0.3.0/24 [90/27008000] via 192.168.254.1, 00:00:21, Tunnel0
D        10.10.2.0/24 [90/28288000] via 192.168.254.20, 00:00:21, Tunnel0
D        10.10.3.0/24 [90/28288000] via 192.168.254.30, 00:00:21, Tunnel0

We can now see 10.10.2.0/24 via 192.168.254.20 and 10.10.3.0/24 via 192.168.254.30; this command stops the HUB advertising those routes with itself as the next hop. On to Phase 3: here the spoke itself can reply directly to a resolution request, whereas currently the request is sent to the HUB and the HUB forwards that request towards the destination.

 

Here is an example of a basic packet capture when Spoke-1 tries to ping 10.10.3.1 (Spoke-3):

[Image: packet capture of the ICMP traffic from Spoke-1 to Spoke-3 being relayed via the HUB]

 

You can see the traffic from the original source (1.0.0.1, Spoke-1) is sent towards 20.0.0.1 (HUB), and then 20.0.0.1 (HUB) sends it on to 3.0.0.1 (Spoke-3). To turn this into Phase 3, we can simply add two commands on the hub and then one command on each spoke:

!HUB
interface tunnel 0
 ip nhrp redirect
 ip nhrp shortcut

!SPOKES
interface tunnel 0
 ip nhrp shortcut
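
We'll dig into Phase 3 properly in the follow-up, but once those commands are in, a spoke-to-spoke ping followed by the usual verification commands should show entries being built directly between the spokes (outputs left for the next post):

Spoke-1#ping 10.10.3.1
Spoke-1#show dmvpn
Spoke-1#show ip nhrp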

It's 3:34AM and I need sleep (said this an hour ago...) so I will update this when I get some time tomorrow...
