What Is DMVPN and Why Does Phase Matter?
Dynamic Multipoint VPN (DMVPN) is one of those Cisco technologies that, once you truly understand it, changes how you think about WAN design. The three DMVPN phases—Phase 1, 2, and 3—aren’t just marketing labels. They describe fundamentally different traffic-forwarding behaviors that have real performance and scalability consequences for your network.
If you’ve already read the IOS vs IOS-XE vs IOS-XR breakdown, you know that modern enterprise deployments almost universally run IOS-XE. This guide targets IOS-XE 17.x running on platforms like ISR 4000 series, CSR 1000v, and Catalyst 8000 routers—but the concepts apply equally to older IOS 15.x deployments.
Here’s the quick distinction between phases:
- Phase 1: Classic hub-and-spoke. All spoke-to-spoke traffic transits the hub. Simple, predictable, doesn’t scale well with high spoke counts.
- Phase 2: Spokes can build direct tunnels to each other on demand. Hub still handles the first packet; after that, spokes go direct. This is what most engineers mean when they say “DMVPN.”
- Phase 3: Introduces NHRP Redirect and NHRP Shortcut, allowing even better scalability and hierarchical DMVPN designs. Traffic shortcuts happen faster, and summarization on the hub becomes possible without breaking spoke-to-spoke.
This guide covers Phase 2 end-to-end: architecture, hub config, spoke config, routing protocol quirks, and verification. Every command block is production-grade IOS-XE syntax.
DMVPN Phase 2 Architecture
DMVPN combines three technologies:
- mGRE (Multipoint GRE): A single tunnel interface on the hub that can accept connections from any number of spokes without a per-peer tunnel interface.
- NHRP (Next Hop Resolution Protocol): The “phonebook” of DMVPN. Spokes register their public NBMA IP (typically the internet-facing interface IP) with the hub’s NHRP server. When Spoke A wants to reach Spoke B directly, it queries NHRP to resolve Spoke B’s NBMA address.
- A dynamic routing protocol: Usually EIGRP or OSPF (with caveats—see below). Carries routes and enables the network to dynamically learn prefixes as spokes come and go.
In Phase 2, the hub uses an mGRE tunnel. Spokes must use mGRE as well: a spoke configured with a point-to-point GRE tunnel (a fixed tunnel destination) can only ever send traffic through the hub, which is Phase 1 behavior. mGRE on the spokes is required for dynamic spoke-to-spoke tunnels to form.
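If it helps to see the knife-edge, here's the one-command difference on a spoke—a minimal sketch using the lab addressing defined in the next section:

! Phase 1 spoke: point-to-point GRE, everything transits the hub
interface Tunnel0
 tunnel source GigabitEthernet0/0/0
 tunnel destination 10.0.0.1
!
! Phase 2 spoke: mGRE, destination resolved per-peer via NHRP
interface Tunnel0
 tunnel source GigabitEthernet0/0/0
 tunnel mode gre multipoint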
The spoke-to-spoke flow works like this:
- Spoke A sends traffic destined for Spoke B’s subnet. The first packet hits Spoke A’s routing table—the route points to the hub (next-hop is hub’s tunnel IP).
- The packet traverses the hub. Meanwhile, Spoke A sends an NHRP Resolution Request to the hub asking: “What’s the NBMA address for Spoke B’s tunnel IP?”
- The hub responds with Spoke B’s NBMA address (its public IP).
- Spoke A builds a direct IPsec/GRE tunnel to Spoke B. Subsequent traffic flows directly, bypassing the hub entirely.
- The spoke-to-spoke tunnel has a configurable hold-down timer. If no traffic flows for that period, the dynamic tunnel tears down and the process repeats next time.
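That "hold-down" is the NHRP holdtime: dynamic cache entries (and the spoke-to-spoke tunnels they back) are purged when their holdtime expires without a refresh. It's set per tunnel interface; a minimal sketch, matching the value used throughout this guide:

interface Tunnel0
 ! Holdtime advertised in NHRP registrations/replies this router sends;
 ! idle dynamic entries are purged when it expires (IOS default: 7200 s)
 ip nhrp holdtime 300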
Lab Topology
We’ll use this topology throughout the guide:
           Internet/NBMA Cloud
                   |
           10.0.0.1 (Hub: R1)
              /           \
      10.0.0.2             10.0.0.3
    (Spoke: R2)           (Spoke: R3)
Tunnel network: 172.16.0.0/24
Hub tunnel IP: 172.16.0.1
Spoke R2 tunnel: 172.16.0.2
Spoke R3 tunnel: 172.16.0.3
LAN behind R2: 192.168.2.0/24
LAN behind R3: 192.168.3.0/24
Hub LAN: 192.168.1.0/24
For IPsec protection, we'll use IKEv2 with a wildcard pre-shared key. In production you'd use certificates, but PSK is clearer for a config walkthrough.
Hub Configuration (R1)
IKEv2 and IPsec
! IKEv2 Keyring - wildcard PSK accepts any peer
crypto ikev2 keyring DMVPN-KEYRING
peer ANY
address 0.0.0.0 0.0.0.0
pre-shared-key local Str0ngP@ssw0rd
pre-shared-key remote Str0ngP@ssw0rd
! IKEv2 Profile
crypto ikev2 profile DMVPN-PROFILE
match identity remote address 0.0.0.0
authentication local pre-share
authentication remote pre-share
keyring local DMVPN-KEYRING
! IPsec Transform Set
crypto ipsec transform-set DMVPN-TS esp-aes 256 esp-sha256-hmac
mode transport
! IPsec Profile (referenced by tunnel)
crypto ipsec profile DMVPN-IPSEC
set transform-set DMVPN-TS
set ikev2-profile DMVPN-PROFILE
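One note: the blocks above lean on IOS-XE's built-in default IKEv2 proposal and policy. If your security baseline requires pinning algorithms explicitly, a sketch along these lines works (DMVPN-PROP and DMVPN-POLICY are placeholder names; pick ciphers to match your own policy):

crypto ikev2 proposal DMVPN-PROP
 encryption aes-cbc-256
 integrity sha256
 group 19
!
crypto ikev2 policy DMVPN-POLICY
 proposal DMVPN-PROP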
mGRE Tunnel Interface
interface Tunnel0
description DMVPN-HUB
ip address 172.16.0.1 255.255.255.0
no ip redirects
ip nhrp authentication NHRPKEY1
ip nhrp map multicast dynamic
ip nhrp network-id 100
ip nhrp holdtime 300
ip nhrp server-only
tunnel source GigabitEthernet0/0/0
tunnel mode gre multipoint
tunnel key 100
tunnel protection ipsec profile DMVPN-IPSEC
Key hub-specific commands explained:
- ip nhrp map multicast dynamic: allows the hub to dynamically track which spokes should receive multicast/broadcast (used by routing protocol hellos).
- ip nhrp server-only: tells IOS-XE this router is the NHRP server and will not initiate NHRP resolution requests itself.
- no ip redirects: critical in Phase 2. Prevents the hub from sending ICMP redirects when forwarding spoke-to-spoke packets, which would confuse the NHRP process.
- tunnel key 100: optional but recommended when multiple DMVPN tunnels share the same source interface.
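To confirm the hub is actually building those dynamic multicast mappings as spokes register, check the multicast map table (exact output formatting varies by release):

R1# show ip nhrp multicast

Expect one dynamic entry per registered spoke NBMA address (10.0.0.2 and 10.0.0.3 here). If a spoke is missing from this table, the hub will never replicate routing protocol hellos to it, and the adjacency won't form.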
Spoke Configuration (R2 and R3)
Both spokes are nearly identical. The only differences are the tunnel IP and local LAN prefix.
! ---- Spoke R2 ----
crypto ikev2 keyring DMVPN-KEYRING
peer ANY
address 0.0.0.0 0.0.0.0
pre-shared-key local Str0ngP@ssw0rd
pre-shared-key remote Str0ngP@ssw0rd
crypto ikev2 profile DMVPN-PROFILE
match identity remote address 0.0.0.0
authentication local pre-share
authentication remote pre-share
keyring local DMVPN-KEYRING
crypto ipsec transform-set DMVPN-TS esp-aes 256 esp-sha256-hmac
mode transport
crypto ipsec profile DMVPN-IPSEC
set transform-set DMVPN-TS
set ikev2-profile DMVPN-PROFILE
interface Tunnel0
description DMVPN-SPOKE
ip address 172.16.0.2 255.255.255.0
no ip redirects
ip nhrp authentication NHRPKEY1
ip nhrp map 172.16.0.1 10.0.0.1
ip nhrp map multicast 10.0.0.1
ip nhrp network-id 100
ip nhrp holdtime 300
ip nhrp nhs 172.16.0.1
tunnel source GigabitEthernet0/0/0
tunnel mode gre multipoint
tunnel key 100
tunnel protection ipsec profile DMVPN-IPSEC
Spoke-specific commands:
- ip nhrp map 172.16.0.1 10.0.0.1: static NHRP mapping ("the router with tunnel IP 172.16.0.1 has NBMA address 10.0.0.1"). This is how spokes know how to reach the hub before dynamic NHRP is working.
- ip nhrp map multicast 10.0.0.1: tells the spoke to send multicast (routing protocol hellos) toward the hub's NBMA address.
- ip nhrp nhs 172.16.0.1: designates the hub as the NHS (Next Hop Server). The spoke registers itself with this server on startup.
For R3, simply change the tunnel IP to 172.16.0.3; the tunnel source stays GigabitEthernet0/0/0, which carries R3's WAN IP (10.0.0.3). The NHRP static mappings remain the same; they always point to the hub.
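Spelled out, R3's tunnel interface looks like this (assuming R3's WAN interface is also named GigabitEthernet0/0/0; the crypto config is identical to R2's):

! ---- Spoke R3 ----
interface Tunnel0
 description DMVPN-SPOKE
 ip address 172.16.0.3 255.255.255.0
 no ip redirects
 ip nhrp authentication NHRPKEY1
 ip nhrp map 172.16.0.1 10.0.0.1
 ip nhrp map multicast 10.0.0.1
 ip nhrp network-id 100
 ip nhrp holdtime 300
 ip nhrp nhs 172.16.0.1
 tunnel source GigabitEthernet0/0/0
 tunnel mode gre multipoint
 tunnel key 100
 tunnel protection ipsec profile DMVPN-IPSEC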
Routing Protocol: EIGRP for DMVPN Phase 2
This is where engineers get burned. DMVPN Phase 2 has a non-obvious interaction with routing protocols: the next-hop for spoke routes must be preserved as-is, not changed to the hub’s tunnel IP.
Why? Because when Spoke A learns a route to Spoke B’s LAN via EIGRP, the next-hop in the routing table must be Spoke B’s tunnel IP (172.16.0.3), not the hub’s. If the hub rewrites next-hops to itself (default EIGRP behavior on a multipoint interface), Spoke A will think it needs to reach 192.168.3.0/24 via 172.16.0.1, will never trigger NHRP resolution for 172.16.0.3, and spoke-to-spoke tunnels will never form.
Hub EIGRP Config
router eigrp 100
network 172.16.0.0 0.0.0.255
network 192.168.1.0
no auto-summary
interface Tunnel0
no ip split-horizon eigrp 100
no ip next-hop-self eigrp 100
The two tunnel interface commands are the secret sauce:
- no ip split-horizon eigrp 100: by default, EIGRP won't advertise a route back out the same interface it was learned on. On an mGRE hub, this would prevent spoke routes from being propagated to other spokes. Disabling split-horizon on the hub fixes this.
- no ip next-hop-self eigrp 100: preserves the original next-hop when the hub re-advertises spoke routes. Spokes see each other's tunnel IPs as next-hops, enabling NHRP resolution.
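Both knobs fail silently when missing, so it's worth a sanity check straight from the hub's running config:

R1# show running-config interface Tunnel0 | include eigrp
 no ip split-horizon eigrp 100
 no ip next-hop-self eigrp 100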
Spoke EIGRP Config
router eigrp 100
network 172.16.0.0 0.0.0.255
network 192.168.2.0
no auto-summary
No special tunnel interface tweaks needed on spokes—split-horizon behavior is fine since spokes only have one DMVPN peer at startup (the hub).
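One EIGRP-adjacent footgun worth flagging here: tunnel interfaces on many IOS releases default to a very low bandwidth value (100 Kbps), which skews EIGRP metrics and throttles EIGRP's update pacing. Many designs set it to the real uplink speed; a sketch, with the value as an assumption:

interface Tunnel0
 ! value in Kbps; 100000 = 100 Mbps, match your actual WAN uplink
 bandwidth 100000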
Using OSPF instead? For Phase 2, set ip ospf network broadcast on all tunnel interfaces and give the spokes ip ospf priority 0 so the hub always wins DR election. The broadcast network type preserves the originating spoke's next-hop, which is exactly what Phase 2 needs. Avoid point-to-multipoint here: it rewrites next-hops to the hub, which suppresses NHRP resolution and prevents spoke-to-spoke tunnels (point-to-multipoint is the usual choice for Phase 3, where hub next-hops are the point). OSPF for DMVPN is a bigger topic; check the OSPF troubleshooting guide for adjacency pitfalls that are even more pronounced in DMVPN environments.
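A minimal sketch of the Phase 2 OSPF knobs (the router ospf process config is omitted):

! Hub tunnel interface: must win DR election
interface Tunnel0
 ip ospf network broadcast
 ip ospf priority 255
!
! Spoke tunnel interfaces: never eligible for DR/BDR
interface Tunnel0
 ip ospf network broadcast
 ip ospf priority 0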
Verification Commands and Expected Output
1. NHRP Cache — Is Registration Working?
R1# show ip nhrp detail
172.16.0.2/32 via 172.16.0.2
Tunnel0 created 00:04:12, expire 00:04:47
Type: dynamic, Flags: registered used nhop
NBMA address: 10.0.0.2
172.16.0.3/32 via 172.16.0.3
Tunnel0 created 00:03:55, expire 00:05:04
Type: dynamic, Flags: registered used nhop
NBMA address: 10.0.0.3
Both spokes are registered. The registered flag confirms successful NHRP registration. expire counts down from the holdtime advertised at registration (300 s per our config; the IOS default is 7200 s). If it hits zero without renewal, the entry is purged.
2. DMVPN Tunnel Status
R1# show dmvpn detail
Legend: Attrb --> S - Static, D - Dynamic, I - Incomplete
N - NATed, L - Local, X - No Socket
# Ent --> Number of NHRP entries with same NBMA peer
NHS Status: E --> Expecting Replies, R --> Responding, W --> Waiting
UpDn Time --> Up/Down Time for a Tunnel
==========================================================================
Interface: Tunnel0, IPv4 NHRP Details
Type:Hub, NHRP Peers:2,
# Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb
----- --------------- --------------- ----- -------- -----
1 10.0.0.2 172.16.0.2 UP 00:04:12 D
1 10.0.0.3 172.16.0.3 UP 00:03:55 D
3. Spoke-to-Spoke Resolution in Action
On Spoke R2, before triggering any traffic:
R2# show dmvpn
Interface: Tunnel0, IPv4 NHRP Details
Type:Spoke, NHRP Peers:1,
# Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb
----- --------------- --------------- ----- -------- -----
1 10.0.0.1 172.16.0.1 UP 00:05:30 S
Only one peer (the hub). Now ping Spoke R3’s LAN from R2:
R2# ping 192.168.3.1 source 192.168.2.1 repeat 10
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 192.168.3.1, timeout is 2 seconds:
!.!!!!!!!!
Success rate is 90 percent (9/10), round-trip min/avg/max = 2/4/7 ms
That dot is the expected miss—it drops during the brief IKEv2 negotiation window as the direct spoke-to-spoke IPsec tunnel is being established. The first packet reaches R3 via the hub successfully; the drop happens when NHRP resolution is complete and the spokes are mid-handshake on the direct tunnel. Once the direct tunnel is up, all subsequent packets fly through. Now check R2’s DMVPN table again:
R2# show dmvpn
Interface: Tunnel0, IPv4 NHRP Details
Type:Spoke, NHRP Peers:2,
# Ent Peer NBMA Addr Peer Tunnel Add State UpDn Tm Attrb
----- --------------- --------------- ----- -------- -----
1 10.0.0.1 172.16.0.1 UP 00:06:15 S
1 10.0.0.3 172.16.0.3 UP 00:00:04 D
The dynamic entry for R3 (10.0.0.3 / 172.16.0.3) confirms a direct spoke-to-spoke tunnel is now active.
4. Routing Table Sanity Check
R2# show ip route eigrp
D 192.168.1.0/24 [90/27008000] via 172.16.0.1, 00:06:00, Tunnel0
D 192.168.3.0/24 [90/28288000] via 172.16.0.3, 00:05:58, Tunnel0
This is what you want. The route to R3’s LAN (192.168.3.0) has a next-hop of 172.16.0.3 (Spoke R3’s tunnel IP), not 172.16.0.1. If you see the hub’s IP as next-hop for all routes, the no ip next-hop-self command is missing or wasn’t applied correctly.
Common DMVPN Phase 2 Pitfalls
Pitfall 1: NAT Breaking NHRP Registration
If spokes are behind NAT (common in home-office or small-branch deployments), the NBMA IP registered with the hub will be the public NAT IP, not the spoke’s local WAN IP. DMVPN handles this via NHRP NAT extension—add ip nhrp registration no-unique on spokes behind NAT to allow re-registration when the public IP changes (DHCP scenarios).
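On the NATed spoke's tunnel interface, that looks like:

interface Tunnel0
 ! permit re-registration when the public (NAT) address changes
 ip nhrp registration no-unique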
For full NAT traversal, IKEv2 handles NAT-T automatically. Verify with:
R2# show crypto ikev2 sa detail | include NAT
NAT-T is used
Pitfall 2: MTU and Fragmentation
GRE adds 24 bytes of overhead (28 with a tunnel key configured, as in our config). IPsec in transport mode adds another ~50-70 bytes depending on cipher suite. On a standard 1500-byte MTU uplink, your effective payload is around 1400 bytes. Set tunnel MTU and TCP MSS clamping:
interface Tunnel0
ip mtu 1400
ip tcp adjust-mss 1360
Failure to do this causes silent black-holing of large packets (PDF downloads, large file transfers) while small packets (pings, DNS) work fine.
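A quick smoke test: ping across the tunnel at the tunnel MTU with the DF bit set. If 1400-byte pings fail while ordinary pings succeed, revisit the MTU math above:

R2# ping 192.168.3.1 source 192.168.2.1 size 1400 df-bit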
Pitfall 3: IKEv2 Profile Mismatch
IKEv2 profile matching is strict. If the identity a peer presents doesn't match the other side's match identity remote statement, IKEv2 negotiation will fail with a confusing “no proposal chosen” error even when the proposal is actually fine. Verify:
R1# debug crypto ikev2 error
R1# debug crypto ikev2 packet
Look for IKEv2 SA EXCHANGE FAILED with reason NO_PROPOSAL_CHOSEN vs AUTHENTICATION_FAILED—they point to different root causes.
Pitfall 4: NHRP Authentication Mismatch
The ip nhrp authentication key must match exactly on hub and all spokes (case-sensitive, max 8 characters). A mismatch causes spokes to fail registration silently on older IOS versions. Check:
R2# debug nhrp error
*Apr 26 14:22:07.453: NHRP: Receive Registration Request via Tunnel0 vrf 0, packet size: 116
*Apr 26 14:22:07.453: NHRP: NHRP authentication failed for 10.0.0.2
Phase 2 vs Phase 3: When to Upgrade
Phase 2 works well for up to ~100 spokes. Beyond that, the hub becomes a bottleneck during the initial spoke-to-spoke setup phase (every new dynamic tunnel still generates NHRP traffic through the hub). Phase 3 addresses this with:
- ip nhrp redirect on the hub (sends a redirect message to spokes to trigger direct tunnel building without the second round-trip)
- ip nhrp shortcut on spokes (installs host routes for active spoke-to-spoke sessions)
- Allows summarization on the hub without breaking spoke-to-spoke discovery
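The interface-level half of the migration is famously small; a sketch (routing-side changes, such as summarizing at the hub, not shown):

! Hub tunnel interface
interface Tunnel0
 ip nhrp redirect
!
! Every spoke tunnel interface
interface Tunnel0
 ip nhrp shortcut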
Phase 3 also integrates cleanly with SD-WAN overlays—many Cisco SD-WAN (Viptela) deployments use DMVPN Phase 3 principles under the hood for data-plane forwarding. If you’re doing any SD-WAN evaluation work, understanding Phase 2 and Phase 3 deeply will make the vEdge/cEdge architecture much more intuitive.
For more Cisco IOS-XE automation and operational tooling around your DMVPN deployment, the EEM scripting guide covers how to automatically alert on NHRP registration failures or tunnel flaps using embedded event manager—extremely useful in large DMVPN deployments where manual monitoring doesn’t scale.
Quick Reference: Essential DMVPN Phase 2 Commands
| Command | Purpose |
|---|---|
| show dmvpn detail | Tunnel state and peer list with NBMA mappings |
| show ip nhrp detail | Full NHRP cache with flags and expiry |
| show ip nhrp nhs detail | Spoke-side view of NHS status and registration state |
| show crypto ikev2 sa | Active IKEv2 security associations |
| show crypto ipsec sa peer X.X.X.X | IPsec SA detail for a specific NBMA peer |
| debug nhrp error | NHRP error messages (auth failures, registration rejects) |
| debug nhrp packet | Verbose NHRP packet-level trace |
| clear ip nhrp | Flush entire NHRP cache (use with caution in production) |
| clear dmvpn session peer X.X.X.X | Tear down a specific dynamic spoke-to-spoke tunnel |
Wrapping Up
DMVPN Phase 2 gives you the best of both worlds: the operational simplicity of a hub-and-spoke design with the performance benefits of direct spoke-to-spoke communication. The key takeaways are:
- Use mGRE on both hub and spokes for full Phase 2 capability
- Apply no ip next-hop-self eigrp and no ip split-horizon eigrp on the hub's tunnel interface—these are the most commonly missed configs
- Set MTU and MSS on the tunnel interface to avoid silent large-packet drops
- Verify NHRP registration with show ip nhrp detail before testing spoke-to-spoke traffic
- Plan for Phase 3 if your deployment will exceed ~100 spokes or requires summarization at the hub
Got a specific DMVPN scenario you’re troubleshooting—NAT traversal at scale, dual-hub redundancy, or integrating with an existing BGP WAN? Drop it in the comments. Adding BGP as the routing protocol under your DMVPN overlay is another common next step, and the same next-hop preservation logic applies there too.