
Multicast “MPLS” VPNs

In this post, we’ll look at a basic, but still interesting, subject: what we call multicast MPLS VPNs.

Initial Configuration

As usual, we’ll need a test network. The diagram below shows the topology we’ll be working with.

Diagram

We will start with a fully functional MPLS network between the PE1, P1, P2 and PE2 routers, using OSPF as the IGP of choice. We will also have a fully functional unicast MPLS VPN for a VRF called “CE”, configured on both PE1 and PE2. This means that CE1 and CE2 should have fully functional unicast connectivity.

Let’s take a quick look at that configuration before we proceed.

CE1:

CE1#show ip route | begin ^Gateway
Gateway of last resort is not set

      10.0.0.0/8 is variably subnetted, 3 subnets, 2 masks
C        10.1.1.0/24 is directly connected, Ethernet0/0
L        10.1.1.11/32 is directly connected, Ethernet0/0
D        10.2.2.0/24 [90/307200] via 10.1.1.1, 00:49:21, Ethernet0/0

CE1#ping 10.2.2.22

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.2.2.22, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms

PE1:

PE1#show mpls ldp discovery
 Local LDP Identifier:
    192.168.0.1:0
    Discovery Sources:
    Interfaces:
        Ethernet0/1 (ldp): xmit/recv
            LDP Id: 192.168.0.11:0
PE1#show mpls ldp neighbor
    Peer LDP Ident: 192.168.0.11:0; Local LDP Ident 192.168.0.1:0
        TCP connection: 192.168.0.11.44273 - 192.168.0.1.646
        State: Oper; Msgs sent/rcvd: 64/64; Downstream
        Up time: 00:47:49
        LDP discovery sources:
          Ethernet0/1, Src IP addr: 192.168.11.11
        Addresses bound to peer LDP Ident:
          192.168.21.11   192.168.11.11   192.168.0.11    
PE1#show ip route vrf CE | begin ^Gateway
Gateway of last resort is not set

      10.0.0.0/8 is variably subnetted, 3 subnets, 2 masks
C        10.1.1.0/24 is directly connected, Ethernet0/0
L        10.1.1.1/32 is directly connected, Ethernet0/0
B        10.2.2.0/24 [200/0] via 192.168.0.2, 10:13:01

P1:

P1#show mpls ldp discovery
 Local LDP Identifier:
    192.168.0.11:0
    Discovery Sources:
    Interfaces:
        Ethernet0/0 (ldp): xmit/recv
            LDP Id: 192.168.0.22:0
        Ethernet0/1 (ldp): xmit/recv
            LDP Id: 192.168.0.1:0
P1#show mpls ldp neighbor
    Peer LDP Ident: 192.168.0.22:0; Local LDP Ident 192.168.0.11:0
        TCP connection: 192.168.0.22.45958 - 192.168.0.11.646
        State: Oper; Msgs sent/rcvd: 64/64; Downstream
        Up time: 00:47:50
        LDP discovery sources:
          Ethernet0/0, Src IP addr: 192.168.21.22
        Addresses bound to peer LDP Ident:
          192.168.21.22   192.168.22.22   192.168.0.22    
    Peer LDP Ident: 192.168.0.1:0; Local LDP Ident 192.168.0.11:0
        TCP connection: 192.168.0.1.646 - 192.168.0.11.44273
        State: Oper; Msgs sent/rcvd: 64/64; Downstream
        Up time: 00:47:49
        LDP discovery sources:
          Ethernet0/1, Src IP addr: 192.168.11.1
        Addresses bound to peer LDP Ident:
          192.168.11.1    192.168.0.1     

P2:

P2#show mpls ldp discovery
 Local LDP Identifier:
    192.168.0.22:0
    Discovery Sources:
    Interfaces:
        Ethernet0/0 (ldp): xmit/recv
            LDP Id: 192.168.0.11:0
        Ethernet0/1 (ldp): xmit/recv
            LDP Id: 192.168.0.2:0
P2#show mpls ldp neighbor
    Peer LDP Ident: 192.168.0.11:0; Local LDP Ident 192.168.0.22:0
        TCP connection: 192.168.0.11.646 - 192.168.0.22.45958
        State: Oper; Msgs sent/rcvd: 64/64; Downstream
        Up time: 00:47:50
        LDP discovery sources:
          Ethernet0/0, Src IP addr: 192.168.21.11
        Addresses bound to peer LDP Ident:
          192.168.21.11   192.168.11.11   192.168.0.11    
    Peer LDP Ident: 192.168.0.2:0; Local LDP Ident 192.168.0.22:0
        TCP connection: 192.168.0.2.646 - 192.168.0.22.31549
        State: Oper; Msgs sent/rcvd: 64/65; Downstream
        Up time: 00:47:50
        LDP discovery sources:
          Ethernet0/1, Src IP addr: 192.168.22.2
        Addresses bound to peer LDP Ident:
          192.168.22.2    192.168.0.2     

PE2:

PE2#show mpls ldp discovery
 Local LDP Identifier:
    192.168.0.2:0
    Discovery Sources:
    Interfaces:
        Ethernet0/1 (ldp): xmit/recv
            LDP Id: 192.168.0.22:0
PE2#show mpls ldp neighbor
    Peer LDP Ident: 192.168.0.22:0; Local LDP Ident 192.168.0.2:0
        TCP connection: 192.168.0.22.31549 - 192.168.0.2.646
        State: Oper; Msgs sent/rcvd: 65/64; Downstream
        Up time: 00:47:50
        LDP discovery sources:
          Ethernet0/1, Src IP addr: 192.168.22.22
        Addresses bound to peer LDP Ident:
          192.168.21.22   192.168.22.22   192.168.0.22    
PE2#show ip route vrf CE | begin ^Gateway
Gateway of last resort is not set

      10.0.0.0/8 is variably subnetted, 3 subnets, 2 masks
B        10.1.1.0/24 [200/0] via 192.168.0.1, 10:13:02
C        10.2.2.0/24 is directly connected, Ethernet0/0
L        10.2.2.2/32 is directly connected, Ethernet0/0

CE2:

CE2#show ip route | begin ^Gateway
Gateway of last resort is not set

      10.0.0.0/8 is variably subnetted, 3 subnets, 2 masks
D        10.1.1.0/24 [90/307200] via 10.2.2.2, 00:49:21, Ethernet0/0
C        10.2.2.0/24 is directly connected, Ethernet0/0
L        10.2.2.22/32 is directly connected, Ethernet0/0

CE2#ping 10.1.1.11

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.1.1.11, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms

Very nice. Let’s implement the multicast “MPLS” VPN.

Implementing Multicast – Core

The first thing we need to do is make sure our core multicast configuration is in place. The core is, of course, our MPLS network, consisting of PE1, P1, P2 and PE2. We can implement multicast any way we like. It’s very common to use Source-Specific Multicast for this purpose, but I’m old fashioned and I’ll do it using sparse-mode PIM, with P1 as the static RP. Here is the configuration.

PE1:

ip multicast-routing
!
interface Loopback0
 ip pim sparse-mode
!
interface Ethernet0/1
 ip pim sparse-mode
!
ip pim rp-address 192.168.0.11

P1:

ip multicast-routing
!
interface Loopback0
 ip pim sparse-mode
!
interface Ethernet0/0
 ip pim sparse-mode
!
interface Ethernet0/1
 ip pim sparse-mode
!
ip pim rp-address 192.168.0.11

P2:

ip multicast-routing
!
interface Loopback0
 ip pim sparse-mode
!
interface Ethernet0/0
 ip pim sparse-mode
!
interface Ethernet0/1
 ip pim sparse-mode
!
ip pim rp-address 192.168.0.11

PE2:

ip multicast-routing
!
interface Loopback0
 ip pim sparse-mode
!
interface Ethernet0/1
 ip pim sparse-mode
!
ip pim rp-address 192.168.0.11

Let’s verify that our multicast core works. I will join group 239.0.0.1 on PE1’s Loopback0 and group 239.0.0.2 on PE2’s Loopback0.

PE1:

interface Loopback0
 ip igmp join-group 239.0.0.1
!

PE2:

interface Loopback0
 ip igmp join-group 239.0.0.2
!

Let’s see if pinging these groups from PE1 and PE2 works. If it does, multicast in the core works like a charm.

PE1:

PE1#ping 239.0.0.2

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.0.0.2, timeout is 2 seconds:

Reply to request 0 from 192.168.0.2, 20 ms
Reply to request 0 from 192.168.0.2, 20 ms

PE2:

PE2#ping 239.0.0.1

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.0.0.1, timeout is 2 seconds:

Reply to request 0 from 192.168.0.1, 28 ms
Reply to request 0 from 192.168.0.1, 28 ms
Reply to request 0 from 192.168.0.1, 28 ms

Wonderful! It’s now time to configure actual multicast VPN between CE routers.

Implementing Multicast – VPN

Implementing multicast in the VPN is very similar to implementing it in the core. There is only one extra step: enabling a default multicast distribution tree (MDT) group for the VPN. This group is the “global” group that will carry all multicast VPN traffic across the core. We could optionally configure a data MDT, which would carry “high rate” traffic, but there is no need for it here. As with the core multicast configuration, we will need an RP. Again, we’ll use a static RP, and it will be PE1’s Ethernet0/0 interface (10.1.1.1). Let’s go.

CE1:

ip multicast-routing
!
interface Ethernet0/0
 ip pim sparse-mode
!
ip pim rp-address 10.1.1.1

PE1:

ip vrf CE
 mdt default 239.1.1.1
!
ip multicast-routing vrf CE
!
interface Ethernet0/0
 ip pim sparse-mode
!
ip pim vrf CE rp-address 10.1.1.1

PE2:

ip vrf CE
 mdt default 239.1.1.1
!
ip multicast-routing vrf CE
!
interface Ethernet0/0
 ip pim sparse-mode
!
ip pim vrf CE rp-address 10.1.1.1

CE2:

ip multicast-routing
!
interface Ethernet0/0
 ip pim sparse-mode
!
ip pim rp-address 10.1.1.1
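As a side note, the optional data MDT mentioned earlier would be a single extra line under the VRF on each PE. The group range and bandwidth threshold below are purely illustrative, since we don’t use a data MDT in this lab:

PE1 and PE2 (optional, not used here):

ip vrf CE
 mdt default 239.1.1.1
 mdt data 239.1.2.0 0.0.0.255 threshold 10

With a data MDT, any (S, G) stream exceeding the threshold is moved off the default MDT onto a group from the configured range, so only the PEs with interested receivers pull the high-rate traffic.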

As you can see above, there is nothing to be done on P1 and P2. Let’s verify that this works. I will join multicast group 239.22.22.22 on CE2’s Ethernet0/0.

CE2:

interface Ethernet0/0
 ip igmp join-group 239.22.22.22
!

I should be able to ping this group from CE1.

CE1:

CE1#ping 239.22.22.22

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.22.22.22, timeout is 2 seconds:

Reply to request 0 from 10.2.2.22, 1 ms

Very nice. There is one thing I’d like to show you on PE1, though. Take a look at the multicast routing table for VRF CE on PE1.

PE1:

PE1#show ip mroute vrf CE 239.22.22.22 | begin ^\(
(*, 239.22.22.22), 00:03:51/00:02:34, RP 10.1.1.1, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Tunnel1, Forward/Sparse, 00:03:51/00:02:34

(10.1.1.11, 239.22.22.22), 00:03:47/00:02:34, flags: T
  Incoming interface: Ethernet0/0, RPF nbr 10.1.1.11
  Outgoing interface list:
    Tunnel1, Forward/Sparse, 00:03:47/00:02:39

Outgoing interface is Tunnel1? We didn’t configure any tunnel interfaces! What’s this all about? Let’s examine a little bit more.

PE1:

PE1#show derived-config interface Tunnel1
Building configuration...

Derived configuration : 173 bytes
!
interface Tunnel1
 ip unnumbered Loopback0
 no ip redirects
 ip mtu 1500
 ip pim sparse-mode
 tunnel source Loopback0
 tunnel mode gre multipoint
 no routing dynamic
end

Interesting… a multipoint GRE tunnel. That’s how multicast VPNs are built: using a full mesh of multipoint GRE tunnels, each sourced from the local Loopback0 interface and destined to the default MDT group configured under the VRF.
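If you want to see this mapping yourself, the following commands should do it (the exact output format varies between IOS versions, so I’ll leave running them as an exercise):

PE1#show ip pim mdt
PE1#show ip pim vrf CE neighbor

The first shows the default MDT group and the tunnel interface it spawned; the second should list PE2 (192.168.0.2) as a PIM neighbor reachable over Tunnel1, which is the adjacency the VRF’s PIM runs over.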

The Twist

I hope that you, as aspiring (and current) CCIEs, knew all of the above. Seriously, I do. The Twist is why I wanted to write this blog in the first place. I was unsure whether to write it as a regular blog post, or as part of my myth-busting series, but I decided to go this way because not many people consider this to be a myth. “This” in the previous sentence is the following: you need to have MPLS for a multicast MPLS VPN to work. Yeah, that’s right… I am now going to show you that the above configuration keeps working (traffic from the source will still reach the client) with MPLS in the core disabled. If you are confused at this point, I don’t blame you. So was I when I first played with this some five years ago…

The easiest way to kill MPLS is to disable it globally on all MPLS routers. We will enter the “no mpls ip” global command on all four MPLS routers and observe that no labels are being exchanged. We will also verify that CE1 and CE2 can no longer exchange any unicast traffic…

All MPLS routers:

no mpls ip

CE1:

CE1#ping 10.2.2.22

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.2.2.22, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)

PE1:

PE1#show mpls ldp bindings
LIB not enabled

PE1#show mpls forwarding-table
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop    
Label      Label      or Tunnel Id     Switched      interface              
21         No Label   10.1.1.0/24[V]   0             aggregate/CE 

PE2:

PE2#show mpls ldp bindings
LIB not enabled

PE2#show mpls forwarding-table
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop    
Label      Label      or Tunnel Id     Switched      interface              
21         No Label   10.2.2.0/24[V]   0             aggregate/CE 

CE2:

CE2#ping 10.1.1.11

Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.1.1.11, timeout is 2 seconds:
.....
Success rate is 0 percent (0/5)

Now for the shocker. I am going to enable “debug ip packet” for ping traffic on CE2 and then ping 239.22.22.22 from CE1. Enjoy the show.

CE2:

access-list 100 permit icmp any any echo
access-list 100 permit icmp any any echo-reply
!
debug ip packet 100

CE1:

CE1#ping 239.22.22.22

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.22.22.22, timeout is 2 seconds:
.

We get no response. Of course we don’t, since the replies are unicast and unicast is broken, but… take a look at CE2.

CE2:

 IP: s=10.1.1.11 (Ethernet0/0), d=239.22.22.22, len 100, input feature, MCI Check(64), rtype 0, forus FALSE, sendself FALSE, mtu 0
 IP: s=10.2.2.22 (local), d=10.1.1.11 (Ethernet0/0), len 100, sending
 IP: s=10.2.2.22 (local), d=10.1.1.11 (Ethernet0/0), len 100, output feature, MFIB Adjacency(63), rtype 1, forus FALSE, sendself FALSE, mtu 0
 IP: s=10.2.2.22 (local), d=10.1.1.11 (Ethernet0/0), len 100, sending full packet

Our multicast ping reached CE2 without any issues. CE2 even tried to send back the response, but due to the absence of MPLS in the core, it never reached CE1. But why?

The reason is that multicast is not label-switched on Cisco routers at this point. While there are solutions aiming to change this (label-switched multicast using mLDP, for example), the vast majority of deployed routers and IOS versions don’t do it. Multicast VPN traffic is tunneled across the MPLS core as regular IP (GRE) traffic. How’s that for a 3 a.m. troubleshooting call?
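If you want to convince yourself, look at the core multicast state for the default MDT group on one of the P routers:

P1#show ip mroute 239.1.1.1

You should see ordinary (*, G) and (S, G) entries for 239.1.1.1, with the PE loopbacks (192.168.0.1 and 192.168.0.2) as the sources. The core routers see nothing but plain IP/GRE multicast; no labels are involved, which is exactly why killing MPLS didn’t break it.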

By the way, this is the reason for the quotation marks around “MPLS” in the title of this post…

Challenge for You…

I must admit I cheated a little bit. No, I didn’t lie about any of the above, but the RP placement was a deliberate choice. Post in the comments below why you think I chose the one I did. Were there, perhaps, some other deliberate choices in the configuration that made this an easy thing to prove?


Marko Milivojevic – CCIE #18427
Senior Technical Instructor – IPexpert
Join our Online Study List
