PIM Sparse Mode

From CT3


PIM Dense Mode (PIM-DM) floods multicast traffic throughout a network by default, and downstream routers not serving any members for a multicast group must send prune requests upstream toward the source to stem the flow. PIM Sparse Mode (PIM-SM) takes the opposite approach: multicast traffic is forwarded only to group members that explicitly request it via join requests.

Image:PIM-SM_operation.png

However, a PIM router needs to know how to get those join requests to the source router. In PIM-DM this could be easily accomplished by inspecting the source IP address of incoming multicast traffic, but PIM-SM doesn't allow for this as multicast traffic isn't forwarded until after a join message has been received and processed. So how do join requests make it to the source in the first place?

PIM-SM uses three components to solve this chicken-and-egg scenario: source trees, shared trees, and rendezvous points.


Rendezvous Points

When configuring PIM-SM on a network, at least one router must be designated as a rendezvous point (RP). The RP could be configured manually, or dynamically through Cisco's Auto-RP or PIMv2's Bootstrap Router (BSR) method. Regardless of which method is used, an RP performs a critical function: it establishes a common reference point from which multicast trees are grown.

Consider the following topology:

Image:PIM-SM_example_topology.png

PIM-SM is enabled on all router interfaces, and R2's loopback address of 2.2.2.2 has been statically configured as the RP on all routers in the network, including R2 itself, with the ip pim rp-address command.

R1(config)# ip pim rp-address 2.2.2.2

With an RP established, we can observe what happens when a source begins to transmit multicast traffic.

Source Trees

Assume a multicast server connected to R1 begins sending multicast traffic for group 239.1.2.3. When R1 receives this traffic, it recognizes it as destined for a multicast group because the destination IP address (239.1.2.3) resides in the 224.0.0.0/4 range. R1 automatically installs two routes in its multicast routing table:

R1# show ip mroute 239.1.2.3
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.2.3), 00:00:13/stopped, RP 2.2.2.2, flags: SPF
  Incoming interface: FastEthernet0/0, RPF nbr 10.0.12.2
  Outgoing interface list: Null

(192.168.1.100, 239.1.2.3), 00:00:13/00:02:58, flags: PFT
  Incoming interface: FastEthernet1/0, RPF nbr 0.0.0.0, Registering
  Outgoing interface list: Null

The (*, 239.1.2.3) route represents a shared tree rooted at the RP (notice the incoming interface listed as FastEthernet0/0, toward R2). This tree hasn't actually been built yet; think of the route as a placeholder. The (192.168.1.100, 239.1.2.3) route represents the source tree, rooted at the multicast source (on FastEthernet1/0).
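The classification step R1 performs, checking whether the destination address falls in the 224.0.0.0/4 multicast range, can be sketched in a few lines of Python using only the standard library:

```python
import ipaddress

def is_multicast(dst_ip: str) -> bool:
    """Return True if dst_ip falls in the IPv4 multicast range 224.0.0.0/4."""
    return ipaddress.ip_address(dst_ip) in ipaddress.ip_network("224.0.0.0/4")

print(is_multicast("239.1.2.3"))      # the group address from the example: True
print(is_multicast("192.168.1.100"))  # the source's unicast address: False
```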

R1 does not immediately begin forwarding the multicast traffic; note that the outgoing interface list (OIL) for both routes is null. Instead, R1 begins encapsulating multicast packets from the source into PIM register messages and unicasts them directly to the RP. The encapsulated packets inside remain addressed to the group (239.1.2.3); only the outer register message is addressed to the RP itself.
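The encapsulation step can be sketched as follows. The class and field names are illustrative stand-ins, not the actual PIM register packet format (defined in RFC 7761); the point is that the original multicast packet rides unchanged inside a unicast message to the RP:

```python
from dataclasses import dataclass

@dataclass
class MulticastPacket:
    src: str        # unicast source, e.g. 192.168.1.100
    group: str      # multicast group, e.g. 239.1.2.3
    payload: bytes

@dataclass
class RegisterMessage:
    """Sketch of a PIM register: the original multicast packet is carried
    inside a message unicast directly to the RP."""
    dst_rp: str              # unicast destination: the RP's address
    inner: MulticastPacket   # encapsulated multicast packet, unchanged

def encapsulate_for_rp(pkt: MulticastPacket, rp_address: str) -> RegisterMessage:
    # The inner packet keeps its group destination; only the outer
    # register message is addressed to the RP.
    return RegisterMessage(dst_rp=rp_address, inner=pkt)

reg = encapsulate_for_rp(
    MulticastPacket("192.168.1.100", "239.1.2.3", b"data"), "2.2.2.2")
```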

When the RP receives the first register message, it creates its own entries for the two trees:

R2# sh ip mroute 239.1.2.3

(*, 239.1.2.3), 00:03:56/stopped, RP 2.2.2.2, flags: SP
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list: Null

(192.168.1.100, 239.1.2.3), 00:00:05/00:02:54, flags: P
  Incoming interface: FastEthernet0/0, RPF nbr 10.0.12.1
  Outgoing interface list: Null

Notice that the source tree is listed as incoming from R1, while the shared tree has no incoming interfaces, as it isn't built from the RP until at least one member has joined the group. Maintaining a source tree from the source to the RP ensures the RP knows the address of the multicast source(s) for the group.

Image:PIM-SM_source_tree.png

After creating the two routes in its multicast routing table, the RP sends a register stop message to R1, instructing it to send no further register messages. The delay between the first register message and the register stop is typically only a fraction of a second.

Routes for both trees will remain in the tables of both routers as long as multicast traffic is being sent to the group. At this point, neither R3 nor R4 has any knowledge of the 239.1.2.3 group:

R3# show ip mroute 239.1.2.3
Group 239.1.2.3 not found

Shared Trees

Enter a group member on R3. The multicast client indicates to R3 via IGMP that it wants to receive traffic for the 239.1.2.3 group. R3 records the IGMP join in its multicast routing table and sends a PIM join request for the group to the RP (R2). The RP receives the join request from R3 and adds FastEthernet0/1 (to R3) to the outgoing interface lists of both mroutes:

R2# show ip mroute 239.1.2.3

(*, 239.1.2.3), 00:00:30/00:03:17, RP 2.2.2.2, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/1, Forward/Sparse, 00:00:12/00:03:17

(192.168.1.100, 239.1.2.3), 00:00:30/00:03:23, flags: T
  Incoming interface: FastEthernet0/0, RPF nbr 10.0.12.1
  Outgoing interface list:
    FastEthernet0/1, Forward/Sparse, 00:00:12/00:03:17

In this manner, the source and shared trees are joined. However, because the RP didn't previously have any outgoing interfaces for either tree, it issues its own join request up the source tree to R1, requesting that multicast traffic for the group be forwarded to the RP.
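The RP's reaction to the join can be modeled as a simple state update, adding the joining interface to the OIL of every tree for the group. The dictionary below is an illustrative sketch, not a representation of IOS internals:

```python
# Sketch of the RP's mroute state. Keys are (source, group) pairs;
# "*" stands for the shared-tree wildcard source.
mroutes = {
    ("*", "239.1.2.3"):             {"oil": set()},
    ("192.168.1.100", "239.1.2.3"): {"oil": set()},
}

def process_join(group: str, from_interface: str) -> None:
    """Add the joining interface to the OIL of both the shared tree
    and any source trees for the group."""
    for (src, grp), entry in mroutes.items():
        if grp == group:
            entry["oil"].add(from_interface)

# R3's join arrives on the RP's FastEthernet0/1.
process_join("239.1.2.3", "FastEthernet0/1")
```

Because the OILs were empty before this update, the RP must also send its own join upstream toward the source, as described above.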

Upon receiving the RP's join request on the source tree, R1 removes the prune (P) flag from its (192.168.1.100, 239.1.2.3) route and adds FastEthernet0/0 (to R2) as an outgoing interface:

R1# show ip mroute 239.1.2.3

(*, 239.1.2.3), 00:00:33/stopped, RP 2.2.2.2, flags: SPF
  Incoming interface: FastEthernet0/0, RPF nbr 10.0.12.2
  Outgoing interface list: Null

(192.168.1.100, 239.1.2.3), 00:00:33/00:03:20, flags: FT
  Incoming interface: FastEthernet1/0, RPF nbr 0.0.0.0
  Outgoing interface list:
    FastEthernet0/0, Forward/Sparse, 00:00:15/00:03:14

Multicast traffic is now flowing from the source on R1 to the group member on R3.

Image:PIM-SM_shared_tree1.png

Compare the table of R1 (on the source tree) to that of R3 (on the shared tree):

R3# show ip mroute 239.1.2.3

(*, 239.1.2.3), 00:00:22/00:02:59, RP 2.2.2.2, flags: SCL
  Incoming interface: FastEthernet0/1, RPF nbr 10.0.23.2
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:00:22/00:02:55

Notice that R3 has only a single route, (*, 239.1.2.3), for the shared tree rooted at the RP; it has no knowledge of the source tree between R1 and R2.
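The RPF neighbor shown in R3's route (10.0.23.2) comes from the unicast routing table: the next hop used to reach the RP becomes the expected incoming interface for the shared tree. A minimal longest-prefix-match sketch follows; the routing table contents are assumed for illustration, not taken from the example routers:

```python
import ipaddress

# Assumed unicast routing table for R3: prefix -> (interface, next hop)
unicast_routes = {
    "2.2.2.2/32": ("FastEthernet0/1", "10.0.23.2"),   # host route to the RP
    "0.0.0.0/0":  ("FastEthernet0/0", "10.0.34.4"),   # assumed default route
}

def rpf_lookup(address: str):
    """Return the (interface, next hop) of the longest matching unicast
    route toward an address -- the basis of the RPF check."""
    addr = ipaddress.ip_address(address)
    matches = [
        (ipaddress.ip_network(prefix), nexthop)
        for prefix, nexthop in unicast_routes.items()
        if addr in ipaddress.ip_network(prefix)
    ]
    # Longest prefix wins, exactly as in unicast forwarding.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(rpf_lookup("2.2.2.2"))  # RPF interface/neighbor toward the RP
```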

When additional members join the multicast group, the shared tree is simply extended through additional join requests between PIM routers:

Image:PIM-SM_shared_tree2.png
R4# show ip mroute 239.1.2.3

(*, 239.1.2.3), 00:00:08/00:02:59, RP 2.2.2.2, flags: SCL
  Incoming interface: FastEthernet0/0, RPF nbr 10.0.34.3
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:00:08/00:02:55

One final note: after Cisco PIM-SM routers have determined the source of multicast traffic for a group, they will by default switch over to a source tree in order to forward traffic more efficiently. For example, assuming all links have an equal cost, multicast traffic has a more favorable route to R4 via the direct link between R1 and R4. PIM detects this by inspecting the unicast routing table, and R4 switches over to a source tree by sending a PIM join request to R1:

Image:PIM-SM_new_source_tree.png
R4# show ip mroute 239.1.2.3

(*, 239.1.2.3), 00:00:22/00:02:38, RP 2.2.2.2, flags: SJCL
  Incoming interface: FastEthernet0/0, RPF nbr 10.0.34.3
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:00:22/00:02:37

(192.168.1.100, 239.1.2.3), 00:00:21/00:02:58, flags: LJT
  Incoming interface: FastEthernet0/1, RPF nbr 10.0.14.1
  Outgoing interface list:
    FastEthernet1/0, Forward/Sparse, 00:00:21/00:02:38

This behavior can be disabled with the ip pim spt-threshold infinity command.
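The comparison the switchover relies on can be sketched as below. The hop counts are assumed from the example topology, and this is a simplification: Cisco's actual default (spt-threshold 0) switches to the source tree as soon as the first packet arrives and the source is known, with the unicast table then determining the shorter path:

```python
# Assumed path lengths from R4 in the example topology (one hop per link).
cost_via_shared_tree = 2   # traffic from R1 reaches R4 via R2 (RP) and R3
cost_to_source = 1         # R4 reaches R1 over their direct link

def prefers_source_tree(cost_shared: int, cost_source: int) -> bool:
    """True when the source tree offers a shorter unicast path than
    continuing to receive traffic down the shared tree."""
    return cost_source < cost_shared

print(prefers_source_tree(cost_via_shared_tree, cost_to_source))  # True
```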

PIM-SM Configuration

PIM-SM configuration includes three simple steps:

  1. Enable multicast routing
  2. Designate a PIM router to act as the rendezvous point (RP)
  3. Enable PIM-SM interfaces

The first two steps are accomplished with single commands in global configuration on all routers in the multicast domain:

R1(config)# ip multicast-routing
R1(config)# ip pim rp-address 172.16.34.1

Note that there exist other means of configuring RP routers, namely Cisco's proprietary Auto-RP and PIMv2's Bootstrap Router (BSR) methods. In our example, only manual configuration is used.

PIM is enabled per interface:

R1(config)# interface f0/0
R1(config-if)# ip pim sparse-mode

Believe it or not, this is all the configuration necessary to get a bare-bones multicast network up and running. After enabling PIM, routers will form adjacencies with other PIM routers and multicast routes will be exchanged:

R1# show ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.0.12.2         FastEthernet0/0          00:06:39/00:01:30 v2    1 / DR S
10.0.14.4         FastEthernet0/1          00:06:40/00:01:30 v2    1 / DR S