Multicast RPF – Basics

We all know how unicast routing works: when a router receives a packet, the forwarding decision is based on the destination address. In multicast, the decision about where to forward the traffic depends on where the packet came from. So the routers must know the origin of the packet instead of its destination, just the opposite of unicast routing: in multicast, the source IP address denotes the known origin of the traffic, and the destination address denotes the group of unknown receivers.

Routers running multicast use a mechanism called RPF (Reverse Path Forwarding) to prevent forwarding loops and to ensure the shortest path from the source to the receivers. That said, a router forwards multicast traffic only if it is received on the upstream interface toward the source. So when a router receives a multicast packet, it checks the source against the unicast routing table to determine whether the interface the packet arrived on provides the shortest path back to the source.

If the interface provides the shortest path back to the source, the router will forward the packet.

[Figure A]

If the interface is NOT on the shortest path back to the source, the router will silently discard the packet.

[Figure B]

In summary: after the router receives a multicast packet, it performs an RPF check. If the RPF check succeeds, the packet is forwarded; otherwise, it is silently discarded.

If the RPF check succeeds, the router forwards the packet out of each interface that is in the Outgoing Interface List (OIL); these entries list the interfaces toward the router's downstream multicast neighbors.

[Figure C]

The incoming interface (the RPF interface) on which the packet was received is never in the OIL, so the packet is never forwarded back out that interface.
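
Put another way, the forwarding decision can be sketched in a few lines of code. The Python snippet below is only an illustration of the logic described above; the routing table, interface names, and OIL are made up for the example, and this is not how a router actually implements it.

import ipaddress

# Stand-in for the unicast routing table: source prefix -> interface that
# is the shortest path back to that source (longest match omitted for brevity).
unicast_table = {
    ipaddress.ip_network("10.10.10.0/24"): "Fa0/0",
}

# Outgoing interface list (OIL) for the multicast entry, built from
# downstream PIM neighbors and directly connected receivers.
oil = {"Fa0/1", "Se0/0/0"}

def rpf_forward(source, in_interface):
    """Forward only if the packet arrived on the RPF interface."""
    src = ipaddress.ip_address(source)
    rpf_interface = next(
        (ifname for net, ifname in unicast_table.items() if src in net), None
    )
    if in_interface != rpf_interface:
        return []                          # RPF check failed: drop silently
    return sorted(oil - {in_interface})    # never send back out the incoming interface

print(rpf_forward("10.10.10.10", "Fa0/0"))   # ['Fa0/1', 'Se0/0/0'] - RPF check passes
print(rpf_forward("10.10.10.10", "Fa0/1"))   # [] - non-RPF interface, packet dropped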

[Figure D]

In the above example, each router forwards received multicast packets to each of its neighbor routers, as shown by the arrows indicating the initial multicast traffic flow from source 10.10.10.10 throughout the network. Observe that the two routers in the middle of the picture each receive multicast packets through the most direct path.
[Figure E]

These packets arrived through the RPF interface, so both routers forward the multicast packets to all neighbors, in this case each other. This results in the two routers also receiving packets via the non-RPF interface (that is, an interface that is not on the shortest path to the source), which causes the RPF check to fail and those packets to be silently discarded.

Let's do another test and say that the RPF interface is F1/1.

[Figure F]

Now:

[Figure G]

An RPF check is always performed against the incoming interface, the RPF interface. The RPF check will succeed if the incoming interface is on the shortest path back to the source.

The RPF interface is determined either by the underlying unicast routing protocol or the dedicated multicast routing protocol (for example, Distance Vector Multicast Routing Protocol [DVMRP], Multiprotocol BGP Extensions for IP Multicast [MBGP], and so on).

Note that changes in the unicast topology will not necessarily be reflected immediately in RPF if the multicast routing protocol relies on the underlying unicast routing tables. Such an RPF change depends on how frequently the RPF check is performed on a multicast forwarding entry; every five seconds is the current Cisco default.


IP Multicast addresses to Ethernet Addresses

 

To map IP multicast addresses to Ethernet addresses, the lower 23 bits of the Class D IP address are copied into one of the IANA-designated Ethernet addresses.

The Ethernet addresses reserved for this purpose are in the range 01:00:5e:00:00:00 through 01:00:5e:7f:ff:ff.

Ethernet frames have 48-bit address fields (source and destination, 48 bits each).

Expressed in hexadecimal, the first 24 bits of an Ethernet multicast address are 01:00:5e, which marks the frame as multicast. The next bit is always 0, leaving 23 bits for the multicast address. Because IP multicast group IDs are 28 bits long (1110XXXX XXXXXXXX XXXXXXXX XXXXXXXX) and only 23 bits are available, the mapping cannot be one-to-one, so only the 23 low-order bits of the multicast group ID are mapped onto the Ethernet address; the 5 remaining high-order bits of the group ID are ignored.

With this mapping, each Ethernet multicast address corresponds to 32 different IP multicast addresses (2^5).
This means that a host in one multicast group may have to filter out multicasts that are intended for other groups sharing the same Ethernet address.
An example

Having the following IP Multicast Address 224.192.16.1 convert it to the appropriate Ethernet MAC Representation.

224      .192      .16       .1
11100000 .11000000 .00010000 .00000001

Low-order 23 bits of the group address: 1000000 00010000 00000001

Multicast Ethernet prefix 01:00:5e plus the fixed 0 bit:
00000001 .00000000 .01011110 .0xxxxxxx .xxxxxxxx .xxxxxxxx

Final =
00000001 .00000000 .01011110 .01000000 .00010000 .00000001
    0   1     0   0     5   e     4   0     1   0     0   1

0100.5e40.1001

Verification

R1(config-if)#ip igmp join 224.192.16.1
R1(config-if)#
Sw1#show mac-address-table multicast
Vlan    Mac Address       Type       Ports
----    -----------       ----       -----
1    0100.5e40.1001    IGMP       Fa0/1
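
If you want to double-check this kind of mapping without doing the binary by hand, the short Python sketch below (standard library only; the function name is just for illustration) reproduces the calculation and also shows the 32-to-1 overlap mentioned earlier.

import ipaddress

def multicast_mac(group):
    """Map an IPv4 multicast group to its Ethernet MAC (Cisco dotted notation)."""
    g = int(ipaddress.IPv4Address(group))
    mac = 0x01005E000000 | (g & 0x7FFFFF)      # 01:00:5e prefix + low 23 bits
    h = format(mac, "012x")
    return "{}.{}.{}".format(h[0:4], h[4:8], h[8:12])

print(multicast_mac("224.192.16.1"))   # 0100.5e40.1001, as calculated above
# 32 groups share each MAC because 5 high-order bits are ignored, e.g.:
print(multicast_mac("225.64.16.1"))    # also 0100.5e40.1001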

IGMP Profile

An IGMP profile controls which multicast groups a switch port is allowed to join. If the profile action is permit, then everything outside the configured ranges is denied; if the action is deny, then everything outside the ranges is permitted.
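
As a rough model of that permit/deny behavior (a simplified sketch, not how the switch implements it), consider the following Python snippet.

import ipaddress

def profile_allows(action, ranges, group):
    """IGMP profile check: ranges get the action, everything else the opposite."""
    g = int(ipaddress.IPv4Address(group))
    in_range = any(
        int(ipaddress.IPv4Address(lo)) <= g <= int(ipaddress.IPv4Address(hi))
        for lo, hi in ranges
    )
    return in_range if action == "permit" else not in_range

# "permit" + range 224.1.1.1: only that group may be joined
print(profile_allows("permit", [("224.1.1.1", "224.1.1.1")], "224.1.1.1"))  # True
print(profile_allows("permit", [("224.1.1.1", "224.1.1.1")], "224.2.2.2"))  # False
# "deny" + range 224.2.2.2: everything except that group may be joined
print(profile_allows("deny", [("224.2.2.2", "224.2.2.2")], "224.1.1.1"))    # True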

Let's test the above.

The topology is as follows:

R3 acts as the server; for simplicity, ip pim dense-mode is enabled only on R3's Ethernet interface.

R4 is configured to join the 224.1.1.1 and 224.2.2.2 multicast groups.

R3#show ip pim neigh
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR
Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address
Prio/Mode
R3#
R4#show ip pim inter
Address          Interface                Ver/   Nbr    Query  DR     DR
                                          Mode   Count  Intvl  Prior
R4#show ip igmp grou
IGMP Connected Group Membership
Group Address    Interface                Uptime    Expires   Last
Reporter
224.1.1.1        FastEthernet0/0          00:01:58  stopped 150.34.34.4
224.2.2.2        FastEthernet0/0          00:01:57  stopped 150.34.34.4

Without any IGMP profile configured on Switch1, let's see if it works.

R3#clear ip mroute *
R3#ping 224.1.1.1

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:

Reply to request 0 from 150.34.34.4, 8 ms
R3#ping 224.2.2.2

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 224.2.2.2, timeout is 2 seconds:

Reply to request 0 from 150.34.34.4, 8 ms

Cool, it works.

Now, with this configuration on Sw1:

Sw1(config)#ip igmp profile 6
Sw1(config-igmp-profile)#permit
Sw1(config-igmp-profile)#range 224.1.1.1
Sw1(config-igmp-profile)#end
Sw1(config)#do show ip igmp profile 6
IGMP Profile 6
    permit
    range 224.1.1.1 224.1.1.1
Sw1(config)#int f0/3
Sw1(config-if)#ip igmp filter 6

Now let's test from R3 again:

R3#clear ip mroute *
R3#ping 224.1.1.1

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:

Reply to request 0 from 150.34.34.4, 4 ms
R3#ping 224.2.2.2

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 224.2.2.2, timeout is 2 seconds:
.
R3#

Now with the Deny Action:

Sw1(config)#ip igmp profile 2
Sw1(config-igmp-profile)#deny
Sw1(config-igmp-profile)#range 224.2.2.2
Sw1(config-igmp-profile)#end
Sw1(config)#do show ip igmp profile 2
IGMP Profile 2
    range 224.2.2.2 224.2.2.2
Sw1(config)#int f0/3
Sw1(config-if)#ip igmp fil 2

Let’s try it again:

R3#clear ip mroute *
R3#ping 224.1.1.1   

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:

Reply to request 0 from 150.34.34.4, 8 ms
R3#ping 224.2.2.2   

Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 224.2.2.2, timeout is 2 seconds:
.
R3#

And this works very nicely!


Sparsed in Many to One..

Internet Protocol Multicast is a routing technique designed to provide efficient data transmission to multiple receivers. Multicast uses Class D addressing to identify and route multicast traffic and Protocol Independent Multicast (PIM) to configure and structure the multicast network.

IP Multicast assembles users who wish to receive multicast traffic into multicast groups and assigns each group a specific Class D IP address. The Class D IP address range reserved for multicast addresses is 224.0.0.0 to 239.255.255.255. PIM is enabled on interfaces to provide the routing mechanism to structure the multicast traffic. When a message is sent to a multicast group, the sending host forwards a single copy of the data packet over the network. The intermediate routers replicate these data packets and distribute them to the multicast group members.
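
As a quick illustration (Python standard library only), the ipaddress module can confirm whether an address falls in the Class D range mentioned above.

import ipaddress

class_d = ipaddress.ip_network("224.0.0.0/4")          # 224.0.0.0 - 239.255.255.255
print(ipaddress.ip_address("224.1.1.1") in class_d)    # True: a multicast group
print(ipaddress.ip_address("192.168.1.1") in class_d)  # False: ordinary unicast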

Syntax: ip pim sparse-mode
Description: The ip pim sparse-mode command enables (and its no form disables) PIM sparse mode on an interface. Modes in multicast denote specific methods of routing multicast traffic.

Enabling PIM on an interface also enables IGMP operation on that interface. An interface can be configured to be in dense mode, sparse mode, or sparse-dense mode. The mode determines how the router populates its multicast routing table and how the router forwards multicast packets it receives from its directly connected LANs. You must enable PIM in one of these modes for an interface to perform IP multicast routing.
In populating the multicast routing table, dense-mode interfaces are always added to the table. Sparse-mode interfaces are added to the table only when periodic Join messages are received from downstream routers, or when there is a directly connected member on the interface. When forwarding from a LAN, sparse-mode operation occurs if there is an RP known for the group. If so, the packets are encapsulated and sent toward the RP. When no RP is known, the packet is flooded in a dense-mode fashion. If the multicast traffic from a specific source is sufficient, the receiver’s first-hop router may send joins toward the source to build a source-based distribution tree.
There is no default mode setting. By default, multicast routing is disabled on an interface.

If you configure sparse-dense mode, the idea of sparseness or denseness is applied to the group on the router, and the network manager should apply the same concept throughout the network. Another benefit of sparse-dense mode is that Auto-RP information can be distributed in a dense mode manner; yet, multicast groups for user groups can be used in a sparse mode manner. Thus, there is no need to configure a default RP at the leaf routers.
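
The sparse-versus-dense forwarding decision described above essentially reduces to whether an RP is known for the group. Here is a minimal Python sketch of that decision; the group-to-RP map is hypothetical.

rp_map = {"224.1.1.1": "110.110.1.1"}   # hypothetical group-to-RP mappings

def forwarding_mode(group):
    rp = rp_map.get(group)
    if rp:
        return "sparse: register/forward toward RP " + rp
    return "dense: flood and prune"

print(forwarding_mode("224.1.1.1"))   # sparse: register/forward toward RP 110.110.1.1
print(forwarding_mode("239.9.9.9"))   # dense: flood and prune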

To test this out, we are going to configure all interfaces on R1, R3, R4, and R5 in PIM sparse mode.

[Topology diagram]

R1#conf ter
Enter configuration commands, one per line.  End with CNTL/Z.
R1(config)#ip multicast-routing
R1(config)#int lo0
R1(config-if)#ip pim sparse
R1(config-if)#int f0/0
R1(config-if)#ip pim sparse
R1(config-if)#int s0/0/0
R1(config-if)#ip pim sparse

 
R3#conf ter
Enter configuration commands, one per line.  End with CNTL/Z.
R3(config)#ip multicast-routing
R3(config)#int lo0
R3(config-if)#ip pim sparse
R3(config-if)#int f0/0
R3(config-if)#ip pim sparse
R3(config-if)#int s0/0/0
R3(config-if)#ip pim sparse
R3(config-if)#^Z
R4#conf ter
Enter configuration commands, one per line.  End with CNTL/Z.
R4(config)#ip multicast-routing
R4(config)#int f0/0
R4(config-if)#ip pim sparse
R4(config-if)#int f0/1
R4(config-if)#ip pim sparse
R4(config-if)#int lo0
R4(config-if)#ip pim sparse
R4(config-if)#int s0/0/0.45
R4(config-subif)#ip pim sparse
R4(config-subif)#exit
R4(config)#int s0/0/0.134
R4(config-subif)#ip pim sparse
R4(config-subif)#ip pim nbma
R4(config-subif)#exit
R4(config)#int lo0
R4(config-if)#ip pim sparse
R4(config-if)#exit
R5#conf ter
 Enter configuration commands, one per line.  End with CNTL/Z.
 R5(config)#ip multicast-routing
 R5(config)#int lo0
 R5(config-if)#ip pim sparse
 R5(config-if)#int f0/0
 R5(config-if)#ip pim sparse
 R5(config-if)#int s0/0/0.45
 R5(config-subif)#ip pim sparse
 R5(config-subif)#exit

R1 should be the RP for this group only. Configure R4 as the mapping agent (Auto-RP), and also prevent any other router from being accepted as the RP for this particular group.

With Auto-RP, you configure the RPs themselves to announce their availability to the mapping agents. The RPs send their announcements to 224.0.1.39. The RP mapping agent listens to the announce packets from the RPs, then sends the RP-to-group mappings in discovery messages addressed to 224.0.1.40. These discovery messages are what the rest of the routers use to build their RP-to-group map. You can use one RP that also serves as the mapping agent, or you can configure multiple RPs and multiple mapping agents for redundancy. Generally Auto-RP is used with sparse-dense mode, since the Auto-RP information can then be propagated in dense mode. If your routers are configured with pure sparse mode on the interfaces, you can either shift to sparse-dense mode or configure ip pim autorp listener on the routers.

ip pim autorp listener is a way of overriding the interface configuration and allowing dense-mode operation for the Auto-RP groups. Even if an interface is configured with ip pim sparse-mode, this command allows the groups 224.0.1.39 and 224.0.1.40 to be flooded in dense mode.

If a CCIE lab question restricted you to using ip pim sparse-mode only, yet still required Auto-RP, then this could be the solution for you.

R1#conf ter
 Enter configuration commands, one per line.  End with CNTL/Z.
 R1(config)#ip pim autorp list
 R1(config)#
R3#conf ter
 Enter configuration commands, one per line.  End with CNTL/Z.
 R3(config)#ip pim autorp list
 R3(config)#
R4(config)#ip pim autorp list
R5(config)#ip pim autorp list
 R5(config)#

The RP itself is configured with "send-rp-announce," while the mapping agent is configured with "send-rp-discovery."

R1(config)#ip pim send-rp-announce Loopback0 scope 10 group-list 10 bidir
 R1(config)#access-list 10 permit 224.1.1.1
 R1(config)#ip pim bidir-enable
R3(config)#ip pim bidir-enable
R4(config)#ip pim bidir-enable
R5(config)#ip pim bidir-enable

PIM-SM cannot forward traffic in the upstream direction of a tree, because it only accepts traffic from one Reverse Path Forwarding (RPF) interface. This interface (for the shared tree) points toward the RP, therefore allowing only downstream traffic flow. In this case, upstream traffic is first encapsulated into unicast register messages, which are passed from the designated router (DR) of the source toward the RP. In a second step, the RP joins an SPT that is rooted at the source. Therefore, in PIM-SM, traffic from sources traveling toward the RP does not flow upstream in the shared tree, but downstream along the SPT of the source until it reaches the RP. From the RP, traffic flows along the shared tree toward all receivers.

To influence which router is the RP for a particular group when two RPs are announcing for that group, you can configure each router with a loopback address. Place the higher IP address on the preferred RP, and use the loopback interface as the source of the announce packets; for example, ip pim send-rp-announce loopback0. When multiple mapping agents are used, they listen to each other's discovery packets, and the mapping agent with the highest IP address wins and becomes the only forwarder of 224.0.1.40.
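
The election itself is just a numeric comparison of the announced RP addresses per group. The Python sketch below (a simplified model, not IOS code) illustrates it with the loopback addresses used later in this lab.

import ipaddress

announcements = [
    ("110.110.1.1", "224.1.1.1"),   # R1 announcing for 224.1.1.1
    ("110.110.5.5", "224.1.1.1"),   # R5 announcing for the same group
]

def elect_rps(announces):
    winners = {}
    for rp, group in announces:
        best = winners.get(group)
        # Compare as IP addresses, not strings, so 110.110.10.1 > 110.110.5.5, etc.
        if best is None or ipaddress.IPv4Address(rp) > ipaddress.IPv4Address(best):
            winners[group] = rp
    return winners

print(elect_rps(announcements))   # {'224.1.1.1': '110.110.5.5'} - highest IP wins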

To configure bidir-PIM, use the following commands in global configuration mode, depending on which method you use to distribute group-to-RP mappings:

Command and purpose:

Router(config)# ip pim rp-address rp-address [access-list] [override] bidir
Configures the address of a PIM RP for a particular group and specifies bidirectional mode. Use this command when you are not distributing group-to-RP mappings using either Auto-RP or the PIMv2 BSR mechanism.

Router(config)# ip pim rp-candidate type number [group-list access-list] bidir
Configures the router to advertise itself as a PIM Version 2 candidate RP to the BSR and specifies bidirectional mode. Use this command when you are using the PIMv2 BSR mechanism to distribute group-to-RP mappings.

Router(config)# ip pim send-rp-announce type number scope ttl-value [group-list access-list] [interval seconds] bidir
Configures the router to use Auto-RP to announce the groups for which it is willing to act as RP and specifies bidirectional mode. Use this command when you are using Auto-RP to distribute group-to-RP mappings.

PIM-SM constructs uni-directional shared trees that are used to forward data from senders to receivers of a multicast group. PIM-SM also allows the construction of source specific trees, but this capability is not related to the protocol described in this document.

The shared tree for each multicast group is rooted at a multicast router called the Rendezvous Point (RP). Different multicast groups can use separate RPs within a PIM domain.

In unidirectional PIM-SM, there are two possible methods for distributing data packets on the shared tree. These differ in the way packets are forwarded from a source to the RP:

Initially when a source starts transmitting, its first hop router encapsulates data packets in special control messages (Registers) which are unicast to the RP. After reaching the RP the packets are decapsulated and distributed on the shared tree.

A transition from the above distribution mode can be made at a later stage. This is achieved by building source specific state on all routers along the path between the source and the RP. This state is then used to natively forward packets from that source.
Both these mechanisms suffer from problems. Encapsulation results in significant processing, bandwidth and delay overheads. Forwarding using source specific state has additional protocol and memory requirements. Bi-directional PIM dispenses with both encapsulation and source state by allowing packets to be natively forwarded from a source to the RP using shared tree state. In contrast to PIM-SM this mode of forwarding does not require any data-driven events.

Auto-RP relies on a router designated as the RP mapping agent. Potential RPs announce themselves to the mapping agent, which resolves any conflicts. The mapping agent then sends the multicast group-to-RP mapping information to the other routers.

R4(config)#ip pim send-rp-discovery Loopback0 scope 10

• There is a client at Vlan 173 that is joining group 224.1.1.1

R1(config)#int f0/0
 R1(config-if)#ip igmp join 224.1.1.1

Verify the Multicast configuration by pinging the IGMP group address.

R5(config)#do ping 224.1.1.1

Type escape sequence to abort.
 Sending 1, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:

Reply to request 0 from 192.168.134.1, 132 ms
Reply to request 0 from 192.168.134.1, 156 ms

To verify that we can prevent another router from becoming the RP for this particular group, we configure a test RP with a higher IP address (which Auto-RP prefers) and then filter it.

R5(config)#access-list 10 permit 224.1.1.1
 R5(config)#ip pim send-rp-announce Loopback0 scope 10 group-list 10 bidir
R4#show ip pim rp map
 PIM Group-to-RP Mappings
 This system is an RP-mapping agent (Loopback0)

Group(s) 224.1.1.1/32
 RP 110.110.5.5 (?), v2v1, bidir
 Info source: 110.110.5.5 (?), elected via Auto-RP
 Uptime: 00:00:07, expires: 00:02:52
 RP 110.110.1.1 (?), v2v1, bidir
 Info source: 110.110.1.1 (?), via Auto-RP
 Uptime: 00:11:13, expires: 00:02:44
R1#show ip pim rp map
 PIM Group-to-RP Mappings
 This system is an RP (Auto-RP)

Group(s) 224.1.1.1/32
 RP 110.110.5.5 (?), v2v1, bidir
 Info source: 110.110.4.4 (?), elected via Auto-RP
 Uptime: 00:03:23, expires: 00:02:32
R4(config)#do show ip access-list

R4(config)#!No Access-list Configured
 R4(config)#
 R4(config)#access-list 1 deny 110.110.1.1
 R4(config)#access-list 2 deny 224.1.1.1
 R4(config)#ip pim rp-announce-filter rp-list 1 group-list 2
R1#show ip pim rp map
 PIM Group-to-RP Mappings
 This system is an RP (Auto-RP)

Group(s) 224.1.1.1/32
 RP 110.110.1.1 (?), v2v1, bidir
 Info source: 110.110.4.4 (?), elected via Auto-RP
 Uptime: 00:00:00, expires: 00:02:56
 R1#
R3#show ip pim rp map
 PIM Group-to-RP Mappings

Group(s) 224.1.1.1/32
 RP 110.110.1.1 (?), v2v1, bidir
 Info source: 110.110.4.4 (?), elected via Auto-RP
 Uptime: 00:00:05, expires: 00:02:50
R4#show ip pim rp map
 PIM Group-to-RP Mappings
 This system is an RP-mapping agent (Loopback0)

Group(s) 224.1.1.1/32
 RP 110.110.5.5 (?), v2v1, bidir
 Info source: 110.110.5.5 (?), elected via Auto-RP
 Uptime: 00:00:13, expires: 00:02:42
 RP 110.110.1.1 (?), v2v1, bidir
 Info source: 110.110.1.1 (?), via Auto-RP
 Uptime: 00:00:19, expires: 00:02:36

!

R4#show ip access-list
 Standard IP access list 1
 10 deny   110.110.1.1 (4 matches)
 Standard IP access list 2
 10 deny   224.1.1.1
 R4#



IPv6 & Multicast Routing

This configuration is based on this link

[Topology diagram]

The requirement is to configure a Loopback0 on R3 and configure the following IPv6 Networks on R3 and R4.

The configuration on R3 is:

Loopback0    3:3:3:33::/64
Fast0/1    3:3:3:30::/64
Serial0/0/1    3:3:3:34::/64

The configuration on R4 is:
Fast0/0        3:3:3:40::/64
Serial0/0/1        3:3:3:34::/64

R3#conf ter
Enter configuration commands, one per line.  End with CNTL/Z.
R3(config)#
R3(config)#ipv6 unicast-routing
R3(config)#interface Loopback0
R3(config-if)#ipv6 address 3:3:3:33::/64 eui-64
R3(config-if)#interface FastEthernet0/1
R3(config-if)#ipv6 address 3:3:3:30::/64 eui-64
R3(config-if)#interface Serial0/0/1
R3(config-if)#ipv6 address 3:3:3:34::/64 eui-64
R3(config-if)#exit
R3(config)#^Z

R4#conf ter
Enter configuration commands, one per line.  End with CNTL/Z.
R4(config)#
R4(config)#ipv6 unicast-routing
R4(config)#interface FastEthernet0/1
R4(config-if)#ipv6 address 3:3:3:40::/64 eui-64
R4(config-if)#interface Serial0/0/1
R4(config-if)#ipv6 address 3:3:3:34::/64 eui-64
R4(config-if)#

Now we are going to configure RIPng and make sure R3 and R4 can ping all IPv6 networks

R3#conf ter
Enter configuration commands, one per line.  End with CNTL/Z.
R3(config)#ipv6 unicast-routing
R3(config)#interface Loopback0
R3(config-if)#ipv6 rip Lab3 enable
R3(config-if)#interface FastEthernet0/1
R3(config-if)#ipv6 rip Lab3 enable
R3(config-if)#interface Serial0/0/1
R3(config-if)#ipv6 rip Lab3 enable
R3(config-if)#^Z

R4#conf ter
Enter configuration commands, one per line.  End with CNTL/Z.
R4(config)#
R4(config)#ipv6 unicast-routing
R4(config)#interface FastEthernet0/1
R4(config-if)#ipv6 rip Lab3 enable
R4(config-if)#interface Serial0/0/1
R4(config-if)#ipv6 rip Lab3 enable
R4(config-if)#^Z
R4#

We can check the routing and then perform a test ping from R4’s LAN to R3’s loopback.

R3#sh ipv6 route
IPv6 Routing Table - 9 entries
Codes: C - Connected, L - Local, S - Static, R - RIP, B - BGP
U - Per-user Static route
I1 - ISIS L1, I2 - ISIS L2, IA - ISIS interarea, IS - ISIS summary
O - OSPF intra, OI - OSPF inter, OE1 - OSPF ext 1, OE2 - OSPF ext 2
ON1 - OSPF NSSA ext 1, ON2 - OSPF NSSA ext 2
C   3:3:3:30::/64 [0/0]
via ::, FastEthernet0/1
L   3:3:3:30:217:EFF:FE64:5B19/128 [0/0]
via ::, FastEthernet0/1
C   3:3:3:33::/64 [0/0]
via ::, Loopback0
L   3:3:3:33:217:EFF:FE64:5B18/128 [0/0]
via ::, Loopback0
C   3:3:3:34::/64 [0/0]
via ::, Serial0/0/1
L   3:3:3:34:217:EFF:FE64:5B18/128 [0/0]
via ::, Serial0/0/1
R   3:3:3:40::/64 [120/2]
via FE80::216:C7FF:FEBE:6D58, Serial0/0/1
L   FE80::/10 [0/0]
via ::, Null0
L   FF00::/8 [0/0]
via ::, Null0
R3#

R4#sh ipv6 route
IPv6 Routing Table - 8 entries
Codes: C - Connected, L - Local, S - Static, R - RIP, B - BGP
U - Per-user Static route
I1 - ISIS L1, I2 - ISIS L2, IA - ISIS interarea, IS - ISIS summary
O - OSPF intra, OI - OSPF inter, OE1 - OSPF ext 1, OE2 - OSPF ext 2
ON1 - OSPF NSSA ext 1, ON2 - OSPF NSSA ext 2
R   3:3:3:30::/64 [120/2]
via FE80::217:EFF:FE64:5B18, Serial0/0/1
R   3:3:3:33::/64 [120/2]
via FE80::217:EFF:FE64:5B18, Serial0/0/1
C   3:3:3:34::/64 [0/0]
via ::, Serial0/0/1
L   3:3:3:34:216:C7FF:FEBE:6D58/128 [0/0]
via ::, Serial0/0/1
C   3:3:3:40::/64 [0/0]
via ::, FastEthernet0/1
L   3:3:3:40:216:C7FF:FEBE:6D59/128 [0/0]
via ::, FastEthernet0/1
L   FE80::/10 [0/0]
via ::, Null0
L   FF00::/8 [0/0]
via ::, Null0

Now we are configuring R3 and R4 for IPv6 multicast routing. Also, we are going to configure R4 to join group FF04::40 using its Fast0/0 interface and make sure R3 is the PIM DR on the serial network.

To configure a router for IPv6 multicast routing, we first need to configure the following command.

R3#conf ter
Enter configuration commands, one per line.  End with CNTL/Z.
R3(config)#ipv6 multicast-routing
R3(config)#^Z

R4#conf ter
Enter configuration commands, one per line.  End with CNTL/Z.
R4(config)#ipv6 multicast-routing
R4(config)#^Z

The host-to-router signaling in IPv6 multicast is performed by a protocol called Multicast Listener Discovery (MLD). Cisco IOS supports MLDv1 (similar to IGMPv2) and MLDv2 (similar to IGMPv3). The command below configures R4's Fast0/0 to join the requested group.

R4#conf ter
Enter configuration commands, one per line.  End with CNTL/Z.
R4(config)#int f0/0
R4(config-if)#ipv6 mld join-group FF04::40

Note that, unlike IPv4 multicast, as soon as you configure IPv6 multicast routing, all interfaces automatically run PIM-SM (IPv6 multicast only supports PIM-SM and PIM-SSM; there is no PIM-DM).

You can check your configuration using show ipv6 mroute

R4#show ipv6 mroute
Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group,
C - Connected, L - Local, I - Received Source Specific Host Report,
P - Pruned, R - RP-bit set, F - Register flag, T - SPT-bit set,
J - Join SPT
Timers: Uptime/Expires
Interface state: Interface, State

(*, FF04::40), 00:00:06/never, RP ::, flags: SCLJ
Incoming interface: Null
RPF nbr: ::
Immediate Outgoing interface list:
FastEthernet0/1, Forward, 00:00:06/never

Now we need to make sure R3 is the PIM DR. The default DR priority is 1, so we will configure R3's Serial0/0/1 interface with a priority of 2 and then check that it is the DR.

R3(config)#int s0/0/1
R3(config-if)#ipv6 pim dr-priority 2
R3(config-if)#exit
R3(config)#do sh ipv6 pim interface Serial0/0/1
Interface          PIM  Nbr   Hello  DR
Count Intvl  Prior

Serial0/0/1        on   1     30     2
Address: FE80::217:EFF:FE64:5B18
DR     : this system
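
Behind this, the PIM DR election rule is simple: the highest DR priority wins, and ties are broken by the highest interface address. A minimal Python sketch (a simplified model, not IOS code), using the two link-local addresses from this lab:

import ipaddress

neighbors = [
    ("FE80::217:EFF:FE64:5B18", 2),    # R3's serial link-local, priority 2
    ("FE80::216:C7FF:FEBE:6D58", 1),   # R4's serial link-local, default priority
]

def elect_dr(nbrs):
    # Highest priority first, highest address as the tie-breaker.
    return max(nbrs, key=lambda n: (n[1], ipaddress.IPv6Address(n[0])))

print(elect_dr(neighbors))   # ('FE80::217:EFF:FE64:5B18', 2) - R3 is the DR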

R3 is going to work as a candidate BSR and candidate RP for groups in the range FF00::/8, using its Loopback0 address as the ID.

Cisco IOS doesn't support Auto-RP for IPv6 (at least not as of 12.4T). It only supports BSR, where the bootstrap router looks at candidate RP advertisements and sends the mappings to the rest of the multicast routers.

The range specified in the question is in fact the whole IPv6 multicast range, because an IPv6 multicast address is identified by its first 8 bits all being set (FF).
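
A quick sanity check with Python's ipaddress module confirms that FF00::/8 covers every IPv6 multicast address, including the group used in this lab.

import ipaddress

mcast = ipaddress.ip_network("FF00::/8")
print(ipaddress.ip_address("FF04::40") in mcast)       # True
print(ipaddress.ip_address("FF02::1") in mcast)        # True (all-nodes)
print(ipaddress.ip_address("2001:db8::1") in mcast)    # False (unicast)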

We will configure the commands below for BSR and RP candidacy. Note that we don't need to enable PIM on the Loopback interface, because that happens automatically as soon as we configure IPv6 multicast routing.

R3(config)#ipv6 pim bsr candidate bsr  3:3:3:33:217:EFF:FE64:5B18 !Lo0
R3(config)#ipv6 pim bsr candidate rp  3:3:3:33:217:EFF:FE64:5B18
R3(config)#

We can confirm the configuration on R3 itself using the commands below.

R3#show ipv6 pim bsr candidate-rp
PIMv2 C-RP information
Candidate RP: 3:3:3:33:217:EFF:FE64:5B18 SM
All Learnt Scoped Zones, Priority 192, Holdtime 150
Advertisement interval 60 seconds
Next advertisement in 00:00:45

R3#show ipv6 pim bsr election
PIMv2 BSR information

BSR Election Information
Scope Range List: ff00::/8
BSR Address: ::
Uptime: 00:00:00, BSR Priority: 0, Hash mask length: 0
RPF: ::,
BS Timer: 00:00:21
This system is candidate BSR
Candidate BSR address: 3:3:3:33:217:EFF:FE64:5B18, priority: 0, hash mask length: 126

R3#sh ipv6 pim group-map info-source bsr

FF00::/8*
SM, RP: 3:3:3:33:217:EFF:FE64:5B18
RPF: Tu2,3:3:3:33:217:EFF:FE64:5B18 (us)
Info source: BSR From: 3:3:3:33:217:EFF:FE64:5B18(00:02:23), Priority: 192
Uptime: 00:00:06, Groups: 1


Filter For Multicast Groups in the MA

[Topology diagram]

We are going to configure sparse-dense mode for the network between R3, R5, and R6, the Ethernet network between R5 and R2, and the 192.168.x.x loopbacks on R3, R5, and R6. We just need to enable multicast routing on our routers and configure the interfaces for PIM sparse-dense mode. We will also configure the loopback interfaces.

R5#conf ter
Enter configuration commands, one per line. End with CNTL/Z.
R5(config)#ip multicast-routing
R5(config)#int ser0/0/0.136
R5(config-subif)#ip pim sparse-dense
R5(config-subif)#int fasteth0/0
R5(config-if)#ip pim sparse-dense
R5(config-if)#int loop1
R5(config-if)#ip pim sparse-dense
R5(config-if)#^Z
R5#

R2#conf ter
Enter configuration commands, one per line. End with CNTL/Z.
R2(config)#ip multicast-routing
R2(config)#int fasteth0/0
R2(config-if)#ip pim sparse-dense

R6#
R6#conf ter
Enter configuration commands, one per line. End with CNTL/Z.
R6(config)#
R6(config)#ip multicast-routing
R6(config)#int ser0/0/0
R6(config-if)#ip pim sparse-dense
R6(config-if)#int loop1
R6(config-if)#ip pim sparse-dense
R6(config-if)#^Z
R6#

R5 will act as the mapping agent, and R3 and R6 will be candidate RPs for the following groups: 225.0.0.1, 225.0.0.2, 225.0.0.3, 226.0.0.1, 226.0.0.2, 226.0.0.3.

In order to configure R5 as a mapping agent, we will configure the command ip pim send-rp-discovery.

R5#conf ter
Enter configuration commands, one per line. End with CNTL/Z.
R5(config)#ip pim send-rp-discovery loop1 scope 4
R5(config)#

In order to configure R3 and R6 as candidate RPs, we will configure an access list for the groups, and use the command ip pim send-rp-announce.

R6#conf ter
Enter configuration commands, one per line. End with CNTL/Z.
R6(config)#access-list 24 permit 225.0.0.1
R6(config)#access-list 24 permit 225.0.0.2
R6(config)#access-list 24 permit 225.0.0.3
R6(config)#access-list 24 permit 226.0.0.1
R6(config)#access-list 24 permit 226.0.0.2
R6(config)#access-list 24 permit 226.0.0.3
R6(config)#ip pim send-rp-announce loop1 scope 4 group-list 24
R6(config)#^Z

R3#conf ter
Enter configuration commands, one per line. End with CNTL/Z.
R3(config)#
R3(config)#access-list 24 permit 225.0.0.1
R3(config)#access-list 24 permit 225.0.0.2
R3(config)#access-list 24 permit 225.0.0.3
R3(config)#access-list 24 permit 226.0.0.1
R3(config)#access-list 24 permit 226.0.0.2
R3(config)#access-list 24 permit 226.0.0.3
R3(config)#ip pim send-rp-announce loop1 scope 4 group-list 24
R3(config)#^Z

Verify that R5 shows as a mapping agent with the command show ip pim rp mapping.

R5#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP-mapping agent (Loopback1)

Group(s) 225.0.0.1/32
RP 192.168.3.3 (?), v2v1
Info source: 192.168.3.3 (?), elected via Auto-RP
Uptime: 00:00:13, expires: 00:02:47
Group(s) 225.0.0.2/32
RP 192.168.3.3 (?), v2v1
Info source: 192.168.3.3 (?), elected via Auto-RP
Uptime: 00:00:13, expires: 00:02:42
Group(s) 225.0.0.3/32
RP 192.168.3.3 (?), v2v1
Info source: 192.168.3.3 (?), elected via Auto-RP
Uptime: 00:00:13, expires: 00:02:47
Group(s) 226.0.0.1/32
RP 192.168.3.3 (?), v2v1
Info source: 192.168.3.3 (?), elected via Auto-RP
Uptime: 00:00:13, expires: 00:02:43
Group(s) 226.0.0.2/32
RP 192.168.3.3 (?), v2v1
Info source: 192.168.3.3 (?), elected via Auto-RP
Uptime: 00:00:13, expires: 00:02:47
Group(s) 226.0.0.3/32
RP 192.168.3.3 (?), v2v1
Info source: 192.168.3.3 (?), elected via Auto-RP
Uptime: 00:00:13, expires: 00:02:43

We are going to configure a filter so that R5 will accept R3 as the RP for the 225.x.x.x groups, and will accept R6 as the RP for the 226.x.x.x groups. Verify that R2 sees R3 as the RP for the 225 groups, and R6 as the RP for the 226 groups.

In order to filter the groups, we will configure the command ip pim rp-announce-filter rp-list on R5. First, we will configure four access lists. Two access lists will match the two groups of multicast addresses, and two will match our RPs.

R5#conf
*Jul 31 05:35:12.063: %SYS-5-CONFIG_I: Configured from console by console
R5#conf ter
Enter configuration commands, one per line. End with CNTL/Z.
R5(config)#
R5(config)#ip access-list standard permitR3
R5(config-std-nacl)#permit 192.168.3.3
R5(config-std-nacl)#
R5(config-std-nacl)#ip access-list standard permitR6
R5(config-std-nacl)#permit 192.168.6.6
R5(config-std-nacl)#
R5(config-std-nacl)#ip access-list standard R3groups
R5(config-std-nacl)#permit 225.0.0.0 0.0.0.3
R5(config-std-nacl)#
R5(config-std-nacl)#ip access-list standard R6groups
R5(config-std-nacl)#permit 226.0.0.0 0.0.0.3
R5(config-std-nacl)#

Before we apply our filtering, let's take a look at what R2 currently has for mappings. At this point, R2 just sees mappings with R6 as the RP, because R6 (192.168.6.6) has the higher IP address and wins the election.

R2#show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 225.0.0.1/32
RP 192.168.6.6 (?), v2v1
Info source: 192.168.5.5 (?), elected via Auto-RP
Uptime: 00:02:09, expires: 00:02:47
Group(s) 225.0.0.2/32
RP 192.168.6.6 (?), v2v1
Info source: 192.168.5.5 (?), elected via Auto-RP
Uptime: 00:02:09, expires: 00:02:49
Group(s) 225.0.0.3/32
RP 192.168.6.6 (?), v2v1
Info source: 192.168.5.5 (?), elected via Auto-RP
Uptime: 00:02:09, expires: 00:02:49
Group(s) 226.0.0.1/32
RP 192.168.6.6 (?), v2v1
Info source: 192.168.5.5 (?), elected via Auto-RP
Uptime: 00:02:09, expires: 00:02:45
Group(s) 226.0.0.2/32
RP 192.168.6.6 (?), v2v1
Info source: 192.168.5.5 (?), elected via Auto-RP
Uptime: 00:02:09, expires: 00:02:49
Group(s) 226.0.0.3/32
RP 192.168.6.6 (?), v2v1
Info source: 192.168.5.5 (?), elected via Auto-RP
Uptime: 00:02:09, expires: 00:02:49
R2#

We will apply our filtering, using the access-lists created to match our RPs and groups. There are two parts to the ip pim rp-announce-filter command. The first part identifies the RP, and the second identifies the groups.

R5#conf ter
Enter configuration commands, one per line. End with CNTL/Z.
R5(config)#ip pim rp-announce-filter rp-list permitR3 group-list R3groups
R5(config)#ip pim rp-announce-filter rp-list permitR6 group-list R6groups
R5(config)#^Z
R5#
R5#wr

The first line filters the groups that we are permitting for R3; the second filters the groups that we are permitting for R6.
If we had left off the second filter, it wouldn't mean that R6 would be denied as a candidate RP; it would mean that R6's multicast groups would not be filtered at all.
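
Conceptually, the mapping agent's accept/filter decision for each announced group can be modeled as in the Python sketch below. This is a simplified model of the permit-ACL case used here (real IOS behavior has additional corner cases); the debug output that follows shows the same Update/Filtered pattern.

import ipaddress

# (rp-list, group-list) pairs mirroring the two rp-announce-filter commands above.
filters = [
    ({"192.168.3.3"}, ipaddress.ip_network("225.0.0.0/30")),   # permitR3 / R3groups
    ({"192.168.6.6"}, ipaddress.ip_network("226.0.0.0/30")),   # permitR6 / R6groups
]

def announce_accepted(rp, group):
    g = ipaddress.ip_address(group)
    return any(rp in rps and g in groups for rps, groups in filters)

print(announce_accepted("192.168.3.3", "225.0.0.1"))   # True  -> Update
print(announce_accepted("192.168.3.3", "226.0.0.1"))   # False -> Filtered
print(announce_accepted("192.168.6.6", "226.0.0.2"))   # True  -> Update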

Let’s take a look at the output of debug ip pim auto-rp. Notice what happens with the updates from R3 and R6.

*Mar 2 19:05:59.732: Auto-RP(0): Received RP-announce, from 192.168.3.3, RP_cnt 1, ht 181
*Mar 2 19:05:59.732: Auto-RP(0): Update (225.0.0.1/32, RP:192.168.3.3), PIMv2 v1
*Mar 2 19:05:59.732: Auto-RP(0): Filtered 226.0.0.2/32 for RP 192.168.3.3
*Mar 2 19:05:59.732: Auto-RP(0): Filtered 226.0.0.3/32 for RP 192.168.3.3
*Mar 2 19:05:59.736: Auto-RP(0): Update (225.0.0.3/32, RP:192.168.3.3), PIMv2 v1
*Mar 2 19:05:59.736: Auto-RP(0): Update (225.0.0.2/32, RP:192.168.3.3), PIMv2 v1
*Mar 2 19:05:59.736: Auto-RP(0): Filtered 226.0.0.1/32 for RP 192.168.3.3

*Mar 2 19:08:33.993: Auto-RP(0): Received RP-announce, from 192.168.6.6, RP_cnt 1, ht 181
*Mar 2 19:08:33.997: Auto-RP(0): Filtered 225.0.0.1/32 for RP 192.168.6.6
*Mar 2 19:08:33.997: Auto-RP(0): Update (226.0.0.2/32, RP:192.168.6.6), PIMv2 v1
*Mar 2 19:08:33.997: Auto-RP(0): Update (226.0.0.3/32, RP:192.168.6.6), PIMv2 v1
*Mar 2 19:08:33.997: Auto-RP(0): Filtered 225.0.0.3/32 for RP 192.168.6.6
*Mar 2 19:08:33.997: Auto-RP(0): Filtered 225.0.0.2/32 for RP 192.168.6.6
*Mar 2 19:08:34.001: Auto-RP(0): Update (226.0.0.1/32, RP:192.168.6.6), PIMv2 v1

Now that our filtering is complete, verify the RP mappings on R5 and R2 with the command show ip pim rp mapping.

R5#show ip pim rp mapping
PIM Group-to-RP Mappings
This system is an RP-mapping agent (Loopback1)

Group(s) 225.0.0.1/32
RP 192.168.3.3 (?), v2v1
Info source: 192.168.3.3 (?), elected via Auto-RP
Uptime: 00:02:10, expires: 00:02:48
Group(s) 225.0.0.2/32
RP 192.168.3.3 (?), v2v1
Info source: 192.168.3.3 (?), elected via Auto-RP
Uptime: 00:02:10, expires: 00:02:45
Group(s) 225.0.0.3/32
RP 192.168.3.3 (?), v2v1
Info source: 192.168.3.3 (?), elected via Auto-RP
Uptime: 00:02:10, expires: 00:02:45
Group(s) 226.0.0.1/32
RP 192.168.6.6 (?), v2v1
Info source: 192.168.6.6 (?), elected via Auto-RP
Uptime: 00:50:36, expires: 00:02:22
Group(s) 226.0.0.2/32
RP 192.168.6.6 (?), v2v1
Info source: 192.168.6.6 (?), elected via Auto-RP
Uptime: 00:50:36, expires: 00:02:19
Group(s) 226.0.0.3/32
RP 192.168.6.6 (?), v2v1
Info source: 192.168.6.6 (?), elected via Auto-RP
Uptime: 00:50:47, expires: 00:02:09

R2#show ip pim rp mapping
PIM Group-to-RP Mappings

Group(s) 225.0.0.1/32
RP 192.168.3.3 (?), v2v1
Info source: 192.168.5.5 (?), via Auto-RP
Uptime: 00:06:37, expires: 00:02:50
Group(s) 225.0.0.2/32
RP 192.168.3.3 (?), v2v1
Info source: 192.168.5.5 (?), via Auto-RP
Uptime: 00:06:37, expires: 00:02:51
Group(s) 225.0.0.3/32
RP 192.168.3.3 (?), v2v1
Info source: 192.168.5.5 (?), via Auto-RP
Uptime: 00:06:37, expires: 00:02:52
Group(s) 226.0.0.1/32
RP 192.168.6.6 (?), v2v1
Info source: 192.168.5.5 (?), via Auto-RP
Uptime: 00:55:03, expires: 00:02:52
Group(s) 226.0.0.2/32
RP 192.168.6.6 (?), v2v1
Info source: 192.168.5.5 (?), via Auto-RP
Uptime: 00:55:03, expires: 00:02:54
Group(s) 226.0.0.3/32
RP 192.168.6.6 (?), v2v1
Info source: 192.168.5.5 (?), via Auto-RP
Uptime: 00:55:03, expires: 00:02:54
Lab5R2#

If you want to clear the mappings and verify that they are relearned properly, use the command clear ip pim rp-mapping.

Note: Be very careful when configuring your access lists on the RPs. The access list in the send-rp-announce command on the RP is sent to the mapping agent line by line. If we had configured a different access list on R3 and R6, our groups might have been completely blocked by our filter on R5.

Example: Here is the sample output, with a different access list on R3.

R3(config)#access-list 24 permit 224.0.0.0 15.255.255.255

OUTPUT ON R5 – Debug ip pim auto-rp:

*Jul 31 05:37:05.679: Auto-RP(0): Received RP-announce, from 192.168.3.3, RP_cnt 1, ht 181
*Jul 31 05:37:05.679: Auto-RP(0): Update (225.0.0.1/32, RP:192.168.3.3), PIMv2 v1
*Jul 31 05:37:05.679: Auto-RP(0): Filtered 226.0.0.2/32 for RP 192.168.3.3
*Jul 31 05:37:05.679: Auto-RP(0): Filtered 226.0.0.3/32 for RP 192.168.3.3
*Jul 31 05:37:05.679: Auto-RP(0): Update (225.0.0.3/32, RP:192.168.3.3), PIMv2 v1
*Jul 31 05:37:05.679: Auto-RP(0): Update (225.0.0.2/32, RP:192.168.3.3), PIMv2 v1
*Jul 31 05:37:05.679: Auto-RP(0): Filtered 226.0.0.1/32 for RP 192.168.3.3
*Jul 31 05:37:05.679: Auto-RP(0): Received RP-announce, from 192.168.3.3, RP_cnt 1, ht 181
R5#
*Jul 31 05:37:05.679: Auto-RP(0): Update (225.0.0.1/32, RP:192.168.3.3), PIMv2 v1
*Jul 31 05:37:05.679: Auto-RP(0): Filtered 226.0.0.2/32 for RP 192.168.3.3
*Jul 31 05:37:05.679: Auto-RP(0): Filtered 226.0.0.3/32 for RP 192.168.3.3
*Jul 31 05:37:05.683: Auto-RP(0): Update (225.0.0.3/32, RP:192.168.3.3), PIMv2 v1
*Jul 31 05:37:05.683: Auto-RP(0): Update (225.0.0.2/32, RP:192.168.3.3), PIMv2 v1
*Jul 31 05:37:05.683: Auto-RP(0): Filtered 226.0.0.1/32 for RP 192.168.3.3
R5#
*Jul 31 05:37:30.055: Auto-RP(0): Build RP-Discovery packet
*Jul 31 05:37:30.055: Auto-RP: Build mapping (225.0.0.1/32, RP:192.168.6.6), PIMv2 v1,
*Jul 31 05:37:30.055: Auto-RP: Build mapping (225.0.0.2/32, RP:192.168.6.6), PIMv2 v1.
*Jul 31 05:37:30.055: Auto-RP: Build mapping (225.0.0.3/32, RP:192.168.6.6), PIMv2 v1.
*Jul 31 05:37:30.055: Auto-RP: Build mapping (226.0.0.1/32, RP:192.168.6.6), PIMv2 v1.
*Jul 31 05:37:30.055: Auto-RP: Build mapping (226.0.0.2/32, RP:192.168.6.6), PIMv2 v1.
*Jul 31 05:37:30.055: Auto-RP: Build mapping (226.0.0.3/32, RP:192.168.6.6), PIMv2 v1.
*Jul 31 05:37:30.055: Auto-RP(0): Send RP-discovery packet on Serial0/0/0.136 (1 RP entries)
R5#
*Jul 31 05:37:30.055: Auto-RP(0): Send RP-discovery packet on FastEthernet0/0 (1 RP entries)
*Jul 31 05:37:30.055: Auto-RP: Send RP-discovery packet on Loopback1 (1 RP entries)
R5#
*Jul 31 05:37:32.311: Auto-RP(0): Received RP-announce, from 192.168.6.6, RP_cnt 1, ht 181
*Jul 31 05:37:32.311: Auto-RP(0): Filtered 225.0.0.1/32 for RP 192.168.6.6
*Jul 31 05:37:32.311: Auto-RP(0): Update (226.0.0.2/32, RP:192.168.6.6), PIMv2 v1
*Jul 31 05:37:32.311: Auto-RP(0): Update (226.0.0.3/32, RP:192.168.6.6), PIMv2 v1
*Jul 31 05:37:32.311: Auto-RP(0): Filtered 225.0.0.3/32 for RP 192.168.6.6
*Jul 31 05:37:32.311: Auto-RP(0): Filtered 225.0.0.2/32 for RP 192.168.6.6
*Jul 31 05:37:32.311: Auto-RP(0): Update (226.0.0.1/32, RP:192.168.6.6), PIMv2 v1
*Jul 31 05:37:32.311: Auto-RP(0): Received RP-announce, from 192.168.6.6, RP_cnt 1, ht 181
R5#
*Jul 31 05:37:32.311: Auto-RP(0): Filtered 225.0.0.1/32 for RP 192.168.6.6
*Jul 31 05:37:32.311: Auto-RP(0): Update (226.0.0.2/32, RP:192.168.6.6), PIMv2 v1
*Jul 31 05:37:32.311: Auto-RP(0): Update (226.0.0.3/32, RP:192.168.6.6), PIMv2 v1
*Jul 31 05:37:32.311: Auto-RP(0): Filtered 225.0.0.3/32 for RP 192.168.6.6
*Jul 31 05:37:32.311: Auto-RP(0): Filtered 225.0.0.2/32 for RP 192.168.6.6
*Jul 31 05:37:32.311: Auto-RP(0): Update (226.0.0.1/32, RP:192.168.6.6), PIMv2 v1
R5#

Notice which lines show up as 'Filtered' for each access-list entry.

• Configure R3 to set a limit of 4 on the number of multicast entries for the two sources 143.3.134.200 and 143.3.134.201.

The ability to set such a limit was introduced in 12.3(14)T, using the command below.

We will first need to create an access-list to permit the sources we want to apply the limit on.

R3#conf ter
Enter configuration commands, one per line. End with CNTL/Z.
R3(config)#
R3(config)# access-list 5 permit 144.3.134.200
R3(config)# access-list 5 permit 144.3.134.201

Then we will apply the ip multicast limit command to the Fast0/1 interface. Because these sources are directly connected, we will use the connected keyword.

R3(config)#int f0/1
R3(config-if)#ip multicast limit connected 5 4
R3(config-if)#ip pim sparse-dense

Also be aware of RPF failures:

R3#deb ip mpac
*Jul 31 05:42:42.747: IP(0): s=192.168.5.5 (Serial0/0/0) d=224.0.1.40 id=37012, ttl=3, prot=17, len=88(84), not RPF interface

R3#conf ter
Enter configuration commands, one per line. End with CNTL/Z.
R3(config)#ip mroute 0.0.0.0 0.0.0.0 143.3.136.5

A Networker Blog