Showing results for tags 'Multicast'.



Found 27 results

  1. Hello; as I read in these forums, in the new config they ask you to configure PIM only on R15, R17 and R19-21, NOT on SW3 and SW4, where we can just enable ip multicast-routing. In this case, how can we pass the PIM information from R15 to the other routers? Please, someone let us know about this. Thank you in advance.
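For context, a minimal sketch of what such a split configuration might look like; the device and interface names here are assumptions, not taken from the lab:

```
! On the PIM routers (e.g. R15, R17, R19-21)
ip multicast-routing
interface GigabitEthernet0/0
 ip pim sparse-mode
!
! On SW3/SW4: only multicast routing is enabled, no PIM on the links.
! Without PIM neighbors these switches cannot join a distribution tree,
! so the PIM-enabled routers must sit in the forwarding path end to end.
ip multicast-routing
```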
  2. Hi guys, I am sharing my multicast traces from before and after adding the offset-list on the R17 interface connected to R15. Which result is required in the exam, even though the question does not ask for any result or trace? I am asking to be on the safe side.

Before adding:

SW5#mtrace 123.55.55.55 10.1.18.1 232.1.1.1 1
Type escape sequence to abort.
Mtrace from 123.55.55.55 to 10.1.18.1 via group 232.1.1.1
From source (?) to destination (?)
Querying full reverse path...
 0  10.1.18.1
-1  123.20.1.26 ==> 123.20.1.26 PIM [123.55.55.0/24]
-2  123.20.1.25 ==> 123.20.1.18 PIM [123.55.55.0/24]
-3  123.20.1.17 ==> 123.20.1.2 PIM [123.55.55.0/24]
-4  123.20.1.3 ==> 123.55.55.55 PIM_MT [123.55.55.55/32]
-5  123.55.55.55

After adding:

SW5#mtrace 123.55.55.55 10.1.18.1 232.1.1.1 1
Type escape sequence to abort.
Mtrace from 123.55.55.55 to 10.1.18.1 via group 232.1.1.1
From source (?) to destination (?)
Querying full reverse path...
 0  10.1.18.1
-1  123.20.1.26 ==> 123.20.1.26 PIM [using shared tree]
-2  123.20.1.25 ==> 123.20.1.18 PIM [using shared tree]
-3  123.20.1.17 ==> 123.20.1.2 PIM [using shared tree]
-4  123.20.1.1 ==> 0.0.0.0 PIM Reached RP/Core [using shared tree]

Ping result before and after adding the metric:

SW5#ping 232.1.1.1 so vlan 5
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 232.1.1.1, timeout is 2 seconds:
Packet sent with a source address of 123.55.55.55
Reply to request 0 from 10.1.19.1, 17 ms
Reply to request 0 from 10.1.18.1, 33 ms
Reply to request 0 from 10.1.18.1, 33 ms
Reply to request 0 from 10.1.19.1, 28 ms

Experts' comments are appreciated.
  3. Hi everyone - I just thought I'd take a few moments to share my thoughts on the INE 3-Day Multicast Boot Camp. Whilst studying for my CCIE Written exam, I discovered that for some reason I just could not grasp the concept of Multicast. I read the Cisco Press CCIE Cert Guide, I watched the IPExpert and INE CCIE Written Boot Camps, but for some reason I just could not visualize, for example, how the traffic could get to the receivers when the destination address was the multicast address. I spent too much time looking at this, and even when I took the exam I did not feel confident; the only reason I scored reasonably well in this subject was more through luck than judgement. Now, I know there are many of you here who will say "Multicast is easy, what's the problem?", and whilst that may be true, please bear in mind that some people have problems with ACLs and others have issues with MPLS. We all have an area of weakness, and mine is Multicast. I admit it, I just plain suck at it. That said, however, admitting it doesn't solve anything; I needed to find a way to understand it if I am to be successful in the lab. If, like me, you are struggling with Multicast too, then you would not go far wrong if you were to watch the INE Multicast Boot Camp, aka Multicast Deep Dive. Straight off, as soon as I started watching I couldn't help but think "This is exactly the same explanation as the other sources I have used... this is going to be a waste of time". Something, however, told me to stick with it, and I am glad I did. The presenter, Brian McGahan, is well known and has done a lot of INE's courses. It seems they choose him a lot because he's just plain good, and this course is no exception; he has made the topic interesting and has presented the course with enthusiasm. The course itself starts off where all other videos and books start, but then really goes deep into what exactly is happening.
For example, Brian will make full use of the whiteboard, then when configuring, start PIM debugging, then set, say, PIM dense mode, and you can see the packets as they start to flow across the network, helping you visualize exactly what is going on. I do like the way the videos are separated: one video will go through the theory on a particular topic, then you can have a break before the next video details how to configure that topic and verify the configuration. Topics covered include Sparse mode, Dense mode, BSR, Auto-RP, NBMA (very interesting and thought-provoking), Bidirectional PIM, Source-Specific Multicast, MSDP, Anycast RP, IPv6 and GRE; in fact, every area of Multicast is covered as far as I could tell. One thing you will notice is that this course was recorded from a live course run with students present. The first thing I liked was that students were asking questions, and they were the sort of questions you yourself would ask, so you get a bit more clarification. The second thing I liked, or dare I say absolutely loved, was that things went wrong during the configuration sections. This meant you got to see the troubleshooting process live, not something that was prearranged; a recorded course could be paused whilst the instructor fixes the issue. Those who have followed the Advanced Technologies videos will be pleased to know that the lab Brian uses in this course has exactly the same layout. If you have a topology already created in GNS3, or have a rack, you won't need to change anything; just load the initial configuration and you will be ready to 'play along' with Brian. The total running time of the course is more than 20 hours. This amount of video may seem excessive, but I promise you, it is worth every second if you really want to master this particular subject. For me, the end result of watching this boot camp was that I feel so much more confident in Multicast now.
I do understand how it works, and I feel that, come lab time, I will be more prepared. I hope this review proves useful to someone and in some way helps guide others on their journey towards their number. Toad
  4. Gents... here is the question, and I'm sure I brought this up before, but I am a bit confused, so here we go:

3.1 Multicast PIM SM between SW2, SW3, SW4
The QA and Support VLANs should handle multicast traffic.
Configure Auto-RP, with SW3 Loopback0 serving as RP only for the multicast group 239.10.5.0/24 and SW4 serving as the mapping agent.
Enable SW2 Loopback0 to join group 239.10.5.1.
To verify, you should be able to successfully generate multicast traffic for the group 239.5.10.1 using R2 as the source.

OK, I know how to configure this... BUT the highlighted bullet point is causing confusion and driving me crazy. Our guide, and everywhere I searched on here, has this as:

SW2
int vlan 243
 ip igmp join-group 239.10.5.1

Why??? It should be under SW2 int loop 0. Or am I missing something? Because if I'm right, then how is everyone passing this section of the lab??? And the postings that have the consolidation guide need to be updated. Thanks, DL
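For reference, a minimal Auto-RP sketch matching that task wording; the scope value and ACL number are assumptions, and the join-group placement shown is the loopback reading the post argues for:

```
! SW3 - candidate RP, announcing itself only for 239.10.5.0/24
access-list 1 permit 239.10.5.0 0.0.0.255
ip pim send-rp-announce Loopback0 scope 16 group-list 1
!
! SW4 - mapping agent
ip pim send-rp-discovery Loopback0 scope 16
!
! SW2 - the debated receiver join; the task wording implies Loopback0,
! while the guides the post mentions place it on the VLAN SVI instead
interface Loopback0
 ip igmp join-group 239.10.5.1
```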
  5. Hi guys, hope this will help for a quick look at Multicast Security. [Hidden Content] Regards, Devnath
  6. Would anyone please clarify whether end-to-end unicast IP reachability is required for multicast traffic? In the MSDP multicast ticket, from server to client, the unicast IP (from R13 to R8) does not ping, but the multicast IP does ping. Can anyone please clarify this confusion?
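A hedged note on this: PIM's RPF check uses the unicast routing table, so the receiver side needs a route back toward the source for multicast to flow; that is not quite the same thing as the source answering a unicast ping, which an ACL or a missing return route can break independently. A generic check (the address is a placeholder, not from the ticket):

```
! On the last-hop router: does the RPF lookup toward the source succeed?
show ip rpf <source-address>
```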
  7. I am getting the below log on R5 when I try to ping 232.2.2.2 from R23:

*Apr 14 20:17:36.611: %PIM-1-INVALID_RP_REG: Received Register from router 100.23.23.23 for group 232.2.2.2, 198.1.1.1 not willing to be RP

Would anyone please explain this? I am unable to find any proper solution to the above error. I can ping the unicast IP (10.1.2.1) from R28 (with source loo0) but am unable to ping the multicast IP. Please help me out.
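A hedged note on that log message: %PIM-1-INVALID_RP_REG is typically raised when the router receiving a PIM Register does not consider itself the RP for that group, i.e. the first-hop router and the rest of the domain disagree on the group-to-RP mapping. A generic way to check, assuming nothing about this particular lab (the ACL number is illustrative):

```
! Compare the group-to-RP mapping on the first-hop router and on 198.1.1.1
show ip pim rp mapping
!
! If they disagree, make them consistent - e.g. a matching static RP
! for the group range on both sides:
access-list 10 permit 232.2.2.2
ip pim rp-address 198.1.1.1 10
```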
  8. srdccie

    TS5-multicast

    Hi guys, I have a doubt regarding the Multicast ticket, mainly regarding the boundaries. As per my understanding, when I have two different domains (for example, BSR in one domain and Auto-RP in the second) with MSDP up between them, each router should see only the one RP belonging to its own domain. Right now I am getting the RP of the BSR domain propagated to all the routers of the other domain, and vice versa. Even denying Auto-RP with the ACL is not working. I had to put ip pim bsr-border to stop the BSR advertisement into the other domain, and I had to add ip multicast boundary 10 with the filter-autorp keyword. Am I correct in adding these two commands? Even without adding them my ping is successful, but with 2 RPs seen on every router. Please update ASAP as my exam is very close.
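For reference, a minimal sketch of the two commands the post describes, applied on a border interface between the BSR and Auto-RP domains; the interface name, ACL number and group ranges are assumptions:

```
interface Ethernet0/0
 ! Stop BSR messages from crossing into the neighboring domain
 ip pim bsr-border
 ! Enforce ACL 10 as a multicast boundary and also drop Auto-RP
 ! announce/discovery traffic (224.0.1.39/40) at this edge
 ip multicast boundary 10 filter-autorp
!
access-list 10 deny 239.0.0.0 0.255.255.255
access-list 10 permit 224.0.0.0 15.255.255.255
```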
  9. kamu2013

    TS5v2

    There is a multicast boundary configured on R13 on the e0/0 interface, outbound direction. If we leave it as it is, R13 will start to receive the RP of R23, which is not correct. Do we need to configure a multicast boundary on the interface s1/0, inbound direction, so that we can block the Auto-RP groups (224.0.1.39/40) and the router will take the RP of its own domain? Please help.
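A sketch of the inbound boundary the post is asking about, assuming the goal is to drop the Auto-RP groups arriving on s1/0; the ACL number is an assumption, and the in keyword requires an IOS release that supports directional multicast boundaries:

```
interface Serial1/0
 ! Drop incoming state for the Auto-RP groups so the remote
 ! domain's RP announcements are not learned here
 ip multicast boundary 11 in
!
access-list 11 deny 224.0.1.39
access-list 11 deny 224.0.1.40
access-list 11 permit 224.0.0.0 15.255.255.255
```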
  10. JCRS

    TS5 v301 Multicast

    Hi, I am facing an issue in the multicast ticket of TS5 v301 on Uldisd's Web IOU v21. First of all, a very big thanks to Uldisd for this amazing contribution. The issue is that SW2 is not creating any RP mapping via Auto-RP. Ping starts to work after creating a static RP mapping. Here are the SW2 outputs:

SW2#sh run | i multicast|pim
ipv6 multicast rpf use-bgp
ip multicast-routing
ip pim sparse-dense-mode
ip pim sparse-dense-mode
ip pim autorp listener

SW2#sh run int vlan 12
Building configuration...
Current configuration : 92 bytes
!
interface Vlan12
 ip address 192.168.10.65 255.255.255.252
 ip pim sparse-dense-mode
end

SW2#sh run int vlan 48
Building configuration...
Current configuration : 92 bytes
!
interface Vlan48
 ip address 192.168.10.50 255.255.255.252
 ip pim sparse-dense-mode
end

SW2#sh ip pim rp map
PIM Group-to-RP Mappings

SW2#sh ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.0.1.39), 00:01:06/stopped, RP 0.0.0.0, flags: DC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Vlan48, Forward/Sparse-Dense, 00:01:06/stopped

(198.23.23.23, 224.0.1.39), 00:01:06/00:01:53, flags: PTX
  Incoming interface: Vlan48, RPF nbr 192.168.10.49
  Outgoing interface list: Null

(*, 224.0.1.40), 00:01:06/stopped, RP 0.0.0.0, flags: D
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Vlan48, Forward/Sparse-Dense, 00:01:06/stopped

(198.23.23.23, 224.0.1.40), 00:01:06/00:01:53, flags: PT
  Incoming interface: Vlan48, RPF nbr 192.168.10.49
  Outgoing interface list: Null

If anybody has any suggestions for me, I will really appreciate that.
  11. Friends! I just wonder about this multicast boundary thing; as per the RFC, 239.0.0.0/8 is reserved for private (administratively scoped) use. So I was thinking: if nothing is specified in the questions of MSDP/TS5, can I just go ahead, remove whatever is permitted or denied, and configure a deny for 239.0.0.0/8 and a permit for 224.0.0.0/4, just like below?

R23#sh ip access-lists
Standard IP access list 10
    10 permit 198.2.2.2
    20 permit 224.0.1.39 (17 matches)
    30 permit 224.0.1.40 (2 matches)
R23#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R23(config)#no access-list 10
R23(config)#access-list 10 deny 239.0.0.0 0.255.255.255
R23(config)#access-list 10 permit 224.0.0.0 15.255.255.255
R23(config)#^Z
R23#sh ip access-lists
*Feb 2 17:10:09.503: %SYS-5-CONFIG_I: Configured from console by console
Standard IP access list 10
    10 deny 239.0.0.0, wildcard bits 0.255.255.255
    20 permit 224.0.0.0, wildcard bits 15.255.255.255 (11 matches)
R23#

Is this the right approach?
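For completeness, the ACL only takes effect once it is applied as a boundary on an interface; a minimal sketch, where the interface name is an assumption:

```
interface Ethernet0/0
 ! Keep the administratively scoped range 239.0.0.0/8 inside this
 ! domain by enforcing ACL 10 at the border
 ip multicast boundary 10
```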
  12. Hi folks, the requirement is:

!! Ensure that both RPs process join requests for group 232.1.1.1 only.
!! Ensure that only the authorized sources (located in VLAN_68) are allowed to register with the RPs.
!! Do not use any route-map or named ACL to achieve this task.

One of the solutions I found is below:

R2/R3
int fa0/0
 ip multicast boundary 100 in
ip pim bsr-candidate loop 1 0
!# Here loop 1 is 200.100.100.100, the RP. Some other solutions use loop 100.
ip pim rp-candidate loop 1 group-list 1
ip pim accept-register list 100
access-list 1 permit 232.1.1.1
access-list 100 permit ip 10.1YY.68.0 0.0.0.255 host 232.1.1.1

R5
access-list 1 permit 200.100.100.100
router ospf 100
 distance 89 1YY.3.3.3 0.0.0.0 1

R4
access-list 1 permit 10.1yy.68.0 0.0.0.255
router eigrp YY
 offset-list 1 in 2147483647 s0/0/0
!# s0/0/0 is the interface of R4 facing R1

My questions are:
1. Why do we need to manipulate the routing decisions of R5 and R4? The configuration at R5 forces it to choose R3's 200.100.100.100 as RP, and the configuration at R4 impacts its RPF decision, from my understanding. I am not quite sure, but the configuration at R2/R3 alone should be enough to meet the requirement.
2. The previous section 2.4 (EIGRP and OSPF redistribution) explicitly says not to change the distance of OSPF; that is why we change the OSPF cost of the sub-interface (the .100 thing) to break the routing loop. Do you reckon that restriction is still valid in this section? If yes, I doubt using the distance command here is appropriate. It is a similar scenario to "you should put all odd VLAN interfaces in instance 1", etc.: initially we don't have the VLANs, say 101-104, but as we keep going we create them, and most folks reckon we should put them into the MST instances respectively. I am guessing this might be why so many candidates lose points in the K8 multicast section: they touch the distance here and break the previous restriction.
  13. Hi mates, please advise me on the best materials for learning Multicast.
  14. Hi, I was wondering if you guys could share your experience on the lab regarding multicast issues you have found and any tips you think would be relevant. I made my first attempt and did not pass (Lab 3.2). One of the problems was related to MVPN. I was sure that everything I needed was configured correctly, but router 12 was not able to ping router 14 and vice versa. R13 and R11 were working just fine. Any help would be appreciated. Thank you.
  15. Guys, I am having some connectivity trouble after configuring MSDP between the two ASes.

Issue 1: R7 is unable to reach the multicast addresses configured on the loopbacks of R3 and R5.
Issue 2: After configuring MSDP, the summary shows the peering as up, but there is still no reachability between the ASes.

Issue 1 - here are the R7 configs:

interface Loopback0
 ip address 9.9.0.7 255.255.255.255
 ip pim sparse-mode
 ipv6 address 2002:9:9::7/128
 ipv6 enable
 ipv6 ospf 9 area 0
end

interface Ethernet0/0
 ip address 9.9.47.7 255.255.255.0
 ip pim sparse-mode
 ip ospf mtu-ignore
 ipv6 address 2002:9:47::7/64
 ipv6 enable
 ipv6 ospf 9 area 0
 mpls ip
 mpls traffic-eng tunnels
 ip rsvp bandwidth 20000
!

R7#sh run int eth 2/0
Building configuration...
Current configuration : 207 bytes
!
interface Ethernet2/0
 ip address 9.9.27.7 255.255.255.0
 ip pim sparse-mode
 ipv6 address 2002:9:27::7/64
 ipv6 enable
 ipv6 ospf 9 area 0
 mpls ip
 mpls traffic-eng tunnels
 ip rsvp bandwidth 20000
end

ip pim bsr-candidate Loopback0 0
ip pim rp-candidate Loopback0 group-list 25

R7#sh ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface      Uptime/Expires     Ver   DR Prio/Mode
9.9.47.4          Ethernet0/0    1d18h/00:01:20     v2    1 / S P G
9.9.27.2          Ethernet2/0    04:11:59/00:01:18  v2    1 / S P G

Here are the R3 configs:

interface Loopback0
 ip address 9.9.0.3 255.255.255.255
 ip pim sparse-mode
 ip igmp join-group 239.255.0.3
 ipv6 address 2002:9:9::3/128
 ipv6 enable
 ipv6 ospf 9 area 0
end

interface Ethernet0/0
 ip address 9.9.23.3 255.255.255.0
 ip pim sparse-mode
 ip igmp join-group 239.255.0.20
 ipv6 address 2002:9:23::3/64
 ipv6 enable
 ipv6 ospf 9 area 0
 mpls ip
 mpls traffic-eng tunnels
 ip rsvp bandwidth 20000

R3#sh run int eth 2/0
Building configuration...
Current configuration : 207 bytes
!
interface Ethernet2/0
 ip address 9.9.34.3 255.255.255.0
 ip pim sparse-mode
 ipv6 address 2002:9:34::3/64
 ipv6 enable
 ipv6 ospf 9 area 0
 mpls ip
 mpls traffic-eng tunnels
 ip rsvp bandwidth 2000

interface Ethernet3/0
 ip address 9.9.35.3 255.255.255.0
 ip pim sparse-mode
 ip igmp join-group 239.255.0.30
 ipv6 address 2002:9:35::3/64
 ipv6 enable
 ipv6 ospf 9 area 0
 mpls ip
 mpls traffic-eng tunnels
 ip rsvp bandwidth 20000
end

R3#sh ip pim neighbor
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable
Neighbor          Interface      Uptime/Expires     Ver   DR Prio/Mode
9.9.23.2          Ethernet0/0    2d16h/00:01:16     v2    1 / S P G
9.9.34.4          Ethernet2/0    01:41:48/00:01:16  v2    1 / DR S P G
9.9.35.5          Ethernet3/0    1d18h/00:01:17     v2    1 / DR S P G

R3#sh ip pim rp mapping
PIM Group-to-RP Mappings
Group(s) 239.255.0.0/16
  RP 9.9.0.7 (?), v2
    Info source: 9.9.0.7 (?), via bootstrap, priority 0, holdtime 150
    Uptime: 02:38:41, expires: 00:02:23

R7#ping 239.255.0.3
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.255.0.3, timeout is 2 seconds:
.
R7#
R7#ping 239.255.0.4
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.255.0.4, timeout is 2 seconds:
Reply to request 0 from 9.9.0.4, 24 ms
Reply to request 0 from 9.9.0.4, 48 ms
Reply to request 0 from 9.9.0.4, 48 ms
R7#

Issue 2 - no reachability between the multicast addresses after configuring MSDP:

R8#
ip msdp peer 9.9.0.7 connect-source Loopback0 remote-as 9

R7#sh run | in msdp
ip msdp peer 9.9.0.8 connect-source Loopback0 remote-as 1009

R7#sh ip msdp summary
MSDP Peer Status Summary
Peer Address     AS    State  Uptime/   Reset  SA     Peer Name
                              Downtime  Count  Count
9.9.0.8          1009  Up     04:01:12  1      5      ?

R8#sh ip msdp summary
MSDP Peer Status Summary
Peer Address     AS    State  Uptime/   Reset  SA     Peer Name
                              Downtime  Count  Count
9.9.0.7          9     Up     04:01:36  1      7      ?

R8#ping 239.255.0.4 source l0
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.255.0.4, timeout is 2 seconds:
Packet sent with a source address of 9.9.0.8
.
R8#

Can someone help me with this, please?
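A generic MSDP verification sketch for a case like this - peers up but no inter-domain traffic; nothing here is taken from the lab, and the addresses are illustrative:

```
! On each MSDP peer: is the SA cache learning (S,G) entries from the
! other domain? An empty cache points at SA filtering or no active source.
show ip msdp sa-cache
!
! On the RP in the receiving domain: does the RPF check toward the
! remote source succeed? A failed RPF lookup silently discards SA-derived
! joins even when the MSDP session itself is up.
show ip rpf 9.9.0.8
```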
  16. R17#sh ip pim neighbor
PIM Neighbor Table
Neighbor Address   Interface
10.100.0.21        Tunnel0
10.100.0.20        Tunnel0
10.100.0.19        Tunnel0

R19#sh ip pim neighbor
PIM Neighbor Table
Neighbor Address   Interface
10.100.0.1         Tunnel0

R17#ping 239.2.2.2
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.2.2.2, timeout is 2 seconds:
Reply to request 0 from 10.16.3.1, 18 ms
Reply to request 0 from 10.16.1.1, 42 ms
Reply to request 0 from 10.16.3.1, 42 ms
Reply to request 0 from 10.16.2.1, 41 ms
Reply to request 0 from 10.16.3.1, 21 ms
Reply to request 0 from 10.16.3.1, 21 ms
Reply to request 0 from 10.16.1.1, 20 ms
Reply to request 0 from 10.16.1.1, 20 ms
Reply to request 0 from 10.16.1.1, 20 ms
Reply to request 0 from 10.16.2.1, 20 ms
Reply to request 0 from 10.16.2.1, 20 ms
Reply to request 0 from 10.16.2.1, 18 ms

=====================================================

SW3#ping 10.2.0.38
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.2.0.38, timeout is 2 seconds:
!!!!!

SW3#ping 239.2.2.2 source vlan 173
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.2.2.2, timeout is 2 seconds:
Packet sent with a source address of 10.2.0.37

SW3#mtrace 10.2.0.37 239.2.2.2
Type escape sequence to abort.
Mtrace from 10.2.0.37 to 10.2.0.37 via group 239.2.2.2
From source (?) to destination (?)
Querying full reverse path...
* switching to hop-by-hop:
 0  10.2.0.37
-1  * * *
Timed out receiving responses
Perhaps no local router has a route for source, the receiver is not
a member of the multicast group or the multicast ttl is too low.

Sorry to bother you with this question (2.11 MULTICAST ROUTING A5)... is everyone experiencing this issue? Am I doing something wrong? Does anyone know how to solve it?
  17. Hi all, I've got the workbook on hand, but I haven't had the luck to find the initial scripts yet. I would appreciate it if someone could share them.
  18. I smell a new INE product. It seems very interesting. Can anyone with a good heart share CCIE R&S: L3 Multicast with PIM Sparse-Mode? [Hidden Content] Instructor: Keith Bogart. Course duration: 6 hrs 47 min. [Hidden Content] Thanks in advance. Regards, Cyberspirits
  19. Candidates for the months of September, October, November and December, please join this forum. Share and discuss the problems, issues and topics you still have doubts about; with each other's help, let's solve them.
  20. Hello experts, I have a problem in MSDPv1 (the MSDP ticket) that I couldn't solve:

Q5: Hosts that are attached to R8 in BGP AS 200 must be able to receive multicast traffic that is sent from sources in BGP AS 100 to the group address 224.100.100.100. Fix the problem so that a ping receives replies from R8's 172.16.11.11.

I checked:
- ip multicast-routing on R8
- ip pim sparse-mode on all multicast-enabled interfaces
- rp-address (200 for devices in AS 200, 100 for devices in AS 100)
- MSDP, connect-source ==> loop1
- prefix-list & ACL
- the network advertised in BGP (multicast address-family) is 200.0.0.1 255.255.255.255
- mroute - added the keyword "255"

After all of this checking I am still unable to ping 224.100.100.100 from R1 or R3...R13. I spent more than one day trying to resolve it, without any result. I checked all the solutions provided in the forum; none of them work. Please, I need your help. One more question I have is about the passing score for TS, I mean MPLS TS: is it 22 points (8 x 2P + 2 x 3P) or (7 x 2P + 3 x 3P)? And what about MSDP + TS5?
  21. Hi friends, in this question I am stuck on BGP's redistribution into OSPF. Basically, there are two questions on my mind:
1. Is it possible to redistribute the BGP IPv4 multicast address-family? If yes, how?
2. When solving the question I'm getting 2 replies while pinging 224.100.100.100 from R13. Output:

R13#ping 224.100.100.100
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 224.100.100.100, timeout is 2 seconds:
Reply to request 0 from 172.16.11.11, 72 ms
Reply to request 0 from 172.16.11.11, 72 ms
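On question 1, a hedged note: IOS supports redistribution into the BGP IPv4 multicast address-family (the table used for inter-domain RPF), e.g. pulling an IGP in; in the other direction an IGP has no separate multicast topology, so the usual workarounds are static mroutes or extending MBGP along the RPF path. An illustrative fragment, where the process/AS numbers and addresses are assumptions:

```
router bgp 100
 address-family ipv4 multicast
  ! Carry RPF information for sources learned via OSPF
  redistribute ospf 1
!
! If the RPF lookup still fails inside the IGP domain, a static
! mroute can point RPF at the correct upstream neighbor:
ip mroute 172.16.10.0 255.255.255.0 172.16.11.1
```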
  22. Friends! Can anyone explain the exact packet flow for the MSDPv2 multicast question? As I understand it, it's like this:

R8 prefix going into AS 100: R8 (iBGP route, RR client) -----> R6 (iBGP route, RR) ------> R2 (iBGP route, RR client) ==> multicast peer ==> R1 (MBGP route) ---> R3 (iBGP & OSPF E2)
R13 prefix going into AS 200: R13 (OSPF E2) -----> R11 (OSPF E2) ------> R9 (OSPF E2) ----> R5 (OSPF E2) -----> R3 (iBGP & OSPF E2)

The confusion I have about this flow: there is a redistribution at R7 (from OSPF into the BGP domain), but in order to make the cross-domain multicast traffic work I am also doing redistribution at R2 (from OSPF into the BGP domain). Without this, even R3 can't ping 224.100.100.100. I checked that all unicast routing is in place but don't know why this is required. Also, is this the right method per Cisco guidelines? Please advise. Below is some strange mtrace output; please tell me if anyone else has faced a similar issue!

R3#mtrace 172.16.10.2 172.16.11.11 224.100.100.100
Type escape sequence to abort.
Mtrace from 172.16.10.2 to 172.16.11.11 via group 224.100.100.100
From source (?) to destination (?)
Querying full reverse path...
* switching to hop-by-hop:
 0  172.16.11.11
-1  * 172.16.11.11 PIM/MBGP Prune sent upstream [using shared tree]
-2  * 172.16.11.9 PIM/MBGP [using shared tree]
-3  * 172.16.11.1 PIM/Static No route
-4  * Route must have changed...try again

R3#p 224.100.100.100 re 5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 224.100.100.100, timeout is 2 seconds:
Reply to request 0 from 172.16.11.11, 16 ms
Reply to request 1 from 172.16.11.11, 16 ms
Reply to request 2 from 172.16.11.11, 16 ms
Reply to request 3 from 172.16.11.11, 16 ms
Reply to request 4 from 172.16.11.11, 12 ms
R3#
  23. JCRS

    K6 Multicast Issue

    Hi everyone, I am doing K6 on my real equipment. I am facing one issue with multicast: I am getting replies from SW2's Vlan123 interface rather than the Vlan33 interface, and I am not sure why. Please help me out on this as I have my lab very soon.

Rack64R4#ping
Protocol [ip]:
Target IP address: 239.64.64.1
Repeat count [1]: 10
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Interface [All]: serial0/0/0
Time to live [255]:
Source address: 64.64.44.4
Type of service [0]:
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 10, 100-byte ICMP Echos to 239.64.64.1, timeout is 2 seconds:
Packet sent with a source address of 64.64.44.4
Reply to request 0 from 64.64.123.8, 4 ms
Reply to request 1 from 64.64.123.8, 4 ms
Reply to request 2 from 64.64.123.8, 1 ms
Reply to request 3 from 64.64.123.8, 4 ms
Reply to request 4 from 64.64.123.8, 1 ms
Reply to request 5 from 64.64.123.8, 4 ms
Reply to request 6 from 64.64.123.8, 1 ms
Reply to request 7 from 64.64.123.8, 1 ms
Reply to request 8 from 64.64.123.8, 4 ms
Reply to request 9 from 64.64.123.8, 4 ms

Rack64SW2#sh run int vlan 123
Building configuration...
Current configuration : 226 bytes
!
interface Vlan123
 ip address 64.64.123.8 255.255.255.0
 ip pim sparse-mode
 ip ospf priority 254
 ip ospf mtu-ignore
 ipv6 address FEC0:CC1E:123::8/64
 ipv6 ospf priority 254
 ipv6 ospf mtu-ignore
 ipv6 ospf 64 area 0
end

Rack64SW2#sh run int vlan 33
Building configuration...
Current configuration : 113 bytes
!
interface Vlan33
 ip address 150.3.64.1 255.255.255.0
 ip pim sparse-mode
 ip igmp join-group 239.64.64.1
end

Regards, JCRS
  24. JCRS

    K6 Multicast Doubt

    Hi everyone, I have two doubts about the K6 multicast config.
1. BSR rp-candidate priority: what priority should we configure to make R1 the primary and R2 the secondary RP? I think 0 on R1 and 1 on R2, but some solutions use 254 on R1 and 255 on R2. In my opinion we should use the extreme values; am I correct?
2. DR priority: in the solution we configure dr-priority <max-1> on SW4. Why? The question never says that SW4 should be the second preferred DR; it says any other switch in area 0 should forward the PIM join upstream.
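A hedged note on doubt 1: in BSR candidate-RP selection the lower priority value is preferred (the IOS default is 0), so what matters is only that R1's value is lower than R2's; 0/1 and 254/255 both preserve that ordering. An illustrative fragment, with the loopback choice an assumption:

```
! R1 - preferred RP (lower priority value wins in BSR C-RP selection)
ip pim rp-candidate Loopback0 priority 0
! R2 - backup RP
ip pim rp-candidate Loopback0 priority 1
```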
  25. Guys, are we supposed to enable "ip pim sparse-mode" on R5 and R6? I configured "ip pim bsr-border" only and couldn't ping across.