Before You Read: This Is Not About Overlay Multicast
When engineers hear "multicast in SD-Access," they often think about multicast routing within a Virtual Network, delivering IP multicast streams to receivers inside the fabric overlay. That is a valid and separate topic. This article is about something different and lower level: underlay multicast as the transport mechanism for Layer 2 BUM (Broadcast, Unknown-unicast, Multicast) flooding.
This distinction matters because the two have completely separate designs and separate failure modes. As we will cover in this article, underlay multicast requires infrastructure that Catalyst Center will not configure for you.
Why L2 Flooding Exists in SD-Access
SD-Access is built on a LISP/VXLAN fabric. By default, it replaces traditional flooding and learning with a control plane: endpoints are registered in the LISP map server, and traffic is forwarded based on identity rather than flooded across the network. For most unicast traffic, this works without any flooding at all.
However, there are legitimate use cases that depend on Layer 2 broadcast or unknown-unicast behavior. Wake on LAN is a common example, as magic packets are broadcast frames that need to reach a sleeping endpoint across the fabric (we covered this in detail in a previous article). OT and industrial systems frequently rely on Layer 2 broadcast for device discovery and communication protocols that predate modern network design. Building Management Systems, paging systems that use IP Directed Broadcast, and PXE boot environments where DHCP Discover traffic must reach a provisioning server all depend equally on flooding working. In some deployments, mDNS service discovery also falls back to flooding rather than using a dedicated mDNS gateway.
When L2 Flooding is enabled on an IP Pool or VLAN in Catalyst Center, these use cases become possible. But ticking the checkbox in the GUI is only half the story.
How BUM Flooding Works in the SD-Access Underlay
When L2 Flooding is enabled for a VLAN, the fabric needs a mechanism to replicate BUM frames to all Edge Nodes that participate in that VLAN. In SD-Access, this is achieved through VXLAN encapsulation with a multicast group address as the outer destination IP.
The process works as follows. An endpoint sends a broadcast frame, for example a Wake on LAN magic packet or a DHCP Discover. The ingress Edge Node encapsulates the frame in VXLAN, using a multicast group IP as the outer destination. The underlay network delivers this multicast packet to all Edge Nodes that have joined the group via IGMP. Each receiving Edge Node decapsulates the VXLAN packet and delivers the inner frame to the appropriate VLAN, identified by the VNI in the VXLAN header.
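Conceptually, the encapsulated packet looks like this (group address and VNI taken from the examples later in this article; all values illustrative):

```
Outer IP:   src = ingress Edge Node RLOC (underlay loopback)
            dst = 239.0.17.4              <- underlay multicast group
Outer UDP:  dst port 4789 (VXLAN)
VXLAN:      VNI = 8188                    <- identifies the L2 instance / VLAN
Inner L2:   dst MAC ff:ff:ff:ff:ff:ff     <- the original broadcast frame
            payload (e.g. WoL magic packet, DHCP Discover)
```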
The key point is that this multicast group lives in the Global Routing Table of the underlay, not inside any VRF or Virtual Network. It requires Any Source Multicast (ASM) routing to be operational across all fabric Edge Nodes and Border Nodes. Catalyst Center makes this explicit when you enable L2 Flooding on an IP Pool:
"Layer 2 Flooding relies on Any Source Multicast (ASM) in the SD-Access underlay. Please ensure ASM routing is configured in the Global Routing Table of all Fabric Edge Nodes and Layer 2 Border Nodes."
This warning is Cisco acknowledging that the responsibility is yours. Catalyst Center will configure the LISP instance to reference the multicast group address, but it will not build the PIM multicast routing infrastructure in the underlay.

The Anycast RP Requirement
ASM multicast requires a Rendezvous Point (RP). All PIM-SM routers in the underlay need to know where the RP is so they can register sources and build shared trees for the multicast groups used for BUM flooding.
In a typical SD-Access deployment, there are two Border Nodes providing redundancy for fabric exit and inter-VN routing. This immediately creates a problem: which Border Node is the RP? If you configure a single static RP on one Border Node, a failure of that node breaks multicast delivery across the entire fabric. Edge Nodes on the side of the fabric physically closer to the other Border Node also take a suboptimal path to reach the RP under normal conditions.
The solution is Anycast RP: both Border Nodes are configured as the RP using the same IP address on a dedicated loopback interface. From the perspective of any Edge Node in the underlay, the RP is always reachable via the nearest Border Node, and IS-IS takes care of the routing. At Entek, our standard design uses Loopback10 on both Border Nodes configured with the shared RP address, which is then advertised into IS-IS. This makes the RP address reachable from any Edge Node in the fabric, and topology changes or Border Node failures are handled automatically by IS-IS reconverging.
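On IOS XE, the underlay side of this design looks roughly like the following, applied identically on both Border Nodes. This is a sketch, not a complete configuration: the RP address matches the examples later in this article, but your loopback numbering and IGP details will differ, and PIM sparse-mode must also be enabled on every underlay-facing interface of every Edge and Border Node.

```
! Shared Anycast RP address on a dedicated loopback (same on both BNs)
interface Loopback10
 ip address 10.0.0.253 255.255.255.255
 ip pim sparse-mode
 ip router isis

! Point the underlay (Global Routing Table) at the Anycast RP address;
! the same statement is needed on every Edge and Border Node
ip pim rp-address 10.0.0.253
```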
The remaining challenge with Anycast RP is RP state synchronisation. When a source registers with its nearest RP, the other Border Node needs to know about that source so it can forward traffic to any receivers that joined via its own RP instance. This is solved with MSDP (Multicast Source Discovery Protocol), peered between the two Border Nodes using their unique Loopback0 addresses.
In our design, bn-01 and bn-02 each have a unique Loopback0 address used as the MSDP peer source, and both share the same Loopback10 address as the Anycast RP, advertised into IS-IS. MSDP ensures that active sources learned by one RP are advertised to the other, keeping multicast state consistent across both Border Nodes regardless of which one an Edge Node uses as its RP.
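A minimal sketch of that MSDP peering, assuming bn-01 and bn-02 use 10.0.0.1 and 10.0.0.2 as their unique Loopback0 addresses (both illustrative):

```
! bn-01
ip msdp peer 10.0.0.2 connect-source Loopback0
ip msdp originator-id Loopback0

! bn-02
ip msdp peer 10.0.0.1 connect-source Loopback0
ip msdp originator-id Loopback0
```

The originator-id is set to the unique Loopback0 so that Source-Active messages carry a distinct RP identity per Border Node rather than the shared Anycast address.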
None of this is provisioned by Catalyst Center. It requires deliberate design and configuration, which is exactly the gap that organisations often discover only after enabling L2 Flooding and finding that broadcast traffic is not reaching all Edge Nodes.
How Catalyst Center Configures the LISP Side
While the underlay multicast infrastructure is your responsibility, Catalyst Center does handle the LISP configuration that ties each VLAN to a multicast group address. Understanding what gets pushed, and how it has changed across CatC versions, is important for both troubleshooting and design.
Catalyst Center 2.3.7: One Group for All VLANs
In CatC version 2.3.7, all VLANs with L2 Flooding enabled share a single multicast group address. The following is an example of the LISP configuration pushed to Edge Nodes in this version:
instance-id 8188
 service ethernet
  eid-table vlan 10
  broadcast-underlay 239.0.17.4
  flood unknown-unicast
instance-id 8189
 service ethernet
  eid-table vlan 11
  broadcast-underlay 239.0.17.4
  flood arp-nd
  flood unknown-unicast
instance-id 8191
 service ethernet
  eid-table vlan 12
  broadcast-underlay 239.0.17.4
  flood arp-nd
  flood unknown-unicast
instance-id 8192
 service ethernet
  eid-table vlan 13
  broadcast-underlay 239.0.17.4
  flood unknown-unicast
Every VLAN, regardless of whether it has any local activity on a given Edge Node, references the same group 239.0.17.4. The consequence becomes visible in the multicast routing table on a Border Node:
(*, 239.0.17.4), RP 10.0.0.253
  Outgoing interface list:
    L2LISP0.8214
    L2LISP0.8188
    L2LISP0.8192
    L2LISP0.8196
    L2LISP0.8203
    L2LISP0.8189
    L2LISP0.8200
    L2LISP0.8201
    L2LISP0.8199
    L2LISP0.8191
All ten L2LISP VNI interfaces appear in the outgoing interface list for that single group. This means that any BUM packet sent by any Edge Node for any VLAN is received by every Edge Node participating in the group, whether or not that Edge Node has any clients in the relevant VLAN. The VNI ID in the VXLAN header allows the receiving Edge Node to identify the correct VLAN, so functionally it works. But from a bandwidth and CPU perspective, Edge Nodes are processing BUM traffic they have no interest in.
In practice, on networks that are not BUM-heavy, this does not cause visible problems. The inefficiency is real, however, and grows in environments with many VLANs, high broadcast rates, or constrained underlay bandwidth.
Catalyst Center 3.x: Per-VLAN Flooding Address Assignment
Catalyst Center 3.x reintroduces the ability to assign individual multicast group addresses per IP Pool, restoring the more efficient behaviour that existed in older DNA Center versions. In the IP Pool configuration, a new "Flooding Address Assignment" option appears with two modes: Shared (the 2.3.7 behaviour) or a per-pool assignment where you specify the flooding group address directly.
This brings back the original design intent: an Edge Node that has no clients in a particular VLAN does not need to join that VLAN's multicast group. When activity starts on a VLAN at an Edge Node, it joins the specific group for that VLAN via IGMP, while Edge Nodes with no clients in that VLAN remain unaffected. The practical benefit grows with the size of the deployment, as the more VLANs with L2 Flooding enabled, the more meaningful the traffic separation becomes.
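With per-pool assignment, the pushed LISP configuration changes only in the broadcast-underlay line: each instance references its own group. The following sketch mirrors the 2.3.7 example above, with illustrative per-VLAN group addresses:

```
instance-id 8188
 service ethernet
  eid-table vlan 10
  broadcast-underlay 239.0.17.10
  flood unknown-unicast
instance-id 8189
 service ethernet
  eid-table vlan 11
  broadcast-underlay 239.0.17.11
  flood arp-nd
  flood unknown-unicast
```

The underlay then holds separate (*, G) entries per VLAN, each with an outgoing interface list containing only the Edge Nodes that have joined that specific group.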
What This Means for Your Design
Enabling L2 Flooding in Catalyst Center is a checkbox. Building the infrastructure that makes it work reliably is an engineering task that requires understanding the full stack.
Anycast RP is not optional when you have two Border Nodes. A single RP creates a single point of failure for all L2 flooding across the fabric. If the RP is unreachable, multicast delivery stops and BUM traffic is silently dropped.
MSDP between Border Nodes is required for correct Anycast RP behaviour. Without it, sources registering to one RP are invisible to the other, leading to asymmetric multicast delivery that is very difficult to diagnose.
The RP address must be reachable from all Edge Nodes. Advertising it via IS-IS from both Border Nodes ensures this, and provides automatic failover if one Border Node goes down.
Finally, verify the underlay multicast infrastructure is in place before enabling L2 Flooding in production. The CatC warning is clear, but it is easy to click past. A quick check of PIM neighbour relationships and the multicast routing table on your Edge Nodes before you enable flooding will save you from a difficult troubleshooting session later.
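A basic pre-flight check can be run in the Global Routing Table on Edge and Border Nodes using standard IOS XE show commands (the group address here matches the shared-group example earlier; substitute your own):

```
show ip pim neighbor            ! PIM adjacencies on all underlay links
show ip pim rp mapping          ! every node agrees on the Anycast RP address
show ip mroute 239.0.17.4       ! (*, G) state and OIL for the flooding group
show ip msdp summary            ! MSDP peering is up between the Border Nodes
show ip igmp groups             ! Edge Nodes have joined the flooding group(s)
```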
If you are on CatC 3.x, the shared group default is a legacy behaviour. Assigning dedicated multicast groups per IP Pool is more efficient and aligns with how the platform was originally designed.
Closing
L2 Flooding in SD-Access is often enabled as a quick fix for a specific use case: a Wake on LAN requirement, an OT device that will not work without broadcasts, or a paging system that assumes flat Layer 2. What is less visible is that enabling it creates a dependency on a functioning multicast routing infrastructure in the underlay that Catalyst Center never provisions.
Getting Anycast RP right, with proper loopback design, IS-IS advertisement, and MSDP peering between Border Nodes, is the difference between L2 Flooding that works reliably and one that fails intermittently or not at all under failover conditions.
If you are planning to enable L2 Flooding in your SD-Access environment, or if you want to validate that your current underlay multicast design is correct, feel free to reach out to us at Entek IT. This is exactly the kind of infrastructure detail we help organizations get right.
Related articles: Wake on LAN in Cisco SD-Access, PXE in Cisco SD-Access