[BGP] Optimize BGP during peer down as well as avoid handling duplicate link up #743

Merged: 2 commits, Mar 1, 2025
src/sonic-frr/patch/0081-bgpd-Optimize-evaluate-paths-for-a-peer-going-down.patch
@@ -0,0 +1,57 @@
From f63a4be085b28c5138b95d55681f2bfb38bdaf4f Mon Sep 17 00:00:00 2001
From: Donald Sharp <sharpd@nvidia.com>
Date: Fri, 24 Jan 2025 15:04:13 -0500
Subject: [PATCH] bgpd: Optimize evaluate paths for a peer going down

Currently, when a directly connected peer goes down, BGP
gets a nexthop tracking callback in addition to the
interface down event. On the interface down event, BGP
sets up a per-peer queue that holds all of the bgp path
infos associated with that peer, to be processed in the
future. In the meantime zebra is also at work and sends
a nexthop removal event to BGP. This triggers a complete
walk of all path infos associated with the bnc (which
happen to be exactly the path infos already scheduled
for removal here shortly). This evaluate_paths walk is
not an inexpensive operation, and the work to handle
these paths is already being done via the peer down
queue. Let's optimize the bnc handling in
evaluate_paths and check whether the peer is still up
before actually doing the work here.

Signed-off-by: Donald Sharp <sharpd@nvidia.com>

diff --git a/bgpd/bgp_nht.c b/bgpd/bgp_nht.c
index 3b6db31ea0..196cc00385 100644
--- a/bgpd/bgp_nht.c
+++ b/bgpd/bgp_nht.c
@@ -1258,6 +1258,25 @@ void evaluate_paths(struct bgp_nexthop_cache *bnc)
}

LIST_FOREACH (path, &(bnc->paths), nh_thread) {
+ /*
+ * Currently, when a peer goes down, bgp sees this
+ * immediately via interface events (if the peer is
+ * directly connected). In that case all path infos
+ * associated with the peer are placed on a special
+ * per-peer queue, but those items have typically
+ * not yet been processed by the time the nexthop
+ * is handled here. Thus the BGP process queue ends
+ * up being asked to look at the same path info
+ * multiple times. Cut to the chase: if the bnc has
+ * an associated peer, the path info being examined
+ * uses that peer, and the peer is no longer
+ * established, then the path info is already being
+ * handled elsewhere and is going away, so there is
+ * no need to process it here at all.
+ */
+ if (peer && path->peer == peer && !peer_established(peer->connection))
+ continue;
+
if (path->type == ZEBRA_ROUTE_BGP &&
(path->sub_type == BGP_ROUTE_NORMAL ||
path->sub_type == BGP_ROUTE_STATIC ||
--
2.43.2

src/sonic-frr/patch/0082-Revert-bgpd-upon-if-event-evaluate-bnc-with-matching.patch
@@ -0,0 +1,78 @@
From 086c32eb5bf2ebfb4805f76219c1a3bc5dd9213e Mon Sep 17 00:00:00 2001
From: dgsudharsan <sudharsand@nvidia.com>
Date: Wed, 19 Feb 2025 17:24:39 +0000
Subject: [PATCH] Revert "bgpd: upon if event, evaluate bnc with matching
nexthop"

This reverts commit 58592be57783a3b24e7351af2a5afc61299768df.

diff --git a/bgpd/bgp_nht.c b/bgpd/bgp_nht.c
index 196cc00385..78eb1a9183 100644
--- a/bgpd/bgp_nht.c
+++ b/bgpd/bgp_nht.c
@@ -751,10 +751,6 @@ static void bgp_nht_ifp_table_handle(struct bgp *bgp,
struct interface *ifp, bool up)
{
struct bgp_nexthop_cache *bnc;
- struct nexthop *nhop;
- uint16_t other_nh_count;
- bool nhop_ll_found = false;
- bool nhop_found = false;

if (ifp->ifindex == IFINDEX_INTERNAL) {
zlog_warn("%s: The interface %s ignored", __func__, ifp->name);
@@ -762,42 +758,9 @@ static void bgp_nht_ifp_table_handle(struct bgp *bgp,
}

frr_each (bgp_nexthop_cache, table, bnc) {
- other_nh_count = 0;
- nhop_ll_found = bnc->ifindex_ipv6_ll == ifp->ifindex;
- for (nhop = bnc->nexthop; nhop; nhop = nhop->next) {
- if (nhop->ifindex == bnc->ifindex_ipv6_ll)
- continue;
-
- if (nhop->ifindex != ifp->ifindex) {
- other_nh_count++;
- continue;
- }
- if (nhop->vrf_id != ifp->vrf->vrf_id) {
- other_nh_count++;
- continue;
- }
- nhop_found = true;
- }
-
- if (!nhop_found && !nhop_ll_found)
- /* The event interface does not match the nexthop cache
- * entry */
- continue;
-
- if (!up && other_nh_count > 0)
- /* Down event ignored in case of multiple next-hop
- * interfaces. The other might interfaces might be still
- * up. The cases where all interfaces are down or a bnc
- * is invalid are processed by a separate zebra rnh
- * messages.
- */
+ if (bnc->ifindex_ipv6_ll != ifp->ifindex)
continue;

- if (!nhop_ll_found) {
- evaluate_paths(bnc);
- continue;
- }
-
bnc->last_update = monotime(NULL);
bnc->change_flags = 0;

@@ -810,7 +773,6 @@ static void bgp_nht_ifp_table_handle(struct bgp *bgp,
if (up) {
SET_FLAG(bnc->flags, BGP_NEXTHOP_VALID);
SET_FLAG(bnc->change_flags, BGP_NEXTHOP_CHANGED);
- /* change nexthop number only for ll */
bnc->nexthop_num = 1;
} else {
UNSET_FLAG(bnc->flags, BGP_NEXTHOP_PEER_NOTIFIED);
--
2.43.2

2 changes: 2 additions & 0 deletions src/sonic-frr/patch/series
@@ -59,3 +59,5 @@
0077-frr-vtysh-dependencies-for-srv6-static-patches.patch
0078-vtysh-de-conditionalize-and-reorder-install-node.patch
0079-staticd-add-support-for-srv6.patch
0081-bgpd-Optimize-evaluate-paths-for-a-peer-going-down.patch
0082-Revert-bgpd-upon-if-event-evaluate-bnc-with-matching.patch