author     Adham Faris <afaris@nvidia.com>      2023-03-08 11:59:36 +0200
committer  Saeed Mahameed <saeedm@nvidia.com>   2023-08-21 10:55:14 -0700
commit     7a73cf0bf7f96cd2b9f2ea890bf1e981730d8d6c (patch)
tree       2fc8c19fb257281d3afb9d2bb4fdb024f03ce341 /drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c
parent     cb39c35783f26892bb1a72b1115c94fa2e77f4c5 (diff)
net/mlx5e: aRFS, Prevent repeated kernel rule migrations requests
aRFS rule movement requests, which move a flow from one Rx ring to another, arrive from the kernel to ensure that packets are steered to the right Rx ring. Until such a request is satisfied, several more requests may follow for the same flow. This patch detects and prevents repeated aRFS rule movement requests.

In the mlx5e_rx_flow_steer() ndo, after finding the aRFS rule that the kernel has requested to move, check whether a move is already in flight by calling work_busy(&arfs_rule->arfs_work). In other words, if the transition work is still pending in the work queue, or is executing but has not finished yet, return the current filter ID and do not queue new transition work.

Signed-off-by: Adham Faris <afaris@nvidia.com>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
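For context, here is a minimal, self-contained sketch (not the mlx5e driver code) of the guard pattern this patch applies: a per-flow work item is queued only if it is not already pending or running, so repeated migration requests for the same flow collapse into one. All names below (demo_rule, demo_flow_steer, etc.) are hypothetical; only work_busy(), queue_work() and the lock-then-check ordering mirror the actual change.

/*
 * Hypothetical, simplified sketch of the "skip if already in flight"
 * pattern; the structures and helpers are illustrative, not the
 * mlx5e driver's.
 */
#include <linux/kernel.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct demo_rule {
	struct work_struct work;	/* deferred per-flow migration work */
	spinlock_t *lock;		/* protects rxq and filter_id */
	int rxq;			/* Rx ring the flow is steered to */
	int filter_id;
};

static void demo_rule_migrate(struct work_struct *work)
{
	struct demo_rule *rule = container_of(work, struct demo_rule, work);

	/* ... reprogram the hardware steering rule to point at rule->rxq ... */
	(void)rule;
}

static void demo_rule_init(struct demo_rule *rule, spinlock_t *lock)
{
	INIT_WORK(&rule->work, demo_rule_migrate);
	rule->lock = lock;
}

/* Called for every steering request the stack issues for this flow. */
static int demo_flow_steer(struct demo_rule *rule, int rxq_index,
			   struct workqueue_struct *wq)
{
	int filter_id;

	spin_lock_bh(rule->lock);
	/*
	 * work_busy() returns a non-zero mask (WORK_BUSY_PENDING and/or
	 * WORK_BUSY_RUNNING) while the work item is queued or executing,
	 * so a repeated request for the same flow is dropped here.
	 */
	if (rule->rxq == rxq_index || work_busy(&rule->work)) {
		filter_id = rule->filter_id;
		spin_unlock_bh(rule->lock);
		return filter_id;
	}
	rule->rxq = rxq_index;
	queue_work(wq, &rule->work);
	filter_id = rule->filter_id;
	spin_unlock_bh(rule->lock);
	return filter_id;
}

work_busy() is an advisory check, which is enough for this pattern: a rare stale result only falls back to the pre-check behavior of queueing one more transition for the flow.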
Diffstat (limited to 'drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c')
-rw-r--r--  drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c
index 5aa51d74f8b4..67d8b198a014 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c
@@ -740,7 +740,7 @@ int mlx5e_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
spin_lock_bh(&arfs->arfs_lock);
arfs_rule = arfs_find_rule(arfs_t, &fk);
if (arfs_rule) {
- if (arfs_rule->rxq == rxq_index) {
+ if (arfs_rule->rxq == rxq_index || work_busy(&arfs_rule->arfs_work)) {
spin_unlock_bh(&arfs->arfs_lock);
return arfs_rule->filter_id;
}