path: root/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
Commits (most recent first), shown as: subject (Author, Date, Files, Lines)
* net/mlx5: Dynamically resize flow counters query buffer (Avihai Horon, 2021-12-02, 1 file, -14/+60)

The flow counters bulk query buffer is allocated once during mlx5_fc_init_stats(). For PFs and VFs this buffer usually takes a little more than 512KB of memory, which is aligned to the next power of 2, to 1MB. For SFs, this buffer is reduced and takes around 128 Bytes.

The buffer size determines the maximum number of flow counters that can be queried at a time. Thus, having a bigger buffer can improve performance for users that need to query many flow counters. There are cases that don't use many flow counters and don't need a big buffer (e.g. SFs, VFs). Since this size is critical with large scale, in these cases the buffer size should be reduced.

In order to reduce memory consumption while maintaining query performance, change the query buffer's allocation scheme to the following:
- First allocate the buffer with a small initial size.
- If the number of counters surpasses the initial size, resize the buffer to the maximum size.

The buffer only grows and isn't shrunk, because users with many flow counters don't care about the buffer size and we don't want to add resize overhead if the current number of counters drops.

This solution is preferable to the current one, which is less accurate and only addresses SFs.

Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
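The grow-only scheme above can be pictured with a short sketch. This is not the driver's code; bulk_query_buf and the helper name are hypothetical stand-ins, and the real sizes come from device capabilities.

  #include <linux/errno.h>
  #include <linux/mm.h>
  #include <linux/slab.h>
  #include <linux/types.h>

  /* Hypothetical grow-only query buffer: starts small, jumps to the
   * maximum size once the counter count outgrows it, never shrinks. */
  struct bulk_query_buf {
      u32 capacity;               /* counters the buffer can hold */
      void *data;
  };

  static int bulk_query_buf_grow(struct bulk_query_buf *buf, u32 needed,
                                 u32 max_counters, size_t bytes_per_counter)
  {
      void *bigger;

      if (needed <= buf->capacity)
          return 0;               /* current buffer still fits */

      /* grow straight to the maximum; users with many counters keep it */
      bigger = kvzalloc((size_t)max_counters * bytes_per_counter, GFP_KERNEL);
      if (!bigger)
          return -ENOMEM;

      kvfree(buf->data);          /* grow only, never shrink */
      buf->data = bigger;
      buf->capacity = max_counters;
      return 0;
  }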
* net/mlx5: Fix flow counters SF bulk query len (Avihai Horon, 2021-11-16, 1 file, -1/+1)

When doing a flow counters bulk query, the number of counters to query must be aligned to 4. Current SF bulk query len is not aligned to 4, which leads to an error when trying to query more than 4 counters. Fix it by aligning SF bulk query len to 4.

Fixes: 2fdeb4f4c2ae ("net/mlx5: Reduce flow counters bulk query buffer size for SFs")
Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
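The fix boils down to rounding the query length up with the kernel's ALIGN() helper; a tiny illustration (the wrapper function is made up, not the driver's symbol):

  #include <linux/kernel.h>   /* ALIGN() */
  #include <linux/types.h>

  static u32 bulk_query_len(u32 num_counters)
  {
      /* firmware requires the number of queried counters to be a
       * multiple of 4, so round up: 1..4 -> 4, 5..8 -> 8, ... */
      return ALIGN(num_counters, 4);
  }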
* net/mlx5: Allow skipping counter refresh on creation (Paul Blakey, 2021-10-29, 1 file, -3/+11)

CT creates a counter for each CT rule, and for each such counter, fs_counters tries to queue the mlx5_fc_stats_work() work again via a mod_delayed_work(0) call to refresh all counters. This call has a large performance impact when reaching a high insertion rate and accounts for ~8% of the insertion time when using software steering.

Allow skipping the refresh of all counters during counter creation. Change CT to use this refresh skipping for its counters.

Signed-off-by: Paul Blakey <paulb@nvidia.com>
Reviewed-by: Roi Dayan <roid@nvidia.com>
Reviewed-by: Oz Shlomo <ozsh@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
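A sketch of what "skipping the refresh" amounts to: only callers that want immediate stats kick the periodic work forward. The function and field names here are illustrative, not the mlx5 API.

  #include <linux/types.h>
  #include <linux/workqueue.h>

  struct fc_stats_sketch {
      struct workqueue_struct *wq;
      struct delayed_work work;        /* periodic counters query */
  };

  static void fc_created(struct fc_stats_sketch *stats, bool refresh_now)
  {
      /* ... queue the new counter on the stats add-list ... */

      /* mod_delayed_work(.., 0) forces an immediate refresh of all
       * counters; skipping it avoids that cost on hot insertion paths,
       * and the periodic run picks the counter up later anyway. */
      if (refresh_now)
          mod_delayed_work(stats->wq, &stats->work, 0);
  }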
* net/mlx5: Reduce flow counters bulk query buffer size for SFs (Avihai Horon, 2021-10-25, 1 file, -2/+7)

Currently, the flow counters bulk query buffer takes a little more than 512KB of memory, which is aligned to the next power of 2, to 1MB. The buffer size determines the maximum number of flow counters that can be queried at a time. Thus, having a bigger buffer can improve performance for users that need to query many flow counters.

SFs don't use many flow counters and don't need a big buffer. Since this size is critical with large scale, reduce the size of the bulk query buffer for SFs.

Signed-off-by: Avihai Horon <avihaih@nvidia.com>
Reviewed-by: Mark Bloch <mbloch@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
* net/mlx5: Use struct_size() helper in kvzalloc() (Gustavo A. R. Silva, 2021-09-30, 1 file, -2/+1)

Make use of the struct_size() helper instead of an open-coded version, in order to avoid any potential type mistakes or integer overflows that, in the worst-case scenario, could lead to heap overflows.

Link: https://github.com/KSPP/linux/issues/160
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
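For reference, the kind of change this makes, sketched with a stand-in struct rather than the mlx5 one: struct_size() computes sizeof(*p) plus count elements of the flexible array and checks the arithmetic for overflow instead of silently wrapping.

  #include <linux/mm.h>
  #include <linux/overflow.h>
  #include <linux/slab.h>
  #include <linux/types.h>

  struct counter_cache { u64 packets, bytes, lastuse; };

  struct counter_bulk {
      int num;
      struct counter_cache fcs[];       /* flexible array member */
  };

  static struct counter_bulk *bulk_alloc(size_t count)
  {
      struct counter_bulk *bulk;

      /* open-coded form being replaced:
       *   kvzalloc(sizeof(*bulk) + count * sizeof(bulk->fcs[0]), GFP_KERNEL)
       * struct_size() does the same arithmetic with overflow checking. */
      bulk = kvzalloc(struct_size(bulk, fcs, count), GFP_KERNEL);
      if (bulk)
          bulk->num = count;
      return bulk;
  }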
* net/mlx5: Allocate FC bulk structs with kvzalloc() instead of kzalloc() (Maor Dickman, 2021-04-16, 1 file, -8/+8)

The bulk size is larger than 16K, so use kvzalloc(). The bulk bitmask upper size limit is 16K, so use kvcalloc().

Signed-off-by: Maor Dickman <maord@nvidia.com>
Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
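The rule of thumb behind the change, sketched with made-up sizes: allocations that can exceed what kmalloc() comfortably serves should use the kvmalloc family, which falls back to vmalloc; kvcalloc() additionally zeroes the memory and overflow-checks the count * size multiplication.

  #include <linux/bitops.h>   /* BITS_TO_LONGS() */
  #include <linux/mm.h>
  #include <linux/slab.h>

  #define SKETCH_BULK_LEN (32 * 1024)   /* illustrative bulk size */

  /* one "free/used" bit per counter in the bulk */
  static unsigned long *alloc_bulk_bitmask(void)
  {
      return kvcalloc(BITS_TO_LONGS(SKETCH_BULK_LEN),
                      sizeof(unsigned long), GFP_KERNEL);
  }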
* net/mlx5e: Replace zero-length array with flexible-array member (Gustavo A. R. Silva, 2020-02-19, 1 file, -1/+1)

The current codebase makes use of the zero-length array language extension to the C90 standard, but the preferred mechanism to declare variable-length types such as these ones is a flexible array member[1][2], introduced in C99:

  struct foo {
      int stuff;
      struct boo array[];
  };

By making use of the mechanism above, we will get a compiler warning in case the flexible array does not occur last in the structure, which will help us prevent some kind of undefined behavior bugs from being inadvertently introduced[3] to the codebase from now on.

Also, notice that dynamic memory allocations won't be affected by this change: "Flexible array members have incomplete type, and so the sizeof operator may not be applied. As a quirk of the original implementation of zero-length arrays, sizeof evaluates to zero."[1]

This issue was found with the help of Coccinelle.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
* net/mlx5: Fix the order of fc_stats cleanup (Gavi Teitz, 2019-08-20, 1 file, -5/+4)

Previously, mlx5_cleanup_fc_stats() would clean up the flow counter pool before releasing all the counters to it, which would result in flow counter bulks not getting freed. Resolve this by changing the order in which elements of fc_stats are cleaned up, so that the flow counter pool is cleaned up after all the counters are released.

Also move the cleanup actions for freeing the bulk query memory and destroying the idr to the end of mlx5_cleanup_fc_stats().

Signed-off-by: Gavi Teitz <gavi@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net (David S. Miller, 2019-08-06, 1 file, -0/+5)

Just minor overlapping changes in the conflicts here.

Signed-off-by: David S. Miller <davem@davemloft.net>
* net/mlx5e: Prevent encap flow counter update async to user query (Ariel Levkovich, 2019-07-25, 1 file, -0/+5)

This patch prevents a race between a user-invoked cached counters query and the neighbor last usage updater.

The cached flow counter stats can be queried by calling "mlx5_fc_query_cached", which provides the number of bytes and packets that passed via this flow since the last time this counter was queried. It does so by subtracting the last saved stats from the current, cached stats and then updating the last saved stats with the cached stats. It also provides the lastuse value for that flow.

Since "mlx5e_tc_update_neigh_used_value" needs to retrieve the last usage time of encapsulation flows, it calls the flow counter query method periodically and async to user queries of the flow counter using cls_flower. This call causes the driver to update the last reported bytes and packets from the cache, and therefore future user queries of the flow stats will return lower than expected numbers for bytes and packets, since the last saved stats in the driver were updated async to the last saved stats in cls_flower. This causes wrong stats presentation of encapsulation flows to the user.

Since the neighbor usage updater only needs the lastuse stats from the cached counter, the fix is to use a dedicated lastuse query call that returns the lastuse value without syncing between the cached stats and the last saved stats.

Fixes: f6dfb4c3f216 ("net/mlx5e: Update neighbour 'used' state using HW flow rules counters")
Signed-off-by: Ariel Levkovich <lariel@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* net/mlx5: Add flow counter pool (Gavi Teitz, 2019-08-01, 1 file, -25/+206)

Add a pool of flow counters, based on flow counter bulks, removing the need to allocate a new counter via a costly FW command during the flow creation process. The time it takes to acquire/release a flow counter is cut from ~50 [us] to ~50 [ns].

The pool is part of the mlx5 driver instance, and provides flow counters for aging flows. mlx5_fc_create() was modified to provide counters for aging flows from the pool by default, and mlx5_destroy_fc() was modified to release counters back to the pool for later reuse. If bulk allocation is not supported or fails, and for non-aging flows, the fallback behavior is to allocate and free individual counters.

The pool is comprised of three lists of flow counter bulks: one of fully used bulks, one of partially used bulks, and one of unused bulks. Counters are provided from the partially used bulks first, to help limit bulk fragmentation.

The pool maintains a threshold, and strives to maintain the amount of available counters below it. The pool is increased in size when a counter acquisition request is made and there are no available counters, and it is decreased in size when the last counter in a bulk is released and there are more available counters than the threshold. All pool size changes are done in the context of the acquiring/releasing process.

The value of the threshold is directly correlated to the amount of used counters the pool is providing, while constrained by a hard maximum, and is recalculated every time a bulk is allocated/freed. This ensures that the pool only consumes large amounts of memory for available counters if the pool is being used heavily. When fully populated and at the hard maximum, the buffer of available counters consumes ~40 [MB].

Signed-off-by: Gavi Teitz <gavi@mellanox.com>
Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
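The pool layout described above, reduced to a sketch of its bookkeeping. All names, and the mutex, are illustrative; the actual fields and locking in fs_counters.c may differ.

  #include <linux/list.h>
  #include <linux/mutex.h>

  struct fc_pool_sketch {
      struct mutex lock;
      struct list_head fully_used;      /* bulks with no free counters     */
      struct list_head partially_used;  /* preferred source for acquire    */
      struct list_head unused;          /* spare bulks held for reuse      */
      int available;                    /* free counters across all bulks  */
      int used;                         /* counters handed out to flows    */
      int threshold;                    /* upper bound kept on 'available' */
  };

  /* acquire: take from partially_used first (limits fragmentation), then
   * unused, and only then allocate a new bulk from firmware.
   * release: return the counter to its bulk; if the bulk becomes entirely
   * free and 'available' exceeds 'threshold', free the whole bulk. */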
* net/mlx5: Add flow counter bulk infrastructure (Gavi Teitz, 2019-08-01, 1 file, -0/+105)

Add infrastructure to track bulks of flow counters, providing the means to allocate and deallocate bulks, and to acquire and release individual counters from the bulks.

Signed-off-by: Gavi Teitz <gavi@mellanox.com>
Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
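Acquiring and releasing a counter inside one bulk boils down to bitmask bookkeeping; a minimal sketch (a real bulk would also record the base HW counter id that the bit index is added to, and the names here are made up):

  #include <linux/bitmap.h>
  #include <linux/bitops.h>
  #include <linux/errno.h>

  struct fc_bulk_sketch {
      unsigned long *free_mask;   /* set bit = counter is free */
      unsigned int len;           /* counters in this bulk */
  };

  static int bulk_acquire_fc(struct fc_bulk_sketch *bulk)
  {
      unsigned long idx = find_first_bit(bulk->free_mask, bulk->len);

      if (idx >= bulk->len)
          return -ENOSPC;         /* bulk is fully used */
      clear_bit(idx, bulk->free_mask);
      return idx;                 /* offset from the bulk's base id */
  }

  static void bulk_release_fc(struct fc_bulk_sketch *bulk, unsigned int idx)
  {
      set_bit(idx, bulk->free_mask);
  }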
* net/mlx5: Refactor and optimize flow counter bulk query (Gavi Teitz, 2019-08-01, 1 file, -57/+68)

Towards introducing the ability to allocate bulks of flow counters, refactor the flow counter bulk query process: remove functions and structs whose names suggested they were used for flow counter bulk allocation FW commands, even though they were actually only used to support bulk querying, and migrate their functionality to correctly named functions in their natural location, fs_counters.c.

Additionally, optimize the bulk query process by:
* Extracting the memory used for the query to mlx5_fc_stats so that it is only allocated once, and not for each bulk query.
* Querying all the counters in one function call.

Signed-off-by: Gavi Teitz <gavi@mellanox.com>
Reviewed-by: Vlad Buslov <vladbu@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* idr: introduce idr_for_each_entry_continue_ul() (Cong Wang, 2019-07-01, 1 file, -4/+6)

Similarly, other callers of idr_get_next_ul() suffer the same overflow bug, as they don't handle it properly either. Introduce idr_for_each_entry_continue_ul() to help these callers iterate from a given ID.

cls_flower needs more care here because it still overflows when it does arg->cookie++; we have to fold its nested loops into one and remove the arg->cookie++.

Fixes: 01683a146999 ("net: sched: refactor flower walk to iterate over idr")
Fixes: 12d6066c3b29 ("net/mlx5: Add flow counters idr")
Reported-by: Li Shuang <shuali@redhat.com>
Cc: Davide Caratti <dcaratti@redhat.com>
Cc: Vlad Buslov <vladbu@mellanox.com>
Cc: Chris Mi <chrism@mellanox.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Tested-by: Davide Caratti <dcaratti@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* net/mlx5: Move flow counters data structures from flow steering header (Saeed Mahameed, 2018-12-09, 1 file, -0/+23)

After the flow counters API refactoring ("net/mlx5: Use flow counter IDs and not the wrapping cache object"), the flow counters private data structures mlx5_fc_cache and mlx5_fc are redundantly exposed in fs_core.h. They have nothing to do with the flow steering core and are private to fs_counters.c, so this patch moves them to where they belong and reduces their exposure in the driver.

Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* net/mlx5: Remove counter from idr after removing it from list (Vlad Buslov, 2018-10-18, 1 file, -5/+13)

The fs_counters list can temporarily become unsorted when new counters are created/deleted concurrently. The idr is used to quickly look up, in logarithmic time, the position at which to insert a new counter. However, if new flows are concurrently inserted during the time window when flows with adjacent ids are already removed from the idr but are still present in the counters list, mlx5_fc_stats_work() observes the counters list in an inconsistent state, which results in the following warning:

  [ 1839.561955] mlx5_core 0000:81:00.0: mlx5_cmd_fc_bulk_get:587:(pid 729): Flow counter id (0x102d5) out of range (0x1c0a8..0x1c10b). Counter ignored.

Move the idr_remove() call to be executed synchronously with counter deletion from the list. Extract this code to the mlx5_fc_stats_remove() helper function that is called by the workqueue job handler mlx5_fc_stats_work().

Fixes: 12d6066c3b29 ("net/mlx5: Add flow counters idr")
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
* net/mlx5: Take fs_counters dellist before addlist (Vlad Buslov, 2018-10-18, 1 file, -5/+8)

In fs_counters, elements from both addlist and dellist are removed by mlx5_fc_stats_work() without any locking. This introduces a race condition when a batch of new rules is created and then immediately deleted (for example, when an error occurred during flow creation). In such a case some of the rules might be in dellist, but not in addlist, when mlx5_fc_stats_work() is executed concurrently with tc, which will result in rule deletion and use-after-free on the next iteration, because the deleted rules are still in addlist.

Always take dellist first to guarantee that rules can only be deleted after they were removed from addlist.

Fixes: 6e5e22839136 ("net/mlx5: Add new list to store deleted flow counters")
Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Reported-by: Chris Mi <chrism@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
* net/mlx5: Use flow counter IDs and not the wrapping cache object (Mark Bloch, 2018-10-17, 1 file, -0/+6)

Currently, when a flow rule is created using the FS core layer, the caller has to pass the entire flow counter object and not just the counter HW handle (ID). This requires both the FS core and the caller to have knowledge about the inner implementation of the FS layer flow counters cache, and limits the possible users.

Move to using the counter ID throughout the code when dealing with flows. With this decoupling, we can now privatize the inner implementation of the flow counters.

Signed-off-by: Mark Bloch <markb@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* net/mlx5: Add flow counters idr (Vlad Buslov, 2018-09-05, 1 file, -4/+33)

The previous patch in this series changed the flow counter storage structure from an rb_tree to a linked list in order to improve flow counter traversal performance. The drawback of that solution is that flow counter lookup by id becomes linear in complexity.

Store pointers to flow counters in an idr in order to make lookup performance logarithmic again. The idr is a non-intrusive data structure and doesn't require extending the flow counter struct with new elements. This means that the idr can be used for lookup, while the linked list from the previous patch is used for traversal, and struct mlx5_fc size is <= 2 cache lines.

Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Acked-by: Amir Vadai <amir@vadai.me>
Reviewed-by: Paul Blakey <paulb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
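The idea, sketched: the HW counter id is fixed by firmware, so the idr entry is pinned to exactly that id (start id equal to max id), while a plain list node serves traversal. The real driver keeps the list sorted by id; this sketch just appends, and all names are illustrative.

  #include <linux/idr.h>
  #include <linux/list.h>
  #include <linux/slab.h>
  #include <linux/types.h>

  struct fc_sketch {
      struct list_head list;   /* cheap traversal for the stats query */
      u32 id;                  /* HW counter id, assigned by firmware */
  };

  static int fc_track(struct idr *idr, struct list_head *counters,
                      struct fc_sketch *fc)
  {
      u32 id = fc->id;
      int err;

      /* start id == max id: insert at exactly fc->id or fail */
      err = idr_alloc_u32(idr, fc, &id, fc->id, GFP_KERNEL);
      if (err)
          return err;

      list_add_tail(&fc->list, counters);
      return 0;
  }

  /* logarithmic lookup by HW id later on */
  static struct fc_sketch *fc_lookup(struct idr *idr, u32 id)
  {
      return idr_find(idr, id);
  }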
* net/mlx5: Store flow counters in a list (Vlad Buslov, 2018-09-05, 1 file, -48/+40)

In order to improve the performance of the flow counter stats query loop that traverses all configured flow counters, replace the rb_tree with a double-linked list. This change improves the performance of traversing flow counters by removing the tree traversal (profiling data showed that the call to rb_next was the top CPU consumer).

However, lookup of a flow counter in the list becomes linear instead of logarithmic. This problem is fixed by the next patch in the series, which adds an idr for fast lookup. The idr is used because it is not an intrusive data structure and doesn't require adding any new members to struct mlx5_fc, which allows its control data part to stay <= 1 cache line in size.

Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Acked-by: Amir Vadai <amir@vadai.me>
Reviewed-by: Paul Blakey <paulb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* net/mlx5: Add new list to store deleted flow counters (Vlad Buslov, 2018-09-05, 1 file, -22/+12)

In order to prevent the flow counters stats work function from traversing the whole flow counters tree while searching for deleted flow counters, a new list to store deleted flow counters is added to struct mlx5_fc_stats. The lockless NULL-terminated single linked list data type is used for the following reasons:
- This use case only needs to add a single element to the list and remove/iterate the whole list. The lockless list doesn't require any additional synchronization for these operations.
- The first cache line of the flow counter data structure only has space to store a single additional pointer, which precludes usage of a double linked list.

Remove the flow counter 'deleted' flag that is no longer needed.

Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Acked-by: Amir Vadai <amir@vadai.me>
Reviewed-by: Paul Blakey <paulb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
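How such a lockless dellist is typically used, as a sketch with illustrative names: deleters push a single node with llist_add() without taking any lock, and the stats work later detaches everything in one shot with llist_del_all() and walks the detached chain.

  #include <linux/llist.h>
  #include <linux/slab.h>
  #include <linux/types.h>

  struct fc_del_sketch {
      struct llist_node dellist;   /* one pointer, fits the 1st cache line */
      u32 id;
  };

  static LLIST_HEAD(deleted_counters);

  static void fc_mark_deleted(struct fc_del_sketch *fc)
  {
      /* safe against concurrent adders without any lock */
      llist_add(&fc->dellist, &deleted_counters);
  }

  static void fc_stats_reap(void)
  {
      struct llist_node *list = llist_del_all(&deleted_counters);
      struct fc_del_sketch *fc, *tmp;

      llist_for_each_entry_safe(fc, tmp, list, dellist) {
          /* unlink from the traversal list and free the HW counter here */
          kfree(fc);
      }
  }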
* net/mlx5: Change flow counters addlist type to single linked list (Vlad Buslov, 2018-09-05, 1 file, -25/+20)

In order to prevent the flow counters stats work function from traversing the whole flow counters tree while searching for deleted flow counters, a new list to store deleted flow counters will be added to struct mlx5_fc_stats. However, the flow counter structure itself has no space left to store any more data in its first cache line. To free space that is needed to store the additional list node, convert the current addlist double linked list (two pointers per node) to an atomic single linked list (one pointer per node).

The lockless NULL-terminated single linked list data type doesn't require any additional external synchronization for the operations used by the flow counters module (add single new element, remove all elements from list and traverse them). Remove the addlist_lock that is no longer needed.

Signed-off-by: Vlad Buslov <vladbu@mellanox.com>
Acked-by: Amir Vadai <amir@vadai.me>
Reviewed-by: Paul Blakey <paulb@mellanox.com>
Reviewed-by: Roi Dayan <roid@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* net/mlx5: Export flow counter related API (Raed Salem, 2018-06-02, 1 file, -0/+3)

Exports the counters API to be used in both IB and EN.

Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Raed Salem <raeds@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
* net/mlx5: Use flow counter pointer as input to the query function (Or Gerlitz, 2018-06-02, 1 file, -2/+2)

This allows un-exposing the details of struct mlx5_fc and keeping it internal to the core driver.

Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
* net/mlx5e: E-switch, Add steering drop counters (Eugenia Emantayev, 2018-01-09, 1 file, -0/+6)

Add flow counters to count packets dropped due to drop rules configured in the eswitch egress and ingress ACLs. These counters count VF violations and incoming traffic drops, and are presented on the hypervisor via the standard 'ip -s link show' command.

Example: "ip -s link show dev enp5s0f0"

  6: enp5s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
      link/ether 24:8a:07:a5:28:f0 brd ff:ff:ff:ff:ff:ff
      RX: bytes  packets  errors  dropped  overrun  mcast
      0          0        0       0        0        2
      TX: bytes  packets  errors  dropped  carrier  collsns
      1406       17       0       0        0        0
      vf 0 MAC 00:00:ca:fe:ca:fe, vlan 5, spoof checking off, link-state auto, trust off, query_rss off
      RX: bytes  packets  mcast  bcast  dropped
      1666       29       14     32     0
      TX: bytes  packets  dropped
      2880       44       2412

Signed-off-by: Eugenia Emantayev <eugenia@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* net/mlx5: Increase the maximum flow counters supported (Rabie Loulou, 2017-08-07, 1 file, -3/+10)

Read the new NIC capability field which represents the 16 MSBs of the maximum number of flow counters supported (max_flow_counter_31_16). Backward compatibility with older firmware is preserved; the modified driver reads max_flow_counter_31_16 as 0 from the older firmware and uses up to 64K counters.

Changed the flow counter id from 16 bits to 32 bits. Backward compatibility with older firmware is preserved, as we kept the 16 LSBs of the counter id in place and added 16 MSBs from a reserved field.

Changed the background bulk reading of flow counters to work in chunks of at most 32K counters, to make sure we don't attempt to allocate very large buffers.

Signed-off-by: Rabie Loulou <rabiel@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* net/mlx5e: Update neighbour 'used' state using HW flow rules counters (Hadar Hen Zion, 2017-04-30, 1 file, -2/+22)

When IP tunnel encapsulation rules are offloaded, the kernel can't see the traffic of the offloaded flow. The neighbour for the IP tunnel destination of the offloaded flow can mistakenly become STALE and deleted by the kernel since its 'used' value wasn't changed.

To make sure that a neighbour which is used by the HW won't become STALE, we proactively update the neighbour 'used' value every DELAY_PROBE_TIME period, when packets were matched and counted by the HW for one of the tunnel encap flows related to this neighbour. The periodic task that updates the used neighbours is scheduled when a tunnel encap rule is successfully offloaded into HW and keeps re-scheduling itself as long as the representor's neighbours list isn't empty.

Add, remove, lookup and status change operations done over the representor's neighbours list or the neighbour hash entry encaps list are all serialized by the RTNL lock.

Signed-off-by: Hadar Hen Zion <hadarh@mellanox.com>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
* net/mlx5: use rb_entry() (Geliang Tang, 2016-12-20, 1 file, -1/+1)

To make the code clearer, use rb_entry() instead of container_of() to deal with the rbtree.

Signed-off-by: Geliang Tang <geliangtang@gmail.com>
Acked-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
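rb_entry() is simply container_of() spelled for rbtree users, so the change is purely cosmetic. A one-liner for illustration, with a stand-in struct rather than the driver's:

  #include <linux/rbtree.h>
  #include <linux/types.h>

  struct fc_rb_sketch {
      struct rb_node node;
      u32 id;
  };

  static struct fc_rb_sketch *fc_from_node(struct rb_node *rb)
  {
      /* identical to: container_of(rb, struct fc_rb_sketch, node) */
      return rb_entry(rb, struct fc_rb_sketch, node);
  }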
* net/mlx5: Correctly initialize last use of flow counters (Paul Blakey, 2016-10-29, 1 file, -0/+1)

Currently, the last use timestamp is initialized to zero. This is not the value expected by higher layers, such as when we do TC action offloading. To fix that, set it to the current time, i.e. when the counter/rule is offloaded. This matches the behaviour of non-offloaded TC actions.

Fixes: 43a335e055bb ('mlx5_core: Flow counters infrastructure')
Signed-off-by: Paul Blakey <paulb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* net/mlx5: Update last-use statistics for flow rules (Amir Vadai, 2016-08-19, 1 file, -1/+10)

Set the lastuse statistic when the number of packets has changed compared to the last query. This was wrongly dropped when bulk counter reading was added.

Fixes: a351a1b03bf1 ('net/mlx5: Introduce bulk reading of flow counters')
Signed-off-by: Amir Vadai <amirva@mellanox.com>
Reported-by: Paul Blakey <paulb@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* net/mlx5: Introduce bulk reading of flow counters (Amir Vadai, 2016-07-14, 1 file, -22/+61)

This commit utilizes the ability of the ConnectX-4 to bulk read flow counters. A few bulk counter queries can be issued instead of thousands of firmware commands per second to get statistics of all flows set to HW, such as those programmed when we offload tc filters.

Counters are stored sorted by hardware id, and queried in blocks (id + number of counters). Due to a hardware requirement, the start of a block and the number of counters in a block must be four-aligned.

Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Amir Vadai <amir@vadai.me>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
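A sketch of the resulting query loop under the stated hardware rule (block start and length both multiples of 4). query_block() stands in for the actual firmware bulk-query command, and the loop assumes max_bulk_len is itself a multiple of 4; none of these names are the driver's own.

  #include <linux/kernel.h>   /* ALIGN(), min() */
  #include <linux/types.h>

  static void query_block(u32 start_id, u32 len)
  {
      /* issue one bulk query FW command covering [start_id, start_id + len) */
  }

  static void query_all_counters(u32 first_id, u32 last_id, u32 max_bulk_len)
  {
      u32 id = first_id & ~3u;             /* align the block start down */

      while (id <= last_id) {
          /* four-aligned length, capped at the maximum bulk size */
          u32 len = min(max_bulk_len, ALIGN(last_id - id + 1, 4));

          query_block(id, len);
          id += len;
      }
  }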
* net/mlx5: Store counters in rbtree instead of list (Amir Vadai, 2016-07-14, 1 file, -10/+54)

In order to use bulk counters, we need to have counters sorted by id.

Signed-off-by: Amir Vadai <amir@vadai.me>
Reviewed-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Saeed Mahameed <saeedm@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
* net/mlx5_core: Flow counters infrastructure (Amir Vadai, 2016-05-16, 1 file, -0/+226)

If a counter has the aging flag set when created, it is added to a list of counters that will be queried periodically from a workqueue. The query result and the last use timestamp are cached. Adding/deleting a counter must be very efficient, since thousands of such operations might be issued in a second.

There is only a single reference to counters without aging, therefore there is no need for locks. Counters with aging enabled, however, are stored in a list. In order to make the code as lockless as possible, all the list manipulation and access to hardware is done from a single context - the periodic counters query thread.

The hardware supports multiple counters per FTE; however, currently we are using one counter for each FTE.

Signed-off-by: Amir Vadai <amirva@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
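The single-context design can be sketched as a self-rearming delayed work. Everything below (the names, the one-second period, the singlethread workqueue) is illustrative rather than the actual mlx5 code.

  #include <linux/errno.h>
  #include <linux/jiffies.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *fc_wq;
  static struct delayed_work fc_work;

  static void fc_stats_work_fn(struct work_struct *work)
  {
      /* single context that owns the aging list:
       * 1) splice in newly created aging counters
       * 2) reap counters queued for deletion
       * 3) query HW and cache {packets, bytes, lastuse} per counter */
      queue_delayed_work(fc_wq, to_delayed_work(work), HZ);   /* re-arm */
  }

  static int fc_stats_init_sketch(void)
  {
      fc_wq = create_singlethread_workqueue("fc_stats_sketch");
      if (!fc_wq)
          return -ENOMEM;

      INIT_DELAYED_WORK(&fc_work, fc_stats_work_fn);
      queue_delayed_work(fc_wq, &fc_work, HZ);
      return 0;
  }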