author     Sunil Goutham <sgoutham@cavium.com>    2016-12-01 18:24:28 +0530
committer  David S. Miller <davem@davemloft.net>  2016-12-02 13:32:59 -0500
commit     bd3ad7d3a14b07aeeb4f92abc757672719e2a0eb (patch)
tree       401add62a6c03cc480f2e4addba361eafe87abff /drivers/net/ethernet/cavium/thunder/nicvf_queues.h
parent     9aac3c18790cb3510810fafc7ba115e31d49f3bf (diff)
net: thunderx: Fix transmit queue timeout issue
Transmit queue timeout issue is seen in two cases
- Due to a race condition between setting stop_queue at xmit()
  and checking for stopped_queue in the NAPI poll routine, transmission
  from a SQ at times comes to a halt. This is fixed by using barriers,
  and a check for SQ free descriptors has been added for the case where
  the SQ is stopped and the CQ holds only CQE_RX entries i.e. no CQE_TX.
- Contrary to an earlier assumption, a HW errata where HW doesn't stop
  transmission even though there are not enough CQEs available for a
  CQE_TX is not fixed in T88 pass 2.x. This results in a Qset error
  with 'CQ_WR_FULL' stalling transmission. This is fixed by adjusting
  the RXQ's RED levels for the CQ such that there is always enough
  space left for CQE_TXs.
Signed-off-by: Sunil Goutham <sgoutham@cavium.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'drivers/net/ethernet/cavium/thunder/nicvf_queues.h')
-rw-r--r--  drivers/net/ethernet/cavium/thunder/nicvf_queues.h | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_queues.h b/drivers/net/ethernet/cavium/thunder/nicvf_queues.h
index 20511f2cb134..9e2104675bc9 100644
--- a/drivers/net/ethernet/cavium/thunder/nicvf_queues.h
+++ b/drivers/net/ethernet/cavium/thunder/nicvf_queues.h
@@ -88,13 +88,13 @@
 /* RED and Backpressure levels of CQ for pkt reception
  * For CQ, level is a measure of emptiness i.e 0x0 means full
- * eg: For CQ of size 4K, and for pass/drop levels of 128/96
- * HW accepts pkt if unused CQE >= 2048
- * RED accepts pkt if unused CQE < 2048 & >= 1536
- * DROPs pkts if unused CQE < 1536
+ * eg: For CQ of size 4K, and for pass/drop levels of 160/144
+ * HW accepts pkt if unused CQE >= 2560
+ * RED accepts pkt if unused CQE < 2560 & >= 2304
+ * DROPs pkts if unused CQE < 2304
  */
-#define RQ_PASS_CQ_LVL         128ULL
-#define RQ_DROP_CQ_LVL         96ULL
+#define RQ_PASS_CQ_LVL         160ULL
+#define RQ_DROP_CQ_LVL         144ULL

 /* RED and Backpressure levels of RBDR for pkt reception
  * For RBDR, level is a measure of fullness i.e 0x0 means empty
@@ -306,7 +306,8 @@ void nicvf_sq_disable(struct nicvf *nic, int qidx);
 void nicvf_put_sq_desc(struct snd_queue *sq, int desc_cnt);
 void nicvf_sq_free_used_descs(struct net_device *netdev,
 			      struct snd_queue *sq, int qidx);
-int nicvf_sq_append_skb(struct nicvf *nic, struct sk_buff *skb);
+int nicvf_sq_append_skb(struct nicvf *nic, struct snd_queue *sq,
+			struct sk_buff *skb, u8 sq_num);
 struct sk_buff *nicvf_get_rcv_skb(struct nicvf *nic, struct cqe_rx_t *cqe_rx);
 void nicvf_rbdr_task(unsigned long data);