| author | Eric Dumazet <eric.dumazet@gmail.com> | 2010-12-20 12:54:58 +0000 |
|---|---|---|
| committer | David S. Miller <davem@davemloft.net> | 2010-12-20 21:32:59 -0800 |
| commit | eda83e3b63e88351310c13c99178eb4634f137b2 (patch) | |
| tree | 55b9c1f75337a8ca4032e607405e370b437c398e /net/sched/cls_rsvp.c | |
| parent | d9993be65a77f500ae926176baa264816bfe3816 (diff) | |
net_sched: sch_sfq: better struct layouts
Here is a respin of the patch.
I'll send a short patch to make SFQ more fair in the presence of large packets as well.
Thanks
[PATCH v3 net-next-2.6] net_sched: sch_sfq: better struct layouts
This patch shrinks sizeof(struct sfq_sched_data) from 0x14f8 bytes (or more if spinlocks are bigger) to 0x1180 bytes, and reduces text size as well.
text data bss dec hex filename
4821 152 0 4973 136d old/net/sched/sch_sfq.o
4627 136 0 4763 129b new/net/sched/sch_sfq.o
All data for a slot/flow is now grouped in a compact, cache-friendly structure instead of being spread across many different places:
struct sfq_slot {
        struct sk_buff  *skblist_next;
        struct sk_buff  *skblist_prev;
        sfq_index       qlen;   /* number of skbs in skblist */
        sfq_index       next;   /* next slot in sfq chain */
        struct sfq_head dep;    /* anchor in dep[] chains */
        unsigned short  hash;   /* hash value (index in ht[]) */
        short           allot;  /* credit for this slot */
};
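To illustrate how a slot can own its packet list once the two skblist pointers live inside it, here is a minimal user-space sketch of enqueue/dequeue helpers. This is not the kernel code: struct sk_buff is reduced to a stub, the helper names (slot_queue_init, slot_queue_add, slot_dequeue_head) are illustrative, and the real sch_sfq.c may instead let the slot itself serve as a circular list anchor.

/* Minimal sketch only -- a stand-in for the kernel's sk_buff. */
#include <stddef.h>
#include <stdio.h>

struct sk_buff {
        struct sk_buff *next;
        struct sk_buff *prev;
        int len;
};

/* Reduced sfq_slot: just the fields needed for the list demo. */
struct sfq_slot {
        struct sk_buff *skblist_next;   /* head of this slot's skb list */
        struct sk_buff *skblist_prev;   /* tail of this slot's skb list */
        unsigned int qlen;              /* number of skbs in skblist */
};

static void slot_queue_init(struct sfq_slot *slot)
{
        slot->skblist_next = slot->skblist_prev = NULL;
        slot->qlen = 0;
}

/* Append one skb at the tail of the slot's own list. */
static void slot_queue_add(struct sfq_slot *slot, struct sk_buff *skb)
{
        skb->next = NULL;
        skb->prev = slot->skblist_prev;
        if (slot->skblist_prev)
                slot->skblist_prev->next = skb;
        else
                slot->skblist_next = skb;
        slot->skblist_prev = skb;
        slot->qlen++;
}

/* Detach and return the skb at the head, or NULL if the slot is empty. */
static struct sk_buff *slot_dequeue_head(struct sfq_slot *slot)
{
        struct sk_buff *skb = slot->skblist_next;

        if (!skb)
                return NULL;
        slot->skblist_next = skb->next;
        if (skb->next)
                skb->next->prev = NULL;
        else
                slot->skblist_prev = NULL;
        skb->next = skb->prev = NULL;
        slot->qlen--;
        return skb;
}

int main(void)
{
        struct sfq_slot slot;
        struct sk_buff a = { .len = 100 }, b = { .len = 1500 };

        slot_queue_init(&slot);
        slot_queue_add(&slot, &a);
        slot_queue_add(&slot, &b);

        while (slot.qlen) {
                struct sk_buff *skb = slot_dequeue_head(&slot);
                printf("dequeued skb of len %d\n", skb->len);
        }
        return 0;
}

Because every pointer and counter touched on enqueue/dequeue sits in the same small structure, the hot fields of a flow are likely to share a cache line, which is the cache-friendliness argument made above.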
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Jarek Poplawski <jarkao2@gmail.com>
Cc: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>