author		Benjamin LaHaise <bcrl@kvack.org>	2006-01-03 14:06:50 -0800
committer	David S. Miller <davem@davemloft.net>	2006-01-03 14:06:50 -0800
commit		4947d3ef8de7b4f42aed6ea9ba689dc8fb45b5a5 (patch)
tree		a4e77f0271702e4ff34a7a9e0c9598a3807204ee /include
parent		17ba15fb6264f27374bc87f4c3f8519b80289d85 (diff)
[NET]: Speed up __alloc_skb()
From: Benjamin LaHaise <bcrl@kvack.org>
In __alloc_skb(), the use of skb_shinfo(), which casts a u8 * to the
shared info structure, forces gcc to reload the pointer on every access
since it has no information about possible aliasing. Fix this by using a
local pointer to refer to the skb_shared_info.
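As a rough sketch of the idea (this is not the actual hunk from the patch,
which changes net/core/skbuff.c and is not part of this include/-limited
diffstat; the local variable name shinfo is illustrative):

	/* Before: each store goes through skb_shinfo(skb), a cast from the
	 * u8 *data pointer, so gcc must re-derive the pointer every time. */
	atomic_set(&(skb_shinfo(skb)->dataref), 1);
	skb_shinfo(skb)->nr_frags = 0;

	/* After: cache the pointer once in a local and reuse it. */
	struct skb_shared_info *shinfo = skb_shinfo(skb);
	atomic_set(&shinfo->dataref, 1);
	shinfo->nr_frags = 0;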
By initializing skb_shared_info sequentially, the write combining buffers
can reduce the number of memory transactions to a single write. Reorder
the initialization in __alloc_skb() to match the structure definition.
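A sketch of the reordered initialization, assuming the 2.6.15-era field
order of skb_shared_info (dataref, nr_frags, tso_size, tso_segs, ufo_size,
ip6_frag_id, frag_list); again, the skbuff.c side of the patch is not shown
in the diff below:

	shinfo = skb_shinfo(skb);
	/* Stores in declaration order land in consecutive locations and
	 * can be merged by the CPU's write-combining buffers. */
	atomic_set(&shinfo->dataref, 1);
	shinfo->nr_frags = 0;
	shinfo->tso_size = 0;
	shinfo->tso_segs = 0;
	shinfo->ufo_size = 0;
	shinfo->ip6_frag_id = 0;
	shinfo->frag_list = NULL;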
There is also an alignment issue with skb_shared_info on 64 bit systems;
by converting nr_frags to a short, everything packs up nicely.
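For illustration, assuming the structure continues with ip6_frag_id (an
unsigned int) and frag_list (a pointer) as in the sketch above, the 64 bit
layout works out roughly as follows:

	/*
	 * unsigned int nr_frags:   dataref(4) nr_frags(4) shorts(6) pad(2)
	 *                          ip6_frag_id(4) pad(4) frag_list(8) = 32 bytes
	 * unsigned short nr_frags: dataref(4) nr_frags(2) shorts(6)
	 *                          ip6_frag_id(4) frag_list(8) = 24 bytes
	 */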
Also, pass the slab cache pointer according to the fclone flag instead
of using two almost identical function calls.
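A minimal sketch of that change, assuming the skbuff_head_cache and
skbuff_fclone_cache slab caches and the fclone and gfp_mask arguments of
__alloc_skb() at the time:

	kmem_cache_t *cache;
	struct sk_buff *skb;

	/* Select the slab cache once instead of duplicating the
	 * kmem_cache_alloc() call in an if/else. */
	cache = fclone ? skbuff_fclone_cache : skbuff_head_cache;
	skb = kmem_cache_alloc(cache, gfp_mask & ~__GFP_DMA);
	if (!skb)
		goto out;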
This raises bw_unix performance up to a peak of 707KB/s when combined
with the spinlock patch. It should help other networking protocols, too.
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'include')
 include/linux/skbuff.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
index 971677178e0c..483cfc47ec34 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -133,7 +133,7 @@ struct skb_frag_struct {
  */
 struct skb_shared_info {
 	atomic_t	dataref;
-	unsigned int	nr_frags;
+	unsigned short	nr_frags;
 	unsigned short	tso_size;
 	unsigned short	tso_segs;
 	unsigned short	ufo_size;