author    | Santiago Leon <santil@us.ibm.com>       | 2006-04-25 11:19:59 -0500
committer | Jeff Garzik <jeff@garzik.org>           | 2006-05-24 01:30:37 -0400
commit    | 860f242eb5340d0b0cfe243cb86b2a98f92e8b91 (patch)
tree      | 286d64b4acfc392bcb926a6f5f7bfb311b0d3efc /drivers/net/ibmveth.h
parent    | 7b32a312895c00ff03178e49db8b651ee1e48178 (diff)
download  | linux-860f242eb5340d0b0cfe243cb86b2a98f92e8b91.tar.gz
          | linux-860f242eb5340d0b0cfe243cb86b2a98f92e8b91.tar.bz2
          | linux-860f242eb5340d0b0cfe243cb86b2a98f92e8b91.zip
[PATCH] ibmveth change buffer pools dynamically
This patch provides a sysfs interface to change some properties of the
ibmveth buffer pools (size of the buffers, number of buffers per pool,
and whether a pool is active). Ethernet drivers normally use ethtool to
provide this type of functionality. However, the buffers in the ibmveth
driver can have an arbitrary size (not just the regular, mini, and jumbo
sizes that ethtool can change), and ibmveth can also have an arbitrary
number of buffer pools.
Under heavy load we have seen dropped packets, which obviously kills TCP
performance. We have created several fixes that mitigate this issue,
but we definitely need a way of changing the number of buffers for an
adapter dynamically. Changing the size of the buffers also allows
users to raise the MTU to something large (bigger than a jumbo frame),
greatly improving performance on partition-to-partition transfers.
The patch creates directories pool1...pool4 in the device directory in
sysfs, each containing the files num, size, and active (which default to
the values in the mainline version).
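As a usage sketch only (the device path and pool directory below are
assumptions for illustration, not taken from the patch), a small userspace
program could resize and activate one pool through these files:

```c
/* Illustrative sketch: tune one ibmveth buffer pool via the new sysfs files.
 * The device directory "/sys/devices/vio/30000002" is a hypothetical
 * example; use the adapter's actual device directory in sysfs. */
#include <stdio.h>
#include <stdlib.h>

/* Write a single value string to a sysfs attribute file. */
static int write_attr(const char *path, const char *value)
{
	FILE *f = fopen(path, "w");
	int ok;

	if (!f) {
		perror(path);
		return -1;
	}
	ok = (fputs(value, f) != EOF);
	if (fclose(f) == EOF)
		ok = 0;
	if (!ok)
		perror(path);
	return ok ? 0 : -1;
}

int main(void)
{
	/* Hypothetical adapter directory; "pool1" follows the naming used in
	 * the commit message (pool1...pool4). */
	if (write_attr("/sys/devices/vio/30000002/pool1/size", "65536"))
		return EXIT_FAILURE;
	if (write_attr("/sys/devices/vio/30000002/pool1/num", "512"))
		return EXIT_FAILURE;
	if (write_attr("/sys/devices/vio/30000002/pool1/active", "1"))
		return EXIT_FAILURE;
	return EXIT_SUCCESS;
}
```

Writes to num and size would presumably only take effect within whatever
limits the driver enforces; the header change below adds
IBMVETH_MAX_POOL_COUNT and IBMVETH_MAX_BUF_SIZE for that kind of bound.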
Comments and suggestions are welcome...
--
Santiago A. Leon
Power Linux Development
IBM Linux Technology Center
Signed-off-by: Jeff Garzik <jeff@garzik.org>
Diffstat (limited to 'drivers/net/ibmveth.h')
-rw-r--r-- | drivers/net/ibmveth.h | 7
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/net/ibmveth.h b/drivers/net/ibmveth.h
index 46919a814fca..b526dda06900 100644
--- a/drivers/net/ibmveth.h
+++ b/drivers/net/ibmveth.h
@@ -75,10 +75,13 @@
 #define IbmVethNumBufferPools 5
 #define IBMVETH_BUFF_OH 22 /* Overhead: 14 ethernet header + 8 opaque handle */
+#define IBMVETH_MAX_MTU 68
+#define IBMVETH_MAX_POOL_COUNT 4096
+#define IBMVETH_MAX_BUF_SIZE (1024 * 128)
 
-/* pool_size should be sorted */
 static int pool_size[] = { 512, 1024 * 2, 1024 * 16, 1024 * 32, 1024 * 64 };
 static int pool_count[] = { 256, 768, 256, 256, 256 };
+static int pool_active[] = { 1, 1, 0, 0, 0};
 
 #define IBM_VETH_INVALID_MAP ((u16)0xffff)
 
@@ -94,6 +97,7 @@ struct ibmveth_buff_pool {
 	dma_addr_t *dma_addr;
 	struct sk_buff **skbuff;
 	int active;
+	struct kobject kobj;
 };
 
 struct ibmveth_rx_q {
@@ -118,6 +122,7 @@ struct ibmveth_adapter {
 	dma_addr_t filter_list_dma;
 	struct ibmveth_buff_pool rx_buff_pool[IbmVethNumBufferPools];
 	struct ibmveth_rx_q rx_queue;
+	int pool_config;
 
 	/* adapter specific stats */
 	u64 replenish_task_cycles;
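The header itself only adds the limit constants and the per-pool kobject;
the sysfs handlers live in the ibmveth.c side of the patch, which this
diffstat excludes. As a rough, non-authoritative sketch of how limits like
IBMVETH_MAX_POOL_COUNT and IBMVETH_MAX_BUF_SIZE could be used to validate a
user-requested pool configuration (function name and structure invented for
illustration, not the driver's actual code):

```c
/* Standalone sketch: check requested pool parameters against the limits
 * introduced by this patch before any reallocation would be attempted. */
#include <stdbool.h>
#include <stdio.h>

#define IBMVETH_MAX_POOL_COUNT 4096
#define IBMVETH_MAX_BUF_SIZE   (1024 * 128)

/* Return true if a requested (count, size) pair stays within the limits. */
static bool pool_request_ok(unsigned long count, unsigned long size)
{
	if (count == 0 || count > IBMVETH_MAX_POOL_COUNT)
		return false;
	if (size == 0 || size > IBMVETH_MAX_BUF_SIZE)
		return false;
	return true;
}

int main(void)
{
	/* 512 buffers of 64 KiB fit the limits; 256 KiB buffers do not. */
	printf("%d\n", pool_request_ok(512, 64 * 1024));   /* prints 1 */
	printf("%d\n", pool_request_ok(512, 256 * 1024));  /* prints 0 */
	return 0;
}
```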