author | Christoph Lameter <clameter@sgi.com> | 2008-04-14 19:11:41 +0300
---|---|---
committer | Pekka Enberg <penberg@cs.helsinki.fi> | 2008-04-27 18:28:40 +0300
commit | 9b2cd506e5f2117f94c28a0040bf5da058105316 (patch) |
tree | 670857a0cea6723124301901cb5261811c64d701 /mm/slub.c |
parent | 114e9e89e668ec561c9b0f3dea7bcc8af7c29d21 (diff) |
download | linux-9b2cd506e5f2117f94c28a0040bf5da058105316.tar.gz linux-9b2cd506e5f2117f94c28a0040bf5da058105316.tar.bz2 linux-9b2cd506e5f2117f94c28a0040bf5da058105316.zip |
slub: Calculate min_objects based on number of processors.
The minimum number of objects per slab is calculated based on the number of processors
that may come online:
Processors | min_objects
---|---
1 | 8
2 | 12
4 | 16
8 | 20
16 | 24
32 | 28
64 | 32
1024 | 48
4096 | 56
The higher the number of processors, the larger the order sizes used for various
slab caches become. This has been shown to address the performance issues seen
with hackbench on 16p and similar configurations.
The calculation is only performed if slub_min_objects is zero (the default). If
slub_min_objects is specified at boot time, that setting is used instead.
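For context, the boot-time override mentioned above comes from SLUB's existing `slub_min_objects=` kernel parameter. The sketch below shows roughly how such an early parameter hook looks in mm/slub.c; it is not part of this diff, and the handler name and details are an approximation recalled from the surrounding code rather than a quote of the file.

```c
/*
 * Approximate shape of SLUB's existing boot-parameter hook (not part of
 * this patch): "slub_min_objects=N" on the kernel command line stores N
 * in slub_min_objects, so the new default calculation is skipped.
 */
static int __init setup_slub_min_objects(char *str)
{
        get_option(&str, &slub_min_objects);

        return 1;
}

__setup("slub_min_objects=", setup_slub_min_objects);
```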
As suggested by Zhang Yanmin's performance tests on 16-core Tigerton, use the
formula '4 * (fls(nr_cpu_ids) + 1)':
./hackbench 100 process 2000:
1) 2.6.25-rc6slab: 23.5 seconds
2) 2.6.25-rc7SLUB+slub_min_objects=20: 31 seconds
3) 2.6.25-rc7SLUB+slub_min_objects=24: 23.5 seconds
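The table above follows directly from that formula. As a quick check, the stand-alone userspace program below (an illustration, not kernel code; it uses a local fls() helper that mimics the kernel's "find last set bit" semantics) reproduces the min_objects column for the listed CPU counts:

```c
/*
 * Userspace illustration of the heuristic added by this patch:
 * min_objects = 4 * (fls(nr_cpu_ids) + 1).
 */
#include <stdio.h>

/* Position of the highest set bit, 1-based; 0 for an input of 0. */
static int fls(unsigned int x)
{
        int r = 0;

        while (x) {
                x >>= 1;
                r++;
        }
        return r;
}

int main(void)
{
        unsigned int cpus[] = { 1, 2, 4, 8, 16, 32, 64, 1024, 4096 };
        unsigned int i;

        for (i = 0; i < sizeof(cpus) / sizeof(cpus[0]); i++)
                printf("%4u CPUs -> min_objects = %d\n",
                       cpus[i], 4 * (fls(cpus[i]) + 1));
        return 0;
}
```

For 16 CPUs this yields 4 * (5 + 1) = 24, matching case 3) of the hackbench results.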
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
Diffstat (limited to 'mm/slub.c')
-rw-r--r-- | mm/slub.c | 4 |
1 file changed, 3 insertions, 1 deletion
```diff
diff --git a/mm/slub.c b/mm/slub.c
index 6572cef0c43c..e2e6ba7a5172 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1803,7 +1803,7 @@ static struct page *get_object_page(const void *x)
  */
 static int slub_min_order;
 static int slub_max_order = PAGE_ALLOC_COSTLY_ORDER;
-static int slub_min_objects = 4;
+static int slub_min_objects;
 
 /*
  * Merge control. If this is set then no merging of slab caches will occur.
@@ -1880,6 +1880,8 @@ static inline int calculate_order(int size)
         * we reduce the minimum objects required in a slab.
         */
        min_objects = slub_min_objects;
+       if (!min_objects)
+               min_objects = 4 * (fls(nr_cpu_ids) + 1);
        while (min_objects > 1) {
                fraction = 8;
                while (fraction >= 4) {
```
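To see why a larger min_objects pushes slab caches toward larger orders, the simplified userspace model below captures only the basic sizing rule behind calculate_order(): the slab must hold at least min_objects objects, so the chosen page order grows with min_objects. It deliberately omits SLUB's waste-fraction search and the slub_min_order handling; the page size, the order cap, and the example object size are assumptions for illustration only.

```c
/*
 * Simplified model of the basic sizing rule behind SLUB's
 * calculate_order(): pick the smallest page order whose slab can hold
 * at least min_objects objects of the given size. The real code also
 * weighs leftover space (the "fraction" loop), which is omitted here.
 */
#include <stdio.h>

#define PAGE_SIZE       4096UL
#define MAX_ORDER       3       /* stand-in for slub_max_order */

static int min_order_for(unsigned long size, unsigned long min_objects)
{
        int order;

        for (order = 0; order <= MAX_ORDER; order++) {
                unsigned long slab_size = PAGE_SIZE << order;

                if (slab_size / size >= min_objects)
                        return order;
        }
        return MAX_ORDER;       /* capped, as with slub_max_order */
}

int main(void)
{
        unsigned long size = 256;       /* example object size in bytes */

        printf("min_objects= 8 -> order %d\n", min_order_for(size, 8));
        printf("min_objects=24 -> order %d\n", min_order_for(size, 24));
        printf("min_objects=56 -> order %d\n", min_order_for(size, 56));
        return 0;
}
```

With 256-byte objects and 4 KiB pages, raising min_objects from 8 to 24 moves the order from 0 to 1, and 56 objects require order 2, which is the effect the larger per-CPU defaults have on bigger machines.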