| author | NeilBrown <neilb@suse.de> | 2014-12-15 12:56:56 +1100 |
|---|---|---|
| committer | NeilBrown <neilb@suse.de> | 2015-02-04 08:35:52 +1100 |
| commit | 5c675f83c68fbdf9c0e103c1090b06be747fa62c (patch) | |
| tree | 9a03f84c7a3bcef7d5e757dc28ce7bd5d205b26a /drivers/md/raid0.c | |
| parent | 85572d7c75fd5b9fa3fc911e1c99c68ec74903a0 (diff) | |
md: make ->congested robust against personality changes.
There is currently no locking around calls to the 'congested'
bdi function. If called at an awkward time while an array is
being converted from one level (or personality) to another, there
is a tiny chance of running code in an unreferenced module etc.
So add a 'congested' function to the md_personality operations
structure, and call it with appropriate locking from a central
'mddev_congested'.
When the array personality is changing the array will be 'suspended'
so no IO is processed.
If mddev_congested detects this, it simply reports that the
array is congested, which is a safe guess.
As mddev_suspend calls synchronize_rcu(), mddev_congested can
avoid races by including the whole call inside an rcu_read_lock()
region.
This requires that the congested functions for all subordinate devices
can be run under rcu_read_lock(). Fortunately this is the case.
Signed-off-by: NeilBrown <neilb@suse.de>
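
For context, here is a minimal sketch of the central dispatcher described above, assuming the shape of the md.h/md.c side of this commit (which lies outside this raid0.c-only view): the personality supplies a 'congested' hook in struct md_personality, and mddev_congested reports congestion unconditionally while the array is suspended, otherwise calling the hook inside an rcu_read_lock() region.

```c
/* Sketch only -- the real definitions live in drivers/md/md.h and md.c,
 * which are not shown in this per-file diff; shapes are inferred from
 * the commit message above.
 */
struct md_personality {
	/* ... existing members (make_request, run, stop, quiesce, ...) ... */
	/* implements the bdi congested_fn for this personality */
	int (*congested)(struct mddev *mddev, int bits);
};

int mddev_congested(struct mddev *mddev, int bits)
{
	struct md_personality *pers = mddev->pers;
	int ret = 0;

	rcu_read_lock();
	if (mddev->suspended)
		ret = 1;	/* safe guess while the personality is changing */
	else if (pers && pers->congested)
		ret = pers->congested(mddev, bits);
	rcu_read_unlock();
	return ret;
}
```

Because mddev_suspend() issues synchronize_rcu() before the personality is swapped, any congested call that began inside this rcu_read_lock() section completes before the old personality module can be dropped.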
Diffstat (limited to 'drivers/md/raid0.c')
-rw-r--r-- | drivers/md/raid0.c | 9 |
1 file changed, 2 insertions(+), 7 deletions(-)
```diff
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index ba6b85de96d2..4b521eac5b69 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -25,17 +25,13 @@
 #include "raid0.h"
 #include "raid5.h"
 
-static int raid0_congested(void *data, int bits)
+static int raid0_congested(struct mddev *mddev, int bits)
 {
-	struct mddev *mddev = data;
 	struct r0conf *conf = mddev->private;
 	struct md_rdev **devlist = conf->devlist;
 	int raid_disks = conf->strip_zone[0].nb_dev;
 	int i, ret = 0;
 
-	if (mddev_congested(mddev, bits))
-		return 1;
-
 	for (i = 0; i < raid_disks && !ret ; i++) {
 		struct request_queue *q = bdev_get_queue(devlist[i]->bdev);
 
@@ -263,8 +259,6 @@ static int create_strip_zones(struct mddev *mddev, struct r0conf **private_conf)
 			mdname(mddev),
 			(unsigned long long)smallest->sectors);
 	}
-	mddev->queue->backing_dev_info.congested_fn = raid0_congested;
-	mddev->queue->backing_dev_info.congested_data = mddev;
 
 	/*
 	 * now since we have the hard sector sizes, we can make sure
@@ -729,6 +723,7 @@ static struct md_personality raid0_personality=
 	.size		= raid0_size,
 	.takeover	= raid0_takeover,
 	.quiesce	= raid0_quiesce,
+	.congested	= raid0_congested,
 };
 
 static int __init raid0_init (void)
```
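
The two backing_dev_info lines removed from create_strip_zones() are no longer needed because the bdi congested_fn is now registered once, centrally, and dispatches through mddev_congested(). A sketch of what that central registration could look like on the md.c side (again not part of this per-file diffstat, and inferred from the commit message):

```c
/* Sketch, assuming the md.c half of this commit: one bdi congested_fn
 * shared by every personality, instead of per-personality registration.
 */
static int md_congested(void *data, int bits)
{
	struct mddev *mddev = data;

	return mddev_congested(mddev, bits);
}

/*
 * During array setup (e.g. in md_run()), roughly:
 *
 *	mddev->queue->backing_dev_info.congested_data = mddev;
 *	mddev->queue->backing_dev_info.congested_fn = md_congested;
 */
```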