author    NeilBrown <neilb@suse.de>  2014-01-14 10:38:09 +1100
committer NeilBrown <neilb@suse.de>  2014-01-14 16:44:07 +1100
commit    b50c259e25d9260b9108dc0c2964c26e5ecbe1c1 (patch)
tree      146cf33b0f029effcf8c9c413627558979093ce7 /drivers
parent    1cc03eb93245e63b0b7a7832165efdc52e25b4e6 (diff)
md/raid10: fix two bugs in handling of known-bad-blocks.
If we discover a bad block when reading we split the request and
potentially read some of it from a different device.

The code path for this has two bugs in RAID10:
1/ we get a spin_lock with _irq, but unlock without _irq!!
2/ the calculation of 'sectors_handled' is wrong, as can be clearly
   seen by comparison with raid1.c

This leads to at least 2 warnings and a probable crash if a RAID10
ever had known bad blocks.

Cc: stable@vger.kernel.org (v3.1+)
Fixes: 856e08e23762dfb92ffc68fd0a8d228f9e152160
Reported-by: Damian Nowak <spam@nowaker.net>
URL: https://bugzilla.kernel.org/show_bug.cgi?id=68181
Signed-off-by: NeilBrown <neilb@suse.de>
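For context, bug 2 is easy to demonstrate in isolation. The snippet below is a
minimal userspace sketch, not kernel code: the struct and field names only
mimic struct r10bio and struct bio, and the sector numbers are invented. It
shows why using the sector count (r10_bio->sectors) in place of the starting
sector (r10_bio->sector) produces a nonsensical 'sectors_handled'. (Bug 1, the
unbalanced spin_lock_irq()/spin_unlock() pair, needs no example: unlocking
without the _irq suffix leaves local interrupts disabled.)

/* Standalone illustration of the sectors_handled fix (made-up values,
 * not kernel code).  sector_t is u64 in the kernel; a signed type is
 * used here only to keep the arithmetic easy to read.
 */
#include <stdio.h>

typedef long long sector_t;

struct fake_r10bio {
	sector_t sector;	/* first logical sector covered by this r10_bio */
	int sectors;		/* number of sectors it covers */
};

struct fake_bio {
	sector_t bi_sector;	/* first sector of the original request */
};

int main(void)
{
	/* The original bio asks for 64 sectors starting at 1000, but a
	 * known bad block limits this r10_bio to max_sectors = 8.
	 */
	struct fake_bio bio = { .bi_sector = 1000 };
	struct fake_r10bio r10_bio = { .sector = 1000, .sectors = 64 };
	int max_sectors = 8;

	/* Buggy form: mixes a sector *count* into sector arithmetic. */
	sector_t buggy = r10_bio.sectors + max_sectors - bio.bi_sector;

	/* Fixed form (same as raid1.c): how far into the original bio
	 * this first chunk reaches.
	 */
	sector_t fixed = r10_bio.sector + max_sectors - bio.bi_sector;

	printf("buggy sectors_handled = %lld\n", buggy);	/* -928 */
	printf("fixed sectors_handled = %lld\n", fixed);	/* 8 */
	return 0;
}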
Diffstat (limited to 'drivers')
-rw-r--r--  drivers/md/raid10.c | 4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index c504e8389e69..65285211568f 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1319,7 +1319,7 @@ read_again:
/* Could not read all from this device, so we will
* need another r10_bio.
*/
- sectors_handled = (r10_bio->sectors + max_sectors
+ sectors_handled = (r10_bio->sector + max_sectors
- bio->bi_sector);
r10_bio->sectors = max_sectors;
spin_lock_irq(&conf->device_lock);
@@ -1327,7 +1327,7 @@ read_again:
bio->bi_phys_segments = 2;
else
bio->bi_phys_segments++;
- spin_unlock(&conf->device_lock);
+ spin_unlock_irq(&conf->device_lock);
/* Cannot call generic_make_request directly
* as that will be queued in __generic_make_request
* and subsequent mempool_alloc might block