| author | NeilBrown <neilb@suse.com> | 2015-08-14 11:26:17 +1000 |
| --- | --- | --- |
| committer | NeilBrown <neilb@suse.com> | 2015-08-31 19:43:45 +0200 |
| commit | 95af587e95aacb9cfda4a9641069a5244a540dc8 (patch) | |
| tree | 7f891cb4a1cd2b31e3543f05c95fab9baba0aef9 /fs | |
| parent | 55ce74d4bfe1b9444436264c637f39a152d1e5ac (diff) | |
md/raid10: ensure device failure recorded before write request returns.
When a write to one of the legs of a RAID10 fails, the failure is
recorded in the metadata of the other legs so that after a restart
the data on the failed drive won't be trusted even if that drive seems
to be working again (maybe a cable was unplugged).
Currently there is no interlock between the write request completing
and the metadata update. So it is possible that the write will
complete, the app will confirm success in some way, and then the
machine will crash before the metadata update completes.
This is an extremely small hole for a race to fit in, but it is
theoretically possible and so should be closed.
So:
- set MD_CHANGE_PENDING when requesting a metadata update for a
failed device, so we can know with certainty when it completes
- queue requests that experienced an error on a new queue which
is only processed after the metadata update completes
- call raid_end_bio_io() on bios in that queue when the time comes.
Signed-off-by: NeilBrown <neilb@suse.com>