author     Tomasz Majchrzak <tomasz.majchrzak@intel.com>  2017-12-27 10:31:40 +0100
committer  Shaohua Li <sh.li@alibaba-inc.com>  2018-01-15 14:29:42 -0800
commit     1532d9e87e8b2377f12929f9e40724d5fbe6ecc5
tree       fa8ec94368dfff1ec93b0366833e6f5d3cbcc70c /drivers/md/raid5.c
parent     92e6245deab80f0934a102ba969d8b891b8ba5bf
raid5-ppl: PPL support for disks with write-back cache enabled
In order to provide data consistency with PPL for disks with write-back
cache enabled, all data has to be flushed to the disks before the next PPL
entry is written. The disks to be flushed are marked in a bitmap. The
bitmap is modified under a mutex and is only read after the PPL io unit
has been submitted.
A limit of 64 disks in the array has been introduced to keep the data
structures and implementation simple. RAID5 arrays with that many disks
are unlikely anyway, given the high risk of multiple disk failures, so
the restriction should not be a real-life limitation.
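The mechanism can be modeled in a few lines of user-space C. The sketch below is illustrative only (ppl_model, mark_disk_dirty and take_flush_set are hypothetical names, not the kernel's data structures): a 64-bit word records which member disks carry data covered by the pending PPL entry, writers set bits under a mutex, and the submission path consumes the snapshot only after the PPL io unit has been queued.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_DISKS 64	/* one bit per member disk fits in a 64-bit word */

struct ppl_model {
	pthread_mutex_t lock;		/* serializes bitmap updates */
	uint64_t disks_to_flush;	/* bit N set => disk N needs a cache flush */
};

/* Mark a disk as needing a flush before the next PPL entry. */
static int mark_disk_dirty(struct ppl_model *m, unsigned int disk)
{
	if (disk >= MAX_DISKS)
		return -1;	/* arrays larger than 64 disks are rejected */
	pthread_mutex_lock(&m->lock);
	m->disks_to_flush |= UINT64_C(1) << disk;
	pthread_mutex_unlock(&m->lock);
	return 0;
}

/* Consume the bitmap once the PPL io unit has been submitted. */
static uint64_t take_flush_set(struct ppl_model *m)
{
	uint64_t set;

	pthread_mutex_lock(&m->lock);
	set = m->disks_to_flush;
	m->disks_to_flush = 0;
	pthread_mutex_unlock(&m->lock);
	return set;
}

int main(void)
{
	struct ppl_model m = { PTHREAD_MUTEX_INITIALIZER, 0 };

	mark_disk_dirty(&m, 0);
	mark_disk_dirty(&m, 3);
	printf("disks to flush: %#llx\n", (unsigned long long)take_flush_set(&m));
	return 0;
}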
With write-back cache disabled, the next PPL entry is submitted as soon
as the data write for the current one completes. A data flush defers the
next log submission, so trigger it when no stripes are found for handling.
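In the patch this trigger is routed through a log_* wrapper (see the diff below). A plausible shape for that wrapper, assuming it dispatches between the write-back journal and PPL like the other raid5-log helpers (a sketch, not necessarily the exact code):

/* Sketch: when raid5d finds no stripes to handle, kick the active log
 * so a PPL entry deferred behind disk cache flushes makes progress.
 * The dispatch shape is an assumption.
 */
static inline void log_flush_stripe_to_raid(struct r5conf *conf)
{
	if (conf->log)			/* write-back journal active */
		r5l_flush_stripe_to_raid(conf->log);
	else if (raid5_has_ppl(conf))	/* PPL active */
		ppl_write_stripe_run(conf);
}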
As PPL ensures that all data has been flushed to disk by the time a
request completes, simply acknowledge the flush request when PPL is
enabled.
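Concretely, an empty REQ_PREFLUSH bio can be completed on the spot, while a flush-plus-data bio has the flush flag stripped and is processed as a normal write. A hedged sketch of such a handler (the exact function in the patch may differ):

/* Sketch: acknowledge flush requests immediately when PPL already
 * provides the required durability. Modeled on the behaviour described
 * above; details are assumptions.
 */
static int ppl_handle_flush_request(struct bio *bio)
{
	if (bio->bi_iter.bi_size == 0) {
		/* Pure flush: PPL has already flushed the data. */
		bio_endio(bio);
		return 0;
	}
	/* Flush + data: drop the flush flag, the write proceeds normally. */
	bio->bi_opf &= ~REQ_PREFLUSH;
	return -EAGAIN;
}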
Signed-off-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com>
Signed-off-by: Shaohua Li <sh.li@alibaba-inc.com>
Diffstat (limited to 'drivers/md/raid5.c')
 drivers/md/raid5.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 5a2a29bd02dd..50d01144b805 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -5563,7 +5563,7 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
 	bool do_flush = false;
 
 	if (unlikely(bi->bi_opf & REQ_PREFLUSH)) {
-		int ret = r5l_handle_flush_request(conf->log, bi);
+		int ret = log_handle_flush_request(conf, bi);
 
 		if (ret == 0)
 			return true;
@@ -6168,7 +6168,7 @@ static int handle_active_stripes(struct r5conf *conf, int group,
 				break;
 		if (i == NR_STRIPE_HASH_LOCKS) {
 			spin_unlock_irq(&conf->device_lock);
-			r5l_flush_stripe_to_raid(conf->log);
+			log_flush_stripe_to_raid(conf);
 			spin_lock_irq(&conf->device_lock);
 			return batch_size;
 		}
@@ -8060,7 +8060,7 @@ static void raid5_quiesce(struct mddev *mddev, int quiesce)
 		wake_up(&conf->wait_for_overlap);
 		unlock_all_device_hash_locks_irq(conf);
 	}
-	r5l_quiesce(conf->log, quiesce);
+	log_quiesce(conf, quiesce);
 }
 
 static void *raid45_takeover_raid0(struct mddev *mddev, int level)
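All three hunks replace direct r5l_* calls with log_* wrappers, so the same call sites serve both the write-back journal and PPL. A plausible shape for the flush wrapper used in the first hunk (a sketch under the assumption that it sits alongside the other raid5-log helpers; the real code may differ):

/* Sketch of the dispatch used by raid5_make_request above: route the
 * flush request to the journal (r5l) or to PPL, whichever is active
 * on this array. The exact body is an assumption.
 */
static inline int log_handle_flush_request(struct r5conf *conf, struct bio *bio)
{
	int ret = -ENODEV;

	if (conf->log)			/* write-back journal active */
		ret = r5l_handle_flush_request(conf->log, bio);
	else if (raid5_has_ppl(conf))	/* PPL active */
		ret = ppl_handle_flush_request(bio);

	return ret;
}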