author     Tomasz Majchrzak <tomasz.majchrzak@intel.com>    2017-12-27 10:31:40 +0100
committer  Shaohua Li <sh.li@alibaba-inc.com>               2018-01-15 14:29:42 -0800
commit     1532d9e87e8b2377f12929f9e40724d5fbe6ecc5 (patch)
tree       fa8ec94368dfff1ec93b0366833e6f5d3cbcc70c /Documentation/md
parent     92e6245deab80f0934a102ba969d8b891b8ba5bf (diff)
raid5-ppl: PPL support for disks with write-back cache enabled
In order to provide data consistency with PPL for disks with write-back cache enabled, all data has to be flushed to disks before the next PPL entry. The disks to be flushed are marked in a bitmap. It is modified under a mutex and only read after the PPL I/O unit has been submitted.

A limitation of 64 disks in the array has been introduced to keep the data structures and implementation simple. RAID5 arrays with that many disks are unlikely due to the high risk of multiple disk failures, so the restriction should not be a limitation in practice.

With write-back cache disabled, the next PPL entry is submitted when the data write for the current one completes. A data flush defers the next log submission, so trigger it when no stripes are found for handling.

As PPL ensures all data is flushed to disk at request completion, simply acknowledge the flush request when PPL is enabled.

Signed-off-by: Tomasz Majchrzak <tomasz.majchrzak@intel.com>
Signed-off-by: Shaohua Li <sh.li@alibaba-inc.com>
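The flush-tracking scheme described in the message can be summarised in a short standalone sketch. The userspace C snippet below is only an illustration of the idea, not the kernel's actual data structures or function names (ppl_flush_tracker, mark_disk_for_flush and flush_marked_disks are hypothetical): each data write marks its member disk in a 64-bit bitmap under a mutex, the bitmap is read only once the PPL I/O unit has been submitted, and the 64-disk limit follows directly from the bitmap width.

/*
 * Simplified userspace illustration (not the kernel implementation) of the
 * scheme from the commit message: member disks that received data writes
 * are marked in a 64-bit bitmap, the bitmap is modified under a mutex, and
 * it is only read after the PPL I/O unit has been submitted. All names here
 * are hypothetical.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define PPL_MAX_DISKS 64	/* one bit per member disk */

struct ppl_flush_tracker {
	pthread_mutex_t lock;		/* serializes writers marking disks */
	uint64_t disks_to_flush;	/* bit N set => disk N needs a cache flush */
};

/* Called for every data write while the current PPL entry is being built. */
static void mark_disk_for_flush(struct ppl_flush_tracker *t, unsigned int disk)
{
	if (disk >= PPL_MAX_DISKS)
		return;	/* arrays larger than 64 disks are rejected earlier */

	pthread_mutex_lock(&t->lock);
	t->disks_to_flush |= UINT64_C(1) << disk;
	pthread_mutex_unlock(&t->lock);
}

/*
 * Called after the PPL I/O unit has been submitted; at this point the bitmap
 * is stable, so it can be read without the lock. Waiting for these flushes
 * is what defers submission of the next PPL entry.
 */
static void flush_marked_disks(struct ppl_flush_tracker *t)
{
	uint64_t pending = t->disks_to_flush;
	unsigned int disk;

	for (disk = 0; disk < PPL_MAX_DISKS; disk++) {
		if (pending & (UINT64_C(1) << disk))
			printf("flush write-back cache of disk %u\n", disk);
	}
	t->disks_to_flush = 0;	/* start clean for the next PPL entry */
}

int main(void)
{
	struct ppl_flush_tracker t = { .lock = PTHREAD_MUTEX_INITIALIZER };

	mark_disk_for_flush(&t, 2);
	mark_disk_for_flush(&t, 5);
	flush_marked_disks(&t);
	return 0;
}

Keeping the bitmap to a single 64-bit word is what keeps both the data structure and the locking trivial; that simplicity is the trade-off behind the 64-disk restriction mentioned above.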
Diffstat (limited to 'Documentation/md')
-rw-r--r--  Documentation/md/raid5-ppl.txt  7
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/Documentation/md/raid5-ppl.txt b/Documentation/md/raid5-ppl.txt
index 127072b09363..bfa092589e00 100644
--- a/Documentation/md/raid5-ppl.txt
+++ b/Documentation/md/raid5-ppl.txt
@@ -39,6 +39,7 @@ case the behavior is the same as in plain raid5.
PPL is available for md version-1 metadata and external (specifically IMSM)
metadata arrays. It can be enabled using mdadm option --consistency-policy=ppl.
-Currently, volatile write-back cache should be disabled on all member drives
-when using PPL. Otherwise it cannot guarantee consistency in case of power
-failure.
+There is a limitation of maximum 64 disks in the array for PPL. It allows to
+keep data structures and implementation simple. RAID5 arrays with so many disks
+are not likely due to high risk of multiple disks failure. Such restriction
+should not be a real life limitation.