author    | Richard Weinberger <richard@nod.at> | 2014-11-10 16:27:10 +0100
committer | Richard Weinberger <richard@nod.at> | 2015-03-26 22:46:01 +0100
commit    | 8fb2a514780eee48602424a86e3cd72d3f1bdfb2 (patch)
tree      | 2c5c3bced155af20a7a27a2b368bc7c1df5586fa /drivers/mtd/ubi/wl.c
parent    | ad3d6a05ee45eebf68ff08da0d3f86251b530a27 (diff)
UBI: Fastmap: Fix race after ubi_wl_get_peb()
ubi_wl_get_peb() returns a fresh PEB which can be used by a
user of UBI. Due to the pool logic fastmap will correctly
map this PEB at attach time because it will be scanned.

If a new fastmap is written (due to heavy parallel io)
before the fresh PEB is assigned to the EBA table,
it will not be scanned as it is no longer in the pool.

So, the race window exists between ubi_wl_get_peb()
and the EBA table assignment. We have to make sure that
no new fastmap can be written within that window.

To ensure that, ubi_wl_get_peb() will grab ubi->fm_sem in read mode
and the user of ubi_wl_get_peb() has to release it after the PEB
has been assigned.
Signed-off-by: Richard Weinberger <richard@nod.at>
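To make the race concrete, the interleaving below is an illustrative sketch (the two-column layout and thread names are mine; only ubi_wl_get_peb(), ubi_update_fastmap() and vol->eba_tbl[] are real UBI identifiers of this era):

/*
 * writer thread                      fastmap thread
 * -------------                      --------------
 * pnum = ubi_wl_get_peb(ubi);
 *   (PEB leaves the fastmap pool)
 *                                    ubi_update_fastmap(ubi);
 *                                      (new fastmap written; pnum is in
 *                                       neither a pool nor the EBA table,
 *                                       so an attach from this fastmap
 *                                       will never scan it)
 * vol->eba_tbl[lnum] = pnum;
 *   (the assignment happens too late)
 */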
Diffstat (limited to 'drivers/mtd/ubi/wl.c')
-rw-r--r-- | drivers/mtd/ubi/wl.c | 7
1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index b9b7f97db837..b8ad5e005cdc 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -641,6 +641,7 @@ void ubi_refill_pools(struct ubi_device *ubi)
 
 /* ubi_wl_get_peb - works exaclty like __wl_get_peb but keeps track of
  * the fastmap pool.
+ * Returns with ubi->fm_sem held in read mode!
  */
 int ubi_wl_get_peb(struct ubi_device *ubi)
 {
@@ -649,16 +650,20 @@ int ubi_wl_get_peb(struct ubi_device *ubi)
 	struct ubi_fm_pool *wl_pool = &ubi->fm_wl_pool;
 
 again:
+	down_read(&ubi->fm_sem);
 	spin_lock(&ubi->wl_lock);
 	/* We check here also for the WL pool because at this point we can
 	 * refill the WL pool synchronous. */
 	if (pool->used == pool->size || wl_pool->used == wl_pool->size) {
 		spin_unlock(&ubi->wl_lock);
+		up_read(&ubi->fm_sem);
 		ret = ubi_update_fastmap(ubi);
 		if (ret) {
 			ubi_msg(ubi, "Unable to write a new fastmap: %i", ret);
+			down_read(&ubi->fm_sem);
 			return -ENOSPC;
 		}
+		down_read(&ubi->fm_sem);
 		spin_lock(&ubi->wl_lock);
 	}
 
@@ -670,6 +675,7 @@ again:
 			goto out;
 		}
 		retried = 1;
+		up_read(&ubi->fm_sem);
 		goto again;
 	}
 
@@ -725,6 +731,7 @@ int ubi_wl_get_peb(struct ubi_device *ubi)
 	spin_lock(&ubi->wl_lock);
 	peb = wl_get_peb(ubi);
 	spin_unlock(&ubi->wl_lock);
+	down_read(&ubi->fm_sem);
 
 	if (peb < 0)
 		return peb;
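For context, a minimal caller-side sketch of the new contract: after this patch ubi_wl_get_peb() returns with ubi->fm_sem held in read mode on every path (including the error paths shown in the diff), so no new fastmap can be written until the caller publishes the PEB and drops the semaphore. The function below is hypothetical and simplified; only ubi_wl_get_peb(), up_read(), ubi->fm_sem and vol->eba_tbl[] are taken from the UBI code of this era:

static int example_map_leb(struct ubi_device *ubi, struct ubi_volume *vol,
			   int lnum)
{
	int pnum;

	pnum = ubi_wl_get_peb(ubi);	/* returns with ubi->fm_sem held */
	if (pnum < 0) {
		up_read(&ubi->fm_sem);	/* held even on the error paths */
		return pnum;
	}

	vol->eba_tbl[lnum] = pnum;	/* publish the PEB in the EBA table */
	up_read(&ubi->fm_sem);		/* now a new fastmap may be written */

	return 0;
}

The real caller-side changes live in eba.c and are hidden here because this page is filtered to wl.c; they follow the same pattern, keeping the EBA table assignment strictly inside the fm_sem read-side critical section.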