author		Linus Torvalds <torvalds@linux-foundation.org>	2023-02-27 16:18:51 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>	2023-02-27 16:18:51 -0800
commit		103830683cfc8f43b15158b0a48014b6d6e83633 (patch)
tree		13e75ffd3a75f4552edf94bbe954e9253200d4f5
parent		46d733d0efc79bc8430d63b57ab88011806d5180 (diff)
parent		ddf1eca4fc5a4038cb323306f51fbba34ce3f4d2 (diff)
Merge tag 'f2fs-for-6.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs
Pull f2fs updates from Jaegeuk Kim:
"In this round, we've got a huge number of patches that improve code
readability along with minor bug fixes, while we've mainly fixed some
critical issues in recently-added per-block age-based extent_cache,
atomic write support, and some folio cases.
Enhancements:
- add sysfs nodes to set last_age_weight and manage
discard_io_aware_gran
- show ipu policy in debugfs
- reduce stack memory cost by using bitfield in struct f2fs_io_info
- introduce trace_f2fs_replace_atomic_write_block
- enhance iostat support and add flush commands
Bug fixes:
- revert "f2fs: truncate blocks in batch in __complete_revoke_list()"
- fix kernel crash on the atomic write abort flow
- call clear_page_private_reference in .{release,invalid}_folio
- support .migrate_folio for compressed inode
- fix cgroup writeback accounting with fs-layer encryption
- retry to update the inode page given data corruption
- fix kernel crash due to NULL io->bio
- fix some bugs in per-block age-based extent_cache:
- wrong calculation of block age
- update age extent in f2fs_do_zero_range()
- update age extent correctly during truncation"
* tag 'f2fs-for-6.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs: (81 commits)
f2fs: drop unnecessary arg for f2fs_ioc_*()
f2fs: Revert "f2fs: truncate blocks in batch in __complete_revoke_list()"
f2fs: synchronize atomic write aborts
f2fs: fix wrong segment count
f2fs: replace si->sbi w/ sbi in stat_show()
f2fs: export ipu policy in debugfs
f2fs: make kobj_type structures constant
f2fs: fix to do sanity check on extent cache correctly
f2fs: add missing description for ipu_policy node
f2fs: fix to set ipu policy
f2fs: fix typos in comments
f2fs: fix kernel crash due to null io->bio
f2fs: use iostat_lat_type directly as a parameter in the iostat_update_and_unbind_ctx()
f2fs: add sysfs nodes to set last_age_weight
f2fs: fix f2fs_show_options to show nogc_merge mount option
f2fs: fix cgroup writeback accounting with fs-layer encryption
f2fs: fix wrong calculation of block age
f2fs: fix to update age extent in f2fs_do_zero_range()
f2fs: fix to update age extent correctly during truncation
f2fs: fix to avoid potential memory corruption in __update_iostat_latency()
...
-rw-r--r-- | Documentation/ABI/testing/sysfs-fs-f2fs |  80
-rw-r--r-- | Documentation/filesystems/f2fs.rst      |   2
-rw-r--r-- | MAINTAINERS                             |   1
-rw-r--r-- | fs/f2fs/checkpoint.c                    |  37
-rw-r--r-- | fs/f2fs/compress.c                      |  24
-rw-r--r-- | fs/f2fs/data.c                          | 624
-rw-r--r-- | fs/f2fs/debug.c                         |  64
-rw-r--r-- | fs/f2fs/dir.c                           |   4
-rw-r--r-- | fs/f2fs/extent_cache.c                  |  60
-rw-r--r-- | fs/f2fs/f2fs.h                          | 128
-rw-r--r-- | fs/f2fs/file.c                          | 173
-rw-r--r-- | fs/f2fs/gc.c                            |  22
-rw-r--r-- | fs/f2fs/gc.h                            |   2
-rw-r--r-- | fs/f2fs/inline.c                        |  14
-rw-r--r-- | fs/f2fs/inode.c                         |  78
-rw-r--r-- | fs/f2fs/iostat.c                        | 186
-rw-r--r-- | fs/f2fs/iostat.h                        |  19
-rw-r--r-- | fs/f2fs/namei.c                         |   5
-rw-r--r-- | fs/f2fs/node.c                          |   9
-rw-r--r-- | fs/f2fs/segment.c                       | 225
-rw-r--r-- | fs/f2fs/segment.h                       |  41
-rw-r--r-- | fs/f2fs/super.c                         |  63
-rw-r--r-- | fs/f2fs/sysfs.c                         |  49
-rw-r--r-- | fs/f2fs/verity.c                        |   2
-rw-r--r-- | include/linux/f2fs_fs.h                 |   2
-rw-r--r-- | include/trace/events/f2fs.h             | 104
26 files changed, 1090 insertions, 928 deletions
diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
index 9e3756625a81..94132745ecbe 100644
--- a/Documentation/ABI/testing/sysfs-fs-f2fs
+++ b/Documentation/ABI/testing/sysfs-fs-f2fs
@@ -49,16 +49,23 @@ Contact:	"Jaegeuk Kim" <jaegeuk.kim@samsung.com>
 Description:	Controls the in-place-update policy.
 		updates in f2fs. User can set:
 
-		====  =================
-		0x01  F2FS_IPU_FORCE
-		0x02  F2FS_IPU_SSR
-		0x04  F2FS_IPU_UTIL
-		0x08  F2FS_IPU_SSR_UTIL
-		0x10  F2FS_IPU_FSYNC
-		0x20  F2FS_IPU_ASYNC
-		0x40  F2FS_IPU_NOCACHE
-		0x80  F2FS_IPU_HONOR_OPU_WRITE
-		====  =================
+		=====  =============== ===================================================
+		value  policy          description
+		0x00   DISABLE         disable IPU(=default option in LFS mode)
+		0x01   FORCE           all the time
+		0x02   SSR             if SSR mode is activated
+		0x04   UTIL            if FS utilization is over threashold
+		0x08   SSR_UTIL        if SSR mode is activated and FS utilization is over
+		                       threashold
+		0x10   FSYNC           activated in fsync path only for high performance
+		                       flash storages. IPU will be triggered only if the
+		                       # of dirty pages over min_fsync_blocks.
+		                       (=default option)
+		0x20   ASYNC           do IPU given by asynchronous write requests
+		0x40   NOCACHE         disable IPU bio cache
+		0x80   HONOR_OPU_WRITE use OPU write prior to IPU write if inode has
+		                       FI_OPU_WRITE flag
+		=====  =============== ===================================================
 
 		Refer segment.h for details.
@@ -669,3 +676,56 @@ Contact:	"Ping Xiong" <xiongping1@xiaomi.com>
 Description:	When DATA SEPARATION is on, it controls the age threshold to indicate
 		the data blocks as warm. By default it was initialized as 2621440 blocks
 		(equals to 10GB).
+
+What:		/sys/fs/f2fs/<disk>/fault_rate
+Date:		May 2016
+Contact:	"Sheng Yong" <shengyong@oppo.com>
+Contact:	"Chao Yu" <chao@kernel.org>
+Description:	Enable fault injection in all supported types with
+		specified injection rate.
+
+What:		/sys/fs/f2fs/<disk>/fault_type
+Date:		May 2016
+Contact:	"Sheng Yong" <shengyong@oppo.com>
+Contact:	"Chao Yu" <chao@kernel.org>
+Description:	Support configuring fault injection type, should be
+		enabled with fault_injection option, fault type value
+		is shown below, it supports single or combined type.
+
+		===================      ===========
+		Type_Name                Type_Value
+		===================      ===========
+		FAULT_KMALLOC            0x000000001
+		FAULT_KVMALLOC           0x000000002
+		FAULT_PAGE_ALLOC         0x000000004
+		FAULT_PAGE_GET           0x000000008
+		FAULT_ALLOC_BIO          0x000000010 (obsolete)
+		FAULT_ALLOC_NID          0x000000020
+		FAULT_ORPHAN             0x000000040
+		FAULT_BLOCK              0x000000080
+		FAULT_DIR_DEPTH          0x000000100
+		FAULT_EVICT_INODE        0x000000200
+		FAULT_TRUNCATE           0x000000400
+		FAULT_READ_IO            0x000000800
+		FAULT_CHECKPOINT         0x000001000
+		FAULT_DISCARD            0x000002000
+		FAULT_WRITE_IO           0x000004000
+		FAULT_SLAB_ALLOC         0x000008000
+		FAULT_DQUOT_INIT         0x000010000
+		FAULT_LOCK_OP            0x000020000
+		FAULT_BLKADDR            0x000040000
+		===================      ===========
+
+What:		/sys/fs/f2fs/<disk>/discard_io_aware_gran
+Date:		January 2023
+Contact:	"Yangtao Li" <frank.li@vivo.com>
+Description:	Controls background discard granularity of inner discard thread
+		when is not in idle. Inner thread will not issue discards with size that
+		is smaller than granularity. The unit size is one block(4KB), now only
+		support configuring in range of [0, 512].
+		Default: 512
+
+What:		/sys/fs/f2fs/<disk>/last_age_weight
+Date:		January 2023
+Contact:	"Ping Xiong" <xiongping1@xiaomi.com>
+Description:	When DATA SEPARATION is on, it controls the weight of last data block age.
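The knobs documented above are plain text files under /sys/fs/f2fs/<disk>/. As a minimal, hedged illustration (the device name "sda1" and the chosen values are placeholders taken from the tables above, not part of the patch), they can be written from userspace like this:

/*
 * Minimal sketch, not part of the patch: select the UTIL in-place-update
 * policy (0x04) and arm a combined fault-injection mask. "sda1" stands in
 * for the real <disk> name, and fault injection additionally requires a
 * kernel built with the fault injection config and mount option enabled.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int write_sysfs(const char *path, unsigned int val)
{
    int fd = open(path, O_WRONLY);

    if (fd < 0)
        return -1;
    dprintf(fd, "%u\n", val);       /* sysfs nodes parse a decimal string */
    return close(fd);
}

int main(void)
{
    /* 0x04 == UTIL: allow IPU once FS utilization exceeds the threshold */
    write_sysfs("/sys/fs/f2fs/sda1/ipu_policy", 0x04);
    /* FAULT_KMALLOC (0x1) | FAULT_ORPHAN (0x40), per the table above */
    write_sysfs("/sys/fs/f2fs/sda1/fault_type", 0x1 | 0x40);
    return 0;
}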
diff --git a/Documentation/filesystems/f2fs.rst b/Documentation/filesystems/f2fs.rst
index 220f3e0d3f55..2055e72871fe 100644
--- a/Documentation/filesystems/f2fs.rst
+++ b/Documentation/filesystems/f2fs.rst
@@ -158,7 +158,7 @@ nobarrier		 This option can be used if underlying storage guarantees
			 If this option is set, no cache_flush commands are issued
			 but f2fs still guarantees the write ordering of all the
			 data writes.
-barrier			 If this option is set, cache_flush commands are allowed to be
+barrier			 If this option is set, cache_flush commands are allowed to be
			 issued.
 fastboot		 This option is used when a system wants to reduce mount
			 time as much as possible, even though normal performance
diff --git a/MAINTAINERS b/MAINTAINERS
index edd3d562beee..b0db911207ba 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -7795,6 +7795,7 @@ M:	Chao Yu <chao@kernel.org>
 L:	linux-f2fs-devel@lists.sourceforge.net
 S:	Maintained
 W:	https://f2fs.wiki.kernel.org/
+Q:	https://patchwork.kernel.org/project/f2fs/list/
 B:	https://bugzilla.kernel.org/enter_bug.cgi?product=File%20System&component=f2fs
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git
 F:	Documentation/ABI/testing/sysfs-fs-f2fs
diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
index 5a5515d83a1b..c3e058e0a018 100644
--- a/fs/f2fs/checkpoint.c
+++ b/fs/f2fs/checkpoint.c
@@ -70,7 +70,7 @@ static struct page *__get_meta_page(struct f2fs_sb_info *sbi, pgoff_t index,
		.old_blkaddr = index,
		.new_blkaddr = index,
		.encrypted_page = NULL,
-		.is_por = !is_meta,
+		.is_por = !is_meta ? 1 : 0,
	};
	int err;
@@ -171,10 +171,8 @@ static bool __is_bitmap_valid(struct f2fs_sb_info *sbi, block_t blkaddr,
 bool f2fs_is_valid_blkaddr(struct f2fs_sb_info *sbi,
					block_t blkaddr, int type)
 {
-	if (time_to_inject(sbi, FAULT_BLKADDR)) {
-		f2fs_show_injection_info(sbi, FAULT_BLKADDR);
+	if (time_to_inject(sbi, FAULT_BLKADDR))
		return false;
-	}
 
	switch (type) {
	case META_NAT:
@@ -239,8 +237,8 @@ int f2fs_ra_meta_pages(struct f2fs_sb_info *sbi, block_t start, int nrpages,
		.op = REQ_OP_READ,
		.op_flags = sync ? (REQ_META | REQ_PRIO) : REQ_RAHEAD,
		.encrypted_page = NULL,
-		.in_list = false,
-		.is_por = (type == META_POR),
+		.in_list = 0,
+		.is_por = (type == META_POR) ? 1 : 0,
	};
	struct blk_plug plug;
	int err;
@@ -625,7 +623,6 @@ int f2fs_acquire_orphan_inode(struct f2fs_sb_info *sbi)
 
	if (time_to_inject(sbi, FAULT_ORPHAN)) {
		spin_unlock(&im->ino_lock);
-		f2fs_show_injection_info(sbi, FAULT_ORPHAN);
		return -ENOSPC;
	}
@@ -798,7 +795,7 @@ static void write_orphan_inodes(struct f2fs_sb_info *sbi, block_t start_blk)
	 */
	head = &im->ino_list;
 
-	/* loop for each orphan inode entry and write them in Jornal block */
+	/* loop for each orphan inode entry and write them in journal block */
	list_for_each_entry(orphan, head, list) {
		if (!page) {
			page = f2fs_grab_meta_page(sbi, start_blk++);
@@ -1128,7 +1125,7 @@ retry:
	} else {
		/*
		 * We should submit bio, since it exists several
-		 * wribacking dentry pages in the freeing inode.
+		 * writebacking dentry pages in the freeing inode.
		 */
		f2fs_submit_merged_write(sbi, DATA);
		cond_resched();
@@ -1476,20 +1473,18 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
	ckpt->elapsed_time = cpu_to_le64(get_mtime(sbi, true));
	ckpt->free_segment_count = cpu_to_le32(free_segments(sbi));
	for (i = 0; i < NR_CURSEG_NODE_TYPE; i++) {
-		ckpt->cur_node_segno[i] =
-			cpu_to_le32(curseg_segno(sbi, i + CURSEG_HOT_NODE));
-		ckpt->cur_node_blkoff[i] =
-			cpu_to_le16(curseg_blkoff(sbi, i + CURSEG_HOT_NODE));
-		ckpt->alloc_type[i + CURSEG_HOT_NODE] =
-			curseg_alloc_type(sbi, i + CURSEG_HOT_NODE);
+		struct curseg_info *curseg = CURSEG_I(sbi, i + CURSEG_HOT_NODE);
+
+		ckpt->cur_node_segno[i] = cpu_to_le32(curseg->segno);
+		ckpt->cur_node_blkoff[i] = cpu_to_le16(curseg->next_blkoff);
+		ckpt->alloc_type[i + CURSEG_HOT_NODE] = curseg->alloc_type;
	}
	for (i = 0; i < NR_CURSEG_DATA_TYPE; i++) {
-		ckpt->cur_data_segno[i] =
-			cpu_to_le32(curseg_segno(sbi, i + CURSEG_HOT_DATA));
-		ckpt->cur_data_blkoff[i] =
-			cpu_to_le16(curseg_blkoff(sbi, i + CURSEG_HOT_DATA));
-		ckpt->alloc_type[i + CURSEG_HOT_DATA] =
-			curseg_alloc_type(sbi, i + CURSEG_HOT_DATA);
+		struct curseg_info *curseg = CURSEG_I(sbi, i + CURSEG_HOT_DATA);
+
+		ckpt->cur_data_segno[i] = cpu_to_le32(curseg->segno);
+		ckpt->cur_data_blkoff[i] = cpu_to_le16(curseg->next_blkoff);
+		ckpt->alloc_type[i + CURSEG_HOT_DATA] = curseg->alloc_type;
	}
 
	/* 2 cp + n data seg summary + orphan inode blocks */
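The `? 1 : 0` conversions in the checkpoint.c initializers above exist because struct f2fs_io_info switches from bool members to packed bitfields in the f2fs.h hunk further down. A self-contained sketch of the size effect (field layout abbreviated, exact numbers depend on the ABI):

#include <stdio.h>

/* Illustration only, not the kernel structs: a bool-per-flag layout... */
struct io_flags_bools {
    _Bool submitted, in_list, is_por, retry, encrypted, post_read;
    int need_lock;
    unsigned char version;
};

/* ...versus the packed layout the patch adopts for struct f2fs_io_info */
struct io_flags_packed {
    unsigned int need_lock:8;
    unsigned int version:8;
    unsigned int submitted:1;
    unsigned int in_list:1;
    unsigned int is_por:1;
    unsigned int retry:1;
    unsigned int encrypted:1;
    unsigned int post_read:1;
};

int main(void)
{
    /* typically prints 16 vs 4 on x86-64; fio often lives on the stack,
     * which is where the commit message's stack memory saving comes from */
    printf("bools: %zu bytes, packed: %zu bytes\n",
           sizeof(struct io_flags_bools), sizeof(struct io_flags_packed));
    return 0;
}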
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index 2532f369cb10..b40dec3d7f79 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -241,7 +241,7 @@ static int lz4_init_compress_ctx(struct compress_ctx *cc)
	unsigned int size = LZ4_MEM_COMPRESS;
 
 #ifdef CONFIG_F2FS_FS_LZ4HC
-	if (F2FS_I(cc->inode)->i_compress_flag >> COMPRESS_LEVEL_OFFSET)
+	if (F2FS_I(cc->inode)->i_compress_level)
		size = LZ4HC_MEM_COMPRESS;
 #endif
@@ -267,8 +267,7 @@ static void lz4_destroy_compress_ctx(struct compress_ctx *cc)
 #ifdef CONFIG_F2FS_FS_LZ4HC
 static int lz4hc_compress_pages(struct compress_ctx *cc)
 {
-	unsigned char level = F2FS_I(cc->inode)->i_compress_flag >>
-				COMPRESS_LEVEL_OFFSET;
+	unsigned char level = F2FS_I(cc->inode)->i_compress_level;
	int len;
 
	if (level)
@@ -340,8 +339,7 @@ static int zstd_init_compress_ctx(struct compress_ctx *cc)
	zstd_cstream *stream;
	void *workspace;
	unsigned int workspace_size;
-	unsigned char level = F2FS_I(cc->inode)->i_compress_flag >>
-				COMPRESS_LEVEL_OFFSET;
+	unsigned char level = F2FS_I(cc->inode)->i_compress_level;
 
	if (!level)
		level = F2FS_ZSTD_DEFAULT_CLEVEL;
@@ -564,7 +562,7 @@ module_param(num_compress_pages, uint, 0444);
 MODULE_PARM_DESC(num_compress_pages,
		"Number of intermediate compress pages to preallocate");
 
-int f2fs_init_compress_mempool(void)
+int __init f2fs_init_compress_mempool(void)
 {
	compress_page_pool = mempool_create_page_pool(num_compress_pages, 0);
	return compress_page_pool ? 0 : -ENOMEM;
@@ -690,9 +688,7 @@ static int f2fs_compress_pages(struct compress_ctx *cc)
	vm_unmap_ram(cc->cbuf, cc->nr_cpages);
	vm_unmap_ram(cc->rbuf, cc->cluster_size);
 
-	for (i = 0; i < cc->nr_cpages; i++) {
-		if (i < new_nr_cpages)
-			continue;
+	for (i = new_nr_cpages; i < cc->nr_cpages; i++) {
		f2fs_compress_free_page(cc->cpages[i]);
		cc->cpages[i] = NULL;
	}
@@ -1070,7 +1066,7 @@ retry:
		if (ret)
			goto out;
		if (bio)
-			f2fs_submit_bio(sbi, bio, DATA);
+			f2fs_submit_read_bio(sbi, bio, DATA);
 
		ret = f2fs_init_compress_ctx(cc);
		if (ret)
@@ -1215,10 +1211,11 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
		.page = NULL,
		.encrypted_page = NULL,
		.compressed_page = NULL,
-		.submitted = false,
+		.submitted = 0,
		.io_type = io_type,
		.io_wbc = wbc,
-		.encrypted = fscrypt_inode_uses_fs_layer_crypto(cc->inode),
+		.encrypted = fscrypt_inode_uses_fs_layer_crypto(cc->inode) ?
+									1 : 0,
	};
	struct dnode_of_data dn;
	struct node_info ni;
@@ -1228,7 +1225,7 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
	loff_t psize;
	int i, err;
 
-	/* we should bypass data pages to proceed the kworkder jobs */
+	/* we should bypass data pages to proceed the kworker jobs */
	if (unlikely(f2fs_cp_error(sbi))) {
		mapping_set_error(cc->rpages[0]->mapping, -EIO);
		goto out_free;
@@ -1813,6 +1810,7 @@ unsigned int f2fs_cluster_blocks_are_contiguous(struct dnode_of_data *dn)
 const struct address_space_operations f2fs_compress_aops = {
	.release_folio = f2fs_release_folio,
	.invalidate_folio = f2fs_invalidate_folio,
+	.migrate_folio = filemap_migrate_folio,
 };
 
 struct address_space *COMPRESS_MAPPING(struct f2fs_sb_info *sbi)
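compress.c above stops deriving the compression level by shifting the packed i_compress_flag word and reads the dedicated i_compress_level byte instead (the flag field itself shrinks from short to char in the f2fs.h hunk below). A toy model of the old extraction, with an assumed 8-bit offset standing in for COMPRESS_LEVEL_OFFSET:

#include <stdio.h>

/* Toy model, not f2fs code: the old scheme packed the level into the
 * upper bits of a flag word; TOY_LEVEL_OFFSET is an illustrative value. */
#define TOY_LEVEL_OFFSET    8

static unsigned char old_get_level(unsigned short compress_flag)
{
    return compress_flag >> TOY_LEVEL_OFFSET;   /* shift out low flag bits */
}

int main(void)
{
    unsigned short flag = (6 << TOY_LEVEL_OFFSET) | 0x3; /* level 6 + flags */

    /* old: shift on every use; new: read one dedicated byte directly */
    printf("extracted level: %u\n", old_get_level(flag));
    return 0;
}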
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 41addc605350..06b552a0aba2 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -292,13 +292,11 @@ static void f2fs_read_end_io(struct bio *bio)
	struct bio_post_read_ctx *ctx;
	bool intask = in_task();
 
-	iostat_update_and_unbind_ctx(bio, 0);
+	iostat_update_and_unbind_ctx(bio);
	ctx = bio->bi_private;
 
-	if (time_to_inject(sbi, FAULT_READ_IO)) {
-		f2fs_show_injection_info(sbi, FAULT_READ_IO);
+	if (time_to_inject(sbi, FAULT_READ_IO))
		bio->bi_status = BLK_STS_IOERR;
-	}
 
	if (bio->bi_status) {
		f2fs_finish_read_bio(bio, intask);
@@ -332,13 +330,11 @@ static void f2fs_write_end_io(struct bio *bio)
	struct bio_vec *bvec;
	struct bvec_iter_all iter_all;
 
-	iostat_update_and_unbind_ctx(bio, 1);
+	iostat_update_and_unbind_ctx(bio);
	sbi = bio->bi_private;
 
-	if (time_to_inject(sbi, FAULT_WRITE_IO)) {
-		f2fs_show_injection_info(sbi, FAULT_WRITE_IO);
+	if (time_to_inject(sbi, FAULT_WRITE_IO))
		bio->bi_status = BLK_STS_IOERR;
-	}
 
	bio_for_each_segment_all(bvec, bio, iter_all) {
		struct page *page = bvec->bv_page;
@@ -507,65 +503,66 @@ static bool f2fs_crypt_mergeable_bio(struct bio *bio, const struct inode *inode,
	return fscrypt_mergeable_bio(bio, inode, next_idx);
 }
 
-static inline void __submit_bio(struct f2fs_sb_info *sbi,
-				struct bio *bio, enum page_type type)
+void f2fs_submit_read_bio(struct f2fs_sb_info *sbi, struct bio *bio,
+			  enum page_type type)
 {
-	if (!is_read_io(bio_op(bio))) {
-		unsigned int start;
+	WARN_ON_ONCE(!is_read_io(bio_op(bio)));
+	trace_f2fs_submit_read_bio(sbi->sb, type, bio);
 
-		if (type != DATA && type != NODE)
-			goto submit_io;
+	iostat_update_submit_ctx(bio, type);
+	submit_bio(bio);
+}
 
-		if (f2fs_lfs_mode(sbi) && current->plug)
-			blk_finish_plug(current->plug);
+static void f2fs_align_write_bio(struct f2fs_sb_info *sbi, struct bio *bio)
+{
+	unsigned int start =
+		(bio->bi_iter.bi_size >> F2FS_BLKSIZE_BITS) % F2FS_IO_SIZE(sbi);
 
-		if (!F2FS_IO_ALIGNED(sbi))
-			goto submit_io;
+	if (start == 0)
+		return;
 
-		start = bio->bi_iter.bi_size >> F2FS_BLKSIZE_BITS;
-		start %= F2FS_IO_SIZE(sbi);
+	/* fill dummy pages */
+	for (; start < F2FS_IO_SIZE(sbi); start++) {
+		struct page *page =
+			mempool_alloc(sbi->write_io_dummy,
+				      GFP_NOIO | __GFP_NOFAIL);
+		f2fs_bug_on(sbi, !page);
 
-		if (start == 0)
-			goto submit_io;
+		lock_page(page);
 
-		/* fill dummy pages */
-		for (; start < F2FS_IO_SIZE(sbi); start++) {
-			struct page *page =
-				mempool_alloc(sbi->write_io_dummy,
-					      GFP_NOIO | __GFP_NOFAIL);
-			f2fs_bug_on(sbi, !page);
+		zero_user_segment(page, 0, PAGE_SIZE);
+		set_page_private_dummy(page);
 
-			lock_page(page);
+		if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE)
+			f2fs_bug_on(sbi, 1);
+	}
+}
 
-			zero_user_segment(page, 0, PAGE_SIZE);
-			set_page_private_dummy(page);
+static void f2fs_submit_write_bio(struct f2fs_sb_info *sbi, struct bio *bio,
+				  enum page_type type)
+{
+	WARN_ON_ONCE(is_read_io(bio_op(bio)));
+
+	if (type == DATA || type == NODE) {
+		if (f2fs_lfs_mode(sbi) && current->plug)
+			blk_finish_plug(current->plug);
 
-			if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE)
-				f2fs_bug_on(sbi, 1);
+		if (F2FS_IO_ALIGNED(sbi)) {
+			f2fs_align_write_bio(sbi, bio);
+			/*
+			 * In the NODE case, we lose next block address chain.
+			 * So, we need to do checkpoint in f2fs_sync_file.
+			 */
+			if (type == NODE)
+				set_sbi_flag(sbi, SBI_NEED_CP);
		}
-		/*
-		 * In the NODE case, we lose next block address chain. So, we
-		 * need to do checkpoint in f2fs_sync_file.
-		 */
-		if (type == NODE)
-			set_sbi_flag(sbi, SBI_NEED_CP);
	}
-submit_io:
-	if (is_read_io(bio_op(bio)))
-		trace_f2fs_submit_read_bio(sbi->sb, type, bio);
-	else
-		trace_f2fs_submit_write_bio(sbi->sb, type, bio);
+
+	trace_f2fs_submit_write_bio(sbi->sb, type, bio);
	iostat_update_submit_ctx(bio, type);
	submit_bio(bio);
 }
 
-void f2fs_submit_bio(struct f2fs_sb_info *sbi,
-				struct bio *bio, enum page_type type)
-{
-	__submit_bio(sbi, bio, type);
-}
-
 static void __submit_merged_bio(struct f2fs_bio_info *io)
 {
	struct f2fs_io_info *fio = &io->fio;
@@ -573,12 +570,13 @@ static void __submit_merged_bio(struct f2fs_bio_info *io)
	if (!io->bio)
		return;
 
-	if (is_read_io(fio->op))
+	if (is_read_io(fio->op)) {
		trace_f2fs_prepare_read_bio(io->sbi->sb, fio->type, io->bio);
-	else
+		f2fs_submit_read_bio(io->sbi, io->bio, fio->type);
+	} else {
		trace_f2fs_prepare_write_bio(io->sbi->sb, fio->type, io->bio);
-
-	__submit_bio(io->sbi, io->bio, fio->type);
+		f2fs_submit_write_bio(io->sbi, io->bio, fio->type);
+	}
	io->bio = NULL;
 }
@@ -655,6 +653,9 @@ static void __f2fs_submit_merged_write(struct f2fs_sb_info *sbi,
 
	f2fs_down_write(&io->io_rwsem);
 
+	if (!io->bio)
+		goto unlock_out;
+
	/* change META to META_FLUSH in the checkpoint procedure */
	if (type >= META_FLUSH) {
		io->fio.type = META_FLUSH;
@@ -663,6 +664,7 @@ static void __f2fs_submit_merged_write(struct f2fs_sb_info *sbi,
		io->bio->bi_opf |= REQ_PREFLUSH | REQ_FUA;
	}
	__submit_merged_bio(io);
+unlock_out:
	f2fs_up_write(&io->io_rwsem);
 }
@@ -741,12 +743,15 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
	}
 
	if (fio->io_wbc && !is_read_io(fio->op))
-		wbc_account_cgroup_owner(fio->io_wbc, page, PAGE_SIZE);
+		wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
 
	inc_page_count(fio->sbi, is_read_io(fio->op) ?
			__read_io_type(page) : WB_DATA_TYPE(fio->page));
 
-	__submit_bio(fio->sbi, bio, fio->type);
+	if (is_read_io(bio_op(bio)))
+		f2fs_submit_read_bio(fio->sbi, bio, fio->type);
+	else
+		f2fs_submit_write_bio(fio->sbi, bio, fio->type);
	return 0;
 }
@@ -848,7 +853,7 @@ static int add_ipu_page(struct f2fs_io_info *fio, struct bio **bio,
			/* page can't be merged into bio; submit the bio */
			del_bio_entry(be);
-			__submit_bio(sbi, *bio, DATA);
+			f2fs_submit_write_bio(sbi, *bio, DATA);
			break;
		}
		f2fs_up_write(&io->bio_list_lock);
@@ -911,7 +916,7 @@ void f2fs_submit_merged_ipu_write(struct f2fs_sb_info *sbi,
	}
 
	if (found)
-		__submit_bio(sbi, target, DATA);
+		f2fs_submit_write_bio(sbi, target, DATA);
	if (bio && *bio) {
		bio_put(*bio);
		*bio = NULL;
@@ -948,7 +953,7 @@ alloc_new:
	}
 
	if (fio->io_wbc)
-		wbc_account_cgroup_owner(fio->io_wbc, page, PAGE_SIZE);
+		wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
 
	inc_page_count(fio->sbi, WB_DATA_TYPE(page));
 
@@ -991,7 +996,7 @@ next:
		bio_page = fio->page;
 
	/* set submitted = true as a return value */
-	fio->submitted = true;
+	fio->submitted = 1;
 
	inc_page_count(sbi, WB_DATA_TYPE(bio_page));
 
@@ -1007,7 +1012,7 @@ alloc_new:
		    (fio->type == DATA || fio->type == NODE) &&
		    fio->new_blkaddr & F2FS_IO_SIZE_MASK(sbi)) {
			dec_page_count(sbi, WB_DATA_TYPE(bio_page));
-			fio->retry = true;
+			fio->retry = 1;
			goto skip;
		}
		io->bio = __bio_alloc(fio, BIO_MAX_VECS);
@@ -1022,7 +1027,7 @@ alloc_new:
	}
 
	if (fio->io_wbc)
-		wbc_account_cgroup_owner(fio->io_wbc, bio_page, PAGE_SIZE);
+		wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
 
	io->last_block_in_bio = fio->new_blkaddr;
 
@@ -1107,7 +1112,7 @@ static int f2fs_submit_page_read(struct inode *inode, struct page *page,
	}
	inc_page_count(sbi, F2FS_RD_DATA);
	f2fs_update_iostat(sbi, NULL, FS_DATA_READ_IO, F2FS_BLKSIZE);
-	__submit_bio(sbi, bio, DATA);
+	f2fs_submit_read_bio(sbi, bio, DATA);
	return 0;
 }
 
@@ -1207,19 +1212,6 @@ int f2fs_reserve_block(struct dnode_of_data *dn, pgoff_t index)
	return err;
 }
 
-int f2fs_get_block(struct dnode_of_data *dn, pgoff_t index)
-{
-	struct extent_info ei = {0, };
-	struct inode *inode = dn->inode;
-
-	if (f2fs_lookup_read_extent_cache(inode, index, &ei)) {
-		dn->data_blkaddr = ei.blk + index - ei.fofs;
-		return 0;
-	}
-
-	return f2fs_reserve_block(dn, index);
-}
-
 struct page *f2fs_get_read_data_page(struct inode *inode, pgoff_t index,
				     blk_opf_t op_flags, bool for_write,
				     pgoff_t *next_pgofs)
@@ -1227,15 +1219,14 @@ struct page *f2fs_get_read_data_page(struct inode *inode, pgoff_t index,
	struct address_space *mapping = inode->i_mapping;
	struct dnode_of_data dn;
	struct page *page;
-	struct extent_info ei = {0, };
	int err;
 
	page = f2fs_grab_cache_page(mapping, index, for_write);
	if (!page)
		return ERR_PTR(-ENOMEM);
 
-	if (f2fs_lookup_read_extent_cache(inode, index, &ei)) {
-		dn.data_blkaddr = ei.blk + index - ei.fofs;
+	if (f2fs_lookup_read_extent_cache_block(inode, index,
+						&dn.data_blkaddr)) {
		if (!f2fs_is_valid_blkaddr(F2FS_I_SB(inode), dn.data_blkaddr,
						DATA_GENERIC_ENHANCE_READ)) {
			err = -EFSCORRUPTED;
@@ -1432,13 +1423,12 @@ static int __allocate_data_block(struct dnode_of_data *dn, int seg_type)
		return err;
 
	dn->data_blkaddr = f2fs_data_blkaddr(dn);
-	if (dn->data_blkaddr != NULL_ADDR)
-		goto alloc;
-
-	if (unlikely((err = inc_valid_block_count(sbi, dn->inode, &count))))
-		return err;
+	if (dn->data_blkaddr == NULL_ADDR) {
+		err = inc_valid_block_count(sbi, dn->inode, &count);
+		if (unlikely(err))
+			return err;
+	}
 
-alloc:
	set_summary(&sum, dn->nid, dn->ofs_in_node, ni.version);
	old_blkaddr = dn->data_blkaddr;
	f2fs_allocate_data_block(sbi, NULL, old_blkaddr, &dn->data_blkaddr,
@@ -1452,19 +1442,91 @@ alloc:
	return 0;
 }
 
-void f2fs_do_map_lock(struct f2fs_sb_info *sbi, int flag, bool lock)
+static void f2fs_map_lock(struct f2fs_sb_info *sbi, int flag)
 {
-	if (flag == F2FS_GET_BLOCK_PRE_AIO) {
-		if (lock)
-			f2fs_down_read(&sbi->node_change);
-		else
-			f2fs_up_read(&sbi->node_change);
+	if (flag == F2FS_GET_BLOCK_PRE_AIO)
+		f2fs_down_read(&sbi->node_change);
+	else
+		f2fs_lock_op(sbi);
+}
+
+static void f2fs_map_unlock(struct f2fs_sb_info *sbi, int flag)
+{
+	if (flag == F2FS_GET_BLOCK_PRE_AIO)
+		f2fs_up_read(&sbi->node_change);
+	else
+		f2fs_unlock_op(sbi);
+}
+
+int f2fs_get_block_locked(struct dnode_of_data *dn, pgoff_t index)
+{
+	struct f2fs_sb_info *sbi = F2FS_I_SB(dn->inode);
+	int err = 0;
+
+	f2fs_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO);
+	if (!f2fs_lookup_read_extent_cache_block(dn->inode, index,
+						&dn->data_blkaddr))
+		err = f2fs_reserve_block(dn, index);
+	f2fs_map_unlock(sbi, F2FS_GET_BLOCK_PRE_AIO);
+
+	return err;
+}
+
+static int f2fs_map_no_dnode(struct inode *inode,
+		struct f2fs_map_blocks *map, struct dnode_of_data *dn,
+		pgoff_t pgoff)
+{
+	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+
+	/*
+	 * There is one exceptional case that read_node_page() may return
+	 * -ENOENT due to filesystem has been shutdown or cp_error, return
+	 * -EIO in that case.
+	 */
+	if (map->m_may_create &&
+	    (is_sbi_flag_set(sbi, SBI_IS_SHUTDOWN) || f2fs_cp_error(sbi)))
+		return -EIO;
+
+	if (map->m_next_pgofs)
+		*map->m_next_pgofs = f2fs_get_next_page_offset(dn, pgoff);
+	if (map->m_next_extent)
+		*map->m_next_extent = f2fs_get_next_page_offset(dn, pgoff);
+	return 0;
+}
+
+static bool f2fs_map_blocks_cached(struct inode *inode,
+		struct f2fs_map_blocks *map, int flag)
+{
+	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+	unsigned int maxblocks = map->m_len;
+	pgoff_t pgoff = (pgoff_t)map->m_lblk;
+	struct extent_info ei = {};
+
+	if (!f2fs_lookup_read_extent_cache(inode, pgoff, &ei))
+		return false;
+
+	map->m_pblk = ei.blk + pgoff - ei.fofs;
+	map->m_len = min((pgoff_t)maxblocks, ei.fofs + ei.len - pgoff);
+	map->m_flags = F2FS_MAP_MAPPED;
+	if (map->m_next_extent)
+		*map->m_next_extent = pgoff + map->m_len;
+
+	/* for hardware encryption, but to avoid potential issue in future */
+	if (flag == F2FS_GET_BLOCK_DIO)
+		f2fs_wait_on_block_writeback_range(inode,
+					map->m_pblk, map->m_len);
+
+	if (f2fs_allow_multi_device_dio(sbi, flag)) {
+		int bidx = f2fs_target_device_index(sbi, map->m_pblk);
+		struct f2fs_dev_info *dev = &sbi->devs[bidx];
+
+		map->m_bdev = dev->bdev;
+		map->m_pblk -= dev->start_blk;
+		map->m_len = min(map->m_len, dev->end_blk + 1 - map->m_pblk);
	} else {
-		if (lock)
-			f2fs_lock_op(sbi);
-		else
-			f2fs_unlock_op(sbi);
+		map->m_bdev = inode->i_sb->s_bdev;
	}
+	return true;
 }
 
 /*
@@ -1472,8 +1534,7 @@ void f2fs_do_map_lock(struct f2fs_sb_info *sbi, int flag, bool lock)
 * maps continuous logical blocks to physical blocks, and return such
 * info via f2fs_map_blocks structure.
 */
-int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
-						int create, int flag)
+int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, int flag)
 {
	unsigned int maxblocks = map->m_len;
	struct dnode_of_data dn;
@@ -1483,14 +1544,17 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
	int err = 0, ofs = 1;
	unsigned int ofs_in_node, last_ofs_in_node;
	blkcnt_t prealloc;
-	struct extent_info ei = {0, };
	block_t blkaddr;
	unsigned int start_pgofs;
	int bidx = 0;
+	bool is_hole;
 
	if (!maxblocks)
		return 0;
 
+	if (!map->m_may_create && f2fs_map_blocks_cached(inode, map, flag))
+		goto out;
+
	map->m_bdev = inode->i_sb->s_bdev;
	map->m_multidev_dio =
		f2fs_allow_multi_device_dio(F2FS_I_SB(inode), flag);
@@ -1502,42 +1566,9 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map,
	pgofs =	(pgoff_t)map->m_lblk;
	end = pgofs + maxblocks;
 
-	if (!create && f2fs_lookup_read_extent_cache(inode, pgofs, &ei)) {
-		if (f2fs_lfs_mode(sbi) && flag == F2FS_GET_BLOCK_DIO &&
-							map->m_may_create)
-			goto next_dnode;
-
-		map->m_pblk = ei.blk + pgofs - ei.fofs;
-		map->m_len = min((pgoff_t)maxblocks, ei.fofs + ei.len - pgofs);
-		map->m_flags = F2FS_MAP_MAPPED;
-		if (map->m_next_extent)
-			*map->m_next_extent = pgofs + map->m_len;
-
-		/* for hardware encryption, but to avoid potential issue in future */
-		if (flag == F2FS_GET_BLOCK_DIO)
-			f2fs_wait_on_block_writeback_range(inode,
-						map->m_pblk, map->m_len);
-
-		if (map->m_multidev_dio) {
-			block_t blk_addr = map->m_pblk;
-
-			bidx = f2fs_target_device_index(sbi, map->m_pblk);
-
-			map->m_bdev = FDEV(bidx).bdev;
-			map->m_pblk -= FDEV(bidx).start_blk;
-			map->m_len = min(map->m_len,
-				FDEV(bidx).end_blk + 1 - map->m_pblk);
-
-			if (map->m_may_create)
-				f2fs_update_device_state(sbi, inode->i_ino,
-							blk_addr, map->m_len);
-		}
-		goto out;
-	}
-
 next_dnode:
	if (map->m_may_create)
-		f2fs_do_map_lock(sbi, flag, true);
+		f2fs_map_lock(sbi, flag);
 
	/* When reading holes, we need its node page */
	set_new_dnode(&dn, inode, NULL, NULL, 0);
@@ -1545,29 +1576,8 @@ next_dnode:
	if (err) {
		if (flag == F2FS_GET_BLOCK_BMAP)
			map->m_pblk = 0;
-
-		if (err == -ENOENT) {
-			/*
-			 * There is one exceptional case that read_node_page()
-			 * may return -ENOENT due to filesystem has been
-			 * shutdown or cp_error, so force to convert error
-			 * number to EIO for such case.
-			 */
-			if (map->m_may_create &&
-				(is_sbi_flag_set(sbi, SBI_IS_SHUTDOWN) ||
-				f2fs_cp_error(sbi))) {
-				err = -EIO;
-				goto unlock_out;
-			}
-
-			err = 0;
-			if (map->m_next_pgofs)
-				*map->m_next_pgofs =
-					f2fs_get_next_page_offset(&dn, pgofs);
-			if (map->m_next_extent)
-				*map->m_next_extent =
-					f2fs_get_next_page_offset(&dn, pgofs);
-		}
+		if (err == -ENOENT)
+			err = f2fs_map_no_dnode(inode, map, &dn, pgofs);
		goto unlock_out;
	}
@@ -1578,78 +1588,76 @@ next_dnode:
 
 next_block:
	blkaddr = f2fs_data_blkaddr(&dn);
-
-	if (__is_valid_data_blkaddr(blkaddr) &&
-		!f2fs_is_valid_blkaddr(sbi, blkaddr, DATA_GENERIC_ENHANCE)) {
+	is_hole = !__is_valid_data_blkaddr(blkaddr);
+	if (!is_hole &&
+	    !f2fs_is_valid_blkaddr(sbi, blkaddr, DATA_GENERIC_ENHANCE)) {
		err = -EFSCORRUPTED;
		f2fs_handle_error(sbi, ERROR_INVALID_BLKADDR);
		goto sync_out;
	}
 
-	if (__is_valid_data_blkaddr(blkaddr)) {
-		/* use out-place-update for driect IO under LFS mode */
-		if (f2fs_lfs_mode(sbi) && flag == F2FS_GET_BLOCK_DIO &&
-							map->m_may_create) {
+	/* use out-place-update for direct IO under LFS mode */
+	if (map->m_may_create &&
+	    (is_hole || (f2fs_lfs_mode(sbi) && flag == F2FS_GET_BLOCK_DIO))) {
+		if (unlikely(f2fs_cp_error(sbi))) {
+			err = -EIO;
+			goto sync_out;
+		}
+
+		switch (flag) {
+		case F2FS_GET_BLOCK_PRE_AIO:
+			if (blkaddr == NULL_ADDR) {
+				prealloc++;
+				last_ofs_in_node = dn.ofs_in_node;
+			}
+			break;
+		case F2FS_GET_BLOCK_PRE_DIO:
+		case F2FS_GET_BLOCK_DIO:
			err = __allocate_data_block(&dn, map->m_seg_type);
			if (err)
				goto sync_out;
-			blkaddr = dn.data_blkaddr;
+			if (flag == F2FS_GET_BLOCK_PRE_DIO)
+				file_need_truncate(inode);
			set_inode_flag(inode, FI_APPEND_WRITE);
+			break;
+		default:
+			WARN_ON_ONCE(1);
+			err = -EIO;
+			goto sync_out;
		}
-	} else {
-		if (create) {
-			if (unlikely(f2fs_cp_error(sbi))) {
-				err = -EIO;
-				goto sync_out;
-			}
-			if (flag == F2FS_GET_BLOCK_PRE_AIO) {
-				if (blkaddr == NULL_ADDR) {
-					prealloc++;
-					last_ofs_in_node = dn.ofs_in_node;
-				}
-			} else {
-				WARN_ON(flag != F2FS_GET_BLOCK_PRE_DIO &&
-					flag != F2FS_GET_BLOCK_DIO);
-				err = __allocate_data_block(&dn,
-							map->m_seg_type);
-				if (!err) {
-					if (flag == F2FS_GET_BLOCK_PRE_DIO)
-						file_need_truncate(inode);
-					set_inode_flag(inode, FI_APPEND_WRITE);
-				}
-			}
-			if (err)
-				goto sync_out;
+
+		blkaddr = dn.data_blkaddr;
+		if (is_hole)
			map->m_flags |= F2FS_MAP_NEW;
-			blkaddr = dn.data_blkaddr;
-		} else {
-			if (f2fs_compressed_file(inode) &&
-					f2fs_sanity_check_cluster(&dn) &&
-				(flag != F2FS_GET_BLOCK_FIEMAP ||
-					IS_ENABLED(CONFIG_F2FS_CHECK_FS))) {
-				err = -EFSCORRUPTED;
-				f2fs_handle_error(sbi,
-						ERROR_CORRUPTED_CLUSTER);
-				goto sync_out;
-			}
-			if (flag == F2FS_GET_BLOCK_BMAP) {
-				map->m_pblk = 0;
-				goto sync_out;
-			}
-			if (flag == F2FS_GET_BLOCK_PRECACHE)
-				goto sync_out;
-			if (flag == F2FS_GET_BLOCK_FIEMAP &&
-						blkaddr == NULL_ADDR) {
-				if (map->m_next_pgofs)
-					*map->m_next_pgofs = pgofs + 1;
-				goto sync_out;
-			}
-			if (flag != F2FS_GET_BLOCK_FIEMAP) {
-				/* for defragment case */
+	} else if (is_hole) {
+		if (f2fs_compressed_file(inode) &&
+		    f2fs_sanity_check_cluster(&dn) &&
+		    (flag != F2FS_GET_BLOCK_FIEMAP ||
+		     IS_ENABLED(CONFIG_F2FS_CHECK_FS))) {
+			err = -EFSCORRUPTED;
+			f2fs_handle_error(sbi,
+					ERROR_CORRUPTED_CLUSTER);
+			goto sync_out;
+		}
+
+		switch (flag) {
+		case F2FS_GET_BLOCK_PRECACHE:
+			goto sync_out;
+		case F2FS_GET_BLOCK_BMAP:
+			map->m_pblk = 0;
+			goto sync_out;
+		case F2FS_GET_BLOCK_FIEMAP:
+			if (blkaddr == NULL_ADDR) {
				if (map->m_next_pgofs)
					*map->m_next_pgofs = pgofs + 1;
				goto sync_out;
			}
+			break;
+		default:
+			/* for defragment case */
+			if (map->m_next_pgofs)
+				*map->m_next_pgofs = pgofs + 1;
+			goto sync_out;
		}
	}
 
@@ -1660,9 +1668,9 @@ next_block:
		bidx = f2fs_target_device_index(sbi, blkaddr);
 
	if (map->m_len == 0) {
-		/* preallocated unwritten block should be mapped for fiemap. */
+		/* reserved delalloc block should be mapped for fiemap. */
		if (blkaddr == NEW_ADDR)
-			map->m_flags |= F2FS_MAP_UNWRITTEN;
+			map->m_flags |= F2FS_MAP_DELALLOC;
		map->m_flags |= F2FS_MAP_MAPPED;
 
		map->m_pblk = blkaddr;
@@ -1721,7 +1729,7 @@ skip:
	f2fs_put_dnode(&dn);
 
	if (map->m_may_create) {
-		f2fs_do_map_lock(sbi, flag, false);
+		f2fs_map_unlock(sbi, flag);
		f2fs_balance_fs(sbi, dn.node_changed);
	}
	goto next_dnode;
@@ -1767,11 +1775,11 @@ sync_out:
	f2fs_put_dnode(&dn);
 unlock_out:
	if (map->m_may_create) {
-		f2fs_do_map_lock(sbi, flag, false);
+		f2fs_map_unlock(sbi, flag);
		f2fs_balance_fs(sbi, dn.node_changed);
	}
 out:
-	trace_f2fs_map_blocks(inode, map, create, flag, err);
+	trace_f2fs_map_blocks(inode, map, flag, err);
	return err;
 }
 
@@ -1793,7 +1801,7 @@ bool f2fs_overwrite_io(struct inode *inode, loff_t pos, size_t len)
 
	while (map.m_lblk < last_lblk) {
		map.m_len = last_lblk - map.m_lblk;
-		err = f2fs_map_blocks(inode, &map, 0, F2FS_GET_BLOCK_DEFAULT);
+		err = f2fs_map_blocks(inode, &map, F2FS_GET_BLOCK_DEFAULT);
		if (err || map.m_len == 0)
			return false;
		map.m_lblk += map.m_len;
@@ -1967,7 +1975,7 @@ next:
		map.m_len = cluster_size - count_in_cluster;
	}
 
-	ret = f2fs_map_blocks(inode, &map, 0, F2FS_GET_BLOCK_FIEMAP);
+	ret = f2fs_map_blocks(inode, &map, F2FS_GET_BLOCK_FIEMAP);
	if (ret)
		goto out;
 
@@ -1984,7 +1992,7 @@ next:
	compr_appended = false;
	/* In a case of compressed cluster, append this to the last extent */
-	if (compr_cluster && ((map.m_flags & F2FS_MAP_UNWRITTEN) ||
+	if (compr_cluster && ((map.m_flags & F2FS_MAP_DELALLOC) ||
			!(map.m_flags & F2FS_MAP_FLAGS))) {
		compr_appended = true;
		goto skip_fill;
@@ -2030,7 +2038,7 @@ skip_fill:
				compr_cluster = false;
				size += blks_to_bytes(inode, 1);
			}
-		} else if (map.m_flags & F2FS_MAP_UNWRITTEN) {
+		} else if (map.m_flags & F2FS_MAP_DELALLOC) {
			flags = FIEMAP_EXTENT_UNWRITTEN;
		}
 
@@ -2099,7 +2107,7 @@ static int f2fs_read_single_page(struct inode *inode, struct page *page,
	map->m_lblk = block_in_file;
	map->m_len = last_block - block_in_file;
 
-	ret = f2fs_map_blocks(inode, map, 0, F2FS_GET_BLOCK_DEFAULT);
+	ret = f2fs_map_blocks(inode, map, F2FS_GET_BLOCK_DEFAULT);
	if (ret)
		goto out;
 got_it:
@@ -2136,7 +2144,7 @@ zero_out:
		       *last_block_in_bio, block_nr) ||
		    !f2fs_crypt_mergeable_bio(bio, inode, page->index, NULL))) {
 submit_and_realloc:
-		__submit_bio(F2FS_I_SB(inode), bio, DATA);
+		f2fs_submit_read_bio(F2FS_I_SB(inode), bio, DATA);
		bio = NULL;
	}
	if (bio == NULL) {
@@ -2283,7 +2291,7 @@ skip_reading_dnode:
				*last_block_in_bio, blkaddr) ||
		    !f2fs_crypt_mergeable_bio(bio, inode, page->index, NULL))) {
 submit_and_realloc:
-			__submit_bio(sbi, bio, DATA);
+			f2fs_submit_read_bio(sbi, bio, DATA);
			bio = NULL;
		}
 
@@ -2377,7 +2385,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
 
 #ifdef CONFIG_F2FS_FS_COMPRESSION
		if (f2fs_compressed_file(inode)) {
-			/* there are remained comressed pages, submit them */
+			/* there are remained compressed pages, submit them */
			if (!f2fs_cluster_can_merge_page(&cc, page->index)) {
				ret = f2fs_read_multi_pages(&cc, &bio,
							max_nr_pages,
@@ -2444,7 +2452,7 @@ next_page:
 #endif
	}
	if (bio)
-		__submit_bio(F2FS_I_SB(inode), bio, DATA);
+		f2fs_submit_read_bio(F2FS_I_SB(inode), bio, DATA);
	return ret;
 }
 
@@ -2530,34 +2538,29 @@ static inline bool check_inplace_update_policy(struct inode *inode,
				struct f2fs_io_info *fio)
 {
	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
-	unsigned int policy = SM_I(sbi)->ipu_policy;
 
-	if (policy & (0x1 << F2FS_IPU_HONOR_OPU_WRITE) &&
-			is_inode_flag_set(inode, FI_OPU_WRITE))
+	if (IS_F2FS_IPU_HONOR_OPU_WRITE(sbi) &&
+	    is_inode_flag_set(inode, FI_OPU_WRITE))
		return false;
-	if (policy & (0x1 << F2FS_IPU_FORCE))
+	if (IS_F2FS_IPU_FORCE(sbi))
		return true;
-	if (policy & (0x1 << F2FS_IPU_SSR) && f2fs_need_SSR(sbi))
+	if (IS_F2FS_IPU_SSR(sbi) && f2fs_need_SSR(sbi))
		return true;
-	if (policy & (0x1 << F2FS_IPU_UTIL) &&
-			utilization(sbi) > SM_I(sbi)->min_ipu_util)
+	if (IS_F2FS_IPU_UTIL(sbi) && utilization(sbi) > SM_I(sbi)->min_ipu_util)
		return true;
-	if (policy & (0x1 << F2FS_IPU_SSR_UTIL) && f2fs_need_SSR(sbi) &&
-			utilization(sbi) > SM_I(sbi)->min_ipu_util)
+	if (IS_F2FS_IPU_SSR_UTIL(sbi) && f2fs_need_SSR(sbi) &&
+	    utilization(sbi) > SM_I(sbi)->min_ipu_util)
		return true;
 
	/*
	 * IPU for rewrite async pages
	 */
-	if (policy & (0x1 << F2FS_IPU_ASYNC) &&
-			fio && fio->op == REQ_OP_WRITE &&
-			!(fio->op_flags & REQ_SYNC) &&
-			!IS_ENCRYPTED(inode))
+	if (IS_F2FS_IPU_ASYNC(sbi) && fio && fio->op == REQ_OP_WRITE &&
+	    !(fio->op_flags & REQ_SYNC) && !IS_ENCRYPTED(inode))
		return true;
 
	/* this is only set during fdatasync */
-	if (policy & (0x1 << F2FS_IPU_FSYNC) &&
-			is_inode_flag_set(inode, FI_NEED_IPU))
+	if (IS_F2FS_IPU_FSYNC(sbi) && is_inode_flag_set(inode, FI_NEED_IPU))
		return true;
 
	if (unlikely(fio && is_sbi_flag_set(sbi, SBI_CP_DISABLED) &&
@@ -2635,7 +2638,6 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
	struct page *page = fio->page;
	struct inode *inode = page->mapping->host;
	struct dnode_of_data dn;
-	struct extent_info ei = {0, };
	struct node_info ni;
	bool ipu_force = false;
	int err = 0;
@@ -2647,9 +2649,8 @@ int f2fs_do_write_data_page(struct f2fs_io_info *fio)
	set_new_dnode(&dn, inode, NULL, NULL, 0);
	if (need_inplace_update(fio) &&
-			f2fs_lookup_read_extent_cache(inode, page->index, &ei)) {
-		fio->old_blkaddr = ei.blk + page->index - ei.fofs;
-
+	    f2fs_lookup_read_extent_cache_block(inode, page->index,
+						&fio->old_blkaddr)) {
		if (!f2fs_is_valid_blkaddr(fio->sbi, fio->old_blkaddr,
						DATA_GENERIC_ENHANCE)) {
			f2fs_handle_error(fio->sbi,
@@ -2699,7 +2700,6 @@ got_it:
		goto out_writepage;
 
	set_page_writeback(page);
-	ClearPageError(page);
	f2fs_put_dnode(&dn);
	if (fio->need_lock == LOCK_REQ)
		f2fs_unlock_op(fio->sbi);
@@ -2735,7 +2735,6 @@ got_it:
		goto out_writepage;
 
	set_page_writeback(page);
-	ClearPageError(page);
 
	if (fio->compr_blocks && fio->old_blkaddr == COMPRESS_ADDR)
		f2fs_i_compr_blocks_update(inode, fio->compr_blocks - 1, false);
@@ -2780,10 +2779,10 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
		.old_blkaddr = NULL_ADDR,
		.page = page,
		.encrypted_page = NULL,
-		.submitted = false,
+		.submitted = 0,
		.compr_blocks = compr_blocks,
		.need_lock = LOCK_RETRY,
-		.post_read = f2fs_post_read_required(inode),
+		.post_read = f2fs_post_read_required(inode) ? 1 : 0,
		.io_type = io_type,
		.io_wbc = wbc,
		.bio = bio,
@@ -2792,7 +2791,7 @@ int f2fs_write_single_data_page(struct page *page, int *submitted,
 
	trace_f2fs_writepage(page, DATA);
 
-	/* we should bypass data pages to proceed the kworkder jobs */
+	/* we should bypass data pages to proceed the kworker jobs */
	if (unlikely(f2fs_cp_error(sbi))) {
		mapping_set_error(page->mapping, -EIO);
		/*
@@ -2904,14 +2903,14 @@ out:
	}
 
	if (submitted)
-		*submitted = fio.submitted ? 1 : 0;
+		*submitted = fio.submitted;
 
	return 0;
 
 redirty_out:
	redirty_page_for_writepage(wbc, page);
	/*
-	 * pageout() in MM traslates EAGAIN, so calls handle_write_error()
+	 * pageout() in MM translates EAGAIN, so calls handle_write_error()
	 * -> mapping_set_error() -> set_bit(AS_EIO, ...).
	 * file_write_and_wait_range() will see EIO error, which is critical
	 * to return value of fsync() followed by atomic_write failure to user.
@@ -2945,7 +2944,7 @@ out:
 }
 
 /*
- * This function was copied from write_cche_pages from mm/page-writeback.c.
+ * This function was copied from write_cache_pages from mm/page-writeback.c.
 * The major change is making write step of cold data page separately from
 * warm/hot data page.
 */
@@ -3354,9 +3353,8 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi,
	struct dnode_of_data dn;
	struct page *ipage;
	bool locked = false;
-	struct extent_info ei = {0, };
+	int flag = F2FS_GET_BLOCK_PRE_AIO;
	int err = 0;
-	int flag;
 
	/*
	 * If a whole page is being written and we already preallocated all the
@@ -3366,14 +3364,13 @@ static int prepare_write_begin(struct f2fs_sb_info *sbi,
		return 0;
 
	/* f2fs_lock_op avoids race between write CP and convert_inline_page */
-	if (f2fs_has_inline_data(inode) && pos + len > MAX_INLINE_DATA(inode))
-		flag = F2FS_GET_BLOCK_DEFAULT;
-	else
-		flag = F2FS_GET_BLOCK_PRE_AIO;
-
-	if (f2fs_has_inline_data(inode) ||
-			(pos & PAGE_MASK) >= i_size_read(inode)) {
-		f2fs_do_map_lock(sbi, flag, true);
+	if (f2fs_has_inline_data(inode)) {
+		if (pos + len > MAX_INLINE_DATA(inode))
+			flag = F2FS_GET_BLOCK_DEFAULT;
+		f2fs_map_lock(sbi, flag);
+		locked = true;
+	} else if ((pos & PAGE_MASK) >= i_size_read(inode)) {
+		f2fs_map_lock(sbi, flag);
		locked = true;
	}
 
@@ -3393,40 +3390,40 @@ restart:
			set_inode_flag(inode, FI_DATA_EXIST);
			if (inode->i_nlink)
				set_page_private_inline(ipage);
-		} else {
-			err = f2fs_convert_inline_page(&dn, page);
-			if (err)
-				goto out;
-			if (dn.data_blkaddr == NULL_ADDR)
-				err = f2fs_get_block(&dn, index);
-		}
-	} else if (locked) {
-		err = f2fs_get_block(&dn, index);
-	} else {
-		if (f2fs_lookup_read_extent_cache(inode, index, &ei)) {
-			dn.data_blkaddr = ei.blk + index - ei.fofs;
-		} else {
-			/* hole case */
-			err = f2fs_get_dnode_of_data(&dn, index, LOOKUP_NODE);
-			if (err || dn.data_blkaddr == NULL_ADDR) {
-				f2fs_put_dnode(&dn);
-				f2fs_do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO,
-								true);
-				WARN_ON(flag != F2FS_GET_BLOCK_PRE_AIO);
-				locked = true;
-				goto restart;
-			}
+			goto out;
		}
+		err = f2fs_convert_inline_page(&dn, page);
+		if (err || dn.data_blkaddr != NULL_ADDR)
+			goto out;
	}
 
-	/* convert_inline_page can make node_changed */
-	*blk_addr = dn.data_blkaddr;
-	*node_changed = dn.node_changed;
+	if (!f2fs_lookup_read_extent_cache_block(inode, index,
+						 &dn.data_blkaddr)) {
+		if (locked) {
+			err = f2fs_reserve_block(&dn, index);
+			goto out;
+		}
+
+		/* hole case */
+		err = f2fs_get_dnode_of_data(&dn, index, LOOKUP_NODE);
+		if (!err && dn.data_blkaddr != NULL_ADDR)
+			goto out;
+		f2fs_put_dnode(&dn);
+		f2fs_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO);
+		WARN_ON(flag != F2FS_GET_BLOCK_PRE_AIO);
+		locked = true;
+		goto restart;
+	}
 out:
+	if (!err) {
+		/* convert_inline_page can make node_changed */
+		*blk_addr = dn.data_blkaddr;
+		*node_changed = dn.node_changed;
+	}
	f2fs_put_dnode(&dn);
 unlock_out:
	if (locked)
-		f2fs_do_map_lock(sbi, flag, false);
+		f2fs_map_unlock(sbi, flag);
	return err;
 }
 
@@ -3435,7 +3432,6 @@ static int __find_data_block(struct inode *inode, pgoff_t index,
 {
	struct dnode_of_data dn;
	struct page *ipage;
-	struct extent_info ei = {0, };
	int err = 0;
 
	ipage = f2fs_get_node_page(F2FS_I_SB(inode), inode->i_ino);
@@ -3444,9 +3440,8 @@ static int __find_data_block(struct inode *inode, pgoff_t index,
 
	set_new_dnode(&dn, inode, ipage, ipage, 0);
 
-	if (f2fs_lookup_read_extent_cache(inode, index, &ei)) {
-		dn.data_blkaddr = ei.blk + index - ei.fofs;
-	} else {
+	if (!f2fs_lookup_read_extent_cache_block(inode, index,
+						 &dn.data_blkaddr)) {
		/* hole case */
		err = f2fs_get_dnode_of_data(&dn, index, LOOKUP_NODE);
		if (err) {
@@ -3467,7 +3462,7 @@ static int __reserve_data_block(struct inode *inode, pgoff_t index,
	struct page *ipage;
	int err = 0;
 
-	f2fs_do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, true);
+	f2fs_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO);
 
	ipage = f2fs_get_node_page(sbi, inode->i_ino);
	if (IS_ERR(ipage)) {
@@ -3476,14 +3471,16 @@ static int __reserve_data_block(struct inode *inode, pgoff_t index,
	}
	set_new_dnode(&dn, inode, ipage, ipage, 0);
 
-	err = f2fs_get_block(&dn, index);
+	if (!f2fs_lookup_read_extent_cache_block(dn.inode, index,
+						&dn.data_blkaddr))
+		err = f2fs_reserve_block(&dn, index);
 
	*blk_addr = dn.data_blkaddr;
	*node_changed = dn.node_changed;
	f2fs_put_dnode(&dn);
 
 unlock_out:
-	f2fs_do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, false);
+	f2fs_map_unlock(sbi, F2FS_GET_BLOCK_PRE_AIO);
	return err;
 }
 
@@ -3729,6 +3726,7 @@ void f2fs_invalidate_folio(struct folio *folio, size_t offset, size_t length)
		}
	}
 
+	clear_page_private_reference(&folio->page);
	clear_page_private_gcing(&folio->page);
 
	if (test_opt(sbi, COMPRESS_CACHE) &&
@@ -3754,6 +3752,7 @@ bool f2fs_release_folio(struct folio *folio, gfp_t wait)
		clear_page_private_data(&folio->page);
	}
 
+	clear_page_private_reference(&folio->page);
	clear_page_private_gcing(&folio->page);
 
	folio_detach_private(folio);
@@ -3835,7 +3834,7 @@ static sector_t f2fs_bmap(struct address_space *mapping, sector_t block)
		map.m_next_pgofs = NULL;
		map.m_seg_type = NO_CHECK_TYPE;
 
-		if (!f2fs_map_blocks(inode, &map, 0, F2FS_GET_BLOCK_BMAP))
+		if (!f2fs_map_blocks(inode, &map, F2FS_GET_BLOCK_BMAP))
			blknr = map.m_pblk;
	}
 out:
@@ -3943,7 +3942,7 @@ retry:
	map.m_seg_type = NO_CHECK_TYPE;
	map.m_may_create = false;
 
-	ret = f2fs_map_blocks(inode, &map, 0, F2FS_GET_BLOCK_FIEMAP);
+	ret = f2fs_map_blocks(inode, &map, F2FS_GET_BLOCK_FIEMAP);
	if (ret)
		goto out;
 
@@ -4168,8 +4167,7 @@ static int f2fs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
	if (flags & IOMAP_WRITE)
		map.m_may_create = true;
 
-	err = f2fs_map_blocks(inode, &map, flags & IOMAP_WRITE,
-			      F2FS_GET_BLOCK_DIO);
+	err = f2fs_map_blocks(inode, &map, F2FS_GET_BLOCK_DIO);
	if (err)
		return err;
 
@@ -4182,20 +4180,24 @@ static int f2fs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
	 */
	map.m_len = fscrypt_limit_io_blocks(inode, map.m_lblk, map.m_len);
 
-	if (map.m_flags & (F2FS_MAP_MAPPED | F2FS_MAP_UNWRITTEN)) {
-		iomap->length = blks_to_bytes(inode, map.m_len);
-		if (map.m_flags & F2FS_MAP_MAPPED) {
-			iomap->type = IOMAP_MAPPED;
-			iomap->flags |= IOMAP_F_MERGED;
-		} else {
-			iomap->type = IOMAP_UNWRITTEN;
-		}
-		if (WARN_ON_ONCE(!__is_valid_data_blkaddr(map.m_pblk)))
-			return -EINVAL;
+	/*
+	 * We should never see delalloc or compressed extents here based on
+	 * prior flushing and checks.
+	 */
+	if (WARN_ON_ONCE(map.m_pblk == NEW_ADDR))
+		return -EINVAL;
+	if (WARN_ON_ONCE(map.m_pblk == COMPRESS_ADDR))
+		return -EINVAL;
 
+	if (map.m_pblk != NULL_ADDR) {
+		iomap->length = blks_to_bytes(inode, map.m_len);
+		iomap->type = IOMAP_MAPPED;
+		iomap->flags |= IOMAP_F_MERGED;
		iomap->bdev = map.m_bdev;
		iomap->addr = blks_to_bytes(inode, map.m_pblk);
	} else {
+		if (flags & IOMAP_WRITE)
+			return -ENOTBLK;
		iomap->length = blks_to_bytes(inode, next_pgofs) -
				iomap->offset;
		iomap->type = IOMAP_HOLE;
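Three hunks above change wbc_account_cgroup_owner() to charge fio->page rather than the page actually placed in the bio, which is the cgroup writeback accounting fix for fs-layer encryption: the fscrypt bounce page carries no cgroup ownership. A stubbed-down sketch of the idea (every type and helper here is a stand-in, not the kernel API):

#include <stdio.h>

/* Stand-in types/helpers only: with fs-layer encryption the bio holds a
 * bounce page with no cgroup owner, so writeback bytes must be charged
 * against the original pagecache page, as the fio->page switch does. */
struct page { const char *who; };

static void bio_add_stub(const struct page *p)     { printf("bio gets: %s\n", p->who); }
static void wbc_account_stub(const struct page *p) { printf("cgroup charged: %s\n", p->who); }

int main(void)
{
    struct page original = { "original pagecache page" };
    struct page bounce   = { "fscrypt bounce page" };
    const struct page *bio_page = &bounce;  /* what the device actually writes */

    bio_add_stub(bio_page);
    wbc_account_stub(&original);            /* not bio_page! */
    return 0;
}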
"Error" : "Good")); + if (sbi->s_flag) { seq_puts(s, "[SBI:"); - for_each_set_bit(j, &si->sbi->s_flag, 32) + for_each_set_bit(j, &sbi->s_flag, 32) seq_puts(s, s_flag[j]); seq_puts(s, "]\n"); } @@ -383,8 +396,21 @@ static int stat_show(struct seq_file *s, void *v) si->overp_segs, si->rsvd_segs); seq_printf(s, "Current Time Sec: %llu / Mounted Time Sec: %llu\n\n", ktime_get_boottime_seconds(), - SIT_I(si->sbi)->mounted_time); - if (test_opt(si->sbi, DISCARD)) + SIT_I(sbi)->mounted_time); + + seq_puts(s, "Policy:\n"); + seq_puts(s, " - IPU: ["); + if (IS_F2FS_IPU_DISABLE(sbi)) { + seq_puts(s, " DISABLE"); + } else { + unsigned long policy = SM_I(sbi)->ipu_policy; + + for_each_set_bit(j, &policy, F2FS_IPU_MAX) + seq_printf(s, " %s", ipu_mode_names[j]); + } + seq_puts(s, " ]\n\n"); + + if (test_opt(sbi, DISCARD)) seq_printf(s, "Utilization: %u%% (%u valid blocks, %u discard blocks)\n", si->utilization, si->valid_count, si->discard_blks); else @@ -491,15 +517,15 @@ static int stat_show(struct seq_file *s, void *v) seq_printf(s, " - node segments : %d (%d)\n", si->node_segs, si->bg_node_segs); seq_puts(s, " - Reclaimed segs :\n"); - seq_printf(s, " - Normal : %d\n", si->sbi->gc_reclaimed_segs[GC_NORMAL]); - seq_printf(s, " - Idle CB : %d\n", si->sbi->gc_reclaimed_segs[GC_IDLE_CB]); + seq_printf(s, " - Normal : %d\n", sbi->gc_reclaimed_segs[GC_NORMAL]); + seq_printf(s, " - Idle CB : %d\n", sbi->gc_reclaimed_segs[GC_IDLE_CB]); seq_printf(s, " - Idle Greedy : %d\n", - si->sbi->gc_reclaimed_segs[GC_IDLE_GREEDY]); - seq_printf(s, " - Idle AT : %d\n", si->sbi->gc_reclaimed_segs[GC_IDLE_AT]); + sbi->gc_reclaimed_segs[GC_IDLE_GREEDY]); + seq_printf(s, " - Idle AT : %d\n", sbi->gc_reclaimed_segs[GC_IDLE_AT]); seq_printf(s, " - Urgent High : %d\n", - si->sbi->gc_reclaimed_segs[GC_URGENT_HIGH]); - seq_printf(s, " - Urgent Mid : %d\n", si->sbi->gc_reclaimed_segs[GC_URGENT_MID]); - seq_printf(s, " - Urgent Low : %d\n", si->sbi->gc_reclaimed_segs[GC_URGENT_LOW]); + sbi->gc_reclaimed_segs[GC_URGENT_HIGH]); + seq_printf(s, " - Urgent Mid : %d\n", sbi->gc_reclaimed_segs[GC_URGENT_MID]); + seq_printf(s, " - Urgent Low : %d\n", sbi->gc_reclaimed_segs[GC_URGENT_LOW]); seq_printf(s, "Try to move %d blocks (BG: %d)\n", si->tot_blks, si->bg_data_blks + si->bg_node_blks); seq_printf(s, " - data blocks : %d (%d)\n", si->data_blks, @@ -565,7 +591,7 @@ static int stat_show(struct seq_file *s, void *v) si->ndirty_imeta); seq_printf(s, " - fsync mark: %4lld\n", percpu_counter_sum_positive( - &si->sbi->rf_node_block_count)); + &sbi->rf_node_block_count)); seq_printf(s, " - NATs: %9d/%9d\n - SITs: %9d/%9d\n", si->dirty_nats, si->nats, si->dirty_sits, si->sits); seq_printf(s, " - free_nids: %9d/%9d\n - alloc_nids: %9d\n", @@ -592,12 +618,12 @@ static int stat_show(struct seq_file *s, void *v) si->block_count[LFS], si->segment_count[LFS]); /* segment usage info */ - f2fs_update_sit_info(si->sbi); + f2fs_update_sit_info(sbi); seq_printf(s, "\nBDF: %u, avg. 
vblocks: %u\n", si->bimodal, si->avg_vblocks); /* memory footprint */ - update_mem_info(si->sbi); + update_mem_info(sbi); seq_printf(s, "\nMemory: %llu KB\n", (si->base_mem + si->cache_mem + si->page_mem) >> 10); seq_printf(s, " - static: %llu KB\n", diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c index 8e025157f35c..9ccdbe120425 100644 --- a/fs/f2fs/dir.c +++ b/fs/f2fs/dir.c @@ -732,10 +732,8 @@ int f2fs_add_regular_entry(struct inode *dir, const struct f2fs_filename *fname, } start: - if (time_to_inject(F2FS_I_SB(dir), FAULT_DIR_DEPTH)) { - f2fs_show_injection_info(F2FS_I_SB(dir), FAULT_DIR_DEPTH); + if (time_to_inject(F2FS_I_SB(dir), FAULT_DIR_DEPTH)) return -ENOSPC; - } if (unlikely(current_depth == MAX_DIR_HASH_DEPTH)) return -ENOSPC; diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c index 342af24b2f8c..28b12553f2b3 100644 --- a/fs/f2fs/extent_cache.c +++ b/fs/f2fs/extent_cache.c @@ -19,6 +19,31 @@ #include "node.h" #include <trace/events/f2fs.h> +bool sanity_check_extent_cache(struct inode *inode) +{ + struct f2fs_sb_info *sbi = F2FS_I_SB(inode); + struct f2fs_inode_info *fi = F2FS_I(inode); + struct extent_info *ei; + + if (!fi->extent_tree[EX_READ]) + return true; + + ei = &fi->extent_tree[EX_READ]->largest; + + if (ei->len && + (!f2fs_is_valid_blkaddr(sbi, ei->blk, + DATA_GENERIC_ENHANCE) || + !f2fs_is_valid_blkaddr(sbi, ei->blk + ei->len - 1, + DATA_GENERIC_ENHANCE))) { + set_sbi_flag(sbi, SBI_NEED_FSCK); + f2fs_warn(sbi, "%s: inode (ino=%lx) extent info [%u, %u, %u] is incorrect, run fsck to fix", + __func__, inode->i_ino, + ei->blk, ei->fofs, ei->len); + return false; + } + return true; +} + static void __set_extent_info(struct extent_info *ei, unsigned int fofs, unsigned int len, block_t blk, bool keep_clen, @@ -233,7 +258,7 @@ struct rb_node **f2fs_lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi, * @prev_ex: extent before ofs * @next_ex: extent after ofs * @insert_p: insert point for new extent at ofs - * in order to simpfy the insertion after. + * in order to simplify the insertion after. * tree must stay unchanged between lookup and insertion. */ struct rb_entry *f2fs_lookup_rb_tree_ret(struct rb_root_cached *root, @@ -718,7 +743,7 @@ static void __update_extent_tree_range(struct inode *inode, if (!en) en = next_en; - /* 2. invlidate all extent nodes in range [fofs, fofs + len - 1] */ + /* 2. invalidate all extent nodes in range [fofs, fofs + len - 1] */ while (en && en->ei.fofs < end) { unsigned int org_end; int parts = 0; /* # of parts current extent split into */ @@ -871,14 +896,23 @@ unlock_out: } #endif -static unsigned long long __calculate_block_age(unsigned long long new, +static unsigned long long __calculate_block_age(struct f2fs_sb_info *sbi, + unsigned long long new, unsigned long long old) { - unsigned long long diff; + unsigned int rem_old, rem_new; + unsigned long long res; + unsigned int weight = sbi->last_age_weight; + + res = div_u64_rem(new, 100, &rem_new) * (100 - weight) + + div_u64_rem(old, 100, &rem_old) * weight; - diff = (new >= old) ? 
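The new debugfs output above renders the IPU policy bitmask through ipu_mode_names[] and for_each_set_bit(). A userspace equivalent of that decoding loop (bit positions follow the sysfs table earlier in this diff; the empty mask is the DISABLE case, which the patch prints specially):

#include <stdio.h>

/* Userspace analogue of the debugfs decoding: walk the set bits of the
 * ipu_policy mask and print one name per bit (0x01 FORCE ... 0x80
 * HONOR_OPU_WRITE, matching the sysfs documentation above). */
static const char * const ipu_mode_names[] = {
    "FORCE", "SSR", "UTIL", "SSR_UTIL",
    "FSYNC", "ASYNC", "NOCACHE", "HONOR_OPU_WRITE",
};

int main(void)
{
    unsigned long policy = 0x02 | 0x04;     /* SSR | UTIL */
    unsigned int j;

    printf("  - IPU: [");
    if (!policy) {
        printf(" DISABLE");
    } else {
        for (j = 0; j < sizeof(ipu_mode_names) / sizeof(ipu_mode_names[0]); j++)
            if (policy & (1UL << j))
                printf(" %s", ipu_mode_names[j]);
    }
    printf(" ]\n");
    return 0;
}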
diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
index 8e025157f35c..9ccdbe120425 100644
--- a/fs/f2fs/dir.c
+++ b/fs/f2fs/dir.c
@@ -732,10 +732,8 @@ int f2fs_add_regular_entry(struct inode *dir, const struct f2fs_filename *fname,
	}
 
 start:
-	if (time_to_inject(F2FS_I_SB(dir), FAULT_DIR_DEPTH)) {
-		f2fs_show_injection_info(F2FS_I_SB(dir), FAULT_DIR_DEPTH);
+	if (time_to_inject(F2FS_I_SB(dir), FAULT_DIR_DEPTH))
		return -ENOSPC;
-	}
 
	if (unlikely(current_depth == MAX_DIR_HASH_DEPTH))
		return -ENOSPC;
diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
index 342af24b2f8c..28b12553f2b3 100644
--- a/fs/f2fs/extent_cache.c
+++ b/fs/f2fs/extent_cache.c
@@ -19,6 +19,31 @@
 #include "node.h"
 #include <trace/events/f2fs.h>
 
+bool sanity_check_extent_cache(struct inode *inode)
+{
+	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+	struct f2fs_inode_info *fi = F2FS_I(inode);
+	struct extent_info *ei;
+
+	if (!fi->extent_tree[EX_READ])
+		return true;
+
+	ei = &fi->extent_tree[EX_READ]->largest;
+
+	if (ei->len &&
+	    (!f2fs_is_valid_blkaddr(sbi, ei->blk,
+				    DATA_GENERIC_ENHANCE) ||
+	     !f2fs_is_valid_blkaddr(sbi, ei->blk + ei->len - 1,
+				    DATA_GENERIC_ENHANCE))) {
+		set_sbi_flag(sbi, SBI_NEED_FSCK);
+		f2fs_warn(sbi, "%s: inode (ino=%lx) extent info [%u, %u, %u] is incorrect, run fsck to fix",
+			  __func__, inode->i_ino,
+			  ei->blk, ei->fofs, ei->len);
+		return false;
+	}
+	return true;
+}
+
 static void __set_extent_info(struct extent_info *ei,
				unsigned int fofs, unsigned int len,
				block_t blk, bool keep_clen,
@@ -233,7 +258,7 @@ struct rb_node **f2fs_lookup_rb_tree_for_insert(struct f2fs_sb_info *sbi,
 * @prev_ex: extent before ofs
 * @next_ex: extent after ofs
 * @insert_p: insert point for new extent at ofs
- * in order to simpfy the insertion after.
+ * in order to simplify the insertion after.
 * tree must stay unchanged between lookup and insertion.
 */
 struct rb_entry *f2fs_lookup_rb_tree_ret(struct rb_root_cached *root,
@@ -718,7 +743,7 @@ static void __update_extent_tree_range(struct inode *inode,
	if (!en)
		en = next_en;
 
-	/* 2. invlidate all extent nodes in range [fofs, fofs + len - 1] */
+	/* 2. invalidate all extent nodes in range [fofs, fofs + len - 1] */
	while (en && en->ei.fofs < end) {
		unsigned int org_end;
		int parts = 0;	/* # of parts current extent split into */
@@ -871,14 +896,23 @@ unlock_out:
 }
 #endif
 
-static unsigned long long __calculate_block_age(unsigned long long new,
+static unsigned long long __calculate_block_age(struct f2fs_sb_info *sbi,
+						unsigned long long new,
						unsigned long long old)
 {
-	unsigned long long diff;
+	unsigned int rem_old, rem_new;
+	unsigned long long res;
+	unsigned int weight = sbi->last_age_weight;
+
+	res = div_u64_rem(new, 100, &rem_new) * (100 - weight) +
+		div_u64_rem(old, 100, &rem_old) * weight;
 
-	diff = (new >= old) ? new - (new - old) : new + (old - new);
+	if (rem_new)
+		res += rem_new * (100 - weight) / 100;
+	if (rem_old)
+		res += rem_old * weight / 100;
 
-	return div_u64(diff * LAST_AGE_WEIGHT, 100);
+	return res;
 }
 
 /* This returns a new age and allocated blocks in ei */
@@ -910,7 +944,7 @@ static int __get_new_block_age(struct inode *inode, struct extent_info *ei,
		cur_age = ULLONG_MAX - tei.last_blocks + cur_blocks;
 
	if (tei.age)
-		ei->age = __calculate_block_age(cur_age, tei.age);
+		ei->age = __calculate_block_age(sbi, cur_age, tei.age);
	else
		ei->age = cur_age;
	ei->last_blocks = cur_blocks;
@@ -1047,6 +1081,17 @@ bool f2fs_lookup_read_extent_cache(struct inode *inode, pgoff_t pgofs,
	return __lookup_extent_tree(inode, pgofs, ei, EX_READ);
 }
 
+bool f2fs_lookup_read_extent_cache_block(struct inode *inode, pgoff_t index,
+				block_t *blkaddr)
+{
+	struct extent_info ei = {};
+
+	if (!f2fs_lookup_read_extent_cache(inode, index, &ei))
+		return false;
+	*blkaddr = ei.blk + index - ei.fofs;
+	return true;
+}
+
 void f2fs_update_read_extent_cache(struct dnode_of_data *dn)
 {
	return __update_extent_cache(dn, EX_READ);
@@ -1226,6 +1271,7 @@ void f2fs_init_extent_cache_info(struct f2fs_sb_info *sbi)
	atomic64_set(&sbi->allocated_data_blocks, 0);
	sbi->hot_data_age_threshold = DEF_HOT_DATA_AGE_THRESHOLD;
	sbi->warm_data_age_threshold = DEF_WARM_DATA_AGE_THRESHOLD;
+	sbi->last_age_weight = LAST_AGE_WEIGHT;
 }
 
 int __init f2fs_create_extent_cache(void)
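The rewritten __calculate_block_age() above blends the new and stored ages as a percentage-weighted average, splitting each operand into quotient and remainder by 100 first so that multiplying the full 64-bit value by the weight cannot overflow. The same arithmetic in standalone form (the weight 30 is only a demo choice; it is now tunable through the last_age_weight sysfs node added by this series):

#include <inttypes.h>
#include <stdio.h>

/* Userspace rendering of the patched weighted average: divide by 100
 * before multiplying by the percentage weight, then fold the remainders
 * back in, which avoids 64-bit overflow for very large block ages. */
static uint64_t calc_block_age(uint64_t new, uint64_t old, unsigned int weight)
{
    unsigned int rem_new = new % 100, rem_old = old % 100;
    uint64_t res;

    res = (new / 100) * (100 - weight) + (old / 100) * weight;
    if (rem_new)
        res += rem_new * (100 - weight) / 100;
    if (rem_old)
        res += rem_old * weight / 100;
    return res;
}

int main(void)
{
    /* blend a fresh age with the stored one: 70% new, 30% old */
    printf("%" PRIu64 "\n", calc_block_age(1000, 2000, 30)); /* prints 1300 */
    return 0;
}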
diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
index 9a3ffa39ad30..b0ab2062038a 100644
--- a/fs/f2fs/f2fs.h
+++ b/fs/f2fs/f2fs.h
@@ -402,7 +402,6 @@ struct discard_cmd_control {
 	struct list_head wait_list;		/* store on-flushing entries */
 	struct list_head fstrim_list;		/* in-flight discard from fstrim */
 	wait_queue_head_t discard_wait_queue;	/* waiting queue for wake-up */
-	unsigned int discard_wake;		/* to wake up discard thread */
 	struct mutex cmd_lock;
 	unsigned int nr_discards;		/* # of discards in the list */
 	unsigned int max_discards;		/* max. discards to be issued */
@@ -410,6 +409,7 @@ struct discard_cmd_control {
 	unsigned int min_discard_issue_time;	/* min. interval between discard issue */
 	unsigned int mid_discard_issue_time;	/* mid. interval between discard issue */
 	unsigned int max_discard_issue_time;	/* max. interval between discard issue */
+	unsigned int discard_io_aware_gran; /* minimum discard granularity not be aware of I/O */
 	unsigned int discard_urgent_util;	/* utilization which issue discard proactively */
 	unsigned int discard_granularity;	/* discard granularity */
 	unsigned int max_ordered_discard;	/* maximum discard granularity issued by lba order */
@@ -420,6 +420,7 @@ struct discard_cmd_control {
 	atomic_t discard_cmd_cnt;		/* # of cached cmd count */
 	struct rb_root_cached root;		/* root of discard rb-tree */
 	bool rbtree_check;			/* config for consistence check */
+	bool discard_wake;			/* to wake up discard thread */
 };
 
 /* for the list of fsync inodes, used only during recovery */
@@ -692,15 +693,13 @@ struct extent_tree_info {
 };
 
 /*
- * This structure is taken from ext4_map_blocks.
- *
- * Note that, however, f2fs uses NEW and MAPPED flags for f2fs_map_blocks().
+ * State of block returned by f2fs_map_blocks.
  */
-#define F2FS_MAP_NEW		(1 << BH_New)
-#define F2FS_MAP_MAPPED		(1 << BH_Mapped)
-#define F2FS_MAP_UNWRITTEN	(1 << BH_Unwritten)
+#define F2FS_MAP_NEW		(1U << 0)
+#define F2FS_MAP_MAPPED		(1U << 1)
+#define F2FS_MAP_DELALLOC	(1U << 2)
 #define F2FS_MAP_FLAGS		(F2FS_MAP_NEW | F2FS_MAP_MAPPED |\
-				F2FS_MAP_UNWRITTEN)
+				F2FS_MAP_DELALLOC)
 
 struct f2fs_map_blocks {
 	struct block_device *m_bdev;	/* for multi-device dio */
@@ -870,7 +869,7 @@ struct f2fs_inode_info {
 	unsigned char i_compress_algorithm;	/* algorithm type */
 	unsigned char i_log_cluster_size;	/* log of cluster size */
 	unsigned char i_compress_level;		/* compress level (lz4hc,zstd) */
-	unsigned short i_compress_flag;		/* compress flag */
+	unsigned char i_compress_flag;		/* compress flag */
 	unsigned int i_cluster_size;		/* cluster size */
 
 	unsigned int atomic_write_cnt;
@@ -1193,7 +1192,8 @@ enum iostat_type {
 	FS_META_READ_IO,		/* meta read IOs */
 
 	/* other */
-	FS_DISCARD,			/* discard */
+	FS_DISCARD_IO,			/* discard */
+	FS_FLUSH_IO,			/* flush */
 	NR_IO_TYPE,
 };
 
@@ -1210,19 +1210,19 @@ struct f2fs_io_info {
 	struct page *encrypted_page;	/* encrypted page */
 	struct page *compressed_page;	/* compressed page */
 	struct list_head list;		/* serialize IOs */
-	bool submitted;		/* indicate IO submission */
-	int need_lock;		/* indicate we need to lock cp_rwsem */
-	bool in_list;		/* indicate fio is in io_list */
-	bool is_por;		/* indicate IO is from recovery or not */
-	bool retry;		/* need to reallocate block address */
-	int compr_blocks;	/* # of compressed block addresses */
-	bool encrypted;		/* indicate file is encrypted */
-	bool post_read;		/* require post read */
+	unsigned int compr_blocks;	/* # of compressed block addresses */
+	unsigned int need_lock:8;	/* indicate we need to lock cp_rwsem */
+	unsigned int version:8;		/* version of the node */
+	unsigned int submitted:1;	/* indicate IO submission */
+	unsigned int in_list:1;		/* indicate fio is in io_list */
+	unsigned int is_por:1;		/* indicate IO is from recovery or not */
+	unsigned int retry:1;		/* need to reallocate block address */
+	unsigned int encrypted:1;	/* indicate file is encrypted */
+	unsigned int post_read:1;	/* require post read */
 	enum iostat_type io_type;	/* io type */
 	struct writeback_control *io_wbc; /* writeback control */
 	struct bio **bio;		/* bio for ipu */
 	sector_t *last_block;		/* last block number in bio */
-	unsigned char version;		/* version of the node */
 };
 
 struct bio_entry {
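This hunk is the "reduce stack memory cost by using bitfield in struct f2fs_io_info" change from the pull summary: several bools and small integers collapse into one word holding two 8-bit fields plus six 1-bit flags. A small, ABI-dependent comparison sketch (flags_old and flags_new are made-up names; exact sizes vary with compiler padding, so the printed numbers are indicative only):

/* Rough illustration of the bitfield repacking; treat the output as
 * ABI-dependent rather than a guaranteed saving. */
#include <stdbool.h>
#include <stdio.h>

struct flags_old {		/* one bool / int per flag, as before */
	bool submitted;
	int need_lock;
	bool in_list;
	bool is_por;
	bool retry;
	bool encrypted;
	bool post_read;
	unsigned char version;
};

struct flags_new {		/* everything packed into one word */
	unsigned int need_lock:8;
	unsigned int version:8;
	unsigned int submitted:1;
	unsigned int in_list:1;
	unsigned int is_por:1;
	unsigned int retry:1;
	unsigned int encrypted:1;
	unsigned int post_read:1;
};

int main(void)
{
	/* On a typical LP64 ABI this prints 16 vs 4 bytes. */
	printf("old: %zu bytes, new: %zu bytes\n",
	       sizeof(struct flags_old), sizeof(struct flags_new));
	return 0;
}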
@@ -1384,8 +1384,6 @@ enum {
 	MEMORY_MODE_LOW,	/* memory mode for low memry devices */
 };
 
-
-
 static inline int f2fs_test_bit(unsigned int nr, char *addr);
 static inline void f2fs_set_bit(unsigned int nr, char *addr);
 static inline void f2fs_clear_bit(unsigned int nr, char *addr);
@@ -1396,19 +1394,17 @@ static inline void f2fs_clear_bit(unsigned int nr, char *addr);
  * Layout A: lowest bit should be 1
  * | bit0 = 1 | bit1 | bit2 | ... | bit MAX | private data .... |
  * bit 0	PAGE_PRIVATE_NOT_POINTER
- * bit 1	PAGE_PRIVATE_ATOMIC_WRITE
- * bit 2	PAGE_PRIVATE_DUMMY_WRITE
- * bit 3	PAGE_PRIVATE_ONGOING_MIGRATION
- * bit 4	PAGE_PRIVATE_INLINE_INODE
- * bit 5	PAGE_PRIVATE_REF_RESOURCE
- * bit 6-	f2fs private data
+ * bit 1	PAGE_PRIVATE_DUMMY_WRITE
+ * bit 2	PAGE_PRIVATE_ONGOING_MIGRATION
+ * bit 3	PAGE_PRIVATE_INLINE_INODE
+ * bit 4	PAGE_PRIVATE_REF_RESOURCE
+ * bit 5-	f2fs private data
  *
  * Layout B: lowest bit should be 0
  * page.private is a wrapped pointer.
  */
 enum {
 	PAGE_PRIVATE_NOT_POINTER,		/* private contains non-pointer data */
-	PAGE_PRIVATE_ATOMIC_WRITE,		/* data page from atomic write path */
 	PAGE_PRIVATE_DUMMY_WRITE,		/* data page for padding aligned IO */
 	PAGE_PRIVATE_ONGOING_MIGRATION,		/* data page which is on-going migrating */
 	PAGE_PRIVATE_INLINE_INODE,		/* inode page contains inline data */
@@ -1450,22 +1446,18 @@ static inline void clear_page_private_##name(struct page *page) \
 }
 
 PAGE_PRIVATE_GET_FUNC(nonpointer, NOT_POINTER);
-PAGE_PRIVATE_GET_FUNC(reference, REF_RESOURCE);
 PAGE_PRIVATE_GET_FUNC(inline, INLINE_INODE);
 PAGE_PRIVATE_GET_FUNC(gcing, ONGOING_MIGRATION);
-PAGE_PRIVATE_GET_FUNC(atomic, ATOMIC_WRITE);
 PAGE_PRIVATE_GET_FUNC(dummy, DUMMY_WRITE);
 
 PAGE_PRIVATE_SET_FUNC(reference, REF_RESOURCE);
 PAGE_PRIVATE_SET_FUNC(inline, INLINE_INODE);
 PAGE_PRIVATE_SET_FUNC(gcing, ONGOING_MIGRATION);
-PAGE_PRIVATE_SET_FUNC(atomic, ATOMIC_WRITE);
 PAGE_PRIVATE_SET_FUNC(dummy, DUMMY_WRITE);
 
 PAGE_PRIVATE_CLEAR_FUNC(reference, REF_RESOURCE);
 PAGE_PRIVATE_CLEAR_FUNC(inline, INLINE_INODE);
 PAGE_PRIVATE_CLEAR_FUNC(gcing, ONGOING_MIGRATION);
-PAGE_PRIVATE_CLEAR_FUNC(atomic, ATOMIC_WRITE);
 PAGE_PRIVATE_CLEAR_FUNC(dummy, DUMMY_WRITE);
 
 static inline unsigned long get_page_private_data(struct page *page)
@@ -1679,6 +1671,7 @@ struct f2fs_sb_info {
 	/* The threshold used for hot and warm data seperation*/
 	unsigned int hot_data_age_threshold;
 	unsigned int warm_data_age_threshold;
+	unsigned int last_age_weight;
 
 	/* basic filesystem units */
 	unsigned int log_sectors_per_block;	/* log2 sectors per block */
@@ -1864,8 +1857,9 @@ struct f2fs_sb_info {
 #ifdef CONFIG_F2FS_IOSTAT
 	/* For app/fs IO statistics */
 	spinlock_t iostat_lock;
-	unsigned long long rw_iostat[NR_IO_TYPE];
-	unsigned long long prev_rw_iostat[NR_IO_TYPE];
+	unsigned long long iostat_count[NR_IO_TYPE];
+	unsigned long long iostat_bytes[NR_IO_TYPE];
+	unsigned long long prev_iostat_bytes[NR_IO_TYPE];
 	bool iostat_enable;
 	unsigned long iostat_next_period;
 	unsigned int iostat_period_ms;
@@ -1877,12 +1871,10 @@ struct f2fs_sb_info {
 };
 
 #ifdef CONFIG_F2FS_FAULT_INJECTION
-#define f2fs_show_injection_info(sbi, type)				\
-	printk_ratelimited("%sF2FS-fs (%s) : inject %s in %s of %pS\n",	\
-		KERN_INFO, sbi->sb->s_id,				\
-		f2fs_fault_name[type],					\
-		__func__, __builtin_return_address(0))
-static inline bool time_to_inject(struct f2fs_sb_info *sbi, int type)
+#define time_to_inject(sbi, type) __time_to_inject(sbi, type, __func__,	\
+					__builtin_return_address(0))
+static inline bool __time_to_inject(struct f2fs_sb_info *sbi, int type,
+				const char *func, const char *parent_func)
 {
 	struct f2fs_fault_info *ffi = &F2FS_OPTION(sbi).fault_info;
 
@@ -1895,12 +1887,14 @@ static inline bool time_to_inject(struct f2fs_sb_info *sbi, int type)
 	atomic_inc(&ffi->inject_ops);
 	if (atomic_read(&ffi->inject_ops) >= ffi->inject_rate) {
 		atomic_set(&ffi->inject_ops, 0);
+		printk_ratelimited("%sF2FS-fs (%s) : inject %s in %s of %pS\n",
+			KERN_INFO, sbi->sb->s_id, f2fs_fault_name[type],
+			func, parent_func);
 		return true;
 	}
 	return false;
 }
 #else
-#define f2fs_show_injection_info(sbi, type) do { } while (0)
 static inline bool time_to_inject(struct f2fs_sb_info *sbi, int type)
 {
 	return false;
@@ -2233,10 +2227,8 @@ static inline void f2fs_lock_op(struct f2fs_sb_info *sbi)
 
 static inline int f2fs_trylock_op(struct f2fs_sb_info *sbi)
 {
-	if (time_to_inject(sbi, FAULT_LOCK_OP)) {
-		f2fs_show_injection_info(sbi, FAULT_LOCK_OP);
+	if (time_to_inject(sbi, FAULT_LOCK_OP))
 		return 0;
-	}
 	return f2fs_down_read_trylock(&sbi->cp_rwsem);
 }
 
@@ -2324,7 +2316,6 @@ static inline int inc_valid_block_count(struct f2fs_sb_info *sbi,
 		return ret;
 
 	if (time_to_inject(sbi, FAULT_BLOCK)) {
-		f2fs_show_injection_info(sbi, FAULT_BLOCK);
 		release = *count;
 		goto release_quota;
 	}
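The fault-injection rework visible here and at every call site below replaces the open-coded time_to_inject() + f2fs_show_injection_info() pair with a single macro that forwards __func__ (and, in the kernel, __builtin_return_address(0)) into __time_to_inject(), so the log line is emitted only when an injection actually fires. A userspace sketch of the same macro pattern (toy rate logic and names; the kernel helper additionally checks per-type enablement):

/* Call-site-capturing macro: the wrapper records *where* the fault
 * was requested without each caller having to log it. */
#include <stdbool.h>
#include <stdio.h>

static int inject_count;

#define time_to_inject(type) __time_to_inject(type, __func__)

static bool __time_to_inject(int type, const char *func)
{
	if (++inject_count % 3 != 0)	/* toy rate: every 3rd call fires */
		return false;
	/* Logging lives in the one place that knows the fault fired. */
	printf("inject fault %d in %s\n", type, func);
	return true;
}

static void *toy_kmalloc(void)
{
	if (time_to_inject(1))	/* call sites shrink to a single line */
		return NULL;
	return "data";
}

int main(void)
{
	for (int i = 0; i < 4; i++)
		printf("alloc %d: %s\n", i, toy_kmalloc() ? "ok" : "failed");
	return 0;
}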
@@ -2604,10 +2595,8 @@ static inline int inc_valid_node_count(struct f2fs_sb_info *sbi,
 			return err;
 	}
 
-	if (time_to_inject(sbi, FAULT_BLOCK)) {
-		f2fs_show_injection_info(sbi, FAULT_BLOCK);
+	if (time_to_inject(sbi, FAULT_BLOCK))
 		goto enospc;
-	}
 
 	spin_lock(&sbi->stat_lock);
@@ -2731,11 +2720,8 @@ static inline struct page *f2fs_grab_cache_page(struct address_space *mapping,
 		if (page)
 			return page;
 
-		if (time_to_inject(F2FS_M_SB(mapping), FAULT_PAGE_ALLOC)) {
-			f2fs_show_injection_info(F2FS_M_SB(mapping),
-							FAULT_PAGE_ALLOC);
+		if (time_to_inject(F2FS_M_SB(mapping), FAULT_PAGE_ALLOC))
 			return NULL;
-		}
 	}
 
 	if (!for_write)
@@ -2752,10 +2738,8 @@ static inline struct page *f2fs_pagecache_get_page(
 				struct address_space *mapping, pgoff_t index,
 				int fgp_flags, gfp_t gfp_mask)
 {
-	if (time_to_inject(F2FS_M_SB(mapping), FAULT_PAGE_GET)) {
-		f2fs_show_injection_info(F2FS_M_SB(mapping), FAULT_PAGE_GET);
+	if (time_to_inject(F2FS_M_SB(mapping), FAULT_PAGE_GET))
 		return NULL;
-	}
 
 	return pagecache_get_page(mapping, index, fgp_flags, gfp_mask);
 }
@@ -2805,10 +2789,8 @@ static inline void *f2fs_kmem_cache_alloc(struct kmem_cache *cachep,
 	if (nofail)
 		return f2fs_kmem_cache_alloc_nofail(cachep, flags);
 
-	if (time_to_inject(sbi, FAULT_SLAB_ALLOC)) {
-		f2fs_show_injection_info(sbi, FAULT_SLAB_ALLOC);
+	if (time_to_inject(sbi, FAULT_SLAB_ALLOC))
 		return NULL;
-	}
 
 	return kmem_cache_alloc(cachep, flags);
 }
@@ -3382,10 +3364,8 @@ static inline bool is_dot_dotdot(const u8 *name, size_t len)
 static inline void *f2fs_kmalloc(struct f2fs_sb_info *sbi,
 					size_t size, gfp_t flags)
 {
-	if (time_to_inject(sbi, FAULT_KMALLOC)) {
-		f2fs_show_injection_info(sbi, FAULT_KMALLOC);
+	if (time_to_inject(sbi, FAULT_KMALLOC))
 		return NULL;
-	}
 
 	return kmalloc(size, flags);
 }
@@ -3399,10 +3379,8 @@ static inline void *f2fs_kzalloc(struct f2fs_sb_info *sbi,
 static inline void *f2fs_kvmalloc(struct f2fs_sb_info *sbi,
 					size_t size, gfp_t flags)
 {
-	if (time_to_inject(sbi, FAULT_KVMALLOC)) {
-		f2fs_show_injection_info(sbi, FAULT_KVMALLOC);
+	if (time_to_inject(sbi, FAULT_KVMALLOC))
 		return NULL;
-	}
 
 	return kvmalloc(size, flags);
 }
@@ -3788,8 +3766,8 @@ int __init f2fs_init_bioset(void);
 void f2fs_destroy_bioset(void);
 int f2fs_init_bio_entry_cache(void);
 void f2fs_destroy_bio_entry_cache(void);
-void f2fs_submit_bio(struct f2fs_sb_info *sbi,
-		struct bio *bio, enum page_type type);
+void f2fs_submit_read_bio(struct f2fs_sb_info *sbi, struct bio *bio,
+		enum page_type type);
 int f2fs_init_write_merge_io(struct f2fs_sb_info *sbi);
 void f2fs_submit_merged_write(struct f2fs_sb_info *sbi, enum page_type type);
 void f2fs_submit_merged_write_cond(struct f2fs_sb_info *sbi,
@@ -3808,7 +3786,7 @@ void f2fs_set_data_blkaddr(struct dnode_of_data *dn);
 void f2fs_update_data_blkaddr(struct dnode_of_data *dn, block_t blkaddr);
 int f2fs_reserve_new_blocks(struct dnode_of_data *dn, blkcnt_t count);
 int f2fs_reserve_new_block(struct dnode_of_data *dn);
-int f2fs_get_block(struct dnode_of_data *dn, pgoff_t index);
+int f2fs_get_block_locked(struct dnode_of_data *dn, pgoff_t index);
 int f2fs_reserve_block(struct dnode_of_data *dn, pgoff_t index);
 struct page *f2fs_get_read_data_page(struct inode *inode, pgoff_t index,
 			blk_opf_t op_flags, bool for_write, pgoff_t *next_pgofs);
@@ -3819,9 +3797,7 @@ struct page *f2fs_get_lock_data_page(struct inode
*inode, pgoff_t index, struct page *f2fs_get_new_data_page(struct inode *inode, struct page *ipage, pgoff_t index, bool new_i_size); int f2fs_do_write_data_page(struct f2fs_io_info *fio); -void f2fs_do_map_lock(struct f2fs_sb_info *sbi, int flag, bool lock); -int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, - int create, int flag); +int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, int flag); int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo, u64 start, u64 len); int f2fs_encrypt_one_page(struct f2fs_io_info *fio); @@ -4161,6 +4137,7 @@ void f2fs_leave_shrinker(struct f2fs_sb_info *sbi); /* * extent_cache.c */ +bool sanity_check_extent_cache(struct inode *inode); struct rb_entry *f2fs_lookup_rb_tree(struct rb_root_cached *root, struct rb_entry *cached_re, unsigned int ofs); struct rb_node **f2fs_lookup_rb_tree_ext(struct f2fs_sb_info *sbi, @@ -4190,6 +4167,8 @@ void f2fs_destroy_extent_cache(void); void f2fs_init_read_extent_tree(struct inode *inode, struct page *ipage); bool f2fs_lookup_read_extent_cache(struct inode *inode, pgoff_t pgofs, struct extent_info *ei); +bool f2fs_lookup_read_extent_cache_block(struct inode *inode, pgoff_t index, + block_t *blkaddr); void f2fs_update_read_extent_cache(struct dnode_of_data *dn); void f2fs_update_read_extent_cache_range(struct dnode_of_data *dn, pgoff_t fofs, block_t blkaddr, unsigned int len); @@ -4259,7 +4238,7 @@ bool f2fs_compress_write_end(struct inode *inode, void *fsdata, int f2fs_truncate_partial_cluster(struct inode *inode, u64 from, bool lock); void f2fs_compress_write_end_io(struct bio *bio, struct page *page); bool f2fs_is_compress_backend_ready(struct inode *inode); -int f2fs_init_compress_mempool(void); +int __init f2fs_init_compress_mempool(void); void f2fs_destroy_compress_mempool(void); void f2fs_decompress_cluster(struct decompress_io_ctx *dic, bool in_task); void f2fs_end_read_compressed_page(struct page *page, bool failed, @@ -4328,7 +4307,7 @@ static inline struct page *f2fs_compress_control_page(struct page *page) WARN_ON_ONCE(1); return ERR_PTR(-EINVAL); } -static inline int f2fs_init_compress_mempool(void) { return 0; } +static inline int __init f2fs_init_compress_mempool(void) { return 0; } static inline void f2fs_destroy_compress_mempool(void) { } static inline void f2fs_decompress_cluster(struct decompress_io_ctx *dic, bool in_task) { } @@ -4381,9 +4360,8 @@ static inline int set_compress_context(struct inode *inode) if ((F2FS_I(inode)->i_compress_algorithm == COMPRESS_LZ4 || F2FS_I(inode)->i_compress_algorithm == COMPRESS_ZSTD) && F2FS_OPTION(sbi).compress_level) - F2FS_I(inode)->i_compress_flag |= - F2FS_OPTION(sbi).compress_level << - COMPRESS_LEVEL_OFFSET; + F2FS_I(inode)->i_compress_level = + F2FS_OPTION(sbi).compress_level; F2FS_I(inode)->i_flags |= F2FS_COMPR_FL; set_inode_flag(inode, FI_COMPRESSED_FILE); stat_inc_compr_inode(inode); diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c index b90617639743..15dabeac4690 100644 --- a/fs/f2fs/file.c +++ b/fs/f2fs/file.c @@ -113,10 +113,8 @@ static vm_fault_t f2fs_vm_page_mkwrite(struct vm_fault *vmf) if (need_alloc) { /* block allocation */ - f2fs_do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, true); set_new_dnode(&dn, inode, NULL, NULL, 0); - err = f2fs_get_block(&dn, page->index); - f2fs_do_map_lock(sbi, F2FS_GET_BLOCK_PRE_AIO, false); + err = f2fs_get_block_locked(&dn, page->index); } #ifdef CONFIG_F2FS_FS_COMPRESSION @@ -305,7 +303,7 @@ static int f2fs_do_sync_file(struct file *file, loff_t start, loff_t end, * for 
OPU case, during fsync(), node can be persisted before * data when lower device doesn't support write barrier, result * in data corruption after SPO. - * So for strict fsync mode, force to use atomic write sematics + * So for strict fsync mode, force to use atomic write semantics * to keep write order in between data/node and last node to * avoid potential data corruption. */ @@ -619,7 +617,7 @@ void f2fs_truncate_data_blocks_range(struct dnode_of_data *dn, int count) fofs = f2fs_start_bidx_of_node(ofs_of_node(dn->node_page), dn->inode) + ofs; f2fs_update_read_extent_cache_range(dn, fofs, 0, len); - f2fs_update_age_extent_cache_range(dn, fofs, nr_free); + f2fs_update_age_extent_cache_range(dn, fofs, len); dec_valid_block_count(sbi, dn->inode, nr_free); } dn->ofs_in_node = ofs; @@ -784,10 +782,8 @@ int f2fs_truncate(struct inode *inode) trace_f2fs_truncate(inode); - if (time_to_inject(F2FS_I_SB(inode), FAULT_TRUNCATE)) { - f2fs_show_injection_info(F2FS_I_SB(inode), FAULT_TRUNCATE); + if (time_to_inject(F2FS_I_SB(inode), FAULT_TRUNCATE)) return -EIO; - } err = f2fs_dquot_initialize(inode); if (err) @@ -1112,7 +1108,7 @@ int f2fs_truncate_hole(struct inode *inode, pgoff_t pg_start, pgoff_t pg_end) return 0; } -static int punch_hole(struct inode *inode, loff_t offset, loff_t len) +static int f2fs_punch_hole(struct inode *inode, loff_t offset, loff_t len) { pgoff_t pg_start, pg_end; loff_t off_start, off_end; @@ -1498,6 +1494,7 @@ static int f2fs_do_zero_range(struct dnode_of_data *dn, pgoff_t start, } f2fs_update_read_extent_cache_range(dn, start, 0, index - start); + f2fs_update_age_extent_cache_range(dn, start, index - start); return ret; } @@ -1684,7 +1681,7 @@ static int f2fs_insert_range(struct inode *inode, loff_t offset, loff_t len) return ret; } -static int expand_inode_data(struct inode *inode, loff_t offset, +static int f2fs_expand_inode_data(struct inode *inode, loff_t offset, loff_t len, int mode) { struct f2fs_sb_info *sbi = F2FS_I_SB(inode); @@ -1697,7 +1694,7 @@ static int expand_inode_data(struct inode *inode, loff_t offset, .err_gc_skipped = true, .nr_free_secs = 0 }; pgoff_t pg_start, pg_end; - loff_t new_size = i_size_read(inode); + loff_t new_size; loff_t off_end; block_t expanded = 0; int err; @@ -1745,7 +1742,7 @@ next_alloc: f2fs_unlock_op(sbi); map.m_seg_type = CURSEG_COLD_DATA_PINNED; - err = f2fs_map_blocks(inode, &map, 1, F2FS_GET_BLOCK_PRE_DIO); + err = f2fs_map_blocks(inode, &map, F2FS_GET_BLOCK_PRE_DIO); file_dont_truncate(inode); f2fs_up_write(&sbi->pin_sem); @@ -1758,7 +1755,7 @@ next_alloc: map.m_len = expanded; } else { - err = f2fs_map_blocks(inode, &map, 1, F2FS_GET_BLOCK_PRE_AIO); + err = f2fs_map_blocks(inode, &map, F2FS_GET_BLOCK_PRE_AIO); expanded = map.m_len; } out_err: @@ -1809,7 +1806,7 @@ static long f2fs_fallocate(struct file *file, int mode, return -EOPNOTSUPP; /* - * Pinned file should not support partial trucation since the block + * Pinned file should not support partial truncation since the block * can be used by applications. 
 */
 	if ((f2fs_compressed_file(inode) || f2fs_is_pinned_file(inode)) &&
@@ -1832,7 +1829,7 @@ static long f2fs_fallocate(struct file *file, int mode,
 		if (offset >= inode->i_size)
 			goto out;
 
-		ret = punch_hole(inode, offset, len);
+		ret = f2fs_punch_hole(inode, offset, len);
 	} else if (mode & FALLOC_FL_COLLAPSE_RANGE) {
 		ret = f2fs_collapse_range(inode, offset, len);
 	} else if (mode & FALLOC_FL_ZERO_RANGE) {
@@ -1840,7 +1837,7 @@ static long f2fs_fallocate(struct file *file, int mode,
 	} else if (mode & FALLOC_FL_INSERT_RANGE) {
 		ret = f2fs_insert_range(inode, offset, len);
 	} else {
-		ret = expand_inode_data(inode, offset, len, mode);
+		ret = f2fs_expand_inode_data(inode, offset, len, mode);
 	}
 
 	if (!ret) {
@@ -1859,14 +1856,17 @@ out:
 static int f2fs_release_file(struct inode *inode, struct file *filp)
 {
 	/*
-	 * f2fs_relase_file is called at every close calls. So we should
+	 * f2fs_release_file is called at every close calls. So we should
 	 * not drop any inmemory pages by close called by other process.
 	 */
 	if (!(filp->f_mode & FMODE_WRITE) ||
 			atomic_read(&inode->i_writecount) != 1)
 		return 0;
 
+	inode_lock(inode);
 	f2fs_abort_atomic_write(inode, true);
+	inode_unlock(inode);
+
 	return 0;
 }
 
@@ -1880,8 +1880,13 @@ static int f2fs_file_flush(struct file *file, fl_owner_t id)
 	 * until all the writers close its file. Since this should be done
 	 * before dropping file lock, it needs to do in ->flush.
 	 */
-	if (F2FS_I(inode)->atomic_write_task == current)
+	if (F2FS_I(inode)->atomic_write_task == current &&
+				(current->flags & PF_EXITING)) {
+		inode_lock(inode);
 		f2fs_abort_atomic_write(inode, true);
+		inode_unlock(inode);
+	}
+
 	return 0;
 }
 
@@ -2087,19 +2092,28 @@ static int f2fs_ioc_start_atomic_write(struct file *filp, bool truncate)
 		goto out;
 	}
 
-	/* Create a COW inode for atomic write */
-	pinode = f2fs_iget(inode->i_sb, fi->i_pino);
-	if (IS_ERR(pinode)) {
-		f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
-		ret = PTR_ERR(pinode);
-		goto out;
-	}
+	/* Check if the inode already has a COW inode */
+	if (fi->cow_inode == NULL) {
+		/* Create a COW inode for atomic write */
+		pinode = f2fs_iget(inode->i_sb, fi->i_pino);
+		if (IS_ERR(pinode)) {
+			f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
+			ret = PTR_ERR(pinode);
+			goto out;
+		}
 
-	ret = f2fs_get_tmpfile(idmap, pinode, &fi->cow_inode);
-	iput(pinode);
-	if (ret) {
-		f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
-		goto out;
+		ret = f2fs_get_tmpfile(idmap, pinode, &fi->cow_inode);
+		iput(pinode);
+		if (ret) {
+			f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
+			goto out;
+		}
+
+		set_inode_flag(fi->cow_inode, FI_COW_FILE);
+		clear_inode_flag(fi->cow_inode, FI_INLINE_DATA);
+	} else {
+		/* Reuse the already created COW inode */
+		f2fs_do_truncate_blocks(fi->cow_inode, 0, true);
 	}
 
 	f2fs_write_inode(inode, NULL);
@@ -2107,8 +2121,6 @@ static int f2fs_ioc_start_atomic_write(struct file *filp, bool truncate)
 
 	stat_inc_atomic_inode(inode);
 	set_inode_flag(inode, FI_ATOMIC_FILE);
-	set_inode_flag(fi->cow_inode, FI_COW_FILE);
-	clear_inode_flag(fi->cow_inode, FI_INLINE_DATA);
 
 	isize = i_size_read(inode);
 	fi->original_i_size = isize;
@@ -2338,6 +2350,7 @@ static int f2fs_ioc_get_encryption_pwsalt(struct file *filp, unsigned long arg)
 {
 	struct inode *inode = file_inode(filp);
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+	u8 encrypt_pw_salt[16];
 	int err;
 
 	if (!f2fs_sb_has_encrypt(sbi))
@@ -2362,12 +2375,14 @@ static int f2fs_ioc_get_encryption_pwsalt(struct file *filp, unsigned long arg)
 		goto out_err;
 	}
 got_it:
-	if (copy_to_user((__u8 __user *)arg, sbi->raw_super->encrypt_pw_salt,
-									16))
-		err = -EFAULT;
+	memcpy(encrypt_pw_salt, sbi->raw_super->encrypt_pw_salt, 16);
 out_err:
 	f2fs_up_write(&sbi->sb_lock);
 	mnt_drop_write_file(filp);
+
+	if (!err && copy_to_user((__u8 __user *)arg, encrypt_pw_salt, 16))
+		err = -EFAULT;
+
 	return err;
 }
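The pwsalt hunk above snapshots the salt into a stack buffer while sb_lock is held and performs the potentially faulting copy_to_user() only after the lock and the write reference are dropped, keeping the uaccess out of the critical section. A userspace analogue of that stage-then-copy ordering (a pthread mutex stands in for sb_lock; names and rationale here are illustrative, not taken from the patch description):

/* Stage shared data under the lock, copy it out after unlocking. */
#include <pthread.h>
#include <string.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned char shared_salt[16];	/* protected by `lock` */

static int read_salt(unsigned char *out)
{
	unsigned char snap[16];

	pthread_mutex_lock(&lock);
	memcpy(snap, shared_salt, sizeof(snap));	/* short critical section */
	pthread_mutex_unlock(&lock);

	/* The slow or unpredictable step (copy_to_user() in the kernel,
	 * which may take a page fault) runs with no lock held. */
	memcpy(out, snap, sizeof(snap));
	return 0;
}

int main(void)
{
	unsigned char buf[16];

	read_salt(buf);
	printf("copied %zu bytes\n", sizeof(buf));
	return 0;
}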
 
@@ -2524,7 +2539,7 @@ static int f2fs_ioc_gc_range(struct file *filp, unsigned long arg)
 	return __f2fs_ioc_gc_range(filp, &range);
 }
 
-static int f2fs_ioc_write_checkpoint(struct file *filp, unsigned long arg)
+static int f2fs_ioc_write_checkpoint(struct file *filp)
 {
 	struct inode *inode = file_inode(filp);
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
@@ -2606,7 +2621,7 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
 	 */
 	while (map.m_lblk < pg_end) {
 		map.m_len = pg_end - map.m_lblk;
-		err = f2fs_map_blocks(inode, &map, 0, F2FS_GET_BLOCK_DEFAULT);
+		err = f2fs_map_blocks(inode, &map, F2FS_GET_BLOCK_DEFAULT);
 		if (err)
 			goto out;
 
@@ -2653,7 +2668,7 @@ static int f2fs_defragment_range(struct f2fs_sb_info *sbi,
 do_map:
 		map.m_len = pg_end - map.m_lblk;
-		err = f2fs_map_blocks(inode, &map, 0, F2FS_GET_BLOCK_DEFAULT);
+		err = f2fs_map_blocks(inode, &map, F2FS_GET_BLOCK_DEFAULT);
 		if (err)
 			goto clear_out;
 
@@ -3227,7 +3242,7 @@ int f2fs_precache_extents(struct inode *inode)
 		map.m_len = end - map.m_lblk;
 
 		f2fs_down_write(&fi->i_gc_rwsem[WRITE]);
-		err = f2fs_map_blocks(inode, &map, 0, F2FS_GET_BLOCK_PRECACHE);
+		err = f2fs_map_blocks(inode, &map, F2FS_GET_BLOCK_PRECACHE);
 		f2fs_up_write(&fi->i_gc_rwsem[WRITE]);
 		if (err)
 			return err;
@@ -3238,7 +3253,7 @@ int f2fs_precache_extents(struct inode *inode)
 	return 0;
 }
 
-static int f2fs_ioc_precache_extents(struct file *filp, unsigned long arg)
+static int f2fs_ioc_precache_extents(struct file *filp)
 {
 	return f2fs_precache_extents(file_inode(filp));
 }
@@ -3942,7 +3957,7 @@ static int f2fs_ioc_set_compress_option(struct file *filp, unsigned long arg)
 		goto out;
 	}
 
-	if (inode->i_size != 0) {
+	if (F2FS_HAS_BLOCKS(inode)) {
 		ret = -EFBIG;
 		goto out;
 	}
@@ -3995,7 +4010,7 @@ static int redirty_blocks(struct inode *inode, pgoff_t page_idx, int len)
 	return ret;
 }
 
-static int f2fs_ioc_decompress_file(struct file *filp, unsigned long arg)
+static int f2fs_ioc_decompress_file(struct file *filp)
 {
 	struct inode *inode = file_inode(filp);
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
@@ -4068,7 +4083,7 @@ out:
 	return ret;
 }
 
-static int f2fs_ioc_compress_file(struct file *filp, unsigned long arg)
+static int f2fs_ioc_compress_file(struct file *filp)
 {
 	struct inode *inode = file_inode(filp);
 	struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
@@ -4184,7 +4199,7 @@ static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 	case F2FS_IOC_GARBAGE_COLLECT_RANGE:
 		return f2fs_ioc_gc_range(filp, arg);
 	case F2FS_IOC_WRITE_CHECKPOINT:
-		return f2fs_ioc_write_checkpoint(filp, arg);
+		return f2fs_ioc_write_checkpoint(filp);
 	case F2FS_IOC_DEFRAGMENT:
 		return f2fs_ioc_defragment(filp, arg);
 	case F2FS_IOC_MOVE_RANGE:
@@ -4198,7 +4213,7 @@ static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 	case F2FS_IOC_SET_PIN_FILE:
 		return f2fs_ioc_set_pin_file(filp, arg);
 	case F2FS_IOC_PRECACHE_EXTENTS:
-		return f2fs_ioc_precache_extents(filp, arg);
+		return f2fs_ioc_precache_extents(filp);
 	case F2FS_IOC_RESIZE_FS:
 		return f2fs_ioc_resize_fs(filp, arg);
 	case FS_IOC_ENABLE_VERITY:
@@ -4224,9 +4239,9 @@ static long __f2fs_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
 	case F2FS_IOC_SET_COMPRESS_OPTION:
 		return f2fs_ioc_set_compress_option(filp, arg);
 	case F2FS_IOC_DECOMPRESS_FILE:
-		return
f2fs_ioc_decompress_file(filp, arg); + return f2fs_ioc_decompress_file(filp); case F2FS_IOC_COMPRESS_FILE: - return f2fs_ioc_compress_file(filp, arg); + return f2fs_ioc_compress_file(filp); default: return -ENOTTY; } @@ -4341,6 +4356,27 @@ out: return ret; } +static void f2fs_trace_rw_file_path(struct kiocb *iocb, size_t count, int rw) +{ + struct inode *inode = file_inode(iocb->ki_filp); + char *buf, *path; + + buf = f2fs_kmalloc(F2FS_I_SB(inode), PATH_MAX, GFP_KERNEL); + if (!buf) + return; + path = dentry_path_raw(file_dentry(iocb->ki_filp), buf, PATH_MAX); + if (IS_ERR(path)) + goto free_buf; + if (rw == WRITE) + trace_f2fs_datawrite_start(inode, iocb->ki_pos, count, + current->pid, path, current->comm); + else + trace_f2fs_dataread_start(inode, iocb->ki_pos, count, + current->pid, path, current->comm); +free_buf: + kfree(buf); +} + static ssize_t f2fs_file_read_iter(struct kiocb *iocb, struct iov_iter *to) { struct inode *inode = file_inode(iocb->ki_filp); @@ -4350,24 +4386,9 @@ static ssize_t f2fs_file_read_iter(struct kiocb *iocb, struct iov_iter *to) if (!f2fs_is_compress_backend_ready(inode)) return -EOPNOTSUPP; - if (trace_f2fs_dataread_start_enabled()) { - char *p = f2fs_kmalloc(F2FS_I_SB(inode), PATH_MAX, GFP_KERNEL); - char *path; - - if (!p) - goto skip_read_trace; + if (trace_f2fs_dataread_start_enabled()) + f2fs_trace_rw_file_path(iocb, iov_iter_count(to), READ); - path = dentry_path_raw(file_dentry(iocb->ki_filp), p, PATH_MAX); - if (IS_ERR(path)) { - kfree(p); - goto skip_read_trace; - } - - trace_f2fs_dataread_start(inode, pos, iov_iter_count(to), - current->pid, path, current->comm); - kfree(p); - } -skip_read_trace: if (f2fs_should_use_dio(inode, iocb, to)) { ret = f2fs_dio_read_iter(iocb, to); } else { @@ -4466,7 +4487,7 @@ static int f2fs_preallocate_blocks(struct kiocb *iocb, struct iov_iter *iter, flag = F2FS_GET_BLOCK_PRE_AIO; } - ret = f2fs_map_blocks(inode, &map, 1, flag); + ret = f2fs_map_blocks(inode, &map, flag); /* -ENOSPC|-EDQUOT are fine to report the number of allocated blocks. */ if (ret < 0 && !((ret == -ENOSPC || ret == -EDQUOT) && map.m_len > 0)) return ret; @@ -4673,24 +4694,9 @@ static ssize_t f2fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from) if (preallocated < 0) { ret = preallocated; } else { - if (trace_f2fs_datawrite_start_enabled()) { - char *p = f2fs_kmalloc(F2FS_I_SB(inode), - PATH_MAX, GFP_KERNEL); - char *path; - - if (!p) - goto skip_write_trace; - path = dentry_path_raw(file_dentry(iocb->ki_filp), - p, PATH_MAX); - if (IS_ERR(path)) { - kfree(p); - goto skip_write_trace; - } - trace_f2fs_datawrite_start(inode, orig_pos, orig_count, - current->pid, path, current->comm); - kfree(p); - } -skip_write_trace: + if (trace_f2fs_datawrite_start_enabled()) + f2fs_trace_rw_file_path(iocb, orig_count, WRITE); + /* Do the actual write. */ ret = dio ? 
f2fs_dio_write_iter(iocb, from, &may_need_sync) : @@ -4823,6 +4829,7 @@ long f2fs_compat_ioctl(struct file *file, unsigned int cmd, unsigned long arg) case F2FS_IOC32_MOVE_RANGE: return f2fs_compat_ioc_move_range(file, arg); case F2FS_IOC_START_ATOMIC_WRITE: + case F2FS_IOC_START_ATOMIC_REPLACE: case F2FS_IOC_COMMIT_ATOMIC_WRITE: case F2FS_IOC_START_VOLATILE_WRITE: case F2FS_IOC_RELEASE_VOLATILE_WRITE: diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c index 6e2cae3d2e71..0a9dfa459860 100644 --- a/fs/f2fs/gc.c +++ b/fs/f2fs/gc.c @@ -57,7 +57,7 @@ static int gc_thread_func(void *data) /* give it a try one time */ if (gc_th->gc_wake) - gc_th->gc_wake = 0; + gc_th->gc_wake = false; if (try_to_freeze()) { stat_other_skip_bggc_count(sbi); @@ -72,11 +72,9 @@ static int gc_thread_func(void *data) continue; } - if (time_to_inject(sbi, FAULT_CHECKPOINT)) { - f2fs_show_injection_info(sbi, FAULT_CHECKPOINT); + if (time_to_inject(sbi, FAULT_CHECKPOINT)) f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_FAULT_INJECT); - } if (!sb_start_write_trylock(sbi->sb)) { stat_other_skip_bggc_count(sbi); @@ -185,7 +183,7 @@ int f2fs_start_gc_thread(struct f2fs_sb_info *sbi) gc_th->max_sleep_time = DEF_GC_THREAD_MAX_SLEEP_TIME; gc_th->no_gc_sleep_time = DEF_GC_THREAD_NOGC_SLEEP_TIME; - gc_th->gc_wake = 0; + gc_th->gc_wake = false; sbi->gc_thread = gc_th; init_waitqueue_head(&sbi->gc_thread->gc_wait_queue_head); @@ -1150,7 +1148,6 @@ static int ra_data_block(struct inode *inode, pgoff_t index) struct address_space *mapping = inode->i_mapping; struct dnode_of_data dn; struct page *page; - struct extent_info ei = {0, }; struct f2fs_io_info fio = { .sbi = sbi, .ino = inode->i_ino, @@ -1159,8 +1156,8 @@ static int ra_data_block(struct inode *inode, pgoff_t index) .op = REQ_OP_READ, .op_flags = 0, .encrypted_page = NULL, - .in_list = false, - .retry = false, + .in_list = 0, + .retry = 0, }; int err; @@ -1168,8 +1165,8 @@ static int ra_data_block(struct inode *inode, pgoff_t index) if (!page) return -ENOMEM; - if (f2fs_lookup_read_extent_cache(inode, index, &ei)) { - dn.data_blkaddr = ei.blk + index - ei.fofs; + if (f2fs_lookup_read_extent_cache_block(inode, index, + &dn.data_blkaddr)) { if (unlikely(!f2fs_is_valid_blkaddr(sbi, dn.data_blkaddr, DATA_GENERIC_ENHANCE_READ))) { err = -EFSCORRUPTED; @@ -1248,8 +1245,8 @@ static int move_data_block(struct inode *inode, block_t bidx, .op = REQ_OP_READ, .op_flags = 0, .encrypted_page = NULL, - .in_list = false, - .retry = false, + .in_list = 0, + .retry = 0, }; struct dnode_of_data dn; struct f2fs_summary sum; @@ -1365,7 +1362,6 @@ static int move_data_block(struct inode *inode, block_t bidx, dec_page_count(fio.sbi, F2FS_DIRTY_META); set_page_writeback(fio.encrypted_page); - ClearPageError(page); fio.op = REQ_OP_WRITE; fio.op_flags = REQ_SYNC; diff --git a/fs/f2fs/gc.h b/fs/f2fs/gc.h index 19b956c2d697..15bd1d680f67 100644 --- a/fs/f2fs/gc.h +++ b/fs/f2fs/gc.h @@ -41,7 +41,7 @@ struct f2fs_gc_kthread { unsigned int no_gc_sleep_time; /* for changing gc mode */ - unsigned int gc_wake; + bool gc_wake; /* for GC_MERGE mount option */ wait_queue_head_t fggc_wq; /* diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c index 21a495234ffd..72269e7efd26 100644 --- a/fs/f2fs/inline.c +++ b/fs/f2fs/inline.c @@ -174,7 +174,6 @@ int f2fs_convert_inline_page(struct dnode_of_data *dn, struct page *page) /* write data page to try to make data consistent */ set_page_writeback(page); - ClearPageError(page); fio.old_blkaddr = dn->data_blkaddr; set_inode_flag(dn->inode, FI_HOT_DATA); f2fs_outplace_write_data(dn, 
&fio); @@ -422,18 +421,17 @@ static int f2fs_move_inline_dirents(struct inode *dir, struct page *ipage, dentry_blk = page_address(page); + /* + * Start by zeroing the full block, to ensure that all unused space is + * zeroed and no uninitialized memory is leaked to disk. + */ + memset(dentry_blk, 0, F2FS_BLKSIZE); + make_dentry_ptr_inline(dir, &src, inline_dentry); make_dentry_ptr_block(dir, &dst, dentry_blk); /* copy data from inline dentry block to new dentry block */ memcpy(dst.bitmap, src.bitmap, src.nr_bitmap); - memset(dst.bitmap + src.nr_bitmap, 0, dst.nr_bitmap - src.nr_bitmap); - /* - * we do not need to zero out remainder part of dentry and filename - * field, since we have used bitmap for marking the usage status of - * them, besides, we can also ignore copying/zeroing reserved space - * of dentry block, because them haven't been used so far. - */ memcpy(dst.dentry, src.dentry, SIZE_OF_DIR_ENTRY * src.max); memcpy(dst.filename, src.filename, src.max * F2FS_SLOT_LEN); diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c index ff6cf66ed46b..7d2e2c0dba65 100644 --- a/fs/f2fs/inode.c +++ b/fs/f2fs/inode.c @@ -262,22 +262,6 @@ static bool sanity_check_inode(struct inode *inode, struct page *node_page) return false; } - if (fi->extent_tree[EX_READ]) { - struct extent_info *ei = &fi->extent_tree[EX_READ]->largest; - - if (ei->len && - (!f2fs_is_valid_blkaddr(sbi, ei->blk, - DATA_GENERIC_ENHANCE) || - !f2fs_is_valid_blkaddr(sbi, ei->blk + ei->len - 1, - DATA_GENERIC_ENHANCE))) { - set_sbi_flag(sbi, SBI_NEED_FSCK); - f2fs_warn(sbi, "%s: inode (ino=%lx) extent info [%u, %u, %u] is incorrect, run fsck to fix", - __func__, inode->i_ino, - ei->blk, ei->fofs, ei->len); - return false; - } - } - if (f2fs_sanity_check_inline_data(inode)) { set_sbi_flag(sbi, SBI_NEED_FSCK); f2fs_warn(sbi, "%s: inode (ino=%lx, mode=%u) should not have inline_data, run fsck to fix", @@ -413,12 +397,6 @@ static int do_read_inode(struct inode *inode) fi->i_inline_xattr_size = 0; } - if (!sanity_check_inode(inode, node_page)) { - f2fs_put_page(node_page, 1); - f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE); - return -EFSCORRUPTED; - } - /* check data exist */ if (f2fs_has_inline_data(inode) && !f2fs_exist_data(inode)) __recover_inline_status(inode, node_page); @@ -466,11 +444,17 @@ static int do_read_inode(struct inode *inode) (fi->i_flags & F2FS_COMPR_FL)) { if (F2FS_FITS_IN_INODE(ri, fi->i_extra_isize, i_log_cluster_size)) { + unsigned short compress_flag; + atomic_set(&fi->i_compr_blocks, le64_to_cpu(ri->i_compr_blocks)); fi->i_compress_algorithm = ri->i_compress_algorithm; fi->i_log_cluster_size = ri->i_log_cluster_size; - fi->i_compress_flag = le16_to_cpu(ri->i_compress_flag); + compress_flag = le16_to_cpu(ri->i_compress_flag); + fi->i_compress_level = compress_flag >> + COMPRESS_LEVEL_OFFSET; + fi->i_compress_flag = compress_flag & + (BIT(COMPRESS_LEVEL_OFFSET) - 1); fi->i_cluster_size = 1 << fi->i_log_cluster_size; set_inode_flag(inode, FI_COMPRESSED_FILE); } @@ -482,6 +466,18 @@ static int do_read_inode(struct inode *inode) f2fs_init_read_extent_tree(inode, node_page); f2fs_init_age_extent_tree(inode); + if (!sanity_check_inode(inode, node_page)) { + f2fs_put_page(node_page, 1); + f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE); + return -EFSCORRUPTED; + } + + if (!sanity_check_extent_cache(inode)) { + f2fs_put_page(node_page, 1); + f2fs_handle_error(sbi, ERROR_CORRUPTED_INODE); + return -EFSCORRUPTED; + } + f2fs_put_page(node_page, 1); stat_inc_inline_xattr(inode); @@ -686,13 +682,17 @@ void f2fs_update_inode(struct 
inode *inode, struct page *node_page) if (f2fs_sb_has_compression(F2FS_I_SB(inode)) && F2FS_FITS_IN_INODE(ri, F2FS_I(inode)->i_extra_isize, i_log_cluster_size)) { + unsigned short compress_flag; + ri->i_compr_blocks = cpu_to_le64(atomic_read( &F2FS_I(inode)->i_compr_blocks)); ri->i_compress_algorithm = F2FS_I(inode)->i_compress_algorithm; - ri->i_compress_flag = - cpu_to_le16(F2FS_I(inode)->i_compress_flag); + compress_flag = F2FS_I(inode)->i_compress_flag | + F2FS_I(inode)->i_compress_level << + COMPRESS_LEVEL_OFFSET; + ri->i_compress_flag = cpu_to_le16(compress_flag); ri->i_log_cluster_size = F2FS_I(inode)->i_log_cluster_size; } @@ -714,18 +714,19 @@ void f2fs_update_inode_page(struct inode *inode) { struct f2fs_sb_info *sbi = F2FS_I_SB(inode); struct page *node_page; + int count = 0; retry: node_page = f2fs_get_node_page(sbi, inode->i_ino); if (IS_ERR(node_page)) { int err = PTR_ERR(node_page); - if (err == -ENOMEM) { - cond_resched(); + /* The node block was truncated. */ + if (err == -ENOENT) + return; + + if (err == -ENOMEM || ++count <= DEFAULT_RETRY_IO_COUNT) goto retry; - } else if (err != -ENOENT) { - f2fs_stop_checkpoint(sbi, false, - STOP_CP_REASON_UPDATE_INODE); - } + f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_UPDATE_INODE); return; } f2fs_update_inode(inode, node_page); @@ -766,11 +767,18 @@ int f2fs_write_inode(struct inode *inode, struct writeback_control *wbc) void f2fs_evict_inode(struct inode *inode) { struct f2fs_sb_info *sbi = F2FS_I_SB(inode); - nid_t xnid = F2FS_I(inode)->i_xattr_nid; + struct f2fs_inode_info *fi = F2FS_I(inode); + nid_t xnid = fi->i_xattr_nid; int err = 0; f2fs_abort_atomic_write(inode, true); + if (fi->cow_inode) { + clear_inode_flag(fi->cow_inode, FI_COW_FILE); + iput(fi->cow_inode); + fi->cow_inode = NULL; + } + trace_f2fs_evict_inode(inode); truncate_inode_pages_final(&inode->i_data); @@ -809,10 +817,8 @@ retry: if (F2FS_HAS_BLOCKS(inode)) err = f2fs_truncate(inode); - if (time_to_inject(sbi, FAULT_EVICT_INODE)) { - f2fs_show_injection_info(sbi, FAULT_EVICT_INODE); + if (time_to_inject(sbi, FAULT_EVICT_INODE)) err = -EIO; - } if (!err) { f2fs_lock_op(sbi); @@ -857,7 +863,7 @@ no_delete: stat_dec_inline_inode(inode); stat_dec_compr_inode(inode); stat_sub_compr_blocks(inode, - atomic_read(&F2FS_I(inode)->i_compr_blocks)); + atomic_read(&fi->i_compr_blocks)); if (likely(!f2fs_cp_error(sbi) && !is_sbi_flag_set(sbi, SBI_CP_DISABLED))) diff --git a/fs/f2fs/iostat.c b/fs/f2fs/iostat.c index 3166a8939ed4..3d5bfb1ad585 100644 --- a/fs/f2fs/iostat.c +++ b/fs/f2fs/iostat.c @@ -14,91 +14,79 @@ #include "iostat.h" #include <trace/events/f2fs.h> -#define NUM_PREALLOC_IOSTAT_CTXS 128 static struct kmem_cache *bio_iostat_ctx_cache; static mempool_t *bio_iostat_ctx_pool; +static inline unsigned long long iostat_get_avg_bytes(struct f2fs_sb_info *sbi, + enum iostat_type type) +{ + return sbi->iostat_count[type] ? 
div64_u64(sbi->iostat_bytes[type], + sbi->iostat_count[type]) : 0; +} + +#define IOSTAT_INFO_SHOW(name, type) \ + seq_printf(seq, "%-23s %-16llu %-16llu %-16llu\n", \ + name":", sbi->iostat_bytes[type], \ + sbi->iostat_count[type], \ + iostat_get_avg_bytes(sbi, type)) + int __maybe_unused iostat_info_seq_show(struct seq_file *seq, void *offset) { struct super_block *sb = seq->private; struct f2fs_sb_info *sbi = F2FS_SB(sb); - time64_t now = ktime_get_real_seconds(); if (!sbi->iostat_enable) return 0; - seq_printf(seq, "time: %-16llu\n", now); + seq_printf(seq, "time: %-16llu\n", ktime_get_real_seconds()); + seq_printf(seq, "\t\t\t%-16s %-16s %-16s\n", + "io_bytes", "count", "avg_bytes"); /* print app write IOs */ seq_puts(seq, "[WRITE]\n"); - seq_printf(seq, "app buffered data: %-16llu\n", - sbi->rw_iostat[APP_BUFFERED_IO]); - seq_printf(seq, "app direct data: %-16llu\n", - sbi->rw_iostat[APP_DIRECT_IO]); - seq_printf(seq, "app mapped data: %-16llu\n", - sbi->rw_iostat[APP_MAPPED_IO]); - seq_printf(seq, "app buffered cdata: %-16llu\n", - sbi->rw_iostat[APP_BUFFERED_CDATA_IO]); - seq_printf(seq, "app mapped cdata: %-16llu\n", - sbi->rw_iostat[APP_MAPPED_CDATA_IO]); + IOSTAT_INFO_SHOW("app buffered data", APP_BUFFERED_IO); + IOSTAT_INFO_SHOW("app direct data", APP_DIRECT_IO); + IOSTAT_INFO_SHOW("app mapped data", APP_MAPPED_IO); + IOSTAT_INFO_SHOW("app buffered cdata", APP_BUFFERED_CDATA_IO); + IOSTAT_INFO_SHOW("app mapped cdata", APP_MAPPED_CDATA_IO); /* print fs write IOs */ - seq_printf(seq, "fs data: %-16llu\n", - sbi->rw_iostat[FS_DATA_IO]); - seq_printf(seq, "fs cdata: %-16llu\n", - sbi->rw_iostat[FS_CDATA_IO]); - seq_printf(seq, "fs node: %-16llu\n", - sbi->rw_iostat[FS_NODE_IO]); - seq_printf(seq, "fs meta: %-16llu\n", - sbi->rw_iostat[FS_META_IO]); - seq_printf(seq, "fs gc data: %-16llu\n", - sbi->rw_iostat[FS_GC_DATA_IO]); - seq_printf(seq, "fs gc node: %-16llu\n", - sbi->rw_iostat[FS_GC_NODE_IO]); - seq_printf(seq, "fs cp data: %-16llu\n", - sbi->rw_iostat[FS_CP_DATA_IO]); - seq_printf(seq, "fs cp node: %-16llu\n", - sbi->rw_iostat[FS_CP_NODE_IO]); - seq_printf(seq, "fs cp meta: %-16llu\n", - sbi->rw_iostat[FS_CP_META_IO]); + IOSTAT_INFO_SHOW("fs data", FS_DATA_IO); + IOSTAT_INFO_SHOW("fs cdata", FS_CDATA_IO); + IOSTAT_INFO_SHOW("fs node", FS_NODE_IO); + IOSTAT_INFO_SHOW("fs meta", FS_META_IO); + IOSTAT_INFO_SHOW("fs gc data", FS_GC_DATA_IO); + IOSTAT_INFO_SHOW("fs gc node", FS_GC_NODE_IO); + IOSTAT_INFO_SHOW("fs cp data", FS_CP_DATA_IO); + IOSTAT_INFO_SHOW("fs cp node", FS_CP_NODE_IO); + IOSTAT_INFO_SHOW("fs cp meta", FS_CP_META_IO); /* print app read IOs */ seq_puts(seq, "[READ]\n"); - seq_printf(seq, "app buffered data: %-16llu\n", - sbi->rw_iostat[APP_BUFFERED_READ_IO]); - seq_printf(seq, "app direct data: %-16llu\n", - sbi->rw_iostat[APP_DIRECT_READ_IO]); - seq_printf(seq, "app mapped data: %-16llu\n", - sbi->rw_iostat[APP_MAPPED_READ_IO]); - seq_printf(seq, "app buffered cdata: %-16llu\n", - sbi->rw_iostat[APP_BUFFERED_CDATA_READ_IO]); - seq_printf(seq, "app mapped cdata: %-16llu\n", - sbi->rw_iostat[APP_MAPPED_CDATA_READ_IO]); + IOSTAT_INFO_SHOW("app buffered data", APP_BUFFERED_READ_IO); + IOSTAT_INFO_SHOW("app direct data", APP_DIRECT_READ_IO); + IOSTAT_INFO_SHOW("app mapped data", APP_MAPPED_READ_IO); + IOSTAT_INFO_SHOW("app buffered cdata", APP_BUFFERED_CDATA_READ_IO); + IOSTAT_INFO_SHOW("app mapped cdata", APP_MAPPED_CDATA_READ_IO); /* print fs read IOs */ - seq_printf(seq, "fs data: %-16llu\n", - sbi->rw_iostat[FS_DATA_READ_IO]); - seq_printf(seq, "fs gc data: 
%-16llu\n", - sbi->rw_iostat[FS_GDATA_READ_IO]); - seq_printf(seq, "fs cdata: %-16llu\n", - sbi->rw_iostat[FS_CDATA_READ_IO]); - seq_printf(seq, "fs node: %-16llu\n", - sbi->rw_iostat[FS_NODE_READ_IO]); - seq_printf(seq, "fs meta: %-16llu\n", - sbi->rw_iostat[FS_META_READ_IO]); + IOSTAT_INFO_SHOW("fs data", FS_DATA_READ_IO); + IOSTAT_INFO_SHOW("fs gc data", FS_GDATA_READ_IO); + IOSTAT_INFO_SHOW("fs cdata", FS_CDATA_READ_IO); + IOSTAT_INFO_SHOW("fs node", FS_NODE_READ_IO); + IOSTAT_INFO_SHOW("fs meta", FS_META_READ_IO); /* print other IOs */ seq_puts(seq, "[OTHER]\n"); - seq_printf(seq, "fs discard: %-16llu\n", - sbi->rw_iostat[FS_DISCARD]); + IOSTAT_INFO_SHOW("fs discard", FS_DISCARD_IO); + IOSTAT_INFO_SHOW("fs flush", FS_FLUSH_IO); return 0; } static inline void __record_iostat_latency(struct f2fs_sb_info *sbi) { - int io, idx = 0; - unsigned int cnt; + int io, idx; struct f2fs_iostat_latency iostat_lat[MAX_IO_TYPE][NR_PAGE_TYPE]; struct iostat_lat_info *io_lat = sbi->iostat_io_lat; unsigned long flags; @@ -106,12 +94,11 @@ static inline void __record_iostat_latency(struct f2fs_sb_info *sbi) spin_lock_irqsave(&sbi->iostat_lat_lock, flags); for (idx = 0; idx < MAX_IO_TYPE; idx++) { for (io = 0; io < NR_PAGE_TYPE; io++) { - cnt = io_lat->bio_cnt[idx][io]; iostat_lat[idx][io].peak_lat = jiffies_to_msecs(io_lat->peak_lat[idx][io]); - iostat_lat[idx][io].cnt = cnt; - iostat_lat[idx][io].avg_lat = cnt ? - jiffies_to_msecs(io_lat->sum_lat[idx][io]) / cnt : 0; + iostat_lat[idx][io].cnt = io_lat->bio_cnt[idx][io]; + iostat_lat[idx][io].avg_lat = iostat_lat[idx][io].cnt ? + jiffies_to_msecs(io_lat->sum_lat[idx][io]) / iostat_lat[idx][io].cnt : 0; io_lat->sum_lat[idx][io] = 0; io_lat->peak_lat[idx][io] = 0; io_lat->bio_cnt[idx][io] = 0; @@ -141,9 +128,9 @@ static inline void f2fs_record_iostat(struct f2fs_sb_info *sbi) msecs_to_jiffies(sbi->iostat_period_ms); for (i = 0; i < NR_IO_TYPE; i++) { - iostat_diff[i] = sbi->rw_iostat[i] - - sbi->prev_rw_iostat[i]; - sbi->prev_rw_iostat[i] = sbi->rw_iostat[i]; + iostat_diff[i] = sbi->iostat_bytes[i] - + sbi->prev_iostat_bytes[i]; + sbi->prev_iostat_bytes[i] = sbi->iostat_bytes[i]; } spin_unlock_irqrestore(&sbi->iostat_lock, flags); @@ -159,8 +146,9 @@ void f2fs_reset_iostat(struct f2fs_sb_info *sbi) spin_lock_irq(&sbi->iostat_lock); for (i = 0; i < NR_IO_TYPE; i++) { - sbi->rw_iostat[i] = 0; - sbi->prev_rw_iostat[i] = 0; + sbi->iostat_count[i] = 0; + sbi->iostat_bytes[i] = 0; + sbi->prev_iostat_bytes[i] = 0; } spin_unlock_irq(&sbi->iostat_lock); @@ -169,6 +157,13 @@ void f2fs_reset_iostat(struct f2fs_sb_info *sbi) spin_unlock_irq(&sbi->iostat_lat_lock); } +static inline void __f2fs_update_iostat(struct f2fs_sb_info *sbi, + enum iostat_type type, unsigned long long io_bytes) +{ + sbi->iostat_bytes[type] += io_bytes; + sbi->iostat_count[type]++; +} + void f2fs_update_iostat(struct f2fs_sb_info *sbi, struct inode *inode, enum iostat_type type, unsigned long long io_bytes) { @@ -178,33 +173,33 @@ void f2fs_update_iostat(struct f2fs_sb_info *sbi, struct inode *inode, return; spin_lock_irqsave(&sbi->iostat_lock, flags); - sbi->rw_iostat[type] += io_bytes; + __f2fs_update_iostat(sbi, type, io_bytes); if (type == APP_BUFFERED_IO || type == APP_DIRECT_IO) - sbi->rw_iostat[APP_WRITE_IO] += io_bytes; + __f2fs_update_iostat(sbi, APP_WRITE_IO, io_bytes); if (type == APP_BUFFERED_READ_IO || type == APP_DIRECT_READ_IO) - sbi->rw_iostat[APP_READ_IO] += io_bytes; + __f2fs_update_iostat(sbi, APP_READ_IO, io_bytes); #ifdef CONFIG_F2FS_FS_COMPRESSION if (inode && 
f2fs_compressed_file(inode)) { if (type == APP_BUFFERED_IO) - sbi->rw_iostat[APP_BUFFERED_CDATA_IO] += io_bytes; + __f2fs_update_iostat(sbi, APP_BUFFERED_CDATA_IO, io_bytes); if (type == APP_BUFFERED_READ_IO) - sbi->rw_iostat[APP_BUFFERED_CDATA_READ_IO] += io_bytes; + __f2fs_update_iostat(sbi, APP_BUFFERED_CDATA_READ_IO, io_bytes); if (type == APP_MAPPED_READ_IO) - sbi->rw_iostat[APP_MAPPED_CDATA_READ_IO] += io_bytes; + __f2fs_update_iostat(sbi, APP_MAPPED_CDATA_READ_IO, io_bytes); if (type == APP_MAPPED_IO) - sbi->rw_iostat[APP_MAPPED_CDATA_IO] += io_bytes; + __f2fs_update_iostat(sbi, APP_MAPPED_CDATA_IO, io_bytes); if (type == FS_DATA_READ_IO) - sbi->rw_iostat[FS_CDATA_READ_IO] += io_bytes; + __f2fs_update_iostat(sbi, FS_CDATA_READ_IO, io_bytes); if (type == FS_DATA_IO) - sbi->rw_iostat[FS_CDATA_IO] += io_bytes; + __f2fs_update_iostat(sbi, FS_CDATA_IO, io_bytes); } #endif @@ -214,49 +209,48 @@ void f2fs_update_iostat(struct f2fs_sb_info *sbi, struct inode *inode, } static inline void __update_iostat_latency(struct bio_iostat_ctx *iostat_ctx, - int rw, bool is_sync) + enum iostat_lat_type lat_type) { unsigned long ts_diff; - unsigned int iotype = iostat_ctx->type; + unsigned int page_type = iostat_ctx->type; struct f2fs_sb_info *sbi = iostat_ctx->sbi; struct iostat_lat_info *io_lat = sbi->iostat_io_lat; - int idx; unsigned long flags; if (!sbi->iostat_enable) return; ts_diff = jiffies - iostat_ctx->submit_ts; - if (iotype >= META_FLUSH) - iotype = META; - - if (rw == 0) { - idx = READ_IO; - } else { - if (is_sync) - idx = WRITE_SYNC_IO; - else - idx = WRITE_ASYNC_IO; + if (page_type == META_FLUSH) { + page_type = META; + } else if (page_type >= NR_PAGE_TYPE) { + f2fs_warn(sbi, "%s: %d over NR_PAGE_TYPE", __func__, page_type); + return; } spin_lock_irqsave(&sbi->iostat_lat_lock, flags); - io_lat->sum_lat[idx][iotype] += ts_diff; - io_lat->bio_cnt[idx][iotype]++; - if (ts_diff > io_lat->peak_lat[idx][iotype]) - io_lat->peak_lat[idx][iotype] = ts_diff; + io_lat->sum_lat[lat_type][page_type] += ts_diff; + io_lat->bio_cnt[lat_type][page_type]++; + if (ts_diff > io_lat->peak_lat[lat_type][page_type]) + io_lat->peak_lat[lat_type][page_type] = ts_diff; spin_unlock_irqrestore(&sbi->iostat_lat_lock, flags); } -void iostat_update_and_unbind_ctx(struct bio *bio, int rw) +void iostat_update_and_unbind_ctx(struct bio *bio) { struct bio_iostat_ctx *iostat_ctx = bio->bi_private; - bool is_sync = bio->bi_opf & REQ_SYNC; + enum iostat_lat_type lat_type; - if (rw == 0) - bio->bi_private = iostat_ctx->post_read_ctx; - else + if (op_is_write(bio_op(bio))) { + lat_type = bio->bi_opf & REQ_SYNC ? 
+ WRITE_SYNC_IO : WRITE_ASYNC_IO; bio->bi_private = iostat_ctx->sbi; - __update_iostat_latency(iostat_ctx, rw, is_sync); + } else { + lat_type = READ_IO; + bio->bi_private = iostat_ctx->post_read_ctx; + } + + __update_iostat_latency(iostat_ctx, lat_type); mempool_free(iostat_ctx, bio_iostat_ctx_pool); } diff --git a/fs/f2fs/iostat.h b/fs/f2fs/iostat.h index 2c048307b6e0..eb99d05cf272 100644 --- a/fs/f2fs/iostat.h +++ b/fs/f2fs/iostat.h @@ -8,20 +8,21 @@ struct bio_post_read_ctx; +enum iostat_lat_type { + READ_IO = 0, + WRITE_SYNC_IO, + WRITE_ASYNC_IO, + MAX_IO_TYPE, +}; + #ifdef CONFIG_F2FS_IOSTAT +#define NUM_PREALLOC_IOSTAT_CTXS 128 #define DEFAULT_IOSTAT_PERIOD_MS 3000 #define MIN_IOSTAT_PERIOD_MS 100 /* maximum period of iostat tracing is 1 day */ #define MAX_IOSTAT_PERIOD_MS 8640000 -enum { - READ_IO, - WRITE_SYNC_IO, - WRITE_ASYNC_IO, - MAX_IO_TYPE, -}; - struct iostat_lat_info { unsigned long sum_lat[MAX_IO_TYPE][NR_PAGE_TYPE]; /* sum of io latencies */ unsigned long peak_lat[MAX_IO_TYPE][NR_PAGE_TYPE]; /* peak io latency */ @@ -57,7 +58,7 @@ static inline struct bio_post_read_ctx *get_post_read_ctx(struct bio *bio) return iostat_ctx->post_read_ctx; } -extern void iostat_update_and_unbind_ctx(struct bio *bio, int rw); +extern void iostat_update_and_unbind_ctx(struct bio *bio); extern void iostat_alloc_and_bind_ctx(struct f2fs_sb_info *sbi, struct bio *bio, struct bio_post_read_ctx *ctx); extern int f2fs_init_iostat_processing(void); @@ -67,7 +68,7 @@ extern void f2fs_destroy_iostat(struct f2fs_sb_info *sbi); #else static inline void f2fs_update_iostat(struct f2fs_sb_info *sbi, struct inode *inode, enum iostat_type type, unsigned long long io_bytes) {} -static inline void iostat_update_and_unbind_ctx(struct bio *bio, int rw) {} +static inline void iostat_update_and_unbind_ctx(struct bio *bio) {} static inline void iostat_alloc_and_bind_ctx(struct f2fs_sb_info *sbi, struct bio *bio, struct bio_post_read_ctx *ctx) {} static inline void iostat_update_submit_ctx(struct bio *bio, diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c index d8e01bbbf27f..11fc4c8036a9 100644 --- a/fs/f2fs/namei.c +++ b/fs/f2fs/namei.c @@ -926,9 +926,6 @@ static int f2fs_tmpfile(struct mnt_idmap *idmap, struct inode *dir, static int f2fs_create_whiteout(struct mnt_idmap *idmap, struct inode *dir, struct inode **whiteout) { - if (unlikely(f2fs_cp_error(F2FS_I_SB(dir)))) - return -EIO; - return __f2fs_tmpfile(idmap, dir, NULL, S_IFCHR | WHITEOUT_MODE, true, whiteout); } @@ -966,7 +963,7 @@ static int f2fs_rename(struct mnt_idmap *idmap, struct inode *old_dir, /* * If new_inode is null, the below renaming flow will - * add a link in old_dir which can conver inline_dir. + * add a link in old_dir which can convert inline_dir. * After then, if we failed to get the entry due to other * reasons like ENOMEM, we had to remove the new entry. 
* Instead of adding such the error handling routine, let's diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c index cf997356d9f9..bd1dad523796 100644 --- a/fs/f2fs/node.c +++ b/fs/f2fs/node.c @@ -1587,7 +1587,7 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted, .op_flags = wbc_to_write_flags(wbc), .page = page, .encrypted_page = NULL, - .submitted = false, + .submitted = 0, .io_type = io_type, .io_wbc = wbc, }; @@ -1651,7 +1651,6 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted, } set_page_writeback(page); - ClearPageError(page); fio.old_blkaddr = ni.blk_addr; f2fs_do_write_node_page(nid, &fio); @@ -2083,8 +2082,6 @@ int f2fs_wait_on_node_pages_writeback(struct f2fs_sb_info *sbi, spin_unlock_irqrestore(&sbi->fsync_node_lock, flags); f2fs_wait_on_page_writeback(page, NODE, true, false); - if (TestClearPageError(page)) - ret = -EIO; put_page(page); @@ -2548,10 +2545,8 @@ bool f2fs_alloc_nid(struct f2fs_sb_info *sbi, nid_t *nid) struct f2fs_nm_info *nm_i = NM_I(sbi); struct free_nid *i = NULL; retry: - if (time_to_inject(sbi, FAULT_ALLOC_NID)) { - f2fs_show_injection_info(sbi, FAULT_ALLOC_NID); + if (time_to_inject(sbi, FAULT_ALLOC_NID)) return false; - } spin_lock(&nm_i->nid_list_lock); diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c index ae3c4e5474ef..227e25836173 100644 --- a/fs/f2fs/segment.c +++ b/fs/f2fs/segment.c @@ -192,18 +192,18 @@ void f2fs_abort_atomic_write(struct inode *inode, bool clean) if (!f2fs_is_atomic_file(inode)) return; - clear_inode_flag(fi->cow_inode, FI_COW_FILE); - iput(fi->cow_inode); - fi->cow_inode = NULL; release_atomic_write_cnt(inode); clear_inode_flag(inode, FI_ATOMIC_COMMITTED); clear_inode_flag(inode, FI_ATOMIC_REPLACE); clear_inode_flag(inode, FI_ATOMIC_FILE); stat_dec_atomic_inode(inode); + F2FS_I(inode)->atomic_write_task = NULL; + if (clean) { truncate_inode_pages_final(inode->i_mapping); f2fs_i_size_write(inode, fi->original_i_size); + fi->original_i_size = 0; } } @@ -255,6 +255,9 @@ retry: } f2fs_put_dnode(&dn); + + trace_f2fs_replace_atomic_write_block(inode, F2FS_I(inode)->cow_inode, + index, *old_addr, new_addr, recover); return 0; } @@ -262,19 +265,24 @@ static void __complete_revoke_list(struct inode *inode, struct list_head *head, bool revoke) { struct revoke_entry *cur, *tmp; + pgoff_t start_index = 0; bool truncate = is_inode_flag_set(inode, FI_ATOMIC_REPLACE); list_for_each_entry_safe(cur, tmp, head, list) { - if (revoke) + if (revoke) { __replace_atomic_write_block(inode, cur->index, cur->old_addr, NULL, true); + } else if (truncate) { + f2fs_truncate_hole(inode, start_index, cur->index); + start_index = cur->index + 1; + } list_del(&cur->list); kmem_cache_free(revoke_entry_slab, cur); } if (!revoke && truncate) - f2fs_do_truncate_blocks(inode, 0, false); + f2fs_do_truncate_blocks(inode, start_index * PAGE_SIZE, false); } static int __f2fs_commit_atomic_write(struct inode *inode) @@ -384,10 +392,8 @@ int f2fs_commit_atomic_write(struct inode *inode) */ void f2fs_balance_fs(struct f2fs_sb_info *sbi, bool need) { - if (time_to_inject(sbi, FAULT_CHECKPOINT)) { - f2fs_show_injection_info(sbi, FAULT_CHECKPOINT); + if (time_to_inject(sbi, FAULT_CHECKPOINT)) f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_FAULT_INJECT); - } /* balance_fs_bg is able to be pending */ if (need && excess_cached_nats(sbi)) @@ -508,6 +514,8 @@ static int __submit_flush_wait(struct f2fs_sb_info *sbi, trace_f2fs_issue_flush(bdev, test_opt(sbi, NOBARRIER), test_opt(sbi, FLUSH_MERGE), ret); + if (!ret) + 
f2fs_update_iostat(sbi, NULL, FS_FLUSH_IO, 0); return ret; } @@ -1059,7 +1067,7 @@ static void __init_discard_policy(struct f2fs_sb_info *sbi, dpolicy->granularity = granularity; dpolicy->max_requests = dcc->max_discard_request; - dpolicy->io_aware_gran = MAX_PLIST_NUM; + dpolicy->io_aware_gran = dcc->discard_io_aware_gran; dpolicy->timeout = false; if (discard_type == DPOLICY_BG) { @@ -1095,9 +1103,8 @@ static void __update_discard_tree_range(struct f2fs_sb_info *sbi, block_t start, block_t len); /* this function is copied from blkdev_issue_discard from block/blk-lib.c */ static int __submit_discard_cmd(struct f2fs_sb_info *sbi, - struct discard_policy *dpolicy, - struct discard_cmd *dc, - unsigned int *issued) + struct discard_policy *dpolicy, + struct discard_cmd *dc, int *issued) { struct block_device *bdev = dc->bdev; unsigned int max_discard_blocks = @@ -1141,7 +1148,6 @@ static int __submit_discard_cmd(struct f2fs_sb_info *sbi, dc->len += len; if (time_to_inject(sbi, FAULT_DISCARD)) { - f2fs_show_injection_info(sbi, FAULT_DISCARD); err = -EIO; } else { err = __blkdev_issue_discard(bdev, @@ -1186,7 +1192,7 @@ static int __submit_discard_cmd(struct f2fs_sb_info *sbi, atomic_inc(&dcc->issued_discard); - f2fs_update_iostat(sbi, NULL, FS_DISCARD, len * F2FS_BLKSIZE); + f2fs_update_iostat(sbi, NULL, FS_DISCARD_IO, len * F2FS_BLKSIZE); lstart += len; start += len; @@ -1378,8 +1384,8 @@ static void __queue_discard_cmd(struct f2fs_sb_info *sbi, mutex_unlock(&SM_I(sbi)->dcc_info->cmd_lock); } -static unsigned int __issue_discard_cmd_orderly(struct f2fs_sb_info *sbi, - struct discard_policy *dpolicy) +static void __issue_discard_cmd_orderly(struct f2fs_sb_info *sbi, + struct discard_policy *dpolicy, int *issued) { struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info; struct discard_cmd *prev_dc = NULL, *next_dc = NULL; @@ -1387,7 +1393,6 @@ static unsigned int __issue_discard_cmd_orderly(struct f2fs_sb_info *sbi, struct discard_cmd *dc; struct blk_plug plug; unsigned int pos = dcc->next_pos; - unsigned int issued = 0; bool io_interrupted = false; mutex_lock(&dcc->cmd_lock); @@ -1414,9 +1419,9 @@ static unsigned int __issue_discard_cmd_orderly(struct f2fs_sb_info *sbi, } dcc->next_pos = dc->lstart + dc->len; - err = __submit_discard_cmd(sbi, dpolicy, dc, &issued); + err = __submit_discard_cmd(sbi, dpolicy, dc, issued); - if (issued >= dpolicy->max_requests) + if (*issued >= dpolicy->max_requests) break; next: node = rb_next(&dc->rb_node); @@ -1432,10 +1437,8 @@ next: mutex_unlock(&dcc->cmd_lock); - if (!issued && io_interrupted) - issued = -1; - - return issued; + if (!(*issued) && io_interrupted) + *issued = -1; } static unsigned int __wait_all_discard_cmd(struct f2fs_sb_info *sbi, struct discard_policy *dpolicy); @@ -1463,8 +1466,10 @@ retry: if (i + 1 < dpolicy->granularity) break; - if (i + 1 < dcc->max_ordered_discard && dpolicy->ordered) - return __issue_discard_cmd_orderly(sbi, dpolicy); + if (i + 1 < dcc->max_ordered_discard && dpolicy->ordered) { + __issue_discard_cmd_orderly(sbi, dpolicy, &issued); + return issued; + } pend_list = &dcc->pend_list[i]; @@ -1609,9 +1614,9 @@ static unsigned int __wait_all_discard_cmd(struct f2fs_sb_info *sbi, return __wait_discard_cmd_range(sbi, dpolicy, 0, UINT_MAX); /* wait all */ - __init_discard_policy(sbi, &dp, DPOLICY_FSTRIM, 1); + __init_discard_policy(sbi, &dp, DPOLICY_FSTRIM, MIN_DISCARD_GRANULARITY); discard_blks = __wait_discard_cmd_range(sbi, &dp, 0, UINT_MAX); - __init_discard_policy(sbi, &dp, DPOLICY_UMOUNT, 1); + 
__init_discard_policy(sbi, &dp, DPOLICY_UMOUNT, MIN_DISCARD_GRANULARITY); discard_blks += __wait_discard_cmd_range(sbi, &dp, 0, UINT_MAX); return discard_blks; @@ -1653,7 +1658,14 @@ void f2fs_stop_discard_thread(struct f2fs_sb_info *sbi) } } -/* This comes from f2fs_put_super */ +/** + * f2fs_issue_discard_timeout() - Issue all discard cmd within UMOUNT_DISCARD_TIMEOUT + * @sbi: the f2fs_sb_info data for discard cmd to issue + * + * When UMOUNT_DISCARD_TIMEOUT is exceeded, all remaining discard commands will be dropped + * + * Return true if issued all discard cmd or no discard cmd need issue, otherwise return false. + */ bool f2fs_issue_discard_timeout(struct f2fs_sb_info *sbi) { struct discard_cmd_control *dcc = SM_I(sbi)->dcc_info; @@ -1661,7 +1673,7 @@ bool f2fs_issue_discard_timeout(struct f2fs_sb_info *sbi) bool dropped; if (!atomic_read(&dcc->discard_cmd_cnt)) - return false; + return true; __init_discard_policy(sbi, &dpolicy, DPOLICY_UMOUNT, dcc->discard_granularity); @@ -1672,7 +1684,7 @@ bool f2fs_issue_discard_timeout(struct f2fs_sb_info *sbi) __wait_all_discard_cmd(sbi, NULL); f2fs_bug_on(sbi, atomic_read(&dcc->discard_cmd_cnt)); - return dropped; + return !dropped; } static int issue_discard_thread(void *data) @@ -1694,13 +1706,14 @@ static int issue_discard_thread(void *data) if (sbi->gc_mode == GC_URGENT_HIGH || !f2fs_available_free_memory(sbi, DISCARD_CACHE)) - __init_discard_policy(sbi, &dpolicy, DPOLICY_FORCE, 1); + __init_discard_policy(sbi, &dpolicy, DPOLICY_FORCE, + MIN_DISCARD_GRANULARITY); else __init_discard_policy(sbi, &dpolicy, DPOLICY_BG, dcc->discard_granularity); if (dcc->discard_wake) - dcc->discard_wake = 0; + dcc->discard_wake = false; /* clean up pending candidates before going to sleep */ if (atomic_read(&dcc->queued_discard)) @@ -2065,6 +2078,7 @@ static int create_discard_cmd_control(struct f2fs_sb_info *sbi) if (!dcc) return -ENOMEM; + dcc->discard_io_aware_gran = MAX_PLIST_NUM; dcc->discard_granularity = DEFAULT_DISCARD_GRANULARITY; dcc->max_ordered_discard = DEFAULT_MAX_ORDERED_DISCARD_GRANULARITY; if (F2FS_OPTION(sbi).discard_unit == DISCARD_UNIT_SEGMENT) @@ -2327,17 +2341,13 @@ bool f2fs_is_checkpointed_data(struct f2fs_sb_info *sbi, block_t blkaddr) return is_cp; } -/* - * This function should be resided under the curseg_mutex lock - */ -static void __add_sum_entry(struct f2fs_sb_info *sbi, int type, - struct f2fs_summary *sum) +static unsigned short f2fs_curseg_valid_blocks(struct f2fs_sb_info *sbi, int type) { struct curseg_info *curseg = CURSEG_I(sbi, type); - void *addr = curseg->sum_blk; - addr += curseg->next_blkoff * sizeof(struct f2fs_summary); - memcpy(addr, sum, sizeof(struct f2fs_summary)); + if (sbi->ckpt->alloc_type[type] == SSR) + return sbi->blocks_per_seg; + return curseg->next_blkoff; } /* @@ -2349,15 +2359,11 @@ int f2fs_npages_for_summary_flush(struct f2fs_sb_info *sbi, bool for_ra) int i, sum_in_page; for (i = CURSEG_HOT_DATA; i <= CURSEG_COLD_DATA; i++) { - if (sbi->ckpt->alloc_type[i] == SSR) - valid_sum_count += sbi->blocks_per_seg; - else { - if (for_ra) - valid_sum_count += le16_to_cpu( - F2FS_CKPT(sbi)->cur_data_blkoff[i]); - else - valid_sum_count += curseg_blkoff(sbi, i); - } + if (sbi->ckpt->alloc_type[i] != SSR && for_ra) + valid_sum_count += + le16_to_cpu(F2FS_CKPT(sbi)->cur_data_blkoff[i]); + else + valid_sum_count += f2fs_curseg_valid_blocks(sbi, i); } sum_in_page = (PAGE_SIZE - 2 * SUM_JOURNAL_SIZE - @@ -2628,30 +2634,10 @@ static int __next_free_blkoff(struct f2fs_sb_info *sbi, return 
@@ -2628,30 +2634,10 @@ static int __next_free_blkoff(struct f2fs_sb_info *sbi,
 	return __find_rev_next_zero_bit(target_map, sbi->blocks_per_seg, start);
 }
 
-/*
- * If a segment is written by LFS manner, next block offset is just obtained
- * by increasing the current block offset. However, if a segment is written by
- * SSR manner, next block offset obtained by calling __next_free_blkoff
- */
-static void __refresh_next_blkoff(struct f2fs_sb_info *sbi,
-				struct curseg_info *seg)
+static int f2fs_find_next_ssr_block(struct f2fs_sb_info *sbi,
+				struct curseg_info *seg)
 {
-	if (seg->alloc_type == SSR) {
-		seg->next_blkoff =
-			__next_free_blkoff(sbi, seg->segno,
-						seg->next_blkoff + 1);
-	} else {
-		seg->next_blkoff++;
-		if (F2FS_OPTION(sbi).fs_mode == FS_MODE_FRAGMENT_BLK) {
-			/* To allocate block chunks in different sizes, use random number */
-			if (--seg->fragment_remained_chunk <= 0) {
-				seg->fragment_remained_chunk =
-					get_random_u32_inclusive(1, sbi->max_fragment_chunk);
-				seg->next_blkoff +=
-					get_random_u32_inclusive(1, sbi->max_fragment_hole);
-			}
-		}
-	}
+	return __next_free_blkoff(sbi, seg->segno, seg->next_blkoff + 1);
 }
 
 bool f2fs_segment_has_free_slot(struct f2fs_sb_info *sbi, int segno)
@@ -2909,33 +2895,23 @@ static void __allocate_new_segment(struct f2fs_sb_info *sbi, int type,
 	struct curseg_info *curseg = CURSEG_I(sbi, type);
 	unsigned int old_segno;
 
-	if (!curseg->inited)
-		goto alloc;
-
-	if (force || curseg->next_blkoff ||
-		get_valid_blocks(sbi, curseg->segno, new_sec))
-		goto alloc;
-
-	if (!get_ckpt_valid_blocks(sbi, curseg->segno, new_sec))
+	if (!force && curseg->inited &&
+	    !curseg->next_blkoff &&
+	    !get_valid_blocks(sbi, curseg->segno, new_sec) &&
+	    !get_ckpt_valid_blocks(sbi, curseg->segno, new_sec))
 		return;
-alloc:
+
 	old_segno = curseg->segno;
 	new_curseg(sbi, type, true);
 	stat_inc_seg_type(sbi, curseg);
 	locate_dirty_segment(sbi, old_segno);
 }
 
-static void __allocate_new_section(struct f2fs_sb_info *sbi,
-						int type, bool force)
-{
-	__allocate_new_segment(sbi, type, true, force);
-}
-
 void f2fs_allocate_new_section(struct f2fs_sb_info *sbi, int type, bool force)
 {
 	f2fs_down_read(&SM_I(sbi)->curseg_lock);
 	down_write(&SIT_I(sbi)->sentry_lock);
-	__allocate_new_section(sbi, type, force);
+	__allocate_new_segment(sbi, type, true, force);
 	up_write(&SIT_I(sbi)->sentry_lock);
 	f2fs_up_read(&SM_I(sbi)->curseg_lock);
 }
@@ -3113,13 +3089,6 @@ out:
 	return err;
 }
 
-static bool __has_curseg_space(struct f2fs_sb_info *sbi,
-					struct curseg_info *curseg)
-{
-	return curseg->next_blkoff < f2fs_usable_blks_in_seg(sbi,
-							curseg->segno);
-}
-
 int f2fs_rw_hint_to_seg_type(enum rw_hint hint)
 {
 	switch (hint) {
@@ -3238,6 +3207,19 @@ static int __get_segment_type(struct f2fs_io_info *fio)
 	return type;
 }
 
+static void f2fs_randomize_chunk(struct f2fs_sb_info *sbi,
+		struct curseg_info *seg)
+{
+	/* To allocate block chunks in different sizes, use random number */
+	if (--seg->fragment_remained_chunk > 0)
+		return;
+
+	seg->fragment_remained_chunk =
+		get_random_u32_inclusive(1, sbi->max_fragment_chunk);
+	seg->next_blkoff +=
+		get_random_u32_inclusive(1, sbi->max_fragment_hole);
+}
+
 void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
 		block_t old_blkaddr, block_t *new_blkaddr,
 		struct f2fs_summary *sum, int type,
@@ -3248,6 +3230,7 @@ void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
 	unsigned long long old_mtime;
 	bool from_gc = (type == CURSEG_ALL_DATA_ATGC);
 	struct seg_entry *se = NULL;
+	bool segment_full = false;
 
 	f2fs_down_read(&SM_I(sbi)->curseg_lock);
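
For readers following the refactor above: the fragment-mode branch of the old __refresh_next_blkoff() is now its own helper, f2fs_randomize_chunk(). A userspace sketch of the same idea, consume a chunk of consecutive blocks, then skip a random hole so layout is deliberately fragmented; rand() and the bounds stand in for get_random_u32_inclusive() and the sbi->max_fragment_* knobs:

/* Illustrative re-implementation; not the kernel code itself. */
#include <stdio.h>
#include <stdlib.h>

struct seg_sketch {
	unsigned int next_blkoff;
	int fragment_remained_chunk;
};

static unsigned int rnd_inclusive(unsigned int lo, unsigned int hi)
{
	return lo + (unsigned int)rand() % (hi - lo + 1);
}

static void randomize_chunk(struct seg_sketch *seg,
			    unsigned int max_chunk, unsigned int max_hole)
{
	if (--seg->fragment_remained_chunk > 0)
		return;		/* still inside the current chunk */

	seg->fragment_remained_chunk = rnd_inclusive(1, max_chunk);
	seg->next_blkoff += rnd_inclusive(1, max_hole);	/* leave a hole */
}

int main(void)
{
	struct seg_sketch seg = { .next_blkoff = 0,
				  .fragment_remained_chunk = 1 };

	for (int i = 0; i < 8; i++) {
		seg.next_blkoff++;		/* LFS-style advance */
		randomize_chunk(&seg, 4, 3);
		printf("blkoff=%u\n", seg.next_blkoff);
	}
	return 0;
}
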
@@ -3266,15 +3249,16 @@ void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
 
 	f2fs_wait_discard_bio(sbi, *new_blkaddr);
 
-	/*
-	 * __add_sum_entry should be resided under the curseg_mutex
-	 * because, this function updates a summary entry in the
-	 * current summary block.
-	 */
-	__add_sum_entry(sbi, type, sum);
-
-	__refresh_next_blkoff(sbi, curseg);
-
+	curseg->sum_blk->entries[curseg->next_blkoff] = *sum;
+	if (curseg->alloc_type == SSR) {
+		curseg->next_blkoff = f2fs_find_next_ssr_block(sbi, curseg);
+	} else {
+		curseg->next_blkoff++;
+		if (F2FS_OPTION(sbi).fs_mode == FS_MODE_FRAGMENT_BLK)
+			f2fs_randomize_chunk(sbi, curseg);
+	}
+	if (curseg->next_blkoff >= f2fs_usable_blks_in_seg(sbi, curseg->segno))
+		segment_full = true;
 	stat_inc_block_count(sbi, curseg);
 
 	if (from_gc) {
@@ -3293,10 +3277,11 @@ void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
 	if (GET_SEGNO(sbi, old_blkaddr) != NULL_SEGNO)
 		update_sit_entry(sbi, old_blkaddr, -1);
 
-	if (!__has_curseg_space(sbi, curseg)) {
-		/*
-		 * Flush out current segment and replace it with new segment.
-		 */
+	/*
+	 * If the current segment is full, flush it out and replace it with a
+	 * new segment.
+	 */
+	if (segment_full) {
 		if (from_gc) {
 			get_atssr_segment(sbi, type, se->type,
 						AT_SSR, se->mtime);
@@ -3331,10 +3316,10 @@ void f2fs_allocate_data_block(struct f2fs_sb_info *sbi, struct page *page,
 		struct f2fs_bio_info *io;
 
 		if (F2FS_IO_ALIGNED(sbi))
-			fio->retry = false;
+			fio->retry = 0;
 
 		INIT_LIST_HEAD(&fio->list);
-		fio->in_list = true;
+		fio->in_list = 1;
 		io = sbi->write_io[fio->type] + fio->temp;
 		spin_lock(&io->io_lock);
 		list_add_tail(&fio->list, &io->io_list);
@@ -3415,14 +3400,13 @@ void f2fs_do_write_meta_page(struct f2fs_sb_info *sbi, struct page *page,
 		.new_blkaddr = page->index,
 		.page = page,
 		.encrypted_page = NULL,
-		.in_list = false,
+		.in_list = 0,
 	};
 
 	if (unlikely(page->index >= MAIN_BLKADDR(sbi)))
 		fio.op_flags &= ~REQ_META;
 
 	set_page_writeback(page);
-	ClearPageError(page);
 	f2fs_submit_page_write(&fio);
 
 	stat_inc_meta_count(sbi, page->index);
@@ -3487,7 +3471,7 @@ int f2fs_inplace_write_data(struct f2fs_io_info *fio)
 
 	stat_inc_inplace_blocks(fio->sbi);
 
-	if (fio->bio && !(SM_I(sbi)->ipu_policy & (1 << F2FS_IPU_NOCACHE)))
+	if (fio->bio && !IS_F2FS_IPU_NOCACHE(sbi))
 		err = f2fs_merge_page_bio(fio);
 	else
 		err = f2fs_submit_page_bio(fio);
@@ -3576,7 +3560,7 @@ void f2fs_do_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
 	}
 
 	curseg->next_blkoff = GET_BLKOFF_FROM_SEG0(sbi, new_blkaddr);
-	__add_sum_entry(sbi, type, sum);
+	curseg->sum_blk->entries[curseg->next_blkoff] = *sum;
 
 	if (!recover_curseg || recover_newaddr) {
 		if (!from_gc)
@@ -3634,7 +3618,7 @@ void f2fs_wait_on_page_writeback(struct page *page,
 
 		/* submit cached LFS IO */
 		f2fs_submit_merged_write_cond(sbi, NULL, page, 0, type);
-		/* sbumit cached IPU IO */
+		/* submit cached IPU IO */
 		f2fs_submit_merged_ipu_write(sbi, NULL, page);
 		if (ordered) {
 			wait_on_page_writeback(page);
@@ -3885,15 +3869,8 @@ static void write_compacted_summaries(struct f2fs_sb_info *sbi, block_t blkaddr)
 
 	/* Step 3: write summary entries */
 	for (i = CURSEG_HOT_DATA; i <= CURSEG_COLD_DATA; i++) {
-		unsigned short blkoff;
-
 		seg_i = CURSEG_I(sbi, i);
-		if (sbi->ckpt->alloc_type[i] == SSR)
-			blkoff = sbi->blocks_per_seg;
-		else
-			blkoff = curseg_blkoff(sbi, i);
-
-		for (j = 0; j < blkoff; j++) {
+		for (j = 0; j < f2fs_curseg_valid_blocks(sbi, i); j++) {
 			if (!page) {
 				page = f2fs_grab_meta_page(sbi, blkaddr++);
 				kaddr = (unsigned char *)page_address(page);
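
A note on the allocation hunks above: instead of re-deriving free space afterwards via __has_curseg_space(), fullness is latched into a local right after next_blkoff advances, so the decision and the offset update sit in the same locked section. A minimal model of that control flow (types and the usable-blocks constant are illustrative stand-ins for f2fs_usable_blks_in_seg()):

/* Sketch only; not the kernel function. */
#include <stdbool.h>

#define USABLE_BLKS_PER_SEG 512	/* stand-in constant */

struct curseg_sketch {
	unsigned int next_blkoff;
};

bool allocate_one_block(struct curseg_sketch *curseg)
{
	bool segment_full;

	curseg->next_blkoff++;	/* record summary, advance in segment */
	segment_full = curseg->next_blkoff >= USABLE_BLKS_PER_SEG;

	/* caller replaces the segment while still holding its locks */
	return segment_full;
}
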
@@ -5126,7 +5103,7 @@ int f2fs_build_segment_manager(struct f2fs_sb_info *sbi)
 		sm_info->rec_prefree_segments = DEF_MAX_RECLAIM_PREFREE_SEGMENTS;
 
 	if (!f2fs_lfs_mode(sbi))
-		sm_info->ipu_policy = 1 << F2FS_IPU_FSYNC;
+		sm_info->ipu_policy = BIT(F2FS_IPU_FSYNC);
 	sm_info->min_ipu_util = DEF_MIN_IPU_UTIL;
 	sm_info->min_fsync_blocks = DEF_MIN_FSYNC_BLOCKS;
 	sm_info->min_seq_blocks = sbi->blocks_per_seg;
diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
index 3ad1b7b6fa94..efdb7fc3b797 100644
--- a/fs/f2fs/segment.h
+++ b/fs/f2fs/segment.h
@@ -670,6 +670,9 @@ static inline int utilization(struct f2fs_sb_info *sbi)
 
 #define SMALL_VOLUME_SEGMENTS	(16 * 512)	/* 16GB */
 
+#define F2FS_IPU_DISABLE	0
+
+/* Modification on enum should be synchronized with ipu_mode_names array */
 enum {
 	F2FS_IPU_FORCE,
 	F2FS_IPU_SSR,
@@ -679,8 +682,29 @@ enum {
 	F2FS_IPU_ASYNC,
 	F2FS_IPU_NOCACHE,
 	F2FS_IPU_HONOR_OPU_WRITE,
+	F2FS_IPU_MAX,
 };
 
+static inline bool IS_F2FS_IPU_DISABLE(struct f2fs_sb_info *sbi)
+{
+	return SM_I(sbi)->ipu_policy == F2FS_IPU_DISABLE;
+}
+
+#define F2FS_IPU_POLICY(name)					\
+static inline bool IS_##name(struct f2fs_sb_info *sbi)		\
+{								\
+	return SM_I(sbi)->ipu_policy & BIT(name);		\
+}
+
+F2FS_IPU_POLICY(F2FS_IPU_FORCE);
+F2FS_IPU_POLICY(F2FS_IPU_SSR);
+F2FS_IPU_POLICY(F2FS_IPU_UTIL);
+F2FS_IPU_POLICY(F2FS_IPU_SSR_UTIL);
+F2FS_IPU_POLICY(F2FS_IPU_FSYNC);
+F2FS_IPU_POLICY(F2FS_IPU_ASYNC);
+F2FS_IPU_POLICY(F2FS_IPU_NOCACHE);
+F2FS_IPU_POLICY(F2FS_IPU_HONOR_OPU_WRITE);
+
 static inline unsigned int curseg_segno(struct f2fs_sb_info *sbi,
 		int type)
 {
@@ -695,15 +719,10 @@ static inline unsigned char curseg_alloc_type(struct f2fs_sb_info *sbi,
 	return curseg->alloc_type;
 }
 
-static inline unsigned short curseg_blkoff(struct f2fs_sb_info *sbi, int type)
-{
-	struct curseg_info *curseg = CURSEG_I(sbi, type);
-	return curseg->next_blkoff;
-}
-
-static inline void check_seg_range(struct f2fs_sb_info *sbi, unsigned int segno)
+static inline bool valid_main_segno(struct f2fs_sb_info *sbi,
+		unsigned int segno)
 {
-	f2fs_bug_on(sbi, segno > TOTAL_SEGS(sbi) - 1);
+	return segno <= (MAIN_SEGS(sbi) - 1);
 }
 
 static inline void verify_fio_blkaddr(struct f2fs_io_info *fio)
@@ -758,7 +777,7 @@ static inline int check_block_count(struct f2fs_sb_info *sbi,
 
 	/* check segment usage, and check boundary of a given segment number */
 	if (unlikely(GET_SIT_VBLOCKS(raw_sit) > usable_blks_per_seg
-					|| segno > TOTAL_SEGS(sbi) - 1)) {
+					|| !valid_main_segno(sbi, segno))) {
 		f2fs_err(sbi, "Wrong valid blocks %d or segno %u",
 			 GET_SIT_VBLOCKS(raw_sit), segno);
 		set_sbi_flag(sbi, SBI_NEED_FSCK);
@@ -775,7 +794,7 @@ static inline pgoff_t current_sit_addr(struct f2fs_sb_info *sbi,
 	unsigned int offset = SIT_BLOCK_OFFSET(start);
 	block_t blk_addr = sit_i->sit_base_addr + offset;
 
-	check_seg_range(sbi, start);
+	f2fs_bug_on(sbi, !valid_main_segno(sbi, start));
 
 #ifdef CONFIG_F2FS_CHECK_FS
 	if (f2fs_test_bit(offset, sit_i->sit_bitmap) !=
@@ -924,6 +943,6 @@ static inline void wake_up_discard_thread(struct f2fs_sb_info *sbi, bool force)
 	if (!wakeup || !is_idle(sbi, DISCARD_TIME))
 		return;
 wake_up:
-	dcc->discard_wake = 1;
+	dcc->discard_wake = true;
 	wake_up_interruptible_all(&dcc->discard_wait_queue);
 }
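
The segment.h hunk above introduces a predicate-generator macro: one F2FS_IPU_POLICY() invocation per enum bit stamps out an inline IS_<policy>() test. Here is a self-contained demo of the same pattern with a stub in place of struct f2fs_sb_info; only the macro shape mirrors the patch, everything else is illustrative:

/* Compilable demo of the X-macro-style predicate generator. */
#include <stdbool.h>
#include <stdio.h>

#define BIT(nr) (1U << (nr))

enum { F2FS_IPU_FORCE, F2FS_IPU_SSR, F2FS_IPU_FSYNC, F2FS_IPU_MAX };

struct sbi_stub { unsigned int ipu_policy; };

#define F2FS_IPU_POLICY(name)					\
static inline bool IS_##name(struct sbi_stub *sbi)		\
{								\
	return sbi->ipu_policy & BIT(name);			\
}

F2FS_IPU_POLICY(F2FS_IPU_FORCE)
F2FS_IPU_POLICY(F2FS_IPU_SSR)
F2FS_IPU_POLICY(F2FS_IPU_FSYNC)

int main(void)
{
	struct sbi_stub sbi = { .ipu_policy = BIT(F2FS_IPU_FSYNC) };

	printf("force=%d ssr=%d fsync=%d\n",
	       IS_F2FS_IPU_FORCE(&sbi), IS_F2FS_IPU_SSR(&sbi),
	       IS_F2FS_IPU_FSYNC(&sbi));
	return 0;
}

Note that the disable check stays open-coded in the patch: F2FS_IPU_DISABLE is an equality test against a zero mask, not a bit test, so it cannot be generated by the same macro.
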
diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
index 64d3556d61a5..fbaaabbcd6de 100644
--- a/fs/f2fs/super.c
+++ b/fs/f2fs/super.c
@@ -1288,19 +1288,18 @@ default_check:
 	 * zone alignment optimization. This is optional for host-aware
 	 * devices, but mandatory for host-managed zoned block devices.
 	 */
-#ifndef CONFIG_BLK_DEV_ZONED
-	if (f2fs_sb_has_blkzoned(sbi)) {
-		f2fs_err(sbi, "Zoned block device support is not enabled");
-		return -EINVAL;
-	}
-#endif
 	if (f2fs_sb_has_blkzoned(sbi)) {
+#ifdef CONFIG_BLK_DEV_ZONED
 		if (F2FS_OPTION(sbi).discard_unit != DISCARD_UNIT_SECTION) {
 			f2fs_info(sbi, "Zoned block device doesn't need small discard, set discard_unit=section by default");
 			F2FS_OPTION(sbi).discard_unit = DISCARD_UNIT_SECTION;
 		}
+#else
+		f2fs_err(sbi, "Zoned block device support is not enabled");
+		return -EINVAL;
+#endif
 	}
 
 #ifdef CONFIG_F2FS_FS_COMPRESSION
@@ -1341,12 +1340,12 @@ default_check:
 	}
 
 	if (test_opt(sbi, DISABLE_CHECKPOINT) && f2fs_lfs_mode(sbi)) {
-		f2fs_err(sbi, "LFS not compatible with checkpoint=disable");
+		f2fs_err(sbi, "LFS is not compatible with checkpoint=disable");
 		return -EINVAL;
 	}
 
 	if (test_opt(sbi, ATGC) && f2fs_lfs_mode(sbi)) {
-		f2fs_err(sbi, "LFS not compatible with ATGC");
+		f2fs_err(sbi, "LFS is not compatible with ATGC");
 		return -EINVAL;
 	}
 
@@ -1366,10 +1365,8 @@ static struct inode *f2fs_alloc_inode(struct super_block *sb)
 {
 	struct f2fs_inode_info *fi;
 
-	if (time_to_inject(F2FS_SB(sb), FAULT_SLAB_ALLOC)) {
-		f2fs_show_injection_info(F2FS_SB(sb), FAULT_SLAB_ALLOC);
+	if (time_to_inject(F2FS_SB(sb), FAULT_SLAB_ALLOC))
 		return NULL;
-	}
 
 	fi = alloc_inode_sb(sb, f2fs_inode_cachep, GFP_F2FS_ZERO);
 	if (!fi)
@@ -1424,8 +1421,6 @@ static int f2fs_drop_inode(struct inode *inode)
 			atomic_inc(&inode->i_count);
 			spin_unlock(&inode->i_lock);
 
-			f2fs_abort_atomic_write(inode, true);
-
 			/* should remain fi->extent_tree for writepage */
 			f2fs_destroy_extent_node(inode);
 
@@ -1543,7 +1538,7 @@ static void f2fs_put_super(struct super_block *sb)
 {
 	struct f2fs_sb_info *sbi = F2FS_SB(sb);
 	int i;
-	bool dropped;
+	bool done;
 
 	/* unregister procfs/sysfs entries in advance to avoid race case */
 	f2fs_unregister_sysfs(sbi);
@@ -1573,9 +1568,8 @@ static void f2fs_put_super(struct super_block *sb)
 	}
 
 	/* be sure to wait for any on-going discard commands */
-	dropped = f2fs_issue_discard_timeout(sbi);
-
-	if (f2fs_realtime_discard_enable(sbi) && !sbi->discard_blks && !dropped) {
+	done = f2fs_issue_discard_timeout(sbi);
+	if (f2fs_realtime_discard_enable(sbi) && !sbi->discard_blks && done) {
 		struct cp_control cpc = {
 			.reason = CP_UMOUNT | CP_TRIMMED,
 		};
@@ -1900,15 +1894,24 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
 	if (test_opt(sbi, GC_MERGE))
 		seq_puts(seq, ",gc_merge");
+	else
+		seq_puts(seq, ",nogc_merge");
 
 	if (test_opt(sbi, DISABLE_ROLL_FORWARD))
 		seq_puts(seq, ",disable_roll_forward");
 	if (test_opt(sbi, NORECOVERY))
 		seq_puts(seq, ",norecovery");
-	if (test_opt(sbi, DISCARD))
+	if (test_opt(sbi, DISCARD)) {
 		seq_puts(seq, ",discard");
-	else
+		if (F2FS_OPTION(sbi).discard_unit == DISCARD_UNIT_BLOCK)
+			seq_printf(seq, ",discard_unit=%s", "block");
+		else if (F2FS_OPTION(sbi).discard_unit == DISCARD_UNIT_SEGMENT)
+			seq_printf(seq, ",discard_unit=%s", "segment");
+		else if (F2FS_OPTION(sbi).discard_unit == DISCARD_UNIT_SECTION)
+			seq_printf(seq, ",discard_unit=%s", "section");
+	} else {
 		seq_puts(seq, ",nodiscard");
+	}
 	if (test_opt(sbi, NOHEAP))
 		seq_puts(seq, ",no_heap");
 	else
@@ -2032,13 +2035,6 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
 	if (test_opt(sbi, ATGC))
 		seq_puts(seq, ",atgc");
 
-	if (F2FS_OPTION(sbi).discard_unit == DISCARD_UNIT_BLOCK)
-		seq_printf(seq, ",discard_unit=%s", "block");
-	else if (F2FS_OPTION(sbi).discard_unit == DISCARD_UNIT_SEGMENT)
-		seq_printf(seq, ",discard_unit=%s", "segment");
-	else if (F2FS_OPTION(sbi).discard_unit == DISCARD_UNIT_SECTION)
-		seq_printf(seq, ",discard_unit=%s", "section");
-
 	if (F2FS_OPTION(sbi).memory_mode == MEMORY_MODE_NORMAL)
 		seq_printf(seq, ",memory=%s", "normal");
 	else if (F2FS_OPTION(sbi).memory_mode == MEMORY_MODE_LOW)
@@ -2300,6 +2296,12 @@ static int f2fs_remount(struct super_block *sb, int *flags, char *data)
 		}
 	}
 #endif
+	if (f2fs_lfs_mode(sbi) && !IS_F2FS_IPU_DISABLE(sbi)) {
+		err = -EINVAL;
+		f2fs_warn(sbi, "LFS is not compatible with IPU");
+		goto restore_opts;
+	}
+
 	/* disallow enable atgc dynamically */
 	if (no_atgc == !!test_opt(sbi, ATGC)) {
 		err = -EINVAL;
@@ -2589,10 +2591,8 @@ retry:
 
 int f2fs_dquot_initialize(struct inode *inode)
 {
-	if (time_to_inject(F2FS_I_SB(inode), FAULT_DQUOT_INIT)) {
-		f2fs_show_injection_info(F2FS_I_SB(inode), FAULT_DQUOT_INIT);
+	if (time_to_inject(F2FS_I_SB(inode), FAULT_DQUOT_INIT))
 		return -ESRCH;
-	}
 
 	return dquot_initialize(inode);
 }
@@ -4083,8 +4083,9 @@ static void f2fs_tuning_parameters(struct f2fs_sb_info *sbi)
 		if (f2fs_block_unit_discard(sbi))
 			SM_I(sbi)->dcc_info->discard_granularity =
 						MIN_DISCARD_GRANULARITY;
-		SM_I(sbi)->ipu_policy = 1 << F2FS_IPU_FORCE |
-					1 << F2FS_IPU_HONOR_OPU_WRITE;
+		if (!f2fs_lfs_mode(sbi))
+			SM_I(sbi)->ipu_policy = BIT(F2FS_IPU_FORCE) |
+						BIT(F2FS_IPU_HONOR_OPU_WRITE);
 	}
 
 	sbi->readdir_ra = true;
diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
index 83a366f3ee80..0b19163c90d4 100644
--- a/fs/f2fs/sysfs.c
+++ b/fs/f2fs/sysfs.c
@@ -473,6 +473,17 @@ out:
 		return count;
 	}
 
+	if (!strcmp(a->attr.name, "discard_io_aware_gran")) {
+		if (t > MAX_PLIST_NUM)
+			return -EINVAL;
+		if (!f2fs_block_unit_discard(sbi))
+			return -EINVAL;
+		if (t == *ui)
+			return count;
+		*ui = t;
+		return count;
+	}
+
 	if (!strcmp(a->attr.name, "discard_granularity")) {
 		if (t == 0 || t > MAX_PLIST_NUM)
 			return -EINVAL;
@@ -511,7 +522,7 @@ out:
 		} else if (t == 1) {
 			sbi->gc_mode = GC_URGENT_HIGH;
 			if (sbi->gc_thread) {
-				sbi->gc_thread->gc_wake = 1;
+				sbi->gc_thread->gc_wake = true;
 				wake_up_interruptible_all(
 					&sbi->gc_thread->gc_wait_queue_head);
 				wake_up_discard_thread(sbi, true);
@@ -521,7 +532,7 @@ out:
 		} else if (t == 3) {
 			sbi->gc_mode = GC_URGENT_MID;
 			if (sbi->gc_thread) {
-				sbi->gc_thread->gc_wake = 1;
+				sbi->gc_thread->gc_wake = true;
 				wake_up_interruptible_all(
 					&sbi->gc_thread->gc_wait_queue_head);
 			}
@@ -678,7 +689,16 @@ out:
 	}
 
 	if (!strcmp(a->attr.name, "warm_data_age_threshold")) {
-		if (t == 0 || t <= sbi->hot_data_age_threshold)
+		if (t <= sbi->hot_data_age_threshold)
+			return -EINVAL;
+		if (t == *ui)
+			return count;
+		*ui = (unsigned int)t;
+		return count;
+	}
+
+	if (!strcmp(a->attr.name, "last_age_weight")) {
+		if (t > 100)
 			return -EINVAL;
 		if (t == *ui)
 			return count;
@@ -686,6 +706,15 @@ out:
 		return count;
 	}
 
+	if (!strcmp(a->attr.name, "ipu_policy")) {
+		if (t >= BIT(F2FS_IPU_MAX))
+			return -EINVAL;
+		if (t && f2fs_lfs_mode(sbi))
+			return -EINVAL;
+		SM_I(sbi)->ipu_policy = (unsigned int)t;
+		return count;
+	}
+
 	*ui = (unsigned int)t;
 
 	return count;
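
The new sysfs store branches above (discard_io_aware_gran, last_age_weight, ipu_policy) all follow the same validation shape: range-check the value, reject mode conflicts, short-circuit when nothing changes. A compilable sketch of that shape, with stand-in constants and a plain flag instead of the f2fs mount state:

/* Illustrative store-handler pattern; not the f2fs function. */
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

#define BIT(nr)		(1U << (nr))
#define IPU_MAX		8	/* stand-in for the enum terminator */

static bool lfs_mode;		/* LFS filesystems must keep IPU off */

long store_ipu_policy(unsigned long t, unsigned int *ipu_policy,
		      size_t count)
{
	if (t >= BIT(IPU_MAX))
		return -EINVAL;		/* unknown policy bits */
	if (t && lfs_mode)
		return -EINVAL;		/* IPU conflicts with LFS */
	*ipu_policy = (unsigned int)t;
	return (long)count;		/* sysfs stores return bytes consumed */
}

The same -EINVAL-on-conflict rule also shows up at remount time in the super.c hunk above, so a policy cannot sneak in through either interface.
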
@@ -825,6 +854,7 @@ F2FS_RW_ATTR(DCC_INFO, discard_cmd_control, max_discard_request, max_discard_req
 F2FS_RW_ATTR(DCC_INFO, discard_cmd_control, min_discard_issue_time, min_discard_issue_time);
 F2FS_RW_ATTR(DCC_INFO, discard_cmd_control, mid_discard_issue_time, mid_discard_issue_time);
 F2FS_RW_ATTR(DCC_INFO, discard_cmd_control, max_discard_issue_time, max_discard_issue_time);
+F2FS_RW_ATTR(DCC_INFO, discard_cmd_control, discard_io_aware_gran, discard_io_aware_gran);
 F2FS_RW_ATTR(DCC_INFO, discard_cmd_control, discard_urgent_util, discard_urgent_util);
 F2FS_RW_ATTR(DCC_INFO, discard_cmd_control, discard_granularity, discard_granularity);
 F2FS_RW_ATTR(DCC_INFO, discard_cmd_control, max_ordered_discard, max_ordered_discard);
@@ -944,6 +974,7 @@ F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, revoked_atomic_block, revoked_atomic_block)
 /* For block age extent cache */
 F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, hot_data_age_threshold, hot_data_age_threshold);
 F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, warm_data_age_threshold, warm_data_age_threshold);
+F2FS_RW_ATTR(F2FS_SBI, f2fs_sb_info, last_age_weight, last_age_weight);
 
 #define ATTR_LIST(name) (&f2fs_attr_##name.attr)
 static struct attribute *f2fs_attrs[] = {
@@ -960,6 +991,7 @@ static struct attribute *f2fs_attrs[] = {
 	ATTR_LIST(min_discard_issue_time),
 	ATTR_LIST(mid_discard_issue_time),
 	ATTR_LIST(max_discard_issue_time),
+	ATTR_LIST(discard_io_aware_gran),
 	ATTR_LIST(discard_urgent_util),
 	ATTR_LIST(discard_granularity),
 	ATTR_LIST(max_ordered_discard),
@@ -1042,6 +1074,7 @@ static struct attribute *f2fs_attrs[] = {
 	ATTR_LIST(revoked_atomic_block),
 	ATTR_LIST(hot_data_age_threshold),
 	ATTR_LIST(warm_data_age_threshold),
+	ATTR_LIST(last_age_weight),
 	NULL,
 };
 ATTRIBUTE_GROUPS(f2fs);
@@ -1129,13 +1162,13 @@ static const struct sysfs_ops f2fs_attr_ops = {
 	.store = f2fs_attr_store,
 };
 
-static struct kobj_type f2fs_sb_ktype = {
+static const struct kobj_type f2fs_sb_ktype = {
 	.default_groups = f2fs_groups,
 	.sysfs_ops = &f2fs_attr_ops,
 	.release = f2fs_sb_release,
 };
 
-static struct kobj_type f2fs_ktype = {
+static const struct kobj_type f2fs_ktype = {
 	.sysfs_ops = &f2fs_attr_ops,
 };
 
@@ -1143,7 +1176,7 @@ static struct kset f2fs_kset = {
 	.kobj = {.ktype = &f2fs_ktype},
 };
 
-static struct kobj_type f2fs_feat_ktype = {
+static const struct kobj_type f2fs_feat_ktype = {
 	.default_groups = f2fs_feat_groups,
 	.sysfs_ops = &f2fs_attr_ops,
 };
@@ -1184,7 +1217,7 @@ static const struct sysfs_ops f2fs_stat_attr_ops = {
 	.store = f2fs_stat_attr_store,
 };
 
-static struct kobj_type f2fs_stat_ktype = {
+static const struct kobj_type f2fs_stat_ktype = {
 	.default_groups = f2fs_stat_groups,
 	.sysfs_ops = &f2fs_stat_attr_ops,
 	.release = f2fs_stat_kobj_release,
 };
@@ -1211,7 +1244,7 @@ static const struct sysfs_ops f2fs_feature_list_attr_ops = {
 	.show = f2fs_sb_feat_attr_show,
 };
 
-static struct kobj_type f2fs_feature_list_ktype = {
+static const struct kobj_type f2fs_feature_list_ktype = {
 	.default_groups = f2fs_sb_feat_groups,
 	.sysfs_ops = &f2fs_feature_list_attr_ops,
 	.release = f2fs_feature_list_kobj_release,
 };
diff --git a/fs/f2fs/verity.c b/fs/f2fs/verity.c
index f320ed8172ec..4fc95f353a7a 100644
--- a/fs/f2fs/verity.c
+++ b/fs/f2fs/verity.c
@@ -81,7 +81,7 @@ static int pagecache_write(struct inode *inode, const void *buf, size_t count,
 		size_t n = min_t(size_t, count,
 				 PAGE_SIZE - offset_in_page(pos));
 		struct page *page;
-		void *fsdata;
+		void *fsdata = NULL;
 		int res;
 
 		res = aops->write_begin(NULL, mapping, pos, n, &page, &fsdata);
diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
index ee0d75d9a302..1701f25117ea 100644
--- a/include/linux/f2fs_fs.h
+++ b/include/linux/f2fs_fs.h
@@ -315,7 +315,7 @@ struct f2fs_inode {
 			__u8 i_log_cluster_size;	/* log of cluster size */
 			__le16 i_compress_flag;		/* compress flag */
 						/* 0 bit: chksum flag
-						 * [10,15] bits: compress level
+						 * [8,15] bits: compress level
 						 */
 			__le32 i_extra_end[0];	/* for attribute size calculation */
 		} __packed;
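
The f2fs_fs.h hunk above only corrects a comment, but the layout it documents is worth spelling out: in the on-disk i_compress_flag word, bit 0 carries the checksum flag and bits [8,15] carry the compress level. A small pack/unpack sketch; the helper names and masks are illustrative, not f2fs API:

/* Illustrative bit-layout demo for a 16-bit compress flag word. */
#include <stdint.h>
#include <stdio.h>

#define COMPRESS_CHKSUM_BIT	0x0001u	/* bit 0 */
#define COMPRESS_LEVEL_SHIFT	8	/* bits [8,15] */

static uint16_t pack_compress_flag(int chksum, uint8_t level)
{
	return (uint16_t)(((uint16_t)level << COMPRESS_LEVEL_SHIFT) |
			  (chksum ? COMPRESS_CHKSUM_BIT : 0));
}

static uint8_t compress_level(uint16_t flag)
{
	return (uint8_t)(flag >> COMPRESS_LEVEL_SHIFT);
}

int main(void)
{
	uint16_t flag = pack_compress_flag(1, 6);	/* e.g. level 6 */

	printf("flag=0x%04x level=%u\n", flag, compress_level(flag));
	return 0;
}
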
diff --git a/include/trace/events/f2fs.h b/include/trace/events/f2fs.h
index 31d994e6b4ca..1322d34a5dfc 100644
--- a/include/trace/events/f2fs.h
+++ b/include/trace/events/f2fs.h
@@ -569,10 +569,10 @@ TRACE_EVENT(f2fs_file_write_iter,
 );
 
 TRACE_EVENT(f2fs_map_blocks,
-	TP_PROTO(struct inode *inode, struct f2fs_map_blocks *map,
-				int create, int flag, int ret),
+	TP_PROTO(struct inode *inode, struct f2fs_map_blocks *map, int flag,
+		 int ret),
 
-	TP_ARGS(inode, map, create, flag, ret),
+	TP_ARGS(inode, map, flag, ret),
 
 	TP_STRUCT__entry(
 		__field(dev_t,	dev)
@@ -584,7 +584,6 @@ TRACE_EVENT(f2fs_map_blocks,
 		__field(int,	m_seg_type)
 		__field(bool,	m_may_create)
 		__field(bool,	m_multidev_dio)
-		__field(int,	create)
 		__field(int,	flag)
 		__field(int,	ret)
 	),
@@ -599,7 +598,6 @@ TRACE_EVENT(f2fs_map_blocks,
 		__entry->m_seg_type	= map->m_seg_type;
 		__entry->m_may_create	= map->m_may_create;
 		__entry->m_multidev_dio	= map->m_multidev_dio;
-		__entry->create		= create;
 		__entry->flag		= flag;
 		__entry->ret		= ret;
 	),
@@ -607,7 +605,7 @@ TRACE_EVENT(f2fs_map_blocks,
 	TP_printk("dev = (%d,%d), ino = %lu, file offset = %llu, "
 		"start blkaddr = 0x%llx, len = 0x%llx, flags = %u, "
 		"seg_type = %d, may_create = %d, multidevice = %d, "
-		"create = %d, flag = %d, err = %d",
+		"flag = %d, err = %d",
 		show_dev_ino(__entry),
 		(unsigned long long)__entry->m_lblk,
 		(unsigned long long)__entry->m_pblk,
@@ -616,7 +614,6 @@ TRACE_EVENT(f2fs_map_blocks,
 		__entry->m_seg_type,
 		__entry->m_may_create,
 		__entry->m_multidev_dio,
-		__entry->create,
 		__entry->flag,
 		__entry->ret)
 );
@@ -1293,6 +1290,43 @@ DEFINE_EVENT(f2fs__page, f2fs_vm_page_mkwrite,
 	TP_ARGS(page, type)
 );
 
+TRACE_EVENT(f2fs_replace_atomic_write_block,
+
+	TP_PROTO(struct inode *inode, struct inode *cow_inode, pgoff_t index,
+			block_t old_addr, block_t new_addr, bool recovery),
+
+	TP_ARGS(inode, cow_inode, index, old_addr, new_addr, recovery),
+
+	TP_STRUCT__entry(
+		__field(dev_t,	dev)
+		__field(ino_t,	ino)
+		__field(ino_t,	cow_ino)
+		__field(pgoff_t, index)
+		__field(block_t, old_addr)
+		__field(block_t, new_addr)
+		__field(bool,	recovery)
+	),
+
+	TP_fast_assign(
+		__entry->dev		= inode->i_sb->s_dev;
+		__entry->ino		= inode->i_ino;
+		__entry->cow_ino	= cow_inode->i_ino;
+		__entry->index		= index;
+		__entry->old_addr	= old_addr;
+		__entry->new_addr	= new_addr;
+		__entry->recovery	= recovery;
+	),
+
+	TP_printk("dev = (%d,%d), ino = %lu, cow_ino = %lu, index = %lu, "
+		"old_addr = 0x%llx, new_addr = 0x%llx, recovery = %d",
+		show_dev_ino(__entry),
+		__entry->cow_ino,
+		(unsigned long)__entry->index,
+		(unsigned long long)__entry->old_addr,
+		(unsigned long long)__entry->new_addr,
+		__entry->recovery)
+);
+
 TRACE_EVENT(f2fs_filemap_fault,
 
 	TP_PROTO(struct inode *inode, pgoff_t index, unsigned long ret),
@@ -1975,7 +2009,7 @@ TRACE_EVENT(f2fs_iostat,
 		__entry->fs_cdrio	= iostat[FS_CDATA_READ_IO];
 		__entry->fs_nrio	= iostat[FS_NODE_READ_IO];
 		__entry->fs_mrio	= iostat[FS_META_READ_IO];
-		__entry->fs_discard	= iostat[FS_DISCARD];
+		__entry->fs_discard	= iostat[FS_DISCARD_IO];
 	),
 
 	TP_printk("dev = (%d,%d), "
@@ -2048,33 +2082,33 @@ TRACE_EVENT(f2fs_iostat_latency,
 
 	TP_fast_assign(
 		__entry->dev		= sbi->sb->s_dev;
-		__entry->d_rd_peak	= iostat_lat[0][DATA].peak_lat;
-		__entry->d_rd_avg	= iostat_lat[0][DATA].avg_lat;
-		__entry->d_rd_cnt	= iostat_lat[0][DATA].cnt;
-		__entry->n_rd_peak	= iostat_lat[0][NODE].peak_lat;
-		__entry->n_rd_avg	= iostat_lat[0][NODE].avg_lat;
-		__entry->n_rd_cnt	= iostat_lat[0][NODE].cnt;
-		__entry->m_rd_peak	= iostat_lat[0][META].peak_lat;
-		__entry->m_rd_avg	= iostat_lat[0][META].avg_lat;
-		__entry->m_rd_cnt	= iostat_lat[0][META].cnt;
-		__entry->d_wr_s_peak	= iostat_lat[1][DATA].peak_lat;
-		__entry->d_wr_s_avg	= iostat_lat[1][DATA].avg_lat;
-		__entry->d_wr_s_cnt	= iostat_lat[1][DATA].cnt;
-		__entry->n_wr_s_peak	= iostat_lat[1][NODE].peak_lat;
-		__entry->n_wr_s_avg	= iostat_lat[1][NODE].avg_lat;
-		__entry->n_wr_s_cnt	= iostat_lat[1][NODE].cnt;
-		__entry->m_wr_s_peak	= iostat_lat[1][META].peak_lat;
-		__entry->m_wr_s_avg	= iostat_lat[1][META].avg_lat;
-		__entry->m_wr_s_cnt	= iostat_lat[1][META].cnt;
-		__entry->d_wr_as_peak	= iostat_lat[2][DATA].peak_lat;
-		__entry->d_wr_as_avg	= iostat_lat[2][DATA].avg_lat;
-		__entry->d_wr_as_cnt	= iostat_lat[2][DATA].cnt;
-		__entry->n_wr_as_peak	= iostat_lat[2][NODE].peak_lat;
-		__entry->n_wr_as_avg	= iostat_lat[2][NODE].avg_lat;
-		__entry->n_wr_as_cnt	= iostat_lat[2][NODE].cnt;
-		__entry->m_wr_as_peak	= iostat_lat[2][META].peak_lat;
-		__entry->m_wr_as_avg	= iostat_lat[2][META].avg_lat;
-		__entry->m_wr_as_cnt	= iostat_lat[2][META].cnt;
+		__entry->d_rd_peak	= iostat_lat[READ_IO][DATA].peak_lat;
+		__entry->d_rd_avg	= iostat_lat[READ_IO][DATA].avg_lat;
+		__entry->d_rd_cnt	= iostat_lat[READ_IO][DATA].cnt;
+		__entry->n_rd_peak	= iostat_lat[READ_IO][NODE].peak_lat;
+		__entry->n_rd_avg	= iostat_lat[READ_IO][NODE].avg_lat;
+		__entry->n_rd_cnt	= iostat_lat[READ_IO][NODE].cnt;
+		__entry->m_rd_peak	= iostat_lat[READ_IO][META].peak_lat;
+		__entry->m_rd_avg	= iostat_lat[READ_IO][META].avg_lat;
+		__entry->m_rd_cnt	= iostat_lat[READ_IO][META].cnt;
+		__entry->d_wr_s_peak	= iostat_lat[WRITE_SYNC_IO][DATA].peak_lat;
+		__entry->d_wr_s_avg	= iostat_lat[WRITE_SYNC_IO][DATA].avg_lat;
+		__entry->d_wr_s_cnt	= iostat_lat[WRITE_SYNC_IO][DATA].cnt;
+		__entry->n_wr_s_peak	= iostat_lat[WRITE_SYNC_IO][NODE].peak_lat;
+		__entry->n_wr_s_avg	= iostat_lat[WRITE_SYNC_IO][NODE].avg_lat;
+		__entry->n_wr_s_cnt	= iostat_lat[WRITE_SYNC_IO][NODE].cnt;
+		__entry->m_wr_s_peak	= iostat_lat[WRITE_SYNC_IO][META].peak_lat;
+		__entry->m_wr_s_avg	= iostat_lat[WRITE_SYNC_IO][META].avg_lat;
+		__entry->m_wr_s_cnt	= iostat_lat[WRITE_SYNC_IO][META].cnt;
+		__entry->d_wr_as_peak	= iostat_lat[WRITE_ASYNC_IO][DATA].peak_lat;
+		__entry->d_wr_as_avg	= iostat_lat[WRITE_ASYNC_IO][DATA].avg_lat;
+		__entry->d_wr_as_cnt	= iostat_lat[WRITE_ASYNC_IO][DATA].cnt;
+		__entry->n_wr_as_peak	= iostat_lat[WRITE_ASYNC_IO][NODE].peak_lat;
+		__entry->n_wr_as_avg	= iostat_lat[WRITE_ASYNC_IO][NODE].avg_lat;
+		__entry->n_wr_as_cnt	= iostat_lat[WRITE_ASYNC_IO][NODE].cnt;
+		__entry->m_wr_as_peak	= iostat_lat[WRITE_ASYNC_IO][META].peak_lat;
+		__entry->m_wr_as_avg	= iostat_lat[WRITE_ASYNC_IO][META].avg_lat;
+		__entry->m_wr_as_cnt	= iostat_lat[WRITE_ASYNC_IO][META].cnt;
 	),
 
 	TP_printk("dev = (%d,%d), "
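
A closing note on the f2fs_iostat_latency hunk above: the patch replaces the magic first indices 0/1/2 with the iostat_lat_type enumerators, so each slot is self-describing. A minimal demo of enum-indexed arrays; the struct fields mirror the tracepoint, while the terminator names are illustrative stand-ins:

/* Illustrative only; not the kernel definitions. */
#include <stdio.h>

enum iostat_lat_type { READ_IO, WRITE_SYNC_IO, WRITE_ASYNC_IO, MAX_IO_TYPE };
enum page_type { DATA, NODE, META, NR_PAGE_TYPE };

struct iostat_lat {
	unsigned long long peak_lat;
	unsigned long long avg_lat;
	unsigned int cnt;
};

int main(void)
{
	struct iostat_lat lat[MAX_IO_TYPE][NR_PAGE_TYPE] = { 0 };

	lat[WRITE_SYNC_IO][NODE].cnt = 42;	/* was lat[1][NODE].cnt */
	printf("sync node writes sampled: %u\n", lat[WRITE_SYNC_IO][NODE].cnt);
	return 0;
}
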