path: root/fs/gfs2
* gfs2: Fix regression in freeze_go_sync (Bob Peterson, 2020-11-18, 1 file, -1/+12)

  Patch 541656d3a513 ("gfs2: freeze should work on read-only mounts") changed the check for glock state in function freeze_go_sync() from "gl->gl_state == LM_ST_SHARED" to "gl->gl_req == LM_ST_EXCLUSIVE". That's wrong and it regressed gfs2's freeze/thaw mechanism because it caused only the freezing node (which requests the glock in EX) to queue freeze work.

  All nodes go through this go_sync code path during the freeze to drop their SHared hold on the freeze glock, allowing the freezing node to acquire it in EXclusive mode. But all the nodes must freeze access to the file system locally, so they ALL must queue freeze work. The freeze_work calls freeze_func, which makes a request to reacquire the freeze glock in SH, effectively blocking until the thaw from the EX holder. Once thawed, the freezing node drops its EX hold on the freeze glock, then the (blocked) freeze_func reacquires the freeze glock in SH again (on all nodes, including the freezer) so all nodes go back to a thawed state.

  This patch changes the check back to gl_state == LM_ST_SHARED like it was prior to 541656d3a513.

  Fixes: 541656d3a513 ("gfs2: freeze should work on read-only mounts")
  Cc: stable@vger.kernel.org # v5.8+
  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
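  As a rough illustration of the condition change described above (fragment only, not the full upstream diff; the queueing call is a placeholder):

    /* Regressed check from 541656d3a513: only the node requesting the
     * freeze glock in EX queued freeze work. */
    if (gl->gl_req == LM_ST_EXCLUSIVE)
            queue_freeze_work(sdp);         /* placeholder for the real queueing */

    /* Restored check: every node dropping its SHared hold queues freeze
     * work, so each node freezes the file system locally. */
    if (gl->gl_state == LM_ST_SHARED)
            queue_freeze_work(sdp);         /* placeholder for the real queueing */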
* gfs2: Fix case in which ail writes are done to jdata holes (Bob Peterson, 2020-11-12, 2 files, -1/+3)

  Patch b2a846dbef4e ("gfs2: Ignore journal log writes for jdata holes") tried (unsuccessfully) to fix a case in which writes were done to jdata blocks, the blocks were sent to the ail list, and then a punch_hole or truncate operation caused the blocks to be freed. In other words, the ail items are for jdata holes. Before b2a846dbef4e, the jdata hole caused function gfs2_block_map to return -EIO, which was eventually interpreted as an IO error to the journal, and led to a withdraw.

  This patch changes function gfs2_get_block_noalloc, which is only used for jdata writes, so it returns -ENODATA rather than -EIO, and when -ENODATA is returned to gfs2_ail1_start_one, the error is ignored. We can safely ignore it because gfs2_ail1_start_one is only called when the jdata pages have already been written and truncated, so the ail1 content no longer applies.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
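  A condensed sketch of the return-code change described above (simplified reconstruction, not the exact upstream hunk):

    static int gfs2_get_block_noalloc(struct inode *inode, sector_t lblock,
                                      struct buffer_head *bh_result, int create)
    {
            int error = gfs2_block_map(inode, lblock, bh_result, 0); /* 0: no allocation */

            if (error)
                    return error;
            /* Only used for jdata writes: a hole here means the pages were
             * already truncated, so report -ENODATA instead of -EIO and let
             * gfs2_ail1_start_one ignore it rather than withdraw. */
            if (!buffer_mapped(bh_result))
                    return -ENODATA;
            return 0;
    }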
* Revert "gfs2: Ignore journal log writes for jdata holes"Bob Peterson2020-11-121-6/+2
| | | | | | | | | | | | | | | | | | | This reverts commit b2a846dbef4ef54ef032f0f5ee188c609a0278a7. That commit changed the behavior of function gfs2_block_map to return -ENODATA in cases where a hole (IOMAP_HOLE) is encountered and create is false. While that fixed the intended problem for jdata, it also broke other callers of gfs2_block_map such as some jdata block reads. Before the patch, an encountered hole would be skipped and the buffer seen as unmapped by the caller. The patch changed the behavior to return -ENODATA, which is interpreted as an error by the caller. The -ENODATA return code should be restricted to the specific case where jdata holes are encountered during ail1 writes. That will be done in a later patch. Signed-off-by: Bob Peterson <rpeterso@redhat.com> Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: fix possible reference leak in gfs2_check_blk_type (Zhang Qilong, 2020-11-12, 1 file, -5/+5)

  In the fail path of gfs2_check_blk_type, failing to call gfs2_glock_dq_uninit results in an rgd_gh reference leak.

  Signed-off-by: Zhang Qilong <zhangqilong3@huawei.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
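  The shape of the fix, as a hedged fragment (the block-type check is a stand-in; only the dequeue-on-failure point is taken from the message):

    error = gfs2_glock_nq_init(rgd->rd_gl, LM_ST_SHARED, 0, &rgd_gh);
    if (error)
            goto out;               /* holder not acquired, nothing to undo */

    if (block_type_mismatch)        /* stand-in for the real block-type check */
            error = -ESTALE;

    /* The fix: drop the holder on the failure path too, not only on
     * success, so the rgd_gh reference is never leaked. */
    gfs2_glock_dq_uninit(&rgd_gh);
    out:
    return error;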
* gfs2: Wake up when sd_glock_disposal becomes zero (Alexander Aring, 2020-11-03, 1 file, -1/+2)

  Commit fc0e38dae645 ("GFS2: Fix glock deallocation race") fixed a sd_glock_disposal accounting bug by adding a missing atomic_dec statement, but it failed to wake up sd_glock_wait when that decrement causes sd_glock_disposal to reach zero. As a consequence, gfs2_gl_hash_clear can now run into a 10-minute timeout instead of being woken up. Add the missing wakeup.

  Fixes: fc0e38dae645 ("GFS2: Fix glock deallocation race")
  Cc: stable@vger.kernel.org # v2.6.39+
  Signed-off-by: Alexander Aring <aahringo@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
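  A minimal C sketch of the wakeup pattern described above (the helper name glock_disposal_dec is illustrative; sd_glock_disposal and sd_glock_wait are the fields named in the message):

    #include <linux/atomic.h>
    #include <linux/wait.h>

    /* Decrement the disposal count and, if we were the last disposal,
     * wake the waiter sleeping in gfs2_gl_hash_clear(). */
    static void glock_disposal_dec(struct gfs2_sbd *sdp)
    {
            if (atomic_dec_and_test(&sdp->sd_glock_disposal))
                    wake_up(&sdp->sd_glock_wait);
    }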
* gfs2: Don't call cancel_delayed_work_sync from within delete work function (Andreas Gruenbacher, 2020-11-02, 1 file, -1/+2)

  Right now, we can end up calling cancel_delayed_work_sync from within delete_work_func via gfs2_lookup_by_inum -> gfs2_inode_lookup -> gfs2_cancel_delete_work. When that happens, it will result in a deadlock. Instead, gfs2_inode_lookup should skip the call to gfs2_cancel_delete_work when called from delete_work_func (blktype == GFS2_BLKST_UNLINKED).

  Reported-by: Alexander Ahring Oder Aring <aahringo@redhat.com>
  Fixes: a0e3cc65fa29 ("gfs2: Turn gl_delete into a delayed work")
  Cc: stable@vger.kernel.org # v5.8+
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: check for live vs. read-only file system in gfs2_fitrim (Bob Peterson, 2020-10-29, 1 file, -0/+3)

  Before this patch, gfs2_fitrim was not properly checking for a "live" file system. If the file system had something to trim and the file system was read-only (or spectator) it would start the trim, but when it starts the transaction, gfs2_trans_begin returns -EROFS (read-only file system) and it errors out. However, if the file system was already trimmed so there's no work to do, it never called gfs2_trans_begin. That code is bypassed so it never returns the error. Instead, it returns a good return code with 0 work. All this makes for inconsistent behavior: the same fstrim command can return -EROFS in one case and 0 in another. This tripped up xfstests generic/537 which reports the error as:

    +fstrim with unrecovered metadata just ate your filesystem

  This patch adds a check for a "live" (iow, active journal, iow, RW) file system, and if not, returns the error properly.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
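  One plausible shape for the added check (the exact flag tested is an assumption; the commit message only says a live/RW journal check is added up front):

    /* Early in gfs2_fitrim(): refuse consistently on read-only or
     * spectator mounts instead of failing only once a transaction is
     * attempted.  Using SDF_JOURNAL_LIVE as the indicator is assumed. */
    if (!test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags))
            return -EROFS;          /* same error gfs2_trans_begin would return */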
* gfs2: don't initialize statfs_change inodes in spectator mode (Bob Peterson, 2020-10-29, 1 file, -4/+8)

  Before commit 97fd734ba17e, the local statfs_changeX inode was never initialized for spectator mounts. However, it still checks for spectator mounts when unmounting everything. There's no good reason to look up the statfs_changeX files because spectators cannot perform recovery. It still, however, needs the master statfs file for statfs calls. This patch adds the check for spectator mounts to init_statfs.

  Fixes: 97fd734ba17e ("gfs2: lookup local statfs inodes prior to journal recovery")
  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: Split up gfs2_meta_sync into inode and rgrp versions (Bob Peterson, 2020-10-29, 5 files, -40/+52)

  Before this patch, function gfs2_meta_sync called filemap_fdatawrite to write the address space for the metadata being synced. That's great for inodes, but resource groups all point to the same superblock address space, sdp->sd_aspace. Each rgrp has its own range of blocks on which it should operate. That meant every time an rgrp's metadata was synced, it would write all of them instead of just the range.

  This patch eliminates function gfs2_meta_sync and tailors specific metasync functions for inodes and rgrps.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: init_journal's undo directive should also undo the statfs inodes (Bob Peterson, 2020-10-29, 1 file, -1/+1)

  Before this patch, function init_journal's "undo" directive jumped to label fail_jinode_gh. But now that it does statfs initialization, it needs to jump to fail_statfs instead. Failure to do so means that mount failures after init_journal is successful will neglect to let go of the proper statfs information, stranding the statfs_changeX inodes. This makes it impossible to free their glocks, and results in:

    gfs2: fsid=sda.s: G: s:EX n:2/805f f:Dqob t:EX d:UN/603701000 a:0 v:0 r:4 m:200 p:1
    gfs2: fsid=sda.s: H: s:EX f:H e:0 p:1397947 [(ended)] init_journal+0x548/0x890 [gfs2]
    gfs2: fsid=sda.s: I: n:6/32863 t:8 f:0x00 d:0x00000201 s:24 p:0
    gfs2: fsid=sda.s: G: s:SH n:5/805f f:Dqob t:SH d:UN/603712000 a:0 v:0 r:3 m:200 p:0
    gfs2: fsid=sda.s: H: s:SH f:EH e:0 p:1397947 [(ended)] gfs2_inode_lookup+0x1fb/0x410 [gfs2]
    VFS: Busy inodes after unmount of sda. Self-destruct in 5 seconds. Have a nice day...

  The next time the file system is mounted, it then reuses the same glocks, which ends in a kernel NULL pointer dereference when trying to dump the reused glock.

  This patch makes the "undo" function of init_journal jump to fail_statfs so the statfs files are properly deconstructed upon failure.

  Fixes: 97fd734ba17e ("gfs2: lookup local statfs inodes prior to journal recovery")
  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: Add missing truncate_inode_pages_final for sd_aspace (Bob Peterson, 2020-10-29, 1 file, -0/+1)

  Gfs2 creates an address space for its rgrps called sd_aspace, but it never called truncate_inode_pages_final on it. This greatly confused the VFS, which tried to reference the address space after gfs2 had freed the superblock that contained it. This patch adds a call to truncate_inode_pages_final for sd_aspace, thus avoiding the use-after-free.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: Free rd_bits later in gfs2_clear_rgrpd to fix use-after-free (Bob Peterson, 2020-10-29, 1 file, -1/+1)

  Function gfs2_clear_rgrpd calls kfree(rgd->rd_bits) before calling return_all_reservations, but return_all_reservations still dereferences rgd->rd_bits in __rs_deltree. Fix that by moving the call to kfree below the call to return_all_reservations.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
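  A before/after sketch of the reordering (bodies abridged to the two calls involved):

    /* Before: rd_bits freed while return_all_reservations() still needs
     * it via __rs_deltree(), a use-after-free. */
    kfree(rgd->rd_bits);
    return_all_reservations(rgd);

    /* After: release the reservations first, then free the bitmaps. */
    return_all_reservations(rgd);
    kfree(rgd->rd_bits);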
* gfs2: Recover statfs info in journal head (Abhi Das, 2020-10-23, 3 files, -1/+106)

  Apply the outstanding statfs changes in the journal head to the master statfs file. Zero out the local statfs file for good measure.

  Previously, statfs updates would be read in from the local statfs inode and synced to the master statfs inode during recovery. We now use the statfs updates in the journal head to update the master statfs inode instead of reading in from the local statfs inode. To preserve backward compatibility with kernels that can't do this, we still need to keep the local statfs inode up to date by writing changes to it. At some point in the future, we can do away with the local statfs inodes altogether and keep the statfs changes solely in the journal.

  Signed-off-by: Abhi Das <adas@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: lookup local statfs inodes prior to journal recovery (Abhi Das, 2020-10-23, 4 files, -36/+139)

  We need to look up the master statfs inode and the local statfs inodes earlier in the mount process (in init_journal) so journal recovery can use them when it attempts to recover the statfs info. We look up all the local statfs inodes and store them in a linked list to allow a node to recover statfs info for other nodes in the cluster.

  Signed-off-by: Abhi Das <adas@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: Add fields for statfs info in struct gfs2_log_header_host (Abhi Das, 2020-10-20, 4 files, -1/+11)

  And read these in __get_log_header() from the log header. Also make gfs2_statfs_change_out() non-static so it can be used outside of super.c.

  Signed-off-by: Abhi Das <adas@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: Ignore subsequent errors after withdraw in rgrp_go_sync (Andreas Gruenbacher, 2020-10-20, 1 file, -1/+1)

  Once a withdraw has occurred, ignore errors that are the consequence of the withdraw.

  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: Eliminate gl_vm (Bob Peterson, 2020-10-20, 3 files, -30/+16)

  The gfs2_glock structure has a gl_vm member, introduced in commit 7005c3e4ae428 ("GFS2: Use range based functions for rgrp sync/invalidation"), which stores the location of resource groups within their address space. This structure is in a union with iopen glock specific fields. It was introduced because at unmount time, the resource group objects were destroyed before flushing out any pending resource group glock work, and flushing out such work could require flushing / truncating the address space.

  Since commit b3422cacdd7e6 ("gfs2: Rework how rgrp buffer_heads are managed"), any pending resource group glock work is flushed out before destroying the resource group objects. So the resource group objects will now always exist in rgrp_go_sync and rgrp_go_inval, and we now simply compute the gl_vm values where needed instead of caching them. This also eliminates the union.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: Only access gl_delete for iopen glocks (Bob Peterson, 2020-10-20, 1 file, -4/+7)

  Only initialize gl_delete for iopen glocks, but more importantly, only access it for iopen glocks in flush_delete_work: flush_delete_work is called for different types of glocks including rgrp glocks, and those use gl_vm which is in a union with gl_delete. Without this fix, we'll end up clobbering gl_vm, which results in general memory corruption.

  Fixes: a0e3cc65fa29 ("gfs2: Turn gl_delete into a delayed work")
  Cc: stable@vger.kernel.org # v5.8+
  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: Fix comments to glock_hash_walk (Bob Peterson, 2020-10-20, 1 file, -2/+1)

  The comments before function glock_hash_walk had the wrong name and an extra parameter. This simply fixes the comments.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: eliminate GLF_QUEUED flag in favor of list_empty(gl_holders) (Bob Peterson, 2020-10-15, 3 files, -10/+3)

  Before this patch, glock.c maintained a flag, GLF_QUEUED, which indicated when a glock had a holder queued. It was only checked for inode glocks, although set and cleared by all glocks, and it was only used to determine whether the glock should be held for the minimum hold time before releasing. The problem is that the flag is not accurate at all. If a process holds the glock, the flag is set. When they dequeue the glock, the flag is only cleared in cases where the state actually changed. So if the state doesn't change, the flag may still be set, even when nothing is queued. This happens to iopen glocks often: they get held in SH, then the file is closed, but the glock remains in SH mode.

  We don't need a special flag to indicate this: we can simply tell whether the glock has any items queued to the holders queue. It's a waste of cpu time to maintain it. This patch eliminates the flag in favor of simply checking list_empty on the glock holders.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
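  In code terms, the replacement test looks roughly like this (sketch; the minimum-hold-time logic around it is condensed to a comment):

    /* Before: a separately maintained flag that could go stale. */
    if (test_bit(GLF_QUEUED, &gl->gl_flags)) {
            /* honour the minimum hold time before demoting */
    }

    /* After: derive the same answer from the holders list itself, which
     * is always accurate and needs no extra bookkeeping. */
    if (!list_empty(&gl->gl_holders)) {
            /* honour the minimum hold time before demoting */
    }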
* gfs2: Ignore journal log writes for jdata holes (Bob Peterson, 2020-10-15, 1 file, -2/+6)

  When flushing out its ail1 list, gfs2_write_jdata_page calls function __block_write_full_page, passing in function gfs2_get_block_noalloc. But there was a problem when a process wrote to a jdata file, then truncated it or punched a hole, leaving references to the blocks within the new hole in its ail list, which are to be written to the journal log.

  In writing them to the journal, after calling gfs2_block_map, function gfs2_get_block_noalloc determined that the (hole-punched) block was not mapped, so it returned -EIO to generic_writepages, which passed it back to gfs2_ail1_start_one. This, in turn, performed a withdraw, assuming there was a real IO error writing to the journal. This might be a valid error when writing metadata to the journal, but for journaled data writes, it does not warrant a withdraw.

  This patch adds a check to function gfs2_block_map that makes an exception for journaled data writes that correspond to jdata holes: if the iomap get function returns a block type of IOMAP_HOLE, it instead returns -ENODATA, which does not cause the withdraw. Other errors are returned as before.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: simplify gfs2_block_map (Bob Peterson, 2020-10-15, 1 file, -9/+5)

  Function gfs2_block_map had a lot of redundancy between its create and no_create paths. This patch simplifies the code to eliminate the redundancy.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: Only set PageChecked if we have a transaction (Bob Peterson, 2020-10-15, 1 file, -1/+2)

  With jdata writes, we frequently got into situations where gfs2 deadlocked because of this calling sequence:

    gfs2_ail1_start
      gfs2_ail1_flush - for every tr on the sd_ail1_list:
        gfs2_ail1_start_one - for every bd on the tr's tr_ail1_list:
          generic_writepages
            write_cache_pages passing __writepage()
              calls clear_page_dirty_for_io which calls set_page_dirty:
                which calls jdata_set_page_dirty which sets PageChecked.
              __writepage() calls mapping->a_ops->writepage AKA gfs2_jdata_writepage

  However, gfs2_jdata_writepage checks if PageChecked is set, and if so, it ignores the write and redirties the page. The problem is that write_cache_pages calls clear_page_dirty_for_io, which often calls set_page_dirty(). See comments in page-writeback.c starting with "Yes, Virginia". If it's jdata, set_page_dirty will call jdata_set_page_dirty which will set PageChecked. That causes a conflict because it makes it look like the page has been redirtied by another writer, in which case we need to skip writing it and redirty the page. That ends up in a deadlock because it isn't a "real" writer and nothing will ever clear PageChecked.

  If we do have a real writer, it will have started a transaction. So this patch checks if a transaction is in use, and if not, it skips setting PageChecked. That way, the page will be dirtied, cleaned, and written appropriately.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
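  A condensed sketch of the resulting check (gfs2 tracks the open transaction in current->journal_info; the function body here is simplified):

    static int jdata_set_page_dirty(struct page *page)
    {
            /* Only a real writer, which holds an open transaction, marks
             * the page Checked; writeback-driven redirtying no longer
             * does, so gfs2_jdata_writepage won't skip and redirty the
             * page forever. */
            if (current->journal_info)
                    SetPageChecked(page);
            return __set_page_dirty_buffers(page);
    }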
* gfs2: don't lock sd_ail_lock in gfs2_releasepage (Bob Peterson, 2020-10-15, 1 file, -3/+0)

  Patch 380f7c65a7eb3288e4b6812acf3474a1de230707 changed gfs2_releasepage so that it held the sd_ail_lock spin_lock for most of its processing. It did this for some mysterious undocumented bug somewhere in the evict code path. But in the nine years since, evict has been reworked and fixed many times, and so have the transactions and ail list. I can't see a reason to hold the sd_ail_lock unless it's protecting the actual ail lists hung off the transactions. Therefore, this patch removes the locking to increase speed and efficiency, and to further help us rework the log flush code to be more concurrent with transactions.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: make gfs2_ail1_empty_one return the count of active items (Bob Peterson, 2020-10-15, 1 file, -4/+8)

  This patch is one baby step toward simplifying the journal management. It simply changes function gfs2_ail1_empty_one from a void to an int and makes it return a count of active items. This allows the caller to check the return code rather than list_empty on the tr_ail1_list. This way we can, in a later patch, combine transaction ail1 and ail2 lists.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: Wipe jdata and ail1 in gfs2_journal_wipe, formerly gfs2_meta_wipe (Bob Peterson, 2020-10-15, 6 files, -12/+86)

  Before this patch, when blocks were freed, it called gfs2_meta_wipe to take the metadata out of the pending journal blocks. It did this mostly by calling another function called gfs2_remove_from_journal. This is shortsighted because it does not do anything with jdata blocks which may also be in the journal.

  This patch expands the function so that it wipes out jdata blocks from the journal as well, and it wipes them from the ail1 list if they haven't been written back yet. Since it now processes jdata blocks as well, the function has been renamed from gfs2_meta_wipe to gfs2_journal_wipe.

  New function gfs2_ail1_wipe wants a static view of the ail list, so it locks the sd_ail_lock when removing items. To accomplish this, function gfs2_remove_from_journal no longer locks the sd_ail_lock, and it's now the caller's responsibility to do so. I was going to make sd_ail_lock locking conditional, but the practice is generally frowned upon. For details, see: https://lwn.net/Articles/109066/

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: enhance log_blocks trace point to show log blocks free (Bob Peterson, 2020-10-15, 1 file, -2/+4)

  This patch adds some code to enhance the log_blocks trace point. It reports the number of free log blocks. This makes the trace point much more useful, especially for debugging performance problems, since we can tell when the journal gets full and needs to wait for flushes, etc.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: add missing log_blocks trace points in gfs2_write_revokes (Bob Peterson, 2020-10-15, 1 file, -2/+10)

  Function gfs2_write_revokes was incrementing and decrementing the number of log blocks free, but there was never a log_blocks trace point for it. Thus, the free blocks from a log_blocks trace would jump around mysteriously. This patch adds the missing trace points so the trace makes more sense.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: rename gfs2_write_full_page to gfs2_write_jdata_page, remove parm (Bob Peterson, 2020-10-15, 1 file, -5/+10)

  Since the function is only used for writing jdata pages, this patch simply renames function gfs2_write_full_page to a more appropriate name: gfs2_write_jdata_page. This makes the code easier to understand.

  The function was only called in one place, which passed in a pointer to function gfs2_get_block_noalloc. The function doesn't need to be passed in. Therefore, this also eliminates the unnecessary parameter to increase efficiency. I also took the liberty of cleaning up the function comments.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: add validation checks for size of superblock (Anant Thazhemadam, 2020-10-15, 1 file, -7/+11)

  In gfs2_check_sb(), no validation checks are performed with regards to the size of the superblock. syzkaller detected a slab-out-of-bounds bug that was primarily caused because the block size for a superblock was set to zero. A valid size for a superblock is a power of 2 between 512 and PAGE_SIZE. Performing validation checks and ensuring that the size of the superblock is valid fixes this bug.

  Reported-by: syzbot+af90d47a37376844e731@syzkaller.appspotmail.com
  Tested-by: syzbot+af90d47a37376844e731@syzkaller.appspotmail.com
  Suggested-by: Andrew Price <anprice@redhat.com>
  Signed-off-by: Anant Thazhemadam <anant.thazhemadam@gmail.com>
  [Minor code reordering.]
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
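  A small sketch of the kind of validation described (helper name is illustrative; the real check lives in gfs2_check_sb against the on-disk superblock fields):

    #include <linux/log2.h>         /* is_power_of_2() */

    /* Reject block sizes that are zero, not a power of two, or outside
     * [512, PAGE_SIZE], before anything divides or shifts by them. */
    static bool gfs2_sb_blocksize_valid(u32 bsize)
    {
            return bsize >= 512 && bsize <= PAGE_SIZE &&
                   is_power_of_2(bsize);
    }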
* gfs2: use-after-free in sysfs deregistration (Jamie Iles, 2020-10-14, 4 files, -18/+11)

  syzkaller found the following splat with CONFIG_DEBUG_KOBJECT_RELEASE=y:

    Read of size 1 at addr ffff000028e896b8 by task kworker/1:2/228
    CPU: 1 PID: 228 Comm: kworker/1:2 Tainted: G S 5.9.0-rc8+ #101
    Hardware name: linux,dummy-virt (DT)
    Workqueue: events kobject_delayed_cleanup
    Call trace:
     dump_backtrace+0x0/0x4d8
     show_stack+0x34/0x48
     dump_stack+0x174/0x1f8
     print_address_description.constprop.0+0x5c/0x550
     kasan_report+0x13c/0x1c0
     __asan_report_load1_noabort+0x34/0x60
     memcmp+0xd0/0xd8
     gfs2_uevent+0xc4/0x188
     kobject_uevent_env+0x54c/0x1240
     kobject_uevent+0x2c/0x40
     __kobject_del+0x190/0x1d8
     kobject_delayed_cleanup+0x2bc/0x3b8
     process_one_work+0x96c/0x18c0
     worker_thread+0x3f0/0xc30
     kthread+0x390/0x498
     ret_from_fork+0x10/0x18

    Allocated by task 1110:
     kasan_save_stack+0x28/0x58
     __kasan_kmalloc.isra.0+0xc8/0xe8
     kasan_kmalloc+0x10/0x20
     kmem_cache_alloc_trace+0x1d8/0x2f0
     alloc_super+0x64/0x8c0
     sget_fc+0x110/0x620
     get_tree_bdev+0x190/0x648
     gfs2_get_tree+0x50/0x228
     vfs_get_tree+0x84/0x2e8
     path_mount+0x1134/0x1da8
     do_mount+0x124/0x138
     __arm64_sys_mount+0x164/0x238
     el0_svc_common.constprop.0+0x15c/0x598
     do_el0_svc+0x60/0x150
     el0_svc+0x34/0xb0
     el0_sync_handler+0xc8/0x5b4
     el0_sync+0x15c/0x180

    Freed by task 228:
     kasan_save_stack+0x28/0x58
     kasan_set_track+0x28/0x40
     kasan_set_free_info+0x24/0x48
     __kasan_slab_free+0x118/0x190
     kasan_slab_free+0x14/0x20
     slab_free_freelist_hook+0x6c/0x210
     kfree+0x13c/0x460

  Use the same pattern as f2fs + ext4 where the kobject destruction must complete before allowing the FS itself to be freed. This means that we need an explicit free_sbd in the callers.

  Cc: Bob Peterson <rpeterso@redhat.com>
  Cc: Andreas Gruenbacher <agruenba@redhat.com>
  Signed-off-by: Jamie Iles <jamie@nuviainc.com>
  [Also go to fail_free when init_names fails.]
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: Fix NULL pointer dereference in gfs2_rgrp_dump (Andrew Price, 2020-10-14, 4 files, -9/+15)

  When an rindex entry is found to be corrupt, compute_bitstructs() calls gfs2_consist_rgrpd() which calls gfs2_rgrp_dump() like this:

    gfs2_rgrp_dump(NULL, rgd->rd_gl, fs_id_buf);

  gfs2_rgrp_dump then dereferences the gl without checking it and we get

    BUG: KASAN: null-ptr-deref in gfs2_rgrp_dump+0x28/0x280

  because there's no rgrp glock involved while reading the rindex on mount.

  Fix this by changing gfs2_rgrp_dump to take an rgrp argument.

  Reported-by: syzbot+43fa87986bdd31df9de6@syzkaller.appspotmail.com
  Signed-off-by: Andrew Price <anprice@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: use iomap for buffered I/O in ordered and writeback mode (Christoph Hellwig, 2020-10-14, 3 files, -34/+57)

  Switch to using the iomap readpage and writepage helpers for all I/O in the ordered and writeback modes, and thus eliminate using buffer_heads for I/O in these cases. The journaled data mode is left untouched.

  (Andreas Gruenbacher: In gfs2_unstuffer_page, switch from mark_buffer_dirty to set_page_dirty instead of accidentally leaving the page / buffer clean.)

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: call truncate_inode_pages_final for address space glocks (Bob Peterson, 2020-10-14, 1 file, -1/+6)

  Before this patch, we were not calling truncate_inode_pages_final for the address space for glocks, which left the possibility of a leak. We now take care of the problem instead of complaining, and we do it during glock tear-down.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: simplify the logic in gfs2_evict_inode (Bob Peterson, 2020-10-14, 1 file, -9/+4)

  Now that we've factored out the deleted and undeleted dinode cases in gfs2_evict_inode, we can greatly simplify the logic. Now the function is easy to read and understand.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: factor evict_linked_inode out of gfs2_evict_inode (Bob Peterson, 2020-10-14, 1 file, -18/+34)

  Now that we've factored out the delete-dinode case to simplify gfs2_evict_inode, we take it a step further and factor out the other case: where we don't delete the inode.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: further simplify gfs2_evict_inode with new func evict_should_delete (Bob Peterson, 2020-10-14, 1 file, -45/+77)

  This patch further simplifies function gfs2_evict_inode() by adding a new function, evict_should_delete. The function may also lock the inode glock.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: factor evict_unlinked_inode out of gfs2_evict_inode (Bob Peterson, 2020-10-14, 1 file, -27/+40)

  Function gfs2_evict_inode is way too big, complex and unreadable. This is a baby step toward breaking it apart to be more readable. It factors out the portion that deletes the online bits for a dinode that is unlinked and needs to be deleted. A future patch will factor out more. (If I factor out too much, the patch itself becomes unreadable).

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: rename variable error to ret in gfs2_evict_inode (Bob Peterson, 2020-10-14, 1 file, -18/+18)

  Function gfs2_evict_inode is too big and unreadable. This patch is just a baby step toward improving that. This first step just renames variable error to ret. This will help make future patches more readable.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: convert to use DEFINE_SEQ_ATTRIBUTE macro (Liu Shixin, 2020-10-14, 1 file, -18/+2)

  Use the DEFINE_SEQ_ATTRIBUTE macro to simplify the code.

  Signed-off-by: Liu Shixin <liushixin2@huawei.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: Fix bad comment for trans_drain (Bob Peterson, 2020-10-14, 1 file, -1/+1)

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* gfs2: Make sure we don't miss any delayed withdraws (Andreas Gruenbacher, 2020-10-14, 3 files, -30/+43)

  Commit ca399c96e96e changes gfs2_log_flush to not withdraw the filesystem while holding the log flush lock, but it fails to check if the filesystem needs to be withdrawn once the log flush lock has been released. Likewise, commit f05b86db314d depends on gfs2_log_flush to trigger for delayed withdraws. Add that and clean up the code flow somewhat.

  In gfs2_put_super, add a check for delayed withdraws that have been missed to prevent these kinds of bugs in the future.

  Fixes: ca399c96e96e ("gfs2: flesh out delayed withdraw for gfs2_log_flush")
  Fixes: f05b86db314d ("gfs2: Prepare to withdraw as soon as an IO error occurs in log write")
  Cc: stable@vger.kernel.org # v5.7+: 462582b99b607: gfs2: add some much needed cleanup for log flushes that fail
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* Merge tag 'gfs2-v5.9-rc2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2 (Linus Torvalds, 2020-08-28, 2 files, -0/+32)

  Pull gfs2 fix from Andreas Gruenbacher:
   "Fix a memory leak on filesystem withdraw.

    We didn't detect this bug because we have slab merging on by default (CONFIG_SLAB_MERGE_DEFAULT). Adding 'slub_nomerge' to the kernel command line exposed the problem"

  * tag 'gfs2-v5.9-rc2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2:
    gfs2: add some much needed cleanup for log flushes that fail
| * gfs2: add some much needed cleanup for log flushes that fail (Bob Peterson, 2020-08-24, 2 files, -0/+32)

  When a log flush fails due to io errors, it signals the failure but does not clean up after itself very well. This is because buffers are added to the transaction tr_buf and tr_databuf queue, but the io error causes gfs2_log_flush to bypass the "after_commit" functions responsible for dequeueing the bd elements. If the bd elements are added to the ail list before the error, function ail_drain takes care of dequeueing them. But if they haven't gotten that far, the elements are forgotten and make the transactions unable to be freed.

  This patch introduces new function trans_drain which drains the bd elements from the transaction so they can be freed properly.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
* | treewide: Use fallthrough pseudo-keyword (Gustavo A. R. Silva, 2020-08-23, 2 files, -3/+3)

  Replace the existing /* fall through */ comments and its variants with the new pseudo-keyword macro fallthrough[1]. Also, remove unnecessary fall-through markings when it is the case.

  [1] https://www.kernel.org/doc/html/v5.7/process/deprecated.html?highlight=fallthrough#implicit-switch-case-fall-through

  Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
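  For illustration, the mechanical nature of the conversion (generic example with illustrative helpers; not one of the two gfs2 hunks this commit touches):

    /* Before: a plain comment the compiler cannot verify. */
    switch (state) {
    case LM_ST_SHARED:
            prepare_shared();
            /* fall through */
    case LM_ST_DEFERRED:
            finish(state);
            break;
    }

    /* After: the fallthrough pseudo-keyword, checked by -Wimplicit-fallthrough. */
    switch (state) {
    case LM_ST_SHARED:
            prepare_shared();
            fallthrough;
    case LM_ST_DEFERRED:
            finish(state);
            break;
    }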
* Merge tag 'gfs2-for-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2 (Linus Torvalds, 2020-08-10, 6 files, -61/+81)

  Pull gfs2 updates from Andreas Gruenbacher:

   - Make sure transactions won't be started recursively in gfs2_block_zero_range (bug introduced in 5.4 when switching to iomap_zero_range)

   - Fix a glock holder refcount leak introduced in the iopen glock locking scheme rework merged in 5.8.

   - A few other small improvements (debugging, stack usage, comment fixes).

  * tag 'gfs2-for-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2:
    gfs2: When gfs2_dirty_inode gets a glock error, dump the glock
    gfs2: Never call gfs2_block_zero_range with an open transaction
    gfs2: print details on transactions that aren't properly ended
    gfs2: Fix inaccurate comment
    fs: Fix typo in comment
    gfs2: Fix refcount leak in gfs2_glock_poke
    gfs2: Pass glock holder to gfs2_file_direct_{read,write}
    gfs2: Add some flags missing from glock output
| * gfs2: When gfs2_dirty_inode gets a glock error, dump the glock (Bob Peterson, 2020-08-07, 1 file, -0/+1)

  Before this patch, if function gfs2_dirty_inode got an error when trying to lock the inode glock, it complained, but it didn't say what glock or inode had the problem. In this case, it almost always means that dinode_in found an error with the dinode in the file system. So it makes sense to dump the glock, which tells us the location of the dinode in the file system. That will allow us to analyze the corruption from the metadata.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
| * gfs2: Never call gfs2_block_zero_range with an open transaction (Bob Peterson, 2020-08-07, 1 file, -30/+39)

  Before this patch, some functions started transactions and then called gfs2_block_zero_range. However, gfs2_block_zero_range, like writes, can start transactions, which results in a recursive transaction error. For example:

    do_shrink
      trunc_start
        gfs2_trans_begin <------------------------------------------------
        gfs2_block_zero_range
          iomap_zero_range(inode, from, length, NULL, &gfs2_iomap_ops);
            iomap_apply
              ...
              iomap_zero_range_actor
                iomap_begin
                  gfs2_iomap_begin
                    gfs2_iomap_begin_write
                actor (iomap_zero_range_actor)
                  iomap_zero
                    iomap_write_begin
                      gfs2_iomap_page_prepare
                        gfs2_trans_begin <------------------------

  This patch reorders the callers of gfs2_block_zero_range so that they only start their transactions after the call. It also adds a BUG_ON to ensure this doesn't happen again.

  Fixes: 2257e468a63b ("gfs2: implement gfs2_block_zero_range using iomap_zero_range")
  Cc: stable@vger.kernel.org # v5.5+
  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
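  A condensed sketch of the guard being added (the assertion and call shape follow the message; gfs2 records the active transaction in current->journal_info):

    static int gfs2_block_zero_range(struct inode *inode, loff_t from,
                                     unsigned int length)
    {
            /* Callers must not hold an open transaction here, because
             * iomap_zero_range() -> gfs2_iomap_page_prepare() starts one. */
            BUG_ON(current->journal_info);
            return iomap_zero_range(inode, from, length, NULL,
                                    &gfs2_iomap_ops);
    }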
| * gfs2: print details on transactions that aren't properly ended (Bob Peterson, 2020-08-07, 1 file, -13/+16)

  If function gfs2_trans_begin is called with another transaction active, it BUGs out, but it doesn't give any details about the duplicate. This patch moves function gfs2_print_trans and calls it when this situation arises, for better debugging.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
| * gfs2: Fix inaccurate comment (Bob Peterson, 2020-08-07, 1 file, -1/+1)

  The comment regarding journal flush thresholds is wrong. This patch fixes it.

  Signed-off-by: Bob Peterson <rpeterso@redhat.com>
  Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>