path: root/fs
Commit log (newest first). Each entry shows: subject (author, date; files changed, -lines removed/+lines added), followed by the commit message.
* Merge tag 'xfs-5.12-merge-6' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux (Linus Torvalds, 2021-02-28; 7 files changed, -64/+84)

  Pull more xfs updates from Darrick Wong:
  "The most notable fixes here prevent premature reuse of freed metadata blocks and add the ability to detect accidental nested transactions, which are not allowed here.

   - Restore a disused sysctl control knob that was inadvertently dropped during the merge window to avoid fstests regressions.
   - Don't speculatively release freed blocks from the busy list until we're actually allocating them, which fixes a rare log recovery regression.
   - Don't nest transactions when scanning for free space.
   - Add an idiot^Wmaintainer light to detect nested transactions. ;)"

  * tag 'xfs-5.12-merge-6' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux:
    xfs: use current->journal_info for detecting transaction recursion
    xfs: don't nest transactions when scanning for eofblocks
    xfs: don't reuse busy extents on extent trim
    xfs: restore speculative_cow_prealloc_lifetime sysctl
| * xfs: use current->journal_info for detecting transaction recursion (Dave Chinner, 2021-02-25; 5 files changed, -26/+60)

  Using PF_MEMALLOC_NOFS in the iomap code to detect transaction recursion in XFS is just wrong. Remove it from the iomap code and replace it with XFS-specific internal checks using current->journal_info instead.

  [djwong: This change also realigns the lifetime of NOFS flag changes to match the incore transaction, instead of the inconsistent scheme we have now.]

  Fixes: 9070733b4efa ("xfs: abstract PF_FSTRANS to PF_MEMALLOC_NOFS")
  Signed-off-by: Dave Chinner <dchinner@redhat.com>
  Reviewed-by: Darrick J. Wong <djwong@kernel.org>
  Signed-off-by: Darrick J. Wong <djwong@kernel.org>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
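  A minimal sketch of the current->journal_info pattern the message describes, assuming XFS-internal headers; the helper names and their exact placement are assumptions, not the verbatim upstream diff (memalloc_nofs_save()/restore() and the t_pflags field are real facilities):

    /* Record the running transaction in the task so any nesting is visible. */
    static inline void xfs_trans_context_set(struct xfs_trans *tp)
    {
        ASSERT(!current->journal_info);         /* nested transactions are a bug */
        current->journal_info = tp;
        tp->t_pflags = memalloc_nofs_save();    /* NOFS lifetime == transaction lifetime */
    }

    static inline void xfs_trans_context_clear(struct xfs_trans *tp)
    {
        if (current->journal_info == tp) {
            current->journal_info = NULL;
            memalloc_nofs_restore(tp->t_pflags);
        }
    }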
| * xfs: don't nest transactions when scanning for eofblocks (Darrick J. Wong, 2021-02-25; 1 file changed, -3/+10)

  Brian Foster reported a lockdep warning on xfs/167:

    ============================================
    WARNING: possible recursive locking detected
    5.11.0-rc4 #35 Tainted: G W I
    --------------------------------------------
    fsstress/17733 is trying to acquire lock:
    ffff8e0fd1d90650 (sb_internal){++++}-{0:0}, at: xfs_free_eofblocks+0x104/0x1d0 [xfs]

    but task is already holding lock:
    ffff8e0fd1d90650 (sb_internal){++++}-{0:0}, at: xfs_trans_alloc_inode+0x5f/0x160 [xfs]

    stack backtrace:
    CPU: 38 PID: 17733 Comm: fsstress Tainted: G W I 5.11.0-rc4 #35
    Hardware name: Dell Inc. PowerEdge R740/01KPX8, BIOS 1.6.11 11/20/2018
    Call Trace:
     dump_stack+0x8b/0xb0
     __lock_acquire.cold+0x159/0x2ab
     lock_acquire+0x116/0x370
     xfs_trans_alloc+0x1ad/0x310 [xfs]
     xfs_free_eofblocks+0x104/0x1d0 [xfs]
     xfs_blockgc_scan_inode+0x24/0x60 [xfs]
     xfs_inode_walk_ag+0x202/0x4b0 [xfs]
     xfs_inode_walk+0x66/0xc0 [xfs]
     xfs_trans_alloc+0x160/0x310 [xfs]
     xfs_trans_alloc_inode+0x5f/0x160 [xfs]
     xfs_alloc_file_space+0x105/0x300 [xfs]
     xfs_file_fallocate+0x270/0x460 [xfs]
     vfs_fallocate+0x14d/0x3d0
     __x64_sys_fallocate+0x3e/0x70
     do_syscall_64+0x33/0x40
     entry_SYSCALL_64_after_hwframe+0x44/0xa9

  The cause of this is the new code that spurs a scan to garbage collect speculative preallocations if we fail to reserve enough blocks while allocating a transaction. While the warning itself is a fairly benign lockdep complaint, it does expose a potential livelock if the rwsem behavior ever changes with regards to nesting read locks when someone's waiting for a write lock.

  Fix this by freeing the transaction and jumping back to xfs_trans_alloc like this patch in the V4 submission[1].

  [1] https://lore.kernel.org/linux-xfs/161142798066.2171939.9311024588681972086.stgit@magnolia/

  Fixes: a1a7d05a0576 ("xfs: flush speculative space allocations when we run out of space")
  Reported-by: Brian Foster <bfoster@redhat.com>
  Signed-off-by: Darrick J. Wong <djwong@kernel.org>
  Reviewed-by: Brian Foster <bfoster@redhat.com>
  Reviewed-by: Allison Henderson <allison.henderson@oracle.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
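  A sketch of the cancel-and-retry shape the fix describes, under stated assumptions: xfs_trans_alloc_common() is a hypothetical stand-in for the allocation/reservation setup, and the real upstream function differs in detail. The point is that the blockgc scan runs only after the half-built transaction has been cancelled, so its own transactions never nest inside ours:

    int
    xfs_trans_alloc(
        struct xfs_mount        *mp,
        struct xfs_trans_res    *resp,
        uint                    blocks,
        uint                    rtextents,
        uint                    flags,
        struct xfs_trans        **tpp)
    {
        struct xfs_trans        *tp;
        bool                    retried = false;
        int                     error;

    retry:
        tp = xfs_trans_alloc_common(mp, flags);      /* hypothetical setup helper */
        error = xfs_trans_reserve(tp, resp, blocks, rtextents);
        if (error == -ENOSPC && !retried) {
            xfs_trans_cancel(tp);                    /* drop the transaction first... */
            xfs_blockgc_free_space(mp, NULL);        /* ...then scan, outside trans context */
            retried = true;
            goto retry;
        }
        if (error) {
            xfs_trans_cancel(tp);
            return error;
        }
        *tpp = tp;
        return 0;
    }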
| * xfs: don't reuse busy extents on extent trim (Brian Foster, 2021-02-25; 1 file changed, -14/+0)

  Freed extents are marked busy from the point the freeing transaction commits until the associated CIL context is checkpointed to the log. This prevents reuse and overwrite of recently freed blocks before the changes are committed to disk, which can lead to corruption after a crash. The exception to this rule is that metadata allocation is allowed to reuse busy extents because metadata changes are also logged.

  As of commit 97d3ac75e5e0 ("xfs: exact busy extent tracking"), XFS has allowed modification or complete invalidation of outstanding busy extents for metadata allocations. This implementation assumes that use of the associated extent is imminent, which is not always the case. For example, the trimmed extent might not satisfy the minimum length of the allocation request, or the allocation algorithm might be involved in a search for the optimal result based on locality.

  generic/019 reproduces a corruption caused by this scenario. First, a metadata block (usually a bmbt or symlink block) is freed from an inode. A subsequent bmbt split on an unrelated inode attempts a near mode allocation request that invalidates the busy block during the search, but does not ultimately allocate it. Due to the busy state invalidation, the block is no longer considered busy to subsequent allocation. A direct I/O write request immediately allocates the block and writes to it. Finally, the filesystem crashes while in a state where the initial metadata block free had not committed to the on-disk log. After recovery, the original metadata block is in its original location as expected, but has been corrupted by the aforementioned dio.

  This demonstrates that it is fundamentally unsafe to modify busy extent state for extents that are not guaranteed to be allocated. This applies to pretty much all of the code paths that currently trim busy extents for one reason or another. Therefore to address this problem, drop the reuse mechanism from the busy extent trim path. This code already knows how to return partial non-busy ranges of the targeted free extent and higher level code tracks the busy state of the allocation attempt. If a block allocation fails where one or more candidate extents is busy, we force the log and retry the allocation.

  Signed-off-by: Brian Foster <bfoster@redhat.com>
  Reviewed-by: Darrick J. Wong <djwong@kernel.org>
  Signed-off-by: Darrick J. Wong <djwong@kernel.org>
  Reviewed-by: Chandan Babu R <chandanrlinux@gmail.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
| * xfs: restore speculative_cow_prealloc_lifetime sysctl (Darrick J. Wong, 2021-02-24; 1 file changed, -21/+14)

  In commit 9669f51de5c0 I tried to get rid of the undocumented cow gc lifetime knob. The knob's function was never documented and it now doesn't really have a function since eof and cow gc have been consolidated. Regrettably, xfs/231 relies on it and regresses on for-next. I did not succeed at getting far enough through fstests patch review for the fixup to land in time.

  Restore the sysctl knob, document what it did (does?), put it on the deprecation schedule, and rip out a redundant function.

  Fixes: 9669f51de5c0 ("xfs: consolidate the eofblocks and cowblocks workers")
  Signed-off-by: Darrick J. Wong <djwong@kernel.org>
  Reviewed-by: Dave Chinner <dchinner@redhat.com>
  Reviewed-by: Christoph Hellwig <hch@lst.de>
* | Merge tag 'block-5.12-2021-02-27' of git://git.kernel.dk/linux-block (Linus Torvalds, 2021-02-28; 11 files changed, -25/+19)

  Pull more block updates from Jens Axboe:
  "A few stragglers (and one due to me missing it originally), and fixes for changes in this merge window mostly. In particular:

   - blktrace cleanups (Chaitanya, Greg)
   - Kill dead blk_pm_* functions (Bart)
   - Fixes for the bio alloc changes (Christoph)
   - Fix for the partition changes (Christoph, Ming)
   - Fix for turning off iopoll with polled IO inflight (Jeffle)
   - nbd disconnect fix (Josef)
   - loop fsync error fix (Mauricio)
   - kyber update depth fix (Yang)
   - max_sectors alignment fix (Mikulas)
   - Add bio_max_segs helper (Matthew)"

  * tag 'block-5.12-2021-02-27' of git://git.kernel.dk/linux-block: (21 commits)
    block: Add bio_max_segs
    blktrace: fix documentation for blk_fill_rw()
    block: memory allocations in bounce_clone_bio must not fail
    block: remove the gfp_mask argument to bounce_clone_bio
    block: fix bounce_clone_bio for passthrough bios
    block-crypto-fallback: use a bio_set for splitting bios
    block: fix logging on capacity change
    blk-settings: align max_sectors on "logical_block_size" boundary
    block: reopen the device in blkdev_reread_part
    block: don't skip empty device in in disk_uevent
    blktrace: remove debugfs file dentries from struct blk_trace
    nbd: handle device refs for DESTROY_ON_DISCONNECT properly
    kyber: introduce kyber_depth_updated()
    loop: fix I/O error on fsync() in detached loop devices
    block: fix potential IO hang when turning off io_poll
    block: get rid of the trace rq insert wrapper
    blktrace: fix blk_rq_merge documentation
    blktrace: fix blk_rq_issue documentation
    blktrace: add blk_fill_rwbs documentation comment
    block: remove superfluous param in blk_fill_rwbs()
    ...
| * | block: Add bio_max_segs (Matthew Wilcox (Oracle), 2021-02-26; 11 files changed, -25/+19)

  It's often inconvenient to use BIO_MAX_PAGES due to min() requiring the sign to be the same. Introduce bio_max_segs() and change BIO_MAX_PAGES to be unsigned to make it easier for the users.

  Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
  Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
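  A sketch of the helper described above, close to (but not guaranteed to be byte-for-byte) the upstream definition: with BIO_MAX_PAGES made unsigned, min() no longer trips over signedness when clamping a segment count.

    #define BIO_MAX_PAGES 256U   /* now unsigned, so min() compares like types */

    static inline unsigned short bio_max_segs(unsigned int nr_segs)
    {
        return min(nr_segs, BIO_MAX_PAGES);
    }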
* | | Merge tag 'io_uring-worker.v3-2021-02-25' of git://git.kernel.dk/linux-block (Linus Torvalds, 2021-02-27; 5 files changed, -1052/+661)

  Pull io_uring thread rewrite from Jens Axboe:
  "This converts the io-wq workers to be forked off the tasks in question instead of being kernel threads that assume various bits of the original task identity.

   This kills > 400 lines of code from io_uring/io-wq, and it's the worst part of the code. We've had several bugs in this area, and the worry is always that we could be missing some pieces for file types doing unusual things (recent /dev/tty example comes to mind, userfaultfd reads installing file descriptors is another fun one... - both of which need special handling, and I bet it's not the last weird oddity we'll find).

   With these identical workers, we can have full confidence that we're never missing anything. That, in itself, is a huge win. Outside of that, it's also more efficient since we're not wasting space and code on tracking state, or switching between different states.

   I'm sure we're going to find little things to patch up after this series, but testing has been pretty thorough, from the usual regression suite to production. Any issue that may crop up should be manageable.

   There's also a nice series of further reductions we can do on top of this, but I wanted to get the meat of it out sooner rather than later. The general worry here isn't that it's fundamentally broken. Most of the little issues we've found over the last week have been related to just changes in how thread startup/exit is done, since that's the main difference between using kthreads and these kinds of threads. In fact, if all goes according to plan, I want to get this into the 5.10 and 5.11 stable branches as well.

   That said, the changes outside of io_uring/io-wq are:

   - arch setup, simple one-liner to each arch copy_thread() implementation.
   - Removal of net and proc restrictions for io_uring, they are no longer needed or useful"

  * tag 'io_uring-worker.v3-2021-02-25' of git://git.kernel.dk/linux-block: (30 commits)
    io-wq: remove now unused IO_WQ_BIT_ERROR
    io_uring: fix SQPOLL thread handling over exec
    io-wq: improve manager/worker handling over exec
    io_uring: ensure SQPOLL startup is triggered before error shutdown
    io-wq: make buffered file write hashed work map per-ctx
    io-wq: fix race around io_worker grabbing
    io-wq: fix races around manager/worker creation and task exit
    io_uring: ensure io-wq context is always destroyed for tasks
    arch: ensure parisc/powerpc handle PF_IO_WORKER in copy_thread()
    io_uring: cleanup ->user usage
    io-wq: remove nr_process accounting
    io_uring: flag new native workers with IORING_FEAT_NATIVE_WORKERS
    net: remove cmsg restriction from io_uring based send/recvmsg calls
    Revert "proc: don't allow async path resolution of /proc/self components"
    Revert "proc: don't allow async path resolution of /proc/thread-self components"
    io_uring: move SQPOLL thread io-wq forked worker
    io-wq: make io_wq_fork_thread() available to other users
    io-wq: only remove worker from free_list, if it was there
    io_uring: remove io_identity
    io_uring: remove any grabbing of context
    ...
| * | | io-wq: remove now unused IO_WQ_BIT_ERROR (Jens Axboe, 2021-02-25; 1 file changed, -10/+0)

  This flag is now dead, remove it.

  Fixes: 1cbd9c2bcf02 ("io-wq: don't create any IO workers upfront")
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io_uring: fix SQPOLL thread handling over exec (Jens Axboe, 2021-02-25; 1 file changed, -1/+34)

  Just like the changes for io-wq, ensure that we re-fork the SQPOLL thread if the owner execs. Mark the ctx sq thread as sqo_exec if it dies, and the ring as needing a wakeup which will force the task to enter the kernel. When it does, setup the new thread and proceed as usual.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io-wq: improve manager/worker handling over exec (Jens Axboe, 2021-02-25; 3 files changed, -23/+45)

  exec will cancel any threads, including the ones that io-wq is using. This isn't a problem, in fact we'd prefer it to be that way since it means we know that any async work cancels naturally without having to handle it proactively.

  But it does mean that we need to setup a new manager, as the manager and workers are gone. Handle this at queue time, and cancel work if we fail. Since the manager can go away without us noticing, ensure that the manager itself holds a reference to the 'wq' as well. Rename io_wq_destroy() to io_wq_put() to reflect that.

  In the future we can now simplify exec cancelation handling, for now just leave it the same.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io_uring: ensure SQPOLL startup is triggered before error shutdown (Jens Axboe, 2021-02-25; 1 file changed, -1/+2)

  syzbot reports the following hang:

    INFO: task syz-executor.0:12538 can't die for more than 143 seconds.
    task:syz-executor.0 state:D stack:28352 pid:12538 ppid: 8423 flags:0x00004004
    Call Trace:
     context_switch kernel/sched/core.c:4324 [inline]
     __schedule+0x90c/0x21a0 kernel/sched/core.c:5075
     schedule+0xcf/0x270 kernel/sched/core.c:5154
     schedule_timeout+0x1db/0x250 kernel/time/timer.c:1868
     do_wait_for_common kernel/sched/completion.c:85 [inline]
     __wait_for_common kernel/sched/completion.c:106 [inline]
     wait_for_common kernel/sched/completion.c:117 [inline]
     wait_for_completion+0x168/0x270 kernel/sched/completion.c:138
     io_sq_thread_finish+0x96/0x580 fs/io_uring.c:7152
     io_sq_offload_create fs/io_uring.c:7929 [inline]
     io_uring_create fs/io_uring.c:9465 [inline]
     io_uring_setup+0x1fb2/0x2c20 fs/io_uring.c:9550
     do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
     entry_SYSCALL_64_after_hwframe+0x44/0xae

  which is due to exiting after the SQPOLL thread has been created, but hasn't been started yet. Ensure that we always complete the startup side when waiting for it to exit.

  Reported-by: syzbot+c927c937cba8ef66dd4a@syzkaller.appspotmail.com
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io-wq: make buffered file write hashed work map per-ctx (Jens Axboe, 2021-02-25; 3 files changed, -11/+107)

  Before the io-wq thread change, we maintained a hash work map and lock per-node per-ring. That wasn't ideal, as we really wanted it to be per ring. But now that we have per-task workers, the hash map ends up being just per-task. That'll work just fine for the normal case of having one task use a ring, but if you share the ring between tasks, then it's considerably worse than it was before.

  Make the hash map per ctx instead, which provides full per-ctx buffered write serialization on hashed writes.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io-wq: fix race around io_worker grabbing (Jens Axboe, 2021-02-23; 1 file changed, -9/+9)

  There's a small window between lookup dropping the reference to the worker and calling wake_up_process() on the worker task, where the worker itself could have exited. We ensure that the worker struct itself is valid, but worker->task may very well be gone by the time we issue the wakeup.

  Fix the race by using a completion triggered by the reference going to zero, and having exit wait for that completion before proceeding.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
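  A sketch of the generic "wait for the last reference" pattern the fix describes; the struct and function names below are illustrative assumptions, not the io-wq code. The lookup side drops its reference only after the wakeup, and the exiting worker waits for every reference to go away before freeing itself:

    #include <linux/refcount.h>
    #include <linux/completion.h>
    #include <linux/slab.h>

    struct worker {
        refcount_t        ref;
        struct completion done;
        struct task_struct *task;
    };

    static void worker_release(struct worker *w)
    {
        if (refcount_dec_and_test(&w->ref))
            complete(&w->done);            /* last reference: let the exit path proceed */
    }

    static void worker_exit(struct worker *w)
    {
        worker_release(w);                 /* drop the worker's own reference */
        wait_for_completion(&w->done);     /* wait out any concurrent lookup/wakeup */
        kfree(w);
    }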
| * | | io-wq: fix races around manager/worker creation and task exit (Jens Axboe, 2021-02-23; 1 file changed, -22/+35)

  These races have always been there, they are just more apparent now that we do early cancel of io-wq when the task exits.

  Ensure that the io-wq manager sets task state correctly to not miss wakeups for task creation. This is important if we get a wakeup after having marked ourselves as TASK_INTERRUPTIBLE. If we do end up creating workers, then we flip the state back to running, making the subsequent schedule() a no-op. Also increment the wq ref count before forking the thread, to avoid a use-after-free.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
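  The ordering being enforced is the canonical sleep/wakeup idiom: mark the task sleeping before checking the condition, and flip back to running if work turns up, so a wakeup that lands in between is never lost. A generic sketch (the loop condition and helpers here are hypothetical, not the io-wq manager itself):

    #include <linux/sched.h>

    static int manager_loop(struct io_wq *wq)        /* illustrative signature */
    {
        while (!wq_is_exiting(wq)) {                 /* hypothetical exit check */
            set_current_state(TASK_INTERRUPTIBLE);
            if (need_new_worker(wq)) {               /* hypothetical condition */
                __set_current_state(TASK_RUNNING);
                create_worker(wq);                   /* hypothetical helper */
                continue;
            }
            schedule();                              /* a wakeup after the check is not lost */
        }
        __set_current_state(TASK_RUNNING);
        return 0;
    }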
| * | | io_uring: ensure io-wq context is always destroyed for tasks (Jens Axboe, 2021-02-23; 1 file changed, -3/+4)

  If the task ends up doing no IO, the context list is empty and we don't call into __io_uring_files_cancel() when the task exits. This can cause a leak of the io-wq structures.

  Ensure we always call __io_uring_files_cancel(), even if the task context list is empty.

  Fixes: 5aa75ed5b93f ("io_uring: tie async worker side to the task context")
  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io_uring: cleanup ->user usage (Jens Axboe, 2021-02-23; 1 file changed, -12/+6)

  At this point we're only using it for memory accounting, so there's no need to have an extra ->limit_mem - we can just set ->user if we do the accounting, or leave it at NULL if we don't.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io-wq: remove nr_process accounting (Jens Axboe, 2021-02-23; 3 files changed, -50/+1)

  We're now just using fork like we would from userspace, so there's no need to try and impose extra restrictions or accounting on the user side of things. That's already being done for us. That also means we don't have to pass in the user_struct anymore, that's correctly inherited through ->creds on fork.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io_uring: flag new native workers with IORING_FEAT_NATIVE_WORKERS (Jens Axboe, 2021-02-23; 1 file changed, -1/+1)

  A few reasons to do this:

   - The naming of the manager and worker has changed. That's a user visible change, so makes sense to flag it.
   - Opening certain files that use ->signal (like /proc/self or /dev/tty) now works, and the flag tells the application upfront that this is the case.
   - Related to the above, using signalfd will now work as well.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
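  A userspace-side sketch of probing for the flag; the liburing setup helper is assumed here for brevity (liburing is not part of this series), and the printf is purely illustrative:

    #include <liburing.h>
    #include <stdio.h>

    int main(void)
    {
        struct io_uring ring;
        struct io_uring_params p = { 0 };

        if (io_uring_queue_init_params(8, &ring, &p) < 0)
            return 1;

        /* Applications can key signal/tty-related behaviour off this feature bit. */
        if (p.features & IORING_FEAT_NATIVE_WORKERS)
            printf("native io-wq workers in use\n");

        io_uring_queue_exit(&ring);
        return 0;
    }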
| * | | Revert "proc: don't allow async path resolution of /proc/self components"Jens Axboe2021-02-231-7/+0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts commit 8d4c3e76e3be11a64df95ddee52e99092d42fc19. No longer needed, as the io-wq worker threads have the right identity. Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | Revert "proc: don't allow async path resolution of /proc/thread-self components"Jens Axboe2021-02-232-8/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts commit 0d4370cfe36b7f1719123b621a4ec4d9c7a25f89. No longer needed, as the io-wq worker threads have the right identity. Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io_uring: move SQPOLL thread io-wq forked worker (Jens Axboe, 2021-02-23; 1 file changed, -292/+181)

  Don't use a kthread for SQPOLL, use a forked worker just like the io-wq workers. With that done, we can drop the various context grabbing we do for SQPOLL, it already has everything it needs.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io-wq: make io_wq_fork_thread() available to other users (Jens Axboe, 2021-02-21; 2 files changed, -4/+6)

  We want to use this in io_uring proper as well, for the SQPOLL thread. Rename it from fork_thread() to io_wq_fork_thread(), and make it available through the io-wq.h header.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io-wq: only remove worker from free_list, if it was there (Jens Axboe, 2021-02-21; 1 file changed, -3/+7)

  If the worker isn't on the free_list, don't attempt to delete it.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io_uring: remove io_identity (Jens Axboe, 2021-02-21; 3 files changed, -80/+52)

  We are no longer grabbing state, so no need to maintain an IO identity that we COW if there are changes.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io_uring: remove any grabbing of context (Jens Axboe, 2021-02-21; 2 files changed, -229/+14)

  The async workers are siblings of the task itself, so by definition we have all the state that we need. Remove any of the state grabbing that we have, and requests flagging what they need.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io-wq: worker idling always returns false (Jens Axboe, 2021-02-21; 1 file changed, -8/+2)

  Remove the bool return, and the checking for it in the caller.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io-wq: fork worker threads from original task (Jens Axboe, 2021-02-21; 3 files changed, -198/+113)

  Instead of using regular kthread kernel threads, create kernel threads that are like a real thread that the task would create. This ensures that we get all the context that we need, without having to carry that state around. This greatly reduces the code complexity, and the risk of missing state for a given request type.

  With the move away from kthread, we can also dump everything related to assigned state to the new threads.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io-wq: don't pass 'wqe' needlessly around (Jens Axboe, 2021-02-21; 1 file changed, -15/+16)

  Just grab it from the worker itself, which we're already passing in.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io_uring: tie async worker side to the task context (Jens Axboe, 2021-02-21; 1 file changed, -50/+34)

  Move it outside of the io_ring_ctx, and tie it to the io_uring task context.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io-wq: get rid of wq->use_refs (Jens Axboe, 2021-02-21; 2 files changed, -19/+1)

  We don't support attach anymore, so doesn't make sense to carry the use_refs reference count. Get rid of it.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io_uring: disable io-wq attaching (Jens Axboe, 2021-02-21; 1 file changed, -33/+22)

  Moving towards making the io_wq per ring per task, so we can't really share it between rings. Which is fine, since we've now dropped some of that fat from it.

  Retain compatibility with how attaching works, so that any attempt to attach to an fd that doesn't exist, or isn't an io_uring fd, will fail like it did before.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io-wq: don't create any IO workers upfront (Jens Axboe, 2021-02-21; 1 file changed, -12/+0)

  When the manager thread starts up, it creates a worker per node for the given context. Just let these get created dynamically, like we do for adding further workers.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | io_uring: remove the need for relying on an io-wq fallback worker (Jens Axboe, 2021-02-21; 3 files changed, -17/+34)

  We hit this case when the task is exiting, and we need somewhere to do background cleanup of requests. Instead of relying on the io-wq task manager to do this work for us, just stuff it somewhere where we can safely run it ourselves directly.

  Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * | | Merge branch 'for-5.12/io_uring' into io_uring-worker.v3 (Jens Axboe, 2021-02-21; 1 file changed, -342/+347)

  * for-5.12/io_uring: (21 commits)
    io_uring: run task_work on io_uring_register()
    io_uring: fix leaving invalid req->flags
    io_uring: wait potential ->release() on resurrect
    io_uring: keep generic rsrc infra generic
    io_uring: zero ref_node after killing it
    io_uring: make the !CONFIG_NET helpers a bit more robust
    io_uring: don't hold uring_lock when calling io_run_task_work*
    io_uring: fail io-wq submission from a task_work
    io_uring: don't take uring_lock during iowq cancel
    io_uring: fail links more in io_submit_sqe()
    io_uring: don't do async setup for links' heads
    io_uring: do io_*_prep() early in io_submit_sqe()
    io_uring: split sqe-prep and async setup
    io_uring: don't submit link on error
    io_uring: move req link into submit_state
    io_uring: move io_init_req() into io_submit_sqe()
    io_uring: move io_init_req()'s definition
    io_uring: don't duplicate ->file check in sfr
    io_uring: keep io_*_prep() naming consistent
    io_uring: kill fictitious submit iteration index
    ...
* | \ \ \ Merge branch 'work.misc' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs (Linus Torvalds, 2021-02-27; 5 files changed, -48/+32)

  Pull misc vfs updates from Al Viro:
  "Assorted stuff pile - no common topic here"

  * 'work.misc' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
    whack-a-mole: don't open-code iminor/imajor
    9p: fix misuse of sscanf() in v9fs_stat2inode()
    audit_alloc_mark(): don't open-code ERR_CAST()
    fs/inode.c: make inode_init_always() initialize i_ino to 0
    vfs: don't unnecessarily clone write access for writable fds
| * | | | | whack-a-mole: don't open-code iminor/imajor (Al Viro, 2021-02-23; 1 file changed, -2/+2)

  several instances crept back into the tree...

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | | 9p: fix misuse of sscanf() in v9fs_stat2inode() (Al Viro, 2021-02-23; 1 file changed, -12/+9)

  1) sscanf() return value needs to be checked, damnit
  2) sscanf() is perfectly capable of checking for fixed prefix, no need for that %13s + strncmp with constant string.
  3) st->extension is a valid string; no need for voodoo with str*cpy() there.

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
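  An illustration of points 1) and 2) in plain C, not the exact v9fs diff: let sscanf() match the fixed prefix itself instead of reading it into a buffer and strncmp()ing, and always check how many fields actually converted before trusting them.

    #include <stdio.h>

    /* Parse a device extension such as "b 8 0" or "c 5 1" (type major minor). */
    static int parse_major_minor(const char *ext, unsigned int *major,
                                 unsigned int *minor)
    {
        /* The "%u" conversions only count if the literal prefix matched. */
        if (sscanf(ext, "b %u %u", major, minor) != 2 &&
            sscanf(ext, "c %u %u", major, minor) != 2)
            return -1;
        return 0;
    }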
| * | | | | fs/inode.c: make inode_init_always() initialize i_ino to 0 (Eric Biggers, 2021-01-04; 2 files changed, -1/+1)

  Currently inode_init_always() doesn't initialize i_ino to 0. This is unexpected because unlike the other inode fields that aren't initialized by inode_init_always(), i_ino isn't guaranteed to end up back at its initial value after the inode is freed. Only one filesystem (XFS) actually sets i_ino back to 0 when freeing its inodes. So, callers of new_inode() see some random previous i_ino.

  Normally that's fine, since normally i_ino isn't accessed before being set. There can be edge cases where that isn't necessarily true, though.

  The one I've run into is that on ext4, when creating an encrypted file, the new file's encryption key has to be set up prior to the jbd2 transaction, and thus prior to i_ino being set. If something goes wrong, fs/crypto/ may log warning or error messages, which normally include i_ino. So it needs to know whether it is valid to include i_ino yet or not. Also, on some files i_ino needs to be hashed for use in the crypto, so fs/crypto/ needs to know whether that can be done yet or not.

  There are ways this could be worked around, either in fs/crypto/ or in fs/ext4/. But, it seems there's no reason not to just fix inode_init_always() to do the expected thing and initialize i_ino to 0.

  So, do that, and also remove the initialization in jfs_fill_super() that becomes redundant.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | | | vfs: don't unnecessarily clone write access for writable fds (Eric Biggers, 2021-01-04; 1 file changed, -33/+20)

  There's no need for mnt_want_write_file() to increment mnt_writers when the file is already open for writing, provided that mnt_drop_write_file() is changed to conditionally decrement it.

  We seem to have ended up in the current situation because mnt_want_write_file() used to be paired with mnt_drop_write(), due to mnt_drop_write_file() not having been added yet. So originally mnt_want_write_file() had to always increment mnt_writers.

  But later mnt_drop_write_file() was added, and all callers of mnt_want_write_file() were paired with it. This makes the compatibility between mnt_want_write_file() and mnt_drop_write() no longer necessary.

  Therefore, make __mnt_want_write_file() and __mnt_drop_write_file() skip incrementing mnt_writers on files already open for writing. This removes the only caller of mnt_clone_write(), so remove that too.

  Signed-off-by: Eric Biggers <ebiggers@google.com>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
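  A simplified sketch of the conditional pair described above, not the exact upstream functions (the real ones also handle a superblock that went read-only underneath a writable fd): a file already open for writing has pinned write access through its own FMODE_WRITER, so mnt_writers is only bumped and dropped for the other callers.

    static inline bool file_counts_as_writer(const struct file *file)
    {
        return file->f_mode & FMODE_WRITER;      /* opened writable via open() */
    }

    int __mnt_want_write_file(struct file *file)
    {
        if (file_counts_as_writer(file))
            return 0;                            /* already covered, skip mnt_writers */
        return __mnt_want_write(file->f_path.mnt);
    }

    void __mnt_drop_write_file(struct file *file)
    {
        if (!file_counts_as_writer(file))        /* must mirror the want side exactly */
            __mnt_drop_write(file->f_path.mnt);
    }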
* | | | | | Merge tag '5.12-smb3-part1' of git://git.samba.org/sfrench/cifs-2.6 (Linus Torvalds, 2021-02-26; 21 files changed, -377/+843)

  Pull cifs updates from Steve French:

   - improvements to mode bit conversion, chmod and chown when using cifsacl mount option
   - two new mount options for controlling attribute caching
   - improvements to crediting and reconnect, improved debugging
   - reconnect fix
   - add SMB3.1.1 dialect to default dialects for vers=3

  * tag '5.12-smb3-part1' of git://git.samba.org/sfrench/cifs-2.6: (27 commits)
    cifs: update internal version number
    cifs: use discard iterator to discard unneeded network data more efficiently
    cifs: introduce helper for finding referral server to improve DFS target resolution
    cifs: check all path components in resolved dfs target
    cifs: fix DFS failover
    cifs: fix nodfs mount option
    cifs: fix handling of escaped ',' in the password mount argument
    cifs: Add new parameter "acregmax" for distinct file and directory metadata timeout
    cifs: convert revalidate of directories to using directory metadata cache timeout
    cifs: Add new mount parameter "acdirmax" to allow caching directory metadata
    cifs: If a corrupted DACL is returned by the server, bail out.
    cifs: minor simplification to smb2_is_network_name_deleted
    TCON Reconnect during STATUS_NETWORK_NAME_DELETED
    cifs: cleanup a few le16 vs. le32 uses in cifsacl.c
    cifs: Change SIDs in ACEs while transferring file ownership.
    cifs: Retain old ACEs when converting between mode bits and ACL.
    cifs: Fix cifsacl ACE mask for group and others.
    cifs: clarify hostname vs ip address in /proc/fs/cifs/DebugData
    cifs: change confusing field serverName (to ip_addr)
    cifs: Fix inconsistent IS_ERR and PTR_ERR
    ...
| * | | | | | cifs: update internal version number (Steve French, 2021-02-25; 1 file changed, -1/+1)

  To 2.31

  Signed-off-by: Steve French <stfrench@microsoft.com>
| * | | | | | cifs: use discard iterator to discard unneeded network data more efficiently (David Howells, 2021-02-25; 3 files changed, -3/+22)

  The iterator, ITER_DISCARD, that can only be used in READ mode and just discards any data copied to it, was added to allow a network filesystem to discard any unwanted data sent by a server. Convert cifs_discard_from_socket() to use this.

  Signed-off-by: David Howells <dhowells@redhat.com>
  Signed-off-by: Steve French <stfrench@microsoft.com>
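  A sketch of how a discard iterator is set up; the helper name cifs_discard_from_socket() comes from the commit subject, but the body below is illustrative rather than the exact upstream code. Any bytes "copied" into an ITER_DISCARD iterator are simply dropped, so unwanted network data can be drained without a bounce buffer.

    static int cifs_discard_from_socket(struct TCP_Server_Info *server,
                                        size_t to_read)
    {
        struct msghdr smb_msg = { .msg_flags = MSG_WAITALL };

        /* ITER_DISCARD only supports the READ direction. */
        iov_iter_discard(&smb_msg.msg_iter, READ, to_read);
        return sock_recvmsg(server->ssocket, &smb_msg, MSG_WAITALL);
    }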
| * | | | | | cifs: introduce helper for finding referral server to improve DFS target resolution (Paulo Alcantara, 2021-02-25; 2 files changed, -17/+51)

  Some servers seem to mistakenly report different values for capabilities and share flags, so we can't always rely on those values to decide whether the resolved target can handle any new DFS referrals.

  Add a new helper is_referral_server() to check if all resolved targets can handle new DFS referrals by directly looking at the GET_DFS_REFERRAL.ReferralHeaderFlags value as specified in MS-DFSC 2.2.4 RESP_GET_DFS_REFERRAL in addition to is_tcon_dfs().

  Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
  Cc: stable@vger.kernel.org # 5.11
  Signed-off-by: Steve French <stfrench@microsoft.com>
| * | | | | | cifs: check all path components in resolved dfs target (Paulo Alcantara, 2021-02-25; 1 file changed, -60/+33)

  Handle the case where a resolved target share is like //server/users/dir, and the user "foo" has no read permission to access the parent folder "users" but has access to the final path component "dir".

  is_path_remote() already implements that, so call it directly.

  Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
  Cc: stable@vger.kernel.org # 5.11
  Signed-off-by: Steve French <stfrench@microsoft.com>
| * | | | | | cifs: fix DFS failover (Paulo Alcantara, 2021-02-25; 1 file changed, -64/+59)

  In do_dfs_failover(), the mount_get_conns() function requires the full fs context in order to get a new connection to the server, so clone the original context and change it accordingly when retrying the DFS targets in the referral.

  If failover was successful, then update original context with the new UNC, prefix path and ip address.

  Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
  Cc: stable@vger.kernel.org # 5.11
  Signed-off-by: Steve French <stfrench@microsoft.com>
| * | | | | | cifs: fix nodfs mount option (Paulo Alcantara, 2021-02-25; 1 file changed, -4/+4)

  Skip DFS resolving when mounting with 'nodfs' even if CONFIG_CIFS_DFS_UPCALL is enabled.

  Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
  Cc: stable@vger.kernel.org # 5.11
  Reviewed-by: Shyam Prasad N <sprasad@microsoft.com>
  Signed-off-by: Steve French <stfrench@microsoft.com>
| * | | | | | cifs: fix handling of escaped ',' in the password mount argument (Ronnie Sahlberg, 2021-02-25; 1 file changed, -13/+30)

  Passwords can contain ',' which are also used as the separator between mount options. Mount.cifs will escape all ',' characters as the string ",,".

  Update parsing of the mount options to detect ",," and treat it as a single ',' character.

  Fixes: 24e0a1eff9e2 ("cifs: switch to new mount api")
  Cc: stable@vger.kernel.org # 5.11
  Reported-by: Simon Taylor <simon@simon-taylor.me.uk>
  Tested-by: Simon Taylor <simon@simon-taylor.me.uk>
  Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com>
  Signed-off-by: Steve French <stfrench@microsoft.com>
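  An illustration of the unescaping rule in plain C (the real parser lives in the cifs mount option handling and is structured differently): once the password value has been isolated, every ",," collapses to a single literal ','.

    static void unescape_commas(char *password)
    {
        char *in = password, *out = password;

        while (*in) {
            if (in[0] == ',' && in[1] == ',')
                in++;               /* ",," becomes "," */
            *out++ = *in++;
        }
        *out = '\0';
    }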
| * | | | | | cifs: Add new parameter "acregmax" for distinct file and directory metadata timeout (Steve French, 2021-02-25; 5 files changed, -13/+37)

  The new optional mount parameter "acregmax" allows a different timeout for file metadata ("acdirmax" now allows controlling timeout for directory metadata). Setting "actimeo" still works as before, and changes timeout for both files and directories, but specifying "acregmax" or "acdirmax" allows overriding the default more granularly which can be a big performance benefit on some workloads.

  "acregmax" is already used by NFS as a mount parameter (albeit with a larger default and thus looser caching).

  Suggested-by: Tom Talpey <tom@talpey.com>
  Reviewed-By: Tom Talpey <tom@talpey.com>
  Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com>
  Signed-off-by: Steve French <stfrench@microsoft.com>
| * | | | | | cifs: convert revalidate of directories to using directory metadata cache timeout (Steve French, 2021-02-25; 1 file changed, -6/+17)

  The new optional mount parm, "acdirmax" allows caching the metadata for a directory longer than file metadata, which can be very helpful for performance. Convert cifs_inode_needs_reval to check acdirmax for revalidating directory metadata.

  Signed-off-by: Steve French <stfrench@microsoft.com>
  Reviewed-by: Ronnie Sahlberg <lsahlber@redhat.com>
  Reviewed-By: Tom Talpey <tom@talpey.com>
  Signed-off-by: Steve French <stfrench@microsoft.com>
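  A sketch of the directory-versus-file timeout selection the message describes; the context field names (ctx->acdirmax, ctx->acregmax) are assumptions based on the mount option names, not verified against the cifs source.

    static bool cifs_metadata_expired(struct inode *inode, unsigned long cached_at,
                                      struct smb3_fs_context *ctx)
    {
        /* Directories get their own, typically longer, cache lifetime. */
        unsigned long timeout = S_ISDIR(inode->i_mode) ? ctx->acdirmax
                                                       : ctx->acregmax;

        return time_after(jiffies, cached_at + timeout);
    }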