* Merge tag 'efi-next-for-v5.20' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi (Linus Torvalds, 2022-08-03; 31 files, -775/+486)

    Pull EFI updates from Ard Biesheuvel:

     - Enable mirrored memory for arm64

     - Fix up several abuses of the efivar API

     - Refactor the efivar API in preparation for moving the 'business
       logic' part of it into efivarfs

     - Enable ACPI PRM on arm64

    * tag 'efi-next-for-v5.20' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi: (24 commits)
      ACPI: Move PRM config option under the main ACPI config
      ACPI: Enable Platform Runtime Mechanism(PRM) support on ARM64
      ACPI: PRM: Change handler_addr type to void pointer
      efi: Simplify arch_efi_call_virt() macro
      drivers: fix typo in firmware/efi/memmap.c
      efi: vars: Drop __efivar_entry_iter() helper which is no longer used
      efi: vars: Use locking version to iterate over efivars linked lists
      efi: pstore: Omit efivars caching EFI varstore access layer
      efi: vars: Add thin wrapper around EFI get/set variable interface
      efi: vars: Don't drop lock in the middle of efivar_init()
      pstore: Add priv field to pstore_record for backend specific use
      Input: applespi - avoid efivars API and invoke EFI services directly
      selftests/kexec: remove broken EFI_VARS secure boot fallback check
      brcmfmac: Switch to appropriate helper to load EFI variable contents
      iwlwifi: Switch to proper EFI variable store interface
      media: atomisp_gmin_platform: stop abusing efivar API
      efi: efibc: avoid efivar API for setting variables
      efi: avoid efivars layer when loading SSDTs from variables
      efi: Correct comment on efi_memmap_alloc
      memblock: Disable mirror feature if kernelcore is not specified
      ...
| * ACPI: Move PRM config option under the main ACPI config (Sudeep Holla, 2022-06-30; 1 file, -15/+15)

    Currently the PRM (Platform Runtime Mechanism) config option is listed
    alongside the main ACPI (Advanced Configuration and Power Interface)
    option at the same level. On ARM64 platforms, unlike x86, the ACPI
    option sits at the topmost level of the configuration menu, and it is
    confusing to see the PRM option listed there next to ACPI. Move it
    under the ACPI config option.

    No functional change; this only changes where the option is visible in
    the configuration menu.

    Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
    Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
| * ACPI: Enable Platform Runtime Mechanism(PRM) support on ARM64 (Sudeep Holla, 2022-06-30; 1 file, -1/+1)

    There is interest in making use of PRM (Platform Runtime Mechanism) on
    ARM64 ACPI platforms as well. Allow PRM to be enabled on ARM64; it
    will be enabled by default, as on x86_64.

    Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
    Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
| * ACPI: PRM: Change handler_addr type to void pointer (Sudeep Holla, 2022-06-30; 1 file, -2/+2)

    handler_addr is a virtual address passed to efi_call_virt_pointer().
    While x86 currently type casts it into a pointer in its arch-specific
    arch_efi_call_virt() implementation, ARM64 is restrictive, for good
    reasons. Convert the handler_addr type from u64 to a void pointer.

    Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
    Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
| * efi: Simplify arch_efi_call_virt() macro (Sudeep Holla, 2022-06-28; 6 files, -30/+7)

    Currently, arch_efi_call_virt() assumes that every user has defined a
    type 'efi_##f##_t' in order to use it. Simplify the macro by
    eliminating the explicit need for an efi_##f##_t type for every user.

    Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
    Acked-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
    [ardb: apply Sudeep's ARM fix to i686, Loongarch and RISC-V too]
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
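    For reference, a condensed before/after sketch of the macro,
    reconstructed from the description above rather than copied from the
    diff, so details may differ:

        /* Before: every caller had to provide a matching typedef. */
        #define arch_efi_call_virt(p, f, args...)       \
        ({                                              \
                efi_##f##_t *__f;                       \
                __f = p->f;                             \
                __f(args);                              \
        })

        /* After: call through the member's own, already typed, pointer. */
        #define arch_efi_call_virt(p, f, args...)  ((p)->f(args))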
| * drivers: fix typo in firmware/efi/memmap.c (Zheng Zhi Yuan, 2022-06-28; 1 file, -1/+1)

    Fix a spelling error in firmware/efi/memmap.c.

    Signed-off-by: Zheng Zhi Yuan <kevinjone25@g.ncu.edu.tw>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
| * efi: vars: Drop __efivar_entry_iter() helper which is no longer used (Ard Biesheuvel, 2022-06-24; 2 files, -57/+7)

    __efivar_entry_iter() uses a list iterator in a dubious way, i.e., it
    assumes that the iteration variable always points to an object of the
    appropriate type, even if the list traversal exhausts the list
    completely, in which case it will point somewhere in the vicinity of
    the list's anchor instead. Fortunately, we no longer use this function
    so we can just get rid of it entirely.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
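    The unsafe shape being described, in a minimal illustrative form (the
    function and predicate below are made up for the example, not taken
    from the patch):

        /* After a list_for_each_entry() loop that terminates without
         * 'break', the iteration variable no longer points at a real
         * entry; it points at the list head, offset as if it were one. */
        static struct efivar_entry *find_entry(struct list_head *head)
        {
                struct efivar_entry *entry;

                list_for_each_entry(entry, head, list) {
                        if (entry_matches(entry))  /* hypothetical test */
                                return entry;      /* safe: inside loop */
                }
                return NULL;  /* returning 'entry' here would be the bug */
        }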
| * efi: vars: Use locking version to iterate over efivars linked lists (Ard Biesheuvel, 2022-06-24; 4 files, -21/+7)

    Both efivars and efivarfs use __efivar_entry_iter() to go over the
    linked list that shadows the list of EFI variables held by the
    firmware, but fail to call the begin/end helpers that are documented
    as a prerequisite. So switch to the proper version, which is
    efivar_entry_iter(). Given that in both cases efivar_entry_remove() is
    invoked with the lock already held, don't take the lock there anymore.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
| * efi: pstore: Omit efivars caching EFI varstore access layer (Ard Biesheuvel, 2022-06-24; 5 files, -310/+91)

    Avoid the efivars layer and simply call the newly introduced EFI
    varstore helpers instead. This simplifies the code substantially, and
    also allows us to remove some hacks in the shared efivars layer that
    were added for efi-pstore specifically.

    In order to be able to delete the EFI variable associated with a
    record, store the UTF-16 name of the variable in the pstore record's
    priv field. That way, we don't have to make guesses regarding which
    variable the record may have been loaded from.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
| * efi: vars: Add thin wrapper around EFI get/set variable interface (Ard Biesheuvel, 2022-06-24; 2 files, -10/+164)

    The current efivars layer is a jumble of list iterators, shadow data
    structures and safe variable manipulation helpers that really belong
    in the efivarfs pseudo file system once the obsolete sysfs access
    method to EFI variables is removed.

    So split off a minimal efivar get/set variable API that reuses the
    existing efivars_lock semaphore to mediate access to the various
    runtime services, primarily to ensure that performing a SetVariable()
    on one CPU while another is calling GetNextVariable() in a loop to
    enumerate the contents of the EFI variable store does not result in
    surprises.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
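    The shape of such a thin wrapper, sketched under the assumption that
    it simply takes efivars_lock around the raw runtime call; the names
    and the exact locking in the patch may differ:

        /* Sketch only: mediate variable store access via efivars_lock. */
        efi_status_t efivar_get_variable(efi_char16_t *name,
                                         efi_guid_t *vendor, u32 *attr,
                                         unsigned long *size, void *data)
        {
                efi_status_t status;

                if (down_interruptible(&efivars_lock))
                        return EFI_ABORTED;
                status = __efivars->ops->get_variable(name, vendor, attr,
                                                      size, data);
                up(&efivars_lock);
                return status;
        }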
| * efi: vars: Don't drop lock in the middle of efivar_init() (Ard Biesheuvel, 2022-06-24; 5 files, -24/+17)

    Even though the efivars_lock lock is documented as protecting the
    efivars->ops pointer (among other things), efivar_init() happily
    releases and reacquires the lock for every EFI variable that it
    enumerates. This used to be needed because the lock was originally a
    spinlock, which prevented the callback that is invoked for every
    variable from being able to sleep.

    However, releasing the lock could potentially invalidate the ops
    pointer, but more importantly, it might allow a SetVariable() runtime
    service call to take place concurrently, and the UEFI spec does not
    define how this affects an enumeration that is running in parallel
    using the GetNextVariable() runtime service, which is what
    efivar_init() uses.

    In the meantime, the lock has been converted into a semaphore, and the
    only reason we need to drop the lock is because the efivarfs pseudo
    filesystem driver will otherwise deadlock when it invokes the efivars
    API from the callback to create the efivar_entry items and insert them
    into the linked list. (EFI pstore is affected in a similar way)

    So let's switch to helpers that can be used while the lock is already
    taken. This way, we can hold on to the lock throughout the
    enumeration.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
| * pstore: Add priv field to pstore_record for backend specific use (Ard Biesheuvel, 2022-06-24; 3 files, -0/+6)

    The EFI pstore backend will need to store per-record variable name
    data when we switch away from the efivars layer. Add a priv field to
    struct pstore_record, and document it as holding a backend specific
    pointer that is assumed to be a kmalloc()d buffer, and will be
    kfree()d when the entire record is freed.

    Acked-by: Kees Cook <keescook@chromium.org>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
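    A minimal sketch of the ownership rule this describes; only the priv
    field itself is from the patch, the surrounding teardown code is
    illustrative:

        struct pstore_record {
                /* ... existing members (psi, type, id, buf, ...) ... */
                void *priv;  /* backend-specific; assumed kmalloc()d,
                              * kfree()d together with the record */
        };

        /* Illustrative teardown honoring that contract. */
        static void pstore_record_release(struct pstore_record *record)
        {
                kfree(record->priv);
                kfree(record->buf);
                kfree(record);
        }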
| * Input: applespi - avoid efivars API and invoke EFI services directly (Ard Biesheuvel, 2022-06-24; 1 file, -28/+14)

    This driver abuses the efivar API, by using a few of its helpers on
    entries that were not instantiated by the API itself. This is a
    problem as future cleanup work on efivars is complicated by this. So
    let's just switch to the get/set variable runtime wrappers directly.

    Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
| * selftests/kexec: remove broken EFI_VARS secure boot fallback check (Ard Biesheuvel, 2022-06-24; 1 file, -34/+2)

    Commit b433a52aa28733e0 ("selftests/kexec: update get_secureboot_mode")
    refactored the code that discovers the EFI secure boot mode so it only
    depends on either the efivars pseudo filesystem or the efivars sysfs
    interface, but never both. However, the latter version was not
    implemented correctly, given that the local 'efi_vars' variable is
    never assigned a value. This means the fallback has been dead code
    ever since it was introduced.

    So let's drop the fallback altogether. The sysfs interface has been
    deprecated for ~10 years now, and is only enabled on x86 to begin
    with, so it is time to get rid of it entirely.

    Reviewed-by: Mimi Zohar <zohar@linux.ibm.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
| * brcmfmac: Switch to appropriate helper to load EFI variable contents (Ard Biesheuvel, 2022-06-20; 1 file, -17/+8)

    Avoid abusing the efivar layer by invoking it with locally constructed
    efivar_entry instances, and instead, just call the EFI routines
    directly if available.

    Acked-by: Kalle Valo <kvalo@kernel.org>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
| * iwlwifi: Switch to proper EFI variable store interface (Ard Biesheuvel, 2022-06-20; 1 file, -64/+32)

    Using half of the efivar API with locally baked efivar_entry instances
    is not the right way to use this API, and these uses impede planned
    work on the efivar layer itself. So switch to direct EFI variable
    store accesses: we don't need the efivar layer anyway.

    Acked-by: Kalle Valo <kvalo@kernel.org>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
| * media: atomisp_gmin_platform: stop abusing efivar API (Ard Biesheuvel, 2022-06-20; 1 file, -21/+6)

    As the code comment already suggests, using the efivar API in this way
    is not how it is intended, so let's switch to the right one, which is
    simply to call efi.get_variable() directly after checking whether or
    not the GetVariable() runtime service is supported.

    Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
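    The direct-call pattern being referred to, as a hedged sketch; the
    variable name and GUID below are placeholders, not the driver's
    actual ones:

        static efi_char16_t name[] = L"ExampleVar";   /* placeholder */
        efi_guid_t guid = EFI_GUID(0x12345678, 0x1234, 0x1234,
                                   0x00, 0x01, 0x02, 0x03,
                                   0x04, 0x05, 0x06, 0x07);
        unsigned long size = sizeof(buf);
        efi_status_t status;

        /* Check the runtime service is there before using it. */
        if (!efi_rt_services_supported(EFI_RT_SUPPORTED_GET_VARIABLE))
                return -EOPNOTSUPP;

        status = efi.get_variable(name, &guid, NULL, &size, buf);
        if (status != EFI_SUCCESS)
                return -ENOENT;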
| * efi: efibc: avoid efivar API for setting variables (Ard Biesheuvel, 2022-06-20; 2 files, -47/+30)

    Avoid abusing the efivar API by passing locally instantiated
    efivar_entry structs into efivar_entry_set_safe(); that is not how the
    API is intended to be used. Instead, just call efi.set_variable()
    directly.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
| * efi: avoid efivars layer when loading SSDTs from variables (Ard Biesheuvel, 2022-06-20; 1 file, -62/+41)

    The efivars intermediate variable access layer provides an abstraction
    that permits the EFI variable store to be replaced by something else
    that implements a compatible interface, and it caches all variables in
    the variable store for fast access via the efivarfs pseudo-filesystem.

    The SSDT override feature takes advantage of neither: it is only used
    when the generic EFI implementation of efivars is in place, it
    traverses all variables only once to find the ones it is interested
    in, and it frees all the data structures that the efivars layer keeps
    right afterwards.

    So in this case, let's just call EFI's code directly, using the
    function pointers in struct efi.

    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
| * efi: Correct comment on efi_memmap_alloc (Liu Zixian, 2022-06-15; 1 file, -2/+1)

    Returning zero means success now.

    Signed-off-by: Liu Zixian <liuzixian4@huawei.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
| * memblock: Disable mirror feature if kernelcore is not specified (Ma Wupeng, 2022-06-15; 3 files, -1/+6)

    If the system has some mirrored memory and the mirror feature is not
    specified in the boot parameters, the basic mirror feature will be
    enabled anyway, which leads to the following situations:

     - memblock memory allocation prefers mirrored regions. This may have
       some unexpected influence on NUMA affinity.

     - contiguous memory will be split into several parts if parts of it
       are mirrored memory, via memblock_mark_mirror().

    To fix this, check the mirrored_kernelcore variable in
    memblock_mark_mirror(): mark mirrored memory with the MEMBLOCK_MIRROR
    flag iff kernelcore=mirror is present in the kernel parameters.

    Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
    Acked-by: Ard Biesheuvel <ardb@kernel.org>
    Link: https://lore.kernel.org/r/20220614092156.1972846-6-mawupeng1@huawei.com
    Acked-by: Mike Rapoport <rppt@linux.ibm.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
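    A condensed sketch of the described check (mm/memblock.c); the real
    function body may differ in detail:

        static int __init_memblock memblock_mark_mirror(phys_addr_t base,
                                                        phys_addr_t size)
        {
                /* Honor mirror info only when kernelcore=mirror was given. */
                if (!mirrored_kernelcore)
                        return 0;

                system_has_some_mirror = true;
                return memblock_setclr_flag(base, size, 1, MEMBLOCK_MIRROR);
        }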
| * arm64: mm: Only remove nomap flag for initrd (Ma Wupeng, 2022-06-15; 1 file, -1/+1)

    Commit 177e15f0c144 ("arm64: add the initrd region to the linear
    mapping explicitly") removed all flags from the memory used by the
    initrd. This was fine as long as MEMBLOCK_MIRROR was not used on
    arm64. With the mirror feature now introduced to arm64, however, it
    would clear the mirrored flag on the initrd, which leads to an error
    being logged by find_zone_movable_pfns_for_nodes() if the lower 4G
    range contains some non-mirrored memory.

    To solve this problem, remove only the MEMBLOCK_NOMAP flag, via
    memblock_clear_nomap().

    Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
    Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
    Link: https://lore.kernel.org/r/20220614092156.1972846-5-mawupeng1@huawei.com
    Acked-by: Mike Rapoport <rppt@linux.ibm.com>
    Acked-by: Catalin Marinas <catalin.marinas@arm.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
| * mm: Limit warning message in vmemmap_verify() to once (Ma Wupeng, 2022-06-15; 1 file, -1/+1)

    On a system that has only limited mirrored memory, or NUMA nodes
    without mirrored memory, the per-node vmemmap page_structs prefer to
    be allocated from a mirrored region, which leads to vmemmap_verify()
    in vmemmap_populate_basepages() reporting lots of warning messages.

    Print the "potential offnode page_structs" warning only once, to avoid
    a very long splat during bootup.

    Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
    Acked-by: David Hildenbrand <david@redhat.com>
    Link: https://lore.kernel.org/r/20220614092156.1972846-4-mawupeng1@huawei.com
    Acked-by: Mike Rapoport <rppt@linux.ibm.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
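    Sketched against the warning site in mm/sparse-vmemmap.c,
    reconstructed from the description rather than copied from the diff:

        /* vmemmap_verify(): warn once instead of once per page block. */
        void __meminit vmemmap_verify(pte_t *pte, int node,
                                      unsigned long start, unsigned long end)
        {
                unsigned long pfn = pte_pfn(*pte);
                int actual_node = early_pfn_to_nid(pfn);

                if (node_distance(actual_node, node) > LOCAL_DISTANCE)
                        pr_warn_once("[%lx-%lx] potential offnode page_structs\n",
                                     start, end - 1);
        }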
| * mm: Ratelimited mirrored memory related warning messages (Ma Wupeng, 2022-06-15; 1 file, -2/+2)

    If the system has mirrored memory, memblock tries to allocate mirrored
    memory first and falls back to non-mirrored memory when that fails.
    But with limited mirrored memory, or NUMA nodes without mirrored
    memory, lots of warning messages about memblock allocation will occur.

    Ratelimit the warning message to avoid a very long print during
    bootup.

    Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
    Reviewed-by: David Hildenbrand <david@redhat.com>
    Acked-by: Mike Rapoport <rppt@linux.ibm.com>
    Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
    Reviewed-by: Kefeng Wang <wangkefeng.wang@huawei.com>
    Link: https://lore.kernel.org/r/20220614092156.1972846-3-mawupeng1@huawei.com
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
| * efi: Make code to find mirrored memory ranges generic (Ma Wupeng, 2022-06-15; 5 files, -27/+27)

    Commit b05b9f5f9dcf ("x86, mirror: x86 enabling - find mirrored memory
    ranges") introduced the efi_find_mirror() function on x86. Make it
    public so the API can be reused.

    Arm64 can support mirrored memory too, so call efi_find_mirror() from
    efi_init() to add this support for arm64. Since efi_init() is shared
    by ARM, arm64 and riscv, this brings mirror memory support to all of
    these architectures, but the support has only been tested on arm64.

    Signed-off-by: Ma Wupeng <mawupeng1@huawei.com>
    Link: https://lore.kernel.org/r/20220614092156.1972846-2-mawupeng1@huawei.com
    [ardb: fix subject to better reflect the payload]
    Acked-by: Mike Rapoport <rppt@linux.ibm.com>
    Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
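    For context, a condensed sketch of the function being made generic,
    based on its x86 form; details may vary from the final shared version:

        void __init efi_find_mirror(void)
        {
                efi_memory_desc_t *md;
                u64 mirror_size = 0, total_size = 0;

                if (!efi_enabled(EFI_MEMMAP))
                        return;

                for_each_efi_memory_desc(md) {
                        unsigned long long start = md->phys_addr;
                        unsigned long long size =
                                md->num_pages << EFI_PAGE_SHIFT;

                        total_size += size;
                        if (md->attribute & EFI_MEMORY_MORE_RELIABLE) {
                                memblock_mark_mirror(start, size);
                                mirror_size += size;
                        }
                }

                if (mirror_size)
                        pr_info("Memory: %lldM/%lldM mirrored memory\n",
                                mirror_size >> 20, total_size >> 20);
        }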
* | Merge tag 'pull-work.9p' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs (Linus Torvalds, 2022-08-03; 2 files, -85/+35)

    Pull 9p iov_iter fix from Al Viro:
     "net/9p abuses iov_iter primitives - it attempts to copy _from_ a
      destination-only iov_iter when it handles Rerror arriving in reply
      to a zero-copy request. Not hard to fix, fortunately.

      This is a prereq for the iov_iter_get_pages() work in the second
      part of the iov_iter series, which ended up in a separate branch"

    * tag 'pull-work.9p' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
      9p: handling Rerror without copy_from_iter_full()
| * | 9p: handling Rerror without copy_from_iter_full() (Al Viro, 2022-06-09; 2 files, -85/+35)

    p9_client_zc_rpc()/p9_check_zc_errors() are playing fast and loose
    with copy_from_iter_full().

    Reading from a file is done by sending a Tread request. The response
    consists of a fixed-sized header (including the amount of data
    actually read) followed by the data itself. For the zero-copy case we
    arrange things so that the first 11 bytes of the reply go into the
    fixed-sized buffer, with the rest going straight into the pages we
    want to read into. What makes things inconvenient is that the sglist
    describing what should go where has to be set *before* the reply
    arrives.

    As a result, things get interesting if the reply is an error. On
    success we get

        size[4] Rread tag[2] count[4] data[count]

    For an error the layout varies depending upon the protocol variant -
    in original 9P and 9P2000 it's

        size[4] Rerror tag[2] len[2] error[len]

    in 9P2000.U

        size[4] Rerror tag[2] len[2] error[len] errno[4]

    and in 9P2000.L

        size[4] Rlerror tag[2] errno[4]

    The last case is nice and simple - an 11-byte response that fits into
    the fixed-sized buffer we had hoped to get an Rread into. In the other
    two, though, a variable-length string spills into the pages we had
    prepared for the data to be read. Had that been in the fixed-sized
    buffer (which is actually 4K), we would have dealt with it the same
    way we handle the non-zerocopy case. For zerocopy it doesn't end up
    there, however, so we need to copy it from those pages.

    The trouble is, by the time we get around to that, the references to
    the pages in question have already been dropped. As a result,
    p9_check_zc_errors() tries to get the data using
    copy_from_iter_full(). Unfortunately, the iov_iter it is trying to
    read from might *NOT* be capable of that. It is, after all, a data
    destination, not a data source. In particular, if it is an ITER_PIPE
    one, copy_from_iter_full() will simply fail.

    In ->zc_request() itself we do still have those pages, and dealing
    with the problem there is a simple matter of memcpy_from_page() into
    the fixed-sized buffer. Moreover, it isn't hard to recognize the
    (rare) case when such copying is needed. That way we get rid of
    p9_check_zc_errors() entirely - p9_check_errors() can be used instead
    for both the zero-copy and non-zero-copy cases.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* | Merge tag 'pull-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs (Linus Torvalds, 2022-08-03; 2 files, -4/+20)

    Pull copy_to_iter_mc fix from Al Viro:
     "Backportable fix for copy_to_iter_mc() - the second part of the
      iov_iter work will pretty much overwrite this, but would be much
      harder to backport"

    * tag 'pull-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
      fix short copy handling in copy_mc_pipe_to_iter()
| * | fix short copy handling in copy_mc_pipe_to_iter() (Al Viro, 2022-06-28; 2 files, -4/+20)

    Unlike other copying operations on ITER_PIPE, copy_mc_to_iter() can
    result in a short copy. In that case we need to trim the unused
    buffers, as well as the length of the partially filled one - it's not
    enough to set ->head, ->iov_offset and ->count to reflect how much we
    had copied. Not hard to fix, fortunately...

    I'd put a helper (pipe_discard_from(pipe, head)) into pipe_fs_i.h,
    rather than iov_iter.c - it has nothing to do with iov_iter and having
    it there will allow us to avoid an ugly kludge in fs/splice.c. We
    could put it into lib/iov_iter.c for now and move it later, but I
    don't see the point going that way...

    Cc: stable@kernel.org # 4.19+
    Fixes: ca146f6f091e ("lib/iov_iter: Fix pipe handling in _copy_to_iter_mcsafe()")
    Reviewed-by: Jeff Layton <jlayton@kernel.org>
    Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
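    The helper being proposed, sketched from its description; the final
    version in include/linux/pipe_fs_i.h may differ:

        /* Drop every pipe buffer appended after 'old_head'. */
        static inline void pipe_discard_from(struct pipe_inode_info *pipe,
                                             unsigned int old_head)
        {
                unsigned int mask = pipe->ring_size - 1;

                while (pipe->head > old_head)
                        pipe_buf_release(pipe,
                                         &pipe->bufs[--pipe->head & mask]);
        }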
* | | Merge tag 'pull-work.iov_iter-base' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs (Linus Torvalds, 2022-08-03; 20 files, -296/+113)

    Pull vfs iov_iter updates from Al Viro:
     "Part 1 - isolated cleanups and optimizations.

      One of the goals is to reduce the overhead of using ->read_iter()
      and ->write_iter() instead of ->read()/->write().

      new_sync_{read,write}() has a surprising amount of overhead, in
      particular inside iocb_flags(). That's why the beginning of the
      series is in this pile; it's not directly iov_iter-related, but it's
      a part of the same work..."

    * tag 'pull-work.iov_iter-base' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
      first_iovec_segment(): just return address
      iov_iter: massage calling conventions for first_{iovec,bvec}_segment()
      iov_iter: first_{iovec,bvec}_segment() - simplify a bit
      iov_iter: lift dealing with maxpages out of first_{iovec,bvec}_segment()
      iov_iter_get_pages{,_alloc}(): cap the maxsize with MAX_RW_COUNT
      iov_iter_bvec_advance(): don't bother with bvec_iter
      copy_page_{to,from}_iter(): switch iovec variants to generic
      keep iocb_flags() result cached in struct file
      iocb: delay evaluation of IS_SYNC(...) until we want to check IOCB_DSYNC
      struct file: use anonymous union member for rcuhead and llist
      btrfs: use IOMAP_DIO_NOSYNC
      teach iomap_dio_rw() to suppress dsync
      No need of likely/unlikely on calls of check_copy_size()
| * | | first_iovec_segment(): just return address (Al Viro, 2022-07-06; 1 file, -7/+8)

    ... and calculate the offset in the caller

    Reviewed-by: Jeff Layton <jlayton@kernel.org>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | iov_iter: massage calling conventions for first_{iovec,bvec}_segment() (Al Viro, 2022-07-06; 1 file, -24/+18)

    Pass maxsize by reference, return length via the same.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | iov_iter: first_{iovec,bvec}_segment() - simplify a bit (Al Viro, 2022-07-06; 1 file, -16/+16)

    We return length + offset in page via *size. Don't bother - the caller
    can do that arithmetic just as well; just report the length to it.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | iov_iter: lift dealing with maxpages out of first_{iovec,bvec}_segment() (Al Viro, 2022-07-06; 1 file, -10/+10)

    The caller can do that just as easily.

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | iov_iter_get_pages{,_alloc}(): cap the maxsize with MAX_RW_COUNT (Al Viro, 2022-07-06; 1 file, -0/+4)

    All callers can and should handle iov_iter_get_pages() returning fewer
    pages than requested. All in-kernel ones do. And it makes the
    arithmetical overflow analysis much simpler...

    Reviewed-by: Jeff Layton <jlayton@kernel.org>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
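    The gist of the cap, as a fragment sketched at the top of the common
    path in lib/iov_iter.c; the exact placement is an assumption:

        /* Never hand out more than a single read/write syscall could
         * transfer; later size arithmetic then cannot overflow. */
        if (maxsize > i->count)
                maxsize = i->count;
        if (maxsize > MAX_RW_COUNT)
                maxsize = MAX_RW_COUNT;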
| * | | iov_iter_bvec_advance(): don't bother with bvec_iter (Al Viro, 2022-07-06; 1 file, -9/+14)

    do what we do for iovec/kvec; that ends up generating better code,
    AFAICS.

    Reviewed-by: Jeff Layton <jlayton@kernel.org>
    Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | copy_page_{to,from}_iter(): switch iovec variants to generic (Al Viro, 2022-06-28; 1 file, -187/+4)

    We can do copyin/copyout under kmap_local_page(); it shouldn't
    overflow the kmap stack - the maximal footprint increases only by one
    here.

    Reviewed-by: Jeff Layton <jlayton@kernel.org>
    Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | keep iocb_flags() result cached in struct file (Al Viro, 2022-06-10; 7 files, -6/+8)

    * calculate at the time we set FMODE_OPENED (do_dentry_open() for
      normal opens, alloc_file() for pipe()/socket()/etc.)
    * update when handling F_SETFL
    * keep in a new field - file->f_iocb_flags; since that thing is needed
      only before the refcount reaches zero, we can put it into the same
      anon union where ->f_rcuhead and ->f_llist live - those are used
      only after the refcount reaches zero.

    Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
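    The resulting layout, sketched from the description above
    (include/linux/fs.h; member order is an assumption):

        struct file {
                union {
                        /* fput() path: used only once refcount hits 0. */
                        struct llist_node   f_llist;
                        struct rcu_head     f_rcuhead;
                        /* live file: used only before refcount hits 0. */
                        unsigned int        f_iocb_flags;
                };
                /* ... remaining members unchanged ... */
        };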
| * | | iocb: delay evaluation of IS_SYNC(...) until we want to check IOCB_DSYNC (Al Viro, 2022-06-10; 7 files, -9/+14)

    New helper to be used instead of direct checks for IOCB_DSYNC:
    iocb_is_dsync(iocb). Checks converted, which allows us to avoid the
    IS_SYNC(iocb->ki_filp->f_mapping->host) part (4 cache lines) in
    iocb_flags() - it's checked in iocb_is_dsync() instead.

    Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
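    Reconstructed from the description above; the actual helper in
    include/linux/fs.h should look very close to this:

        static inline bool iocb_is_dsync(const struct kiocb *iocb)
        {
                return (iocb->ki_flags & IOCB_DSYNC) ||
                       IS_SYNC(iocb->ki_filp->f_mapping->host);
        }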
| * | | struct file: use anonymous union member for rcuhead and llist (Al Viro, 2022-06-10; 2 files, -11/+11)

    Once upon a time we couldn't afford anon unions; these days the
    minimal gcc version has been raised enough to take care of that.

    Reviewed-by: Jan Kara <jack@suse.cz>
    Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | btrfs: use IOMAP_DIO_NOSYNC (Al Viro, 2022-06-10; 2 files, -18/+2)

    ... instead of messing with iocb flags

    Suggested-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | teach iomap_dio_rw() to suppress dsync (Al Viro, 2022-06-10; 2 files, -9/+17)

    New flag, equivalent to removal of IOCB_DSYNC from iocb flags. This
    mimics what btrfs is doing (and that's what btrfs will switch to).
    However, I'm not at all sure that we want to suppress REQ_FUA for
    those - all the btrfs hack really cares about is suppression of
    generic_write_sync(). For now let's keep the existing behaviour, but I
    really want to hear more detailed arguments pro or contra.

    [folded brain fix from willy]

    Suggested-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | No need of likely/unlikely on calls of check_copy_size() (Al Viro, 2022-06-07; 4 files, -14/+11)

    It's inline, and the unlikely() inside of it (including the implicit
    one in WARN_ON_ONCE()) suffices to convince the compiler that getting
    false from check_copy_size() is unlikely.

    Spotted-by: Jens Axboe <axboe@kernel.dk>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* | | Merge tag 'pull-work.dcache' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs (Linus Torvalds, 2022-08-03; 2 files, -17/+46)

    Pull vfs dcache updates from Al Viro:
     "The main part here is making parallel lookups safe for RT - making
      sure preemption is disabled in start_dir_add()/end_dir_add()
      sections (on non-RT it's automatic, on RT it needs to be done
      explicitly) and moving the wakeups from __d_lookup_done(), inside
      such a section, to the end of it.

      Wakeups can be safely delayed for as long as ->d_lock on the
      in-lookup dentry is held; proving that caught a bug in d_add_ci()
      that allows memory corruption when a sufficiently bogus ntfs (or
      case-insensitive xfs) image is mounted. Easily fixed, fortunately"

    * tag 'pull-work.dcache' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
      fs/dcache: Move wakeup out of i_seq_dir write held region.
      fs/dcache: Move the wakeup from __d_lookup_done() to the caller.
      fs/dcache: Disable preemption on i_dir_seq write side on PREEMPT_RT
      d_add_ci(): make sure we don't miss d_lookup_done()
| * | | fs/dcache: Move wakeup out of i_seq_dir write held region. (Sebastian Andrzej Siewior, 2022-07-30; 1 file, -5/+5)

    __d_add() and __d_move() wake up waiters on dentry::d_wait from within
    the i_dir_seq write held region. This violates the PREEMPT_RT
    constraints, as the wake up acquires wait_queue_head::lock, which is a
    "sleeping" spinlock on RT.

    There is no requirement to do so. __d_lookup_unhash() has cleared
    DCACHE_PAR_LOOKUP and dentry::d_wait and returned the now unreachable
    wait queue head pointer to the caller, so the actual wake up can be
    postponed until the i_dir_seq write side critical section is left. The
    only requirement is that dentry::lock is held across the whole
    sequence, including the wake up. The previous commit includes an
    analysis why this is considered safe.

    Move the wake up past end_dir_add(), which leaves the i_dir_seq write
    side critical section and enables preemption. For non-RT kernels there
    is no difference, because preemption is still disabled due to
    dentry::lock being held; but it shortens the time between wake up and
    unlocking dentry::lock, which reduces the contention for the woken up
    waiter.

    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
| * | | fs/dcache: Move the wakeup from __d_lookup_done() to the caller. (Sebastian Andrzej Siewior, 2022-07-30; 2 files, -13/+31)

    __d_lookup_done() wakes waiters on dentry->d_wait. On PREEMPT_RT we
    are not allowed to do that with preemption disabled, since the wakeup
    acquires wait_queue_head::lock, which is a "sleeping" spinlock on RT.

    Calling it under dentry->d_lock is not a problem, since that is also a
    "sleeping" spinlock on the same configs. Unfortunately, two of its
    callers (__d_add() and __d_move()) are holding more than just ->d_lock
    and that needs to be dealt with.

    The key observation is that the wakeup can be moved to any point
    before dropping ->d_lock.

    As a first step to solve this, move the wake up outside of the
    hlist_bl_lock() held section. This is safe because:

    Waiters get inserted into ->d_wait only after they'd taken ->d_lock
    and observed DCACHE_PAR_LOOKUP in flags. As long as they are woken up
    (and evicted from the queue) between the moment __d_lookup_done() has
    removed DCACHE_PAR_LOOKUP and dropping ->d_lock, we are safe, since
    the waitqueue ->d_wait points to won't get destroyed without having
    __d_lookup_done(dentry) called (under ->d_lock).

    ->d_wait is set only by d_alloc_parallel(), and only in the case when
    it returns a freshly allocated in-lookup dentry. Whenever that
    happens, we are guaranteed that __d_lookup_done() will be called for
    the resulting dentry (under ->d_lock) before the wq in question gets
    destroyed.

    With two exceptions, wq lives in the call frame of the caller of
    d_alloc_parallel() and we have an explicit d_lookup_done() on the
    resulting in-lookup dentry before we leave that frame.

    One of those exceptions is nfs_call_unlink(), where wq is embedded
    into (dynamically allocated) struct nfs_unlinkdata. It is destroyed in
    nfs_async_unlink_release() after an explicit d_lookup_done() on the
    dentry the wq went into.

    The remaining exception is d_add_ci(). There wq is what we'd found in
    ->d_wait of the d_add_ci() argument. Callers of d_add_ci() are two
    instances of ->d_lookup() and they must have been given an in-lookup
    dentry. Which means that they'd been called by __lookup_slow() or
    lookup_open(), with wq in the call frame of one of those.

    The result of d_alloc_parallel() in d_add_ci() is fed to
    d_splice_alias(), which either returns non-NULL (and d_add_ci() does
    d_lookup_done()) or feeds the dentry to __d_add(), which will do
    __d_lookup_done() under ->d_lock. That concludes the analysis.

    Let __d_lookup_unhash():

      1) Lock the lookup hash and clear DCACHE_PAR_LOOKUP
      2) Unhash the dentry
      3) Retrieve and clear dentry::d_wait
      4) Unlock the hash and return the retrieved waitqueue head pointer
      5) Let the caller handle the wake up.
      6) Rename __d_lookup_done() to __d_lookup_unhash_wake() to enforce
         build failures for OOT code that used __d_lookup_done() and is
         not aware of the new return value.

    A sketch of the resulting helper follows this entry.

    This does not yet solve the PREEMPT_RT problem completely, because
    preemption is still disabled due to i_dir_seq being held for write.
    This will be addressed in subsequent steps.

    An alternative solution would be to switch the waitqueue to a simple
    waitqueue, but aside from Linus not being a fan of them, moving the
    wake up closer to the place where dentry::lock is unlocked reduces
    lock contention time for the woken up waiter.

    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Link: https://lkml.kernel.org/r/20220613140712.77932-3-bigeasy@linutronix.de
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
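    A condensed sketch of the split, reconstructed from points 1)-5) above
    (fs/dcache.c); details such as the lockdep assertion are assumptions:

        static wait_queue_head_t *__d_lookup_unhash(struct dentry *dentry)
        {
                wait_queue_head_t *d_wait;
                struct hlist_bl_head *b;

                lockdep_assert_held(&dentry->d_lock);

                b = in_lookup_hash(dentry->d_parent, dentry->d_name.hash);
                hlist_bl_lock(b);
                dentry->d_flags &= ~DCACHE_PAR_LOOKUP;
                __hlist_bl_del(&dentry->d_u.d_in_lookup_hash);
                d_wait = dentry->d_wait;
                dentry->d_wait = NULL;
                hlist_bl_unlock(b);
                return d_wait;  /* caller does wake_up_all() later,
                                 * still under ->d_lock */
        }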
| * | | fs/dcache: Disable preemption on i_dir_seq write side on PREEMPT_RT (Sebastian Andrzej Siewior, 2022-07-30; 1 file, -1/+11)

    i_dir_seq is a sequence counter with a lock which is represented by
    the lowest bit. The writer atomically updates the counter, which
    ensures that it can be modified by only one writer at a time. This
    requires preemption to be disabled across the write side critical
    section.

    On !PREEMPT_RT kernels this is implicit by the caller acquiring
    dentry::lock. On PREEMPT_RT kernels spin_lock() does not disable
    preemption, which means that a preempting writer or reader would live
    lock. It's therefore required to disable preemption explicitly.

    An alternative solution would be to replace i_dir_seq with a seqlock_t
    for PREEMPT_RT, but that comes with its own set of problems due to
    arbitrary lock nesting. A pure sequence count with an associated
    spinlock is not possible because the locks held by the caller are not
    necessarily related. As the critical section is small, disabling
    preemption is a sensible solution.

    Reported-by: Oleg.Karfich@wago.com
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Link: https://lkml.kernel.org/r/20220613140712.77932-2-bigeasy@linutronix.de
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
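    A sketch of what the explicit disabling could look like, reconstructed
    from the description; the real fs/dcache.c helpers may differ:

        static inline unsigned start_dir_add(struct inode *dir)
        {
                /* dentry::d_lock is held; on !PREEMPT_RT that already
                 * disables preemption, on PREEMPT_RT it does not. */
                if (IS_ENABLED(CONFIG_PREEMPT_RT))
                        preempt_disable();
                for (;;) {
                        unsigned n = dir->i_dir_seq;

                        /* Odd value == write in progress (the "lock" bit). */
                        if (!(n & 1) &&
                            cmpxchg(&dir->i_dir_seq, n, n + 1) == n)
                                return n;
                        cpu_relax();
                }
        }

        static inline void end_dir_add(struct inode *dir, unsigned int n)
        {
                smp_store_release(&dir->i_dir_seq, n + 2);
                if (IS_ENABLED(CONFIG_PREEMPT_RT))
                        preempt_enable();
        }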
| * | | d_add_ci(): make sure we don't miss d_lookup_done() (Al Viro, 2022-07-30; 1 file, -0/+1)

    All callers of d_alloc_parallel() must make sure that the resulting
    in-lookup dentry (if any) will encounter __d_lookup_done() before the
    final dput(). d_add_ci() might end up creating in-lookup dentries;
    they are fed to d_splice_alias(), which will normally make sure they
    meet __d_lookup_done(). However, it is possible to end up with
    d_splice_alias() failing with ERR_PTR(-ELOOP) without having done so.
    It takes a corrupted ntfs or case-insensitive xfs image, but neither
    should end up with memory corruption...

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* | | Merge tag 'pull-work.lseek' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs (Linus Torvalds, 2022-08-03; 12 files, -31/+26)

    Pull vfs lseek updates from Al Viro:
     "Jason's lseek series.

      Saner handling of 'lseek should fail with ESPIPE' - this gets rid of
      the magical no_llseek thing and makes checks consistent. In
      particular, the ad-hoc "can we do splice via internal pipe" checks
      got saner (and somewhat more permissive, which is what Jason had
      been after, AFAICT)"

    * tag 'pull-work.lseek' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
      fs: remove no_llseek
      fs: check FMODE_LSEEK to control internal pipe splicing
      vfio: do not set FMODE_LSEEK flag
      dma-buf: remove useless FMODE_LSEEK flag
      fs: do not compare against ->llseek
      fs: clear or set FMODE_LSEEK based on llseek function
| * | | fs: remove no_llseek (Jason A. Donenfeld, 2022-07-16; 7 files, -14/+12)

    Now that all callers of ->llseek are going through vfs_llseek(), we
    don't gain anything by keeping no_llseek around. Nothing actually
    calls it, and setting ->llseek to no_llseek is completely equivalent
    to leaving it NULL.

    Longer term (== by the end of the merge window) we want to remove all
    such initializations. To simplify the merge window, this commit does
    *not* touch initializers - it only defines no_llseek as NULL (and
    simplifies the tests on file opening).

    At -rc1 we'll need to do a mechanical removal of no_llseek -

        git grep -l -w no_llseek | grep -v porting.rst | while read i; do
                sed -i '/\<no_llseek\>/d' $i
        done

    would do it.

    Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>