path: root/fs/pstore
Commit message | Author | Age | Files | Lines
* pstore/zone: Don't clear memory twice | Christophe JAILLET | 2024-03-09 | 1 | -1/+0

  There is no need to call memset(..., 0, ...) on memory allocated by kcalloc(). It is already zeroed. Remove the redundant call.

  Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Link: https://lore.kernel.org/r/fa2597400051c18c6ca11187b0e4b906729991b2.1709972649.git.christophe.jaillet@wanadoo.fr
  Signed-off-by: Kees Cook <keescook@chromium.org>

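  A minimal sketch of the pattern described above (the helper name is illustrative, not the actual fs/pstore/zone.c code): kcalloc() already returns zeroed memory, so a follow-up memset() is redundant.

    #include <linux/slab.h>

    /* kcalloc() zeroes the allocation, so no memset(bufs, 0, ...) is needed. */
    static void **alloc_zone_ptrs(size_t n)
    {
        void **bufs = kcalloc(n, sizeof(*bufs), GFP_KERNEL);

        if (!bufs)
            return NULL;    /* allocation failed */
        return bufs;        /* already all-zero */
    }
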
* pstore/zone: Add a null pointer check to the psz_kmsg_read | Kunwu Chan | 2024-02-22 | 1 | -0/+2

  kasprintf() returns a pointer to dynamically allocated memory which can be NULL upon failure. Ensure the allocation was successful by checking the pointer validity.

  Signed-off-by: Kunwu Chan <chentao@kylinos.cn>
  Link: https://lore.kernel.org/r/20240118100206.213928-1-chentao@kylinos.cn
  Signed-off-by: Kees Cook <keescook@chromium.org>

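  A sketch of the check being described, with an illustrative caller rather than the actual psz_kmsg_read() code: kasprintf() allocates memory, so its result must be tested before use.

    #include <linux/kernel.h>
    #include <linux/slab.h>

    /* Build a record name; returns NULL if the allocation fails. */
    static char *make_kmsg_name(unsigned int id)
    {
        char *name = kasprintf(GFP_KERNEL, "dmesg-%u", id);

        if (!name)
            return NULL;    /* propagate the failure instead of oopsing */
        return name;
    }
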
* pstore/ram: Register to module device table | Nícolas F. R. A. Prado | 2024-02-22 | 1 | -0/+1

  Register the compatible for this module on the module device table so it can be automatically loaded when a matching DT node is present, allowing logging of panics and oopses without any intervention.

  Signed-off-by: Nícolas F. R. A. Prado <nfraprado@collabora.com>
  Reviewed-by: Kees Cook <keescook@chromium.org>
  Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
  Link: https://lore.kernel.org/r/20240110210600.787703-2-nfraprado@collabora.com
  Signed-off-by: Kees Cook <keescook@chromium.org>

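  A sketch of the registration pattern (the table name is illustrative; "ramoops" is the documented compatible string for this backend): exporting the OF match table lets modprobe autoload the module when a matching device-tree node appears.

    #include <linux/mod_devicetable.h>
    #include <linux/module.h>
    #include <linux/of.h>

    static const struct of_device_id my_ramoops_of_match[] = {
        { .compatible = "ramoops" },
        {}  /* sentinel */
    };
    /* Emit the alias information used for automatic module loading. */
    MODULE_DEVICE_TABLE(of, my_ramoops_of_match);
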
* pstore: inode: Only d_invalidate() is needed | Kees Cook | 2024-02-22 | 1 | -7/+3

  Unloading a modular pstore backend with records in pstorefs would trigger the dput() double-drop warning:

    WARNING: CPU: 0 PID: 2569 at fs/dcache.c:762 dput.part.0+0x3f3/0x410

  Using the combo of d_drop()/dput() (as mentioned in Documentation/filesystems/vfs.rst) isn't the right approach here, and leads to the reference counting problem seen above. Use d_invalidate() and update the code to not bother checking for error codes that can never happen.

  Suggested-by: Alexander Viro <viro@zeniv.linux.org.uk>
  Fixes: 609e28bb139e ("pstore: Remove filesystem records when backend is unregistered")
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Cc: "Guilherme G. Piccoli" <gpiccoli@igalia.com>
  Cc: Tony Luck <tony.luck@intel.com>
  Cc: linux-hardening@vger.kernel.org

* pstore: inode: Use cleanup.h for struct pstore_private | Kees Cook | 2023-12-08 | 1 | -9/+4

  Simplify error path when "private" needs to be freed.

  Cc: Guilherme G. Piccoli <gpiccoli@igalia.com>
  Cc: Tony Luck <tony.luck@intel.com>
  Cc: <linux-hardening@vger.kernel.org>
  Link: https://lore.kernel.org/r/20231205182622.1329923-4-keescook@chromium.org
  Signed-off-by: Kees Cook <keescook@chromium.org>

* pstore: inode: Use __free(pstore_iput) for inode allocations | Kees Cook | 2023-12-08 | 1 | -5/+5

  Simplify error path for failures where "inode" needs to be freed.

  Cc: Guilherme G. Piccoli <gpiccoli@igalia.com>
  Cc: Tony Luck <tony.luck@intel.com>
  Cc: <linux-hardening@vger.kernel.org>
  Link: https://lore.kernel.org/r/20231205182622.1329923-3-keescook@chromium.org
  Signed-off-by: Kees Cook <keescook@chromium.org>

* pstore: inode: Convert mutex usage to guard(mutex) | Kees Cook | 2023-12-08 | 1 | -45/+31

  Replace open-coded mutex handling with cleanup.h guard(mutex) and scoped_guard(mutex, ...).

  Cc: Guilherme G. Piccoli <gpiccoli@igalia.com>
  Cc: Tony Luck <tony.luck@intel.com>
  Cc: <linux-hardening@vger.kernel.org>
  Link: https://lore.kernel.org/r/20231205182622.1329923-2-keescook@chromium.org
  Signed-off-by: Kees Cook <keescook@chromium.org>

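  A minimal sketch of the two cleanup.h helpers named above, using an illustrative lock and counter rather than the real pstorefs code: the guard releases the mutex automatically on every exit path.

    #include <linux/cleanup.h>
    #include <linux/mutex.h>

    static DEFINE_MUTEX(records_lock);
    static int record_count;

    static int add_record(void)
    {
        guard(mutex)(&records_lock);    /* unlocked automatically on return */
        record_count++;
        return 0;
    }

    static void bump_record(void)
    {
        scoped_guard(mutex, &records_lock) {
            /* the lock is held only within this block */
            record_count++;
        }
    }
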
* pstore: inode: Convert kfree() usage to __free(kfree) | Kees Cook | 2023-12-08 | 1 | -6/+4

  Mostly as an example to myself, replace a simple allocation pattern with the automatic kfree cleanup features now exposed by cleanup.h.

  Cc: Guilherme G. Piccoli <gpiccoli@igalia.com>
  Cc: Tony Luck <tony.luck@intel.com>
  Cc: <linux-hardening@vger.kernel.org>
  Link: https://lore.kernel.org/r/20231205182622.1329923-1-keescook@chromium.org
  Signed-off-by: Kees Cook <keescook@chromium.org>

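  A sketch of the __free(kfree) annotation in an illustrative function (not the pstorefs code itself): the buffer is freed automatically when it goes out of scope, so error paths need no explicit kfree().

    #include <linux/cleanup.h>
    #include <linux/errno.h>
    #include <linux/slab.h>
    #include <linux/string.h>

    static int copy_blob(const void *src, size_t len)
    {
        char *buf __free(kfree) = kmalloc(len, GFP_KERNEL);

        if (!buf)
            return -ENOMEM;     /* nothing to free yet */

        memcpy(buf, src, len);
        /* ... work with buf ... */
        return 0;               /* buf is kfree()d automatically here */
    }
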
* pstore: ram_core: fix possible overflow in persistent_ram_init_ecc() | Sergey Shtylyov | 2023-12-08 | 1 | -1/+1

  In persistent_ram_init_ecc(), on 64-bit arches DIV_ROUND_UP() will return a 64-bit value since persistent_ram_zone::buffer_size has type size_t, which is derived from the 64-bit *unsigned long*, while the ecc_blocks variable this value gets assigned to has (always 32-bit) *int* type. Even if that value fits into *int* type, an overflow is still possible when calculating the size_t typed ecc_total variable further below, since there's no cast to any 64-bit type before the multiplication. Declaring the ecc_blocks variable as *size_t* should fix this mess...

  Found by Linux Verification Center (linuxtesting.org) with the SVACE static analysis tool.

  Fixes: 9cc05ad97c57 ("staging: android: persistent_ram: refactor ecc support")
  Signed-off-by: Sergey Shtylyov <s.shtylyov@omp.ru>
  Link: https://lore.kernel.org/r/20231105202936.25694-1-s.shtylyov@omp.ru
  Signed-off-by: Kees Cook <keescook@chromium.org>

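  A sketch of the arithmetic in question, with illustrative parameter names: keeping the intermediate block count in size_t avoids truncating DIV_ROUND_UP()'s 64-bit result and avoids a 32-bit overflow in the multiplication.

    #include <linux/kernel.h>
    #include <linux/types.h>

    static size_t ecc_total_bytes(size_t buffer_size, size_t block_size, size_t ecc_size)
    {
        size_t ecc_blocks = DIV_ROUND_UP(buffer_size, block_size); /* was int */

        return ecc_blocks * ecc_size;   /* full size_t arithmetic, no overflow */
    }
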
* pstore/ram: Fix crash when setting number of cpus to an odd number | Weichen Chen | 2023-12-08 | 1 | -0/+1

  When the number of cpu cores is adjusted to 7 or other odd numbers, the zone size will become an odd number. The addresses of the zones become:

    addr of zone0 = BASE
    addr of zone1 = BASE + zone_size
    addr of zone2 = BASE + zone_size*2
    ...

  The addresses of zone1/3/5/7 will be mapped to unaligned virtual addresses, and accessing those eventually crashes. So, use ALIGN_DOWN() to make sure the zone size is even to avoid this bug.

  Signed-off-by: Weichen Chen <weichen.chen@mediatek.com>
  Reviewed-by: Matthias Brugger <matthias.bgg@gmail.com>
  Tested-by: "Guilherme G. Piccoli" <gpiccoli@igalia.com>
  Link: https://lore.kernel.org/r/20230224023632.6840-1-weichen.chen@mediatek.com
  Signed-off-by: Kees Cook <keescook@chromium.org>

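  A sketch of the sizing rule described above, with illustrative names (the real ramoops code is more involved): rounding the per-CPU zone size down keeps every zone start aligned regardless of how many zones the region is split into.

    #include <linux/kernel.h>
    #include <linux/types.h>

    /* Split "total" bytes into "zones" pieces whose size is a multiple of
     * "align" (must be a power of two; 2 is enough to keep the size even). */
    static size_t ram_zone_size(size_t total, unsigned int zones, size_t align)
    {
        return ALIGN_DOWN(total / zones, align);
    }
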
* Merge tag 'pstore-v6.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux | Linus Torvalds | 2023-10-30 | 1 | -1/+8

  Pull pstore updates from Kees Cook:

   - Check for out-of-memory condition during initialization (Jiasheng Jiang)

   - Fix documentation typos (Tudor Ambarus)

  * tag 'pstore-v6.7-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
    pstore/platform: Add check for kstrdup
    docs: pstore-blk.rst: fix typo, s/console/ftrace
    docs: pstore-blk.rst: use "about" as a preposition after "care"

| * pstore/platform: Add check for kstrdup | Jiasheng Jiang | 2023-10-12 | 1 | -1/+8

  Add check for the return value of kstrdup() and return the error if it fails in order to avoid NULL pointer dereference.

  Fixes: 563ca40ddf40 ("pstore/platform: Switch pstore_info::name to const")
  Signed-off-by: Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Link: https://lore.kernel.org/r/20230623022706.32125-1-jiasheng@iscas.ac.cn
  Signed-off-by: Kees Cook <keescook@chromium.org>

* | pstore: convert to new timestamp accessors | Jeff Layton | 2023-10-18 | 1 | -2/+3

  Convert to using the new inode timestamp accessor functions.

  Signed-off-by: Jeff Layton <jlayton@kernel.org>
  Link: https://lore.kernel.org/r/20231004185347.80880-60-jlayton@kernel.org
  Acked-by: Kees Cook <keescook@chromium.org>
  Signed-off-by: Christian Brauner <brauner@kernel.org>

* Merge tag 'pstore-v6.6-rc1-fix' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux | Linus Torvalds | 2023-09-02 | 1 | -7/+27

  Pull pstore fix from Kees Cook:

   - Adjust sizes of buffers to just avoid uncompress failures (Ard Biesheuvel)

  * tag 'pstore-v6.6-rc1-fix' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
    pstore: Base compression input buffer size on estimated compressed size

| * pstore: Base compression input buffer size on estimated compressed size | Ard Biesheuvel | 2023-08-31 | 1 | -7/+27

  Commit 1756ddea6916 ("pstore: Remove worst-case compression size logic") removed some clunky per-algorithm worst case size estimation routines on the basis that we can always store pstore records uncompressed, and these worst case estimations are about how much the size might inadvertently *increase* due to encapsulation overhead when the input cannot be compressed at all. So if compression results in a size increase, we just store the original data instead.

  However, it seems that the original code was misinterpreting these calculations as an estimation of how much uncompressed data might fit into a compressed buffer of a given size, and it was using the results to consume the input data in larger chunks than the pstore record size, relying on the compression to ensure that what ultimately gets stored fits into the available space.

  One result of this, as observed and reported by Linus, is that upgrading to a newer kernel that includes the given commit may result in pstore decompression errors reported in the kernel log. This is due to the fact that the existing records may unexpectedly decompress to a size that is larger than the pstore record size.

  Another potential problem caused by this change is that we may underutilize the fixed sized records on pstore backends such as ramoops. And on pstore backends with variable sized records such as EFI, we will end up creating many more entries than before to store the same amount of compressed data.

  So let's fix both issues, by bringing back the typical case estimation of how much ASCII text captured from the dmesg log might fit into a pstore record of a given size after compression. The original implementation used the computation given below for zlib:

    switch (size) {
    /* buffer range for efivars */
    case 1000 ... 2000:
        cmpr = 56;
        break;
    case 2001 ... 3000:
        cmpr = 54;
        break;
    case 3001 ... 3999:
        cmpr = 52;
        break;
    /* buffer range for nvram, erst */
    case 4000 ... 10000:
        cmpr = 45;
        break;
    default:
        cmpr = 60;
        break;
    }

    return (size * 100) / cmpr;

  We will use the previous worst-case of 60% for compression. For decompression go extra large (3x) so we make sure there's enough space for anything. While at it, rate limit the error message so we don't flood the log unnecessarily on systems that have accumulated a lot of pstore history.

  Cc: Linus Torvalds <torvalds@linux-foundation.org>
  Cc: Eric Biggers <ebiggers@kernel.org>
  Cc: Kees Cook <keescook@chromium.org>
  Cc: Herbert Xu <herbert@gondor.apana.org.au>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Link: https://lore.kernel.org/r/20230830212238.135900-1-ardb@kernel.org
  Co-developed-by: Kees Cook <keescook@chromium.org>
  Signed-off-by: Kees Cook <keescook@chromium.org>

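  A small sketch of the sizing rule adopted above (helper names are illustrative): assume dmesg text compresses to no better than about 60% of its original size when deciding how much input to feed the compressor, and give decompression a roomy 3x output buffer.

    #include <linux/types.h>

    /* How much uncompressed ASCII to gather for a record of "record_size" bytes. */
    static size_t compress_input_size(size_t record_size)
    {
        return (record_size * 100) / 60;    /* previous worst-case ratio */
    }

    /* Output buffer used when decompressing an existing record. */
    static size_t decompress_buf_size(size_t record_size)
    {
        return 3 * record_size;
    }
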
* | Merge tag 'pstore-v6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux | Linus Torvalds | 2023-08-28 | 5 | -346/+137

  Pull pstore updates from Kees Cook:

   - Greatly simplify compression support (Ard Biesheuvel)

   - Avoid crashes for corrupted offsets when prz size is 0 (Enlin Mu)

   - Expand range of usable record sizes (Yuxiao Zhang)

   - Fix kernel-doc warning (Matthew Wilcox)

  * tag 'pstore-v6.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
    pstore: Fix kernel-doc warning
    pstore: Support record sizes larger than kmalloc() limit
    pstore/ram: Check start of empty przs during init
    pstore: Replace crypto API compression with zlib_deflate library calls
    pstore: Remove worst-case compression size logic

| * pstore: Fix kernel-doc warning | Matthew Wilcox (Oracle) | 2023-08-18 | 1 | -6/+4

  Fix the warning for the description of struct persistent_ram_buffer and improve the descriptions of the other struct members while I'm here.

  Signed-off-by: "Matthew Wilcox (Oracle)" <willy@infradead.org>
  Link: https://lore.kernel.org/r/20230818201253.2729485-1-willy@infradead.org
  Signed-off-by: Kees Cook <keescook@chromium.org>

| * pstore: Support record sizes larger than kmalloc() limit | Yuxiao Zhang | 2023-08-17 | 4 | -21/+24

  Currently pstore record buffers are allocated using kmalloc() which has a maximum size based on page size. If a large "pmsg-size" module parameter is specified, pmsg will fail to copy the contents since memdup_user() is limited to kmalloc() allocation sizes.

  Since we don't need physically contiguous memory for any of the pstore record buffers, use kvzalloc() to avoid such limitations in the core of pstore and in the ram backend, and explicitly read from userspace using vmemdup_user(). This also means that any other backends that want to (or do already) support larger record sizes will Just Work now.

  Signed-off-by: Yuxiao Zhang <yuxiaozhang@google.com>
  Link: https://lore.kernel.org/r/20230627202540.881909-2-yuxiaozhang@google.com
  Co-developed-by: Kees Cook <keescook@chromium.org>
  Signed-off-by: Kees Cook <keescook@chromium.org>

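  A sketch of the two helpers named above, in illustrative wrappers: kvzalloc() transparently falls back to vmalloc for allocations beyond the kmalloc limit, and vmemdup_user() copies user data into such a buffer.

    #include <linux/mm.h>
    #include <linux/slab.h>
    #include <linux/string.h>
    #include <linux/uaccess.h>

    /* Zeroed record buffer of arbitrary size; release with kvfree(). */
    static void *alloc_record_buf(size_t size)
    {
        return kvzalloc(size, GFP_KERNEL);
    }

    /* Copy a (possibly huge) pmsg write from userspace; returns ERR_PTR on failure. */
    static void *copy_pmsg(const void __user *src, size_t len)
    {
        return vmemdup_user(src, len);  /* also freed with kvfree() */
    }
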
| * pstore/ram: Check start of empty przs during init | Enlin Mu | 2023-08-04 | 1 | -1/+1

  After commit 30696378f68a ("pstore/ram: Do not treat empty buffers as valid"), initialization would assume a prz was valid after seeing that the buffer_size is zero (regardless of the buffer start position). This unchecked start value means it could be outside the bounds of the buffer, leading to future access panics when written to:

    sysdump_panic_event+0x3b4/0x5b8
    atomic_notifier_call_chain+0x54/0x90
    panic+0x1c8/0x42c
    die+0x29c/0x2a8
    die_kernel_fault+0x68/0x78
    __do_kernel_fault+0x1c4/0x1e0
    do_bad_area+0x40/0x100
    do_translation_fault+0x68/0x80
    do_mem_abort+0x68/0xf8
    el1_da+0x1c/0xc0
    __raw_writeb+0x38/0x174
    __memcpy_toio+0x40/0xac
    persistent_ram_update+0x44/0x12c
    persistent_ram_write+0x1a8/0x1b8
    ramoops_pstore_write+0x198/0x1e8
    pstore_console_write+0x94/0xe0
    ...

  To avoid this, also check if the prz start is 0 during the initialization phase. If not, the next prz sanity check case will discover it (start > size) and zap the buffer back to a sane state.

  Fixes: 30696378f68a ("pstore/ram: Do not treat empty buffers as valid")
  Cc: Yunlong Xing <yunlong.xing@unisoc.com>
  Cc: stable@vger.kernel.org
  Signed-off-by: Enlin Mu <enlin.mu@unisoc.com>
  Link: https://lore.kernel.org/r/20230801060432.1307717-1-yunlong.xing@unisoc.com
  [kees: update commit log with backtrace and clarifications]
  Signed-off-by: Kees Cook <keescook@chromium.org>

| * pstore: Replace crypto API compression with zlib_deflate library calls | Ard Biesheuvel | 2023-07-17 | 2 | -139/+97

  Pstore supports compression using a variety of algorithms exposed by the crypto API. This uses the deprecated comp (as opposed to scomp/acomp) API, and so we should stop using that, and either move to the new API, or switch to a different approach entirely.

  Given that we only compress ASCII text in pstore, and considering that this happens when the system is likely to be in an unstable state, the flexibility that the complex crypto API provides does not outweigh its impact on the risk that we might encounter additional problems when trying to commit the kernel log contents to the pstore backend.

  So let's switch [back] to the zlib deflate library API, and remove all the complexity that really has no place in a low-level diagnostic facility. Note that, while more modern compression algorithms have been added to the kernel in recent years, the code size of zlib deflate is substantially smaller than, e.g., zstd, while its performance in terms of compression ratio is comparable for ASCII text, and speed is deemed irrelevant in this context.

  Note that this means that compressed pstore records may no longer be accessible after a kernel upgrade, but this has never been part of the contract. (The choice of compression algorithm is not stored in the pstore records either)

  Tested-by: "Guilherme G. Piccoli" <gpiccoli@igalia.com> # Steam Deck
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Eric Biggers <ebiggers@google.com>
  Link: https://lore.kernel.org/r/20230712162332.2670437-3-ardb@kernel.org
  Signed-off-by: Kees Cook <keescook@chromium.org>

| * pstore: Remove worst-case compression size logic | Ard Biesheuvel | 2023-07-17 | 1 | -187/+19

  The worst case compression size used by pstore gives an upper bound for how much the data might inadvertently *grow* due to encapsulation overhead if the input is not compressible at all. Given that pstore records have individual 'compressed' flags, we can simply store the uncompressed data if compressing it would end up using more space, making the worst case identical to the uncompressed case.

  This means we can just drop all the elaborate logic that reasons about upper bounds for each individual compression algorithm, and just store the uncompressed data directly if compression fails for any reason.

  Co-developed-by: Kees Cook <keescook@chromium.org>
  Tested-by: "Guilherme G. Piccoli" <gpiccoli@igalia.com> # Steam Deck
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
  Reviewed-by: Eric Biggers <ebiggers@google.com>
  Link: https://lore.kernel.org/r/20230712162332.2670437-2-ardb@kernel.org
  Signed-off-by: Kees Cook <keescook@chromium.org>

* | pstore: convert to ctime accessor functions | Jeff Layton | 2023-07-24 | 1 | -2/+2

  In later patches, we're going to change how the inode's ctime field is used. Switch to using accessor functions instead of raw accesses of inode->i_ctime.

  Acked-by: Kees Cook <keescook@chromium.org>
  Signed-off-by: Jeff Layton <jlayton@kernel.org>
  Reviewed-by: Jan Kara <jack@suse.cz>
  Message-Id: <20230705190309.579783-66-jlayton@kernel.org>
  Signed-off-by: Christian Brauner <brauner@kernel.org>

* Merge tag 'pstore-v6.5-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux | Linus Torvalds | 2023-06-27 | 2 | -4/+4

  Pull pstore updates from Kees Cook:

   - Check for out-of-memory condition (Jiasheng Jiang)

   - Convert to platform remove callback returning void (Uwe Kleine-König)

  * tag 'pstore-v6.5-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
    pstore/ram: Add check for kstrdup
    pstore/ram: Convert to platform remove callback returning void

| * pstore/ram: Add check for kstrdup | Jiasheng Jiang | 2023-06-14 | 1 | -0/+2

  Add check for the return value of kstrdup() and return the error if it fails in order to avoid NULL pointer dereference.

  Fixes: e163fdb3f7f8 ("pstore/ram: Regularize prz label allocation lifetime")
  Signed-off-by: Jiasheng Jiang <jiasheng@iscas.ac.cn>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: https://lore.kernel.org/r/20230614093733.36048-1-jiasheng@iscas.ac.cn

| * pstore/ram: Convert to platform remove callback returning void | Uwe Kleine-König | 2023-05-10 | 1 | -4/+2

  The .remove() callback for a platform driver returns an int which makes many driver authors wrongly assume it's possible to do error handling by returning an error code. However the value returned is (mostly) ignored and this typically results in resource leaks. To improve here there is a quest to make the remove callback return void. In the first step of this quest all drivers are converted to .remove_new() which already returns void.

  Trivially convert this driver from always returning zero in the remove callback to the void returning variant.

  Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Reviewed-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: https://lore.kernel.org/r/20230401120000.2487153-1-u.kleine-koenig@pengutronix.de

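  A sketch of the conversion for an illustrative platform driver (not the actual ramoops driver code): the .remove_new() callback returns void, so there is no ignored error code to get wrong.

    #include <linux/module.h>
    #include <linux/platform_device.h>

    static int my_pstore_probe(struct platform_device *pdev)
    {
        return 0;   /* set up the device */
    }

    static void my_pstore_remove(struct platform_device *pdev)
    {
        /* tear everything down; nothing can be "returned" to the core */
    }

    static struct platform_driver my_pstore_driver = {
        .probe      = my_pstore_probe,
        .remove_new = my_pstore_remove,
        .driver     = {
            .name = "my_pstore",
        },
    };
    module_platform_driver(my_pstore_driver);
    MODULE_LICENSE("GPL");
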
* | init: improve the name_to_dev_t interface | Christoph Hellwig | 2023-06-05 | 1 | -2/+2

  name_to_dev_t has a very misleading name that doesn't make clear it should only be used by the early init code, and also has a bad calling convention that doesn't allow returning different kinds of errors. Rename it to early_lookup_bdev to make the use case clear, and return an errno, where -EINVAL means the string could not be parsed, and -ENODEV means the string was valid, but there was no device found for it.

  Also stub out the whole call for !CONFIG_BLOCK as all the non-block root cases are always covered in the caller.

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Link: https://lore.kernel.org/r/20230531125535.676098-14-hch@lst.de
  Signed-off-by: Jens Axboe <axboe@kernel.dk>

* Merge tag 'pstore-v6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux | Linus Torvalds | 2023-04-27 | 1 | -4/+3

  Pull pstore update from Kees Cook:

   - Revert pmsg_lock back to a normal mutex (John Stultz)

  * tag 'pstore-v6.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
    pstore: Revert pmsg_lock back to a normal mutex

| * pstore: Revert pmsg_lock back to a normal mutex | John Stultz | 2023-03-08 | 1 | -4/+3

  This reverts commit 76d62f24db07f22ccf9bc18ca793c27d4ebef721.

  So while priority inversion on the pmsg_lock is an occasional problem that an rt_mutex would help with, in uses where logging is writing to pmsg heavily from multiple threads, the pmsg_lock can be heavily contended.

  After this change landed, it was reported that cases where the mutex locking overhead was commonly adding on the order of 10s of usecs delay had suddenly jumped to ~msec delay with rtmutex. It seems the slight differences in the locks under this level of contention cause the normal mutexes to utilize the spinning optimizations, while the rtmutexes end up in the sleeping slowpath (which allows additional threads to pile on trying to take the lock). In this case, it devolves to a worst-case scenario where the lock acquisition and scheduling overhead dominates, and each thread is waiting on the order of ~ms to do ~us of work.

  Obviously, having tons of threads all contending on a single lock for logging is non-optimal, so the proper fix is probably reworking pstore pmsg to have per-cpu buffers so we don't have contention. Additionally, Steven Rostedt has provided some further optimizations for rtmutexes that improve the rtmutex spinning path, but at least in my testing, I still see the test tripping into the sleeping path on rtmutexes while utilizing the spinning path with mutexes.

  But in the short term, let's revert the change to the rt_mutex and go back to normal mutexes to avoid a potentially major performance regression. And we can work on optimizations to both rtmutexes and finer-grained locking for pstore pmsg in the future.

  Cc: Wei Wang <wvw@google.com>
  Cc: Midas Chien <midaschieh@google.com>
  Cc: "Chunhui Li (李春辉)" <chunhui.li@mediatek.com>
  Cc: Steven Rostedt <rostedt@goodmis.org>
  Cc: Kees Cook <keescook@chromium.org>
  Cc: Anton Vorontsov <anton@enomsg.org>
  Cc: "Guilherme G. Piccoli" <gpiccoli@igalia.com>
  Cc: Tony Luck <tony.luck@intel.com>
  Cc: kernel-team@android.com
  Fixes: 76d62f24db07 ("pstore: Switch pmsg_lock to an rt_mutex to avoid priority inversion")
  Reported-by: "Chunhui Li (李春辉)" <chunhui.li@mediatek.com>
  Signed-off-by: John Stultz <jstultz@google.com>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: https://lore.kernel.org/r/20230308204043.2061631-1-jstultz@google.com

* | driver core: class: remove module * from class_create() | Greg Kroah-Hartman | 2023-03-17 | 1 | -1/+1

  The module pointer in class_create() never actually did anything, and it shouldn't have been required to be set as a parameter even if it did something. So just remove it and fix up all callers of the function in the kernel tree at the same time.

  Cc: "Rafael J. Wysocki" <rafael@kernel.org>
  Acked-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
  Link: https://lore.kernel.org/r/20230313181843.1207845-4-gregkh@linuxfoundation.org
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

* Merge tag 'pstore-v6.2-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux | Linus Torvalds | 2022-12-23 | 3 | -4/+6

  Pull pstore fixes from Kees Cook:

   - Switch pmsg_lock to an rt_mutex to avoid priority inversion (John Stultz)

   - Correctly assign mem_type property (Luca Stefani)

  * tag 'pstore-v6.2-rc1-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
    pstore: Properly assign mem_type property
    pstore: Make sure CONFIG_PSTORE_PMSG selects CONFIG_RT_MUTEXES
    pstore: Switch pmsg_lock to an rt_mutex to avoid priority inversion

| * pstore: Properly assign mem_type property | Luca Stefani | 2022-12-23 | 1 | -1/+1

  If mem-type is specified in the device tree it would end up overriding the record_size field instead of populating mem_type. As record_size is currently parsed after the improper assignment with default size 0 it continued to work as expected regardless of the value found in the device tree.

  Simply changing the target field of the struct is enough to get mem-type working as expected.

  Fixes: 9d843e8fafc7 ("pstore: Add mem_type property DT parsing support")
  Cc: stable@vger.kernel.org
  Signed-off-by: Luca Stefani <luca@osomprivacy.com>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: https://lore.kernel.org/r/20221222131049.286288-1-luca@osomprivacy.com

| * pstore: Make sure CONFIG_PSTORE_PMSG selects CONFIG_RT_MUTEXES | John Stultz | 2022-12-23 | 1 | -0/+1

  In commit 76d62f24db07 ("pstore: Switch pmsg_lock to an rt_mutex to avoid priority inversion") I changed a lock to an rt_mutex. However, it's possible that CONFIG_RT_MUTEXES is not enabled, which then results in a build failure, as the 0day bot detected: https://lore.kernel.org/linux-mm/202212211244.TwzWZD3H-lkp@intel.com/

  Thus this patch changes CONFIG_PSTORE_PMSG to select CONFIG_RT_MUTEXES, which ensures the build will not fail.

  Cc: Wei Wang <wvw@google.com>
  Cc: Midas Chien <midaschieh@google.com>
  Cc: Connor O'Brien <connoro@google.com>
  Cc: Kees Cook <keescook@chromium.org>
  Cc: Anton Vorontsov <anton@enomsg.org>
  Cc: Colin Cross <ccross@android.com>
  Cc: Tony Luck <tony.luck@intel.com>
  Cc: kernel test robot <lkp@intel.com>
  Cc: kernel-team@android.com
  Fixes: 76d62f24db07 ("pstore: Switch pmsg_lock to an rt_mutex to avoid priority inversion")
  Reported-by: kernel test robot <lkp@intel.com>
  Signed-off-by: John Stultz <jstultz@google.com>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: https://lore.kernel.org/r/20221221051855.15761-1-jstultz@google.com

| * pstore: Switch pmsg_lock to an rt_mutex to avoid priority inversion | John Stultz | 2022-12-14 | 1 | -3/+4

  Wei Wang reported seeing latencies caused by priority inversion due to contention on pmsg_lock, and suggested it be switched to an rt_mutex.

  I was initially hesitant this would help, as the tasks in that trace all seemed to be SCHED_NORMAL, so the benefit would be limited to only nice boosting. However, another similar issue was raised where the priority inversion seen did involve a blocked RT task, so it is clear this would be helpful in that case.

  Cc: Wei Wang <wvw@google.com>
  Cc: Midas Chien <midaschieh@google.com>
  Cc: Connor O'Brien <connoro@google.com>
  Cc: Kees Cook <keescook@chromium.org>
  Cc: Anton Vorontsov <anton@enomsg.org>
  Cc: Colin Cross <ccross@android.com>
  Cc: Tony Luck <tony.luck@intel.com>
  Cc: kernel-team@android.com
  Fixes: 9d5438f462ab ("pstore: Add pmsg - user-space accessible pstore object")
  Reported-by: Wei Wang <wvw@google.com>
  Signed-off-by: John Stultz <jstultz@google.com>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: https://lore.kernel.org/r/20221214231834.3711880-1-jstultz@google.com

* | Merge tag 'driver-core-6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core | Linus Torvalds | 2022-12-16 | 1 | -1/+1

  Pull driver core updates from Greg KH:
  "Here is the set of driver core and kernfs changes for 6.2-rc1.

  The "big" change in here is the addition of a new macro, container_of_const(), that will preserve the "const-ness" of a pointer passed into it. The "problem" of the current container_of() macro is that if you pass in a "const *", out of it can come a non-const pointer unless you specifically ask for it. For many usages, we want to preserve the "const" attribute by using the same call. For a specific example, this series changes the kobj_to_dev() macro to use it, allowing it to be used no matter what the const value is. This prevents every subsystem from having to declare 2 different individual macros (i.e. kobj_const_to_dev() and kobj_to_dev()) and having the compiler enforce the const value at build time, which having 2 macros would not do either.

  The driver for all of this has been discussions with the Rust kernel developers as to how to properly mark driver core, and kobject, objects as being "non-mutable". The changes to the kobject and driver core in this pull request are the result of that, as there are lots of paths where kobjects and device pointers are not modified at all, so marking them as "const" allows the compiler to enforce this.

  So, a nice side effect of the Rust development effort has already been to clean up the driver core code to be more obvious about object rules.

  All of this has been bike-shedded in quite a lot of detail on lkml with different names and implementations resulting in the tiny version we have in here, much better than my original proposal. Lots of subsystem maintainers have acked the changes as well.

  Other than this change, included in here are smaller stuff like:

   - kernfs fixes and updates to handle lock contention better

   - vmlinux.lds.h fixes and updates

   - sysfs and debugfs documentation updates

   - device property updates

  All of these have been in the linux-next tree for quite a while with no problems"

  * tag 'driver-core-6.2-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core: (58 commits)
    device property: Fix documentation for fwnode_get_next_parent()
    firmware_loader: fix up to_fw_sysfs() to preserve const
    usb.h: take advantage of container_of_const()
    device.h: move kobj_to_dev() to use container_of_const()
    container_of: add container_of_const() that preserves const-ness of the pointer
    driver core: fix up missed drivers/s390/char/hmcdrv_dev.c class.devnode() conversion.
    driver core: fix up missed scsi/cxlflash class.devnode() conversion.
    driver core: fix up some missing class.devnode() conversions.
    driver core: make struct class.devnode() take a const *
    driver core: make struct class.dev_uevent() take a const *
    cacheinfo: Remove of_node_put() for fw_token
    device property: Add a blank line in Kconfig of tests
    device property: Rename goto label to be more precise
    device property: Move PROPERTY_ENTRY_BOOL() a bit down
    device property: Get rid of __PROPERTY_ENTRY_ARRAY_EL*SIZE*()
    kernfs: fix all kernel-doc warnings and multiple typos
    driver core: pass a const * into of_device_uevent()
    kobject: kset_uevent_ops: make name() callback take a const *
    kobject: kset_uevent_ops: make filter() callback take a const *
    kobject: make kobject_namespace take a const *
    ...

| * driver core: make struct class.devnode() take a const * | Greg Kroah-Hartman | 2022-11-24 | 1 | -1/+1

  The devnode() in struct class should not be modifying the device that is passed into it, so mark it as a const * and propagate the function signature changes out into all relevant subsystems that use this callback.

  Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Reinette Chatre <reinette.chatre@intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: x86@kernel.org Cc: "H. Peter Anvin" <hpa@zytor.com>
  Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Cc: Jens Axboe <axboe@kernel.dk> Cc: Justin Sanders <justin@coraid.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Sumit Semwal <sumit.semwal@linaro.org> Cc: Benjamin Gaignard <benjamin.gaignard@collabora.com> Cc: Liam Mark <lmark@codeaurora.org> Cc: Laura Abbott <labbott@redhat.com> Cc: Brian Starkey <Brian.Starkey@arm.com> Cc: John Stultz <jstultz@google.com>
  Cc: "Christian König" <christian.koenig@amd.com> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Maxime Ripard <mripard@kernel.org> Cc: Thomas Zimmermann <tzimmermann@suse.de> Cc: David Airlie <airlied@gmail.com> Cc: Daniel Vetter <daniel@ffwll.ch> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Leon Romanovsky <leon@kernel.org> Cc: Dennis Dalessandro <dennis.dalessandro@cornelisnetworks.com>
  Cc: Dmitry Torokhov <dmitry.torokhov@gmail.com> Cc: Mauro Carvalho Chehab <mchehab@kernel.org> Cc: Sean Young <sean@mess.org> Cc: Frank Haverkamp <haver@linux.ibm.com> Cc: Jiri Slaby <jirislaby@kernel.org> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: Jason Wang <jasowang@redhat.com> Cc: Alex Williamson <alex.williamson@redhat.com> Cc: Cornelia Huck <cohuck@redhat.com> Cc: Kees Cook <keescook@chromium.org>
  Cc: Anton Vorontsov <anton@enomsg.org> Cc: Colin Cross <ccross@android.com> Cc: Tony Luck <tony.luck@intel.com> Cc: Jaroslav Kysela <perex@perex.cz> Cc: Takashi Iwai <tiwai@suse.com> Cc: Hans Verkuil <hverkuil-cisco@xs4all.nl> Cc: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Cc: Xie Yongji <xieyongji@bytedance.com> Cc: Gautam Dawar <gautam.dawar@xilinx.com> Cc: Dan Carpenter <error27@gmail.com> Cc: Eli Cohen <elic@nvidia.com> Cc: Parav Pandit <parav@nvidia.com> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
  Cc: alsa-devel@alsa-project.org Cc: dri-devel@lists.freedesktop.org Cc: kvm@vger.kernel.org Cc: linaro-mm-sig@lists.linaro.org Cc: linux-block@vger.kernel.org Cc: linux-input@vger.kernel.org Cc: linux-kernel@vger.kernel.org Cc: linux-media@vger.kernel.org Cc: linux-rdma@vger.kernel.org Cc: linux-scsi@vger.kernel.org Cc: linux-usb@vger.kernel.org Cc: virtualization@lists.linux-foundation.org
  Link: https://lore.kernel.org/r/20221123122523.1332370-2-gregkh@linuxfoundation.org
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

* | pstore: Avoid kcore oops by vmap()ing with VM_IOREMAP | Stephen Boyd | 2022-12-05 | 1 | -1/+5

  An oops can be induced by running 'cat /proc/kcore > /dev/null' on devices using pstore with the ram backend because kmap_atomic() assumes lowmem pages are accessible with __va().

    Unable to handle kernel paging request at virtual address ffffff807ff2b000
    Mem abort info:
      ESR = 0x96000006
      EC = 0x25: DABT (current EL), IL = 32 bits
      SET = 0, FnV = 0
      EA = 0, S1PTW = 0
      FSC = 0x06: level 2 translation fault
    Data abort info:
      ISV = 0, ISS = 0x00000006
      CM = 0, WnR = 0
    swapper pgtable: 4k pages, 39-bit VAs, pgdp=0000000081d87000
    [ffffff807ff2b000] pgd=180000017fe18003, p4d=180000017fe18003, pud=180000017fe18003, pmd=0000000000000000
    Internal error: Oops: 96000006 [#1] PREEMPT SMP
    Modules linked in: dm_integrity
    CPU: 7 PID: 21179 Comm: perf Not tainted 5.15.67-10882-ge4eb2eb988cd #1 baa443fb8e8477896a370b31a821eb2009f9bfba
    Hardware name: Google Lazor (rev3 - 8) (DT)
    pstate: a0400009 (NzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
    pc : __memcpy+0x110/0x260
    lr : vread+0x194/0x294
    sp : ffffffc013ee39d0
    x29: ffffffc013ee39f0 x28: 0000000000001000 x27: ffffff807ff2b000
    x26: 0000000000001000 x25: ffffffc0085a2000 x24: ffffff802d4b3000
    x23: ffffff80f8a60000 x22: ffffff802d4b3000 x21: ffffffc0085a2000
    x20: ffffff8080b7bc68 x19: 0000000000001000 x18: 0000000000000000
    x17: 0000000000000000 x16: 0000000000000000 x15: ffffffd3073f2e60
    x14: ffffffffad588000 x13: 0000000000000000 x12: 0000000000000001
    x11: 00000000000001a2 x10: 00680000fff2bf0b x9 : 03fffffff807ff2b
    x8 : 0000000000000001 x7 : 0000000000000000 x6 : 0000000000000000
    x5 : ffffff802d4b4000 x4 : ffffff807ff2c000 x3 : ffffffc013ee3a78
    x2 : 0000000000001000 x1 : ffffff807ff2b000 x0 : ffffff802d4b3000
    Call trace:
     __memcpy+0x110/0x260
     read_kcore+0x584/0x778
     proc_reg_read+0xb4/0xe4

  During early boot, memblock reserves the pages for the ramoops reserved memory node in DT that would otherwise be part of the direct lowmem mapping. Pstore's ram backend reuses those reserved pages to change the memory type (writeback or non-cached) by passing the pages to vmap() (see pfn_to_page() usage in persistent_ram_vmap() for more details) with specific flags. When read_kcore() starts iterating over the vmalloc region, it runs over the virtual address that vmap() returned for ramoops. In aligned_vread() the virtual address is passed to vmalloc_to_page() which returns the page struct for the reserved lowmem area. That lowmem page is passed to kmap_atomic(), which effectively calls page_to_virt() that assumes a lowmem page struct must be directly accessible with __va() and friends. These pages are mapped via vmap() though, and the lowmem mapping was never made, so accessing them via the lowmem virtual address oopses like above.

  Let's side-step this problem by passing VM_IOREMAP to vmap(). This will tell vread() to not include the ramoops region in the kcore. Instead the area will look like a bunch of zeros. The alternative is to teach kmap() about vmalloc areas that intersect with lowmem. Presumably such a change isn't a one-liner, and there isn't much interest in inspecting the ramoops region in kcore files anyway, so the most expedient route is taken for now.

  Cc: Brian Geffon <bgeffon@google.com>
  Cc: Mike Rapoport <rppt@kernel.org>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Fixes: 404a6043385d ("staging: android: persistent_ram: handle reserving and mapping memory")
  Signed-off-by: Stephen Boyd <swboyd@chromium.org>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: https://lore.kernel.org/r/20221205233136.3420802-1-swboyd@chromium.org

* | pstore/ram: Fix error return code in ramoops_probe() | Wang Yufen | 2022-12-02 | 1 | -0/+2

  In the if (dev_of_node(dev) && !pdata) path, "err" may be left with a value of 0, so the error return code -EINVAL may be incorrectly reported as 0. To fix this, set a valid error code before the goto.

  Fixes: 35da60941e44 ("pstore/ram: add Device Tree bindings")
  Signed-off-by: Wang Yufen <wangyufen@huawei.com>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: https://lore.kernel.org/r/1669969374-46582-1-git-send-email-wangyufen@huawei.com

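  A sketch of the general fix pattern (an illustrative probe function, not the actual ramoops_probe() code): make sure the error variable holds a real error code before jumping to the unwind label, so a failed probe never returns 0.

    #include <linux/device.h>
    #include <linux/errno.h>
    #include <linux/platform_device.h>

    static int my_probe(struct platform_device *pdev)
    {
        void *pdata = dev_get_platdata(&pdev->dev);
        int err;

        if (!pdata) {
            err = -EINVAL;      /* without this, "err" could still be 0 */
            goto fail_out;
        }

        /* ... normal initialization ... */
        return 0;

    fail_out:
        dev_err(&pdev->dev, "probe failed: %d\n", err);
        return err;
    }
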
* | pstore: Alert on backend write error | Guilherme G. Piccoli | 2022-10-19 | 1 | -0/+10

  The pstore dump function doesn't alert at all on errors - even though pstore is usually a last resort and if it fails users won't be able to read the kernel log, this is not the case for server users with serial access, for example. So, let's at least attempt to inform such advanced users on the first backend write error detected during the kmsg dump - this is also very useful for pstore debugging purposes.

  Signed-off-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
  Acked-by: Ard Biesheuvel <ardb@kernel.org>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: https://lore.kernel.org/r/20221013210648.137452-2-gpiccoli@igalia.com

* | pstore/ram: Set freed addresses to NULL | Kees Cook | 2022-10-19 | 3 | -8/+18

  For good measure, set all the freed addresses to NULL when managing przs.

  Cc: Anton Vorontsov <anton@enomsg.org>
  Cc: Colin Cross <ccross@android.com>
  Cc: Tony Luck <tony.luck@intel.com>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Reviewed-and-tested-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
  Link: https://lore.kernel.org/r/20221011200112.731334-5-keescook@chromium.org

* | pstore/ram: Move internal definitions out of kernel-wide include | Kees Cook | 2022-10-17 | 3 | -2/+102

  Most of the details of the ram backend are entirely internal to the backend itself. Leave only what is needed to instantiate a ram backend in the kernel-wide header.

  Cc: Anton Vorontsov <anton@enomsg.org>
  Cc: Colin Cross <ccross@android.com>
  Cc: Tony Luck <tony.luck@intel.com>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Reviewed-and-tested-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
  Link: https://lore.kernel.org/r/20221011200112.731334-4-keescook@chromium.org

* | pstore/ram: Move pmsg init earlier | Kees Cook | 2022-10-17 | 1 | -5/+5

  Since the ftrace area can vary in size based on CPU count, move pmsg initialization earlier so it will have a stable location.

  Suggested-by: Paramjit Oberoi <pso@chromium.org>
  Cc: Anton Vorontsov <anton@enomsg.org>
  Cc: Colin Cross <ccross@android.com>
  Cc: Tony Luck <tony.luck@intel.com>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Reviewed-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
  Link: https://lore.kernel.org/r/20221011200112.731334-3-keescook@chromium.org

* | pstore/ram: Consolidate kfree() paths | Kees Cook | 2022-10-17 | 1 | -11/+11

  There's no reason to keep separate kfree() paths: either all allocations succeeded, or not. Everything is torn down in the case of failure, so adjust the callers to reflect this.

  Cc: Anton Vorontsov <anton@enomsg.org>
  Cc: Colin Cross <ccross@android.com>
  Cc: Tony Luck <tony.luck@intel.com>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Reviewed-and-tested-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
  Link: https://lore.kernel.org/r/20221011200112.731334-2-keescook@chromium.org

* | pstore: Inform unregistered backend names as well | Guilherme G. Piccoli | 2022-10-17 | 1 | -0/+2

  Currently we only show the registered backends in the kernel log; users that change the backend for some reason must first unregister the current one, so let's confirm this operation with a message in the log.

  Signed-off-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: https://lore.kernel.org/r/20221006224212.569555-4-gpiccoli@igalia.com

* | pstore: Expose kmsg_bytes as a module parameter | Guilherme G. Piccoli | 2022-10-17 | 1 | -3/+5

  Currently this tuning is only exposed as a filesystem option, but most Linux distros automatically mount pstore, hence changing this setting requires remounting it. Also, if that mount option wasn't explicitly set it doesn't show up in mount information, so users cannot check the current value of kmsg_bytes.

  Let's then expose it as a module parameter, allowing both user visibility at all times (even if not manually set) and also the possibility of setting that as a boot/module parameter.

  Signed-off-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: https://lore.kernel.org/r/20221006224212.569555-3-gpiccoli@igalia.com

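  A sketch of what exposing such a tunable as a module parameter looks like (the default value shown is illustrative): the value becomes readable and writable via /sys/module/pstore/parameters/kmsg_bytes and settable on the kernel command line, e.g. pstore.kmsg_bytes=16384.

    #include <linux/module.h>
    #include <linux/moduleparam.h>

    static unsigned long kmsg_bytes = 10240;    /* bytes of dmesg to capture per record */
    module_param(kmsg_bytes, ulong, 0644);
    MODULE_PARM_DESC(kmsg_bytes, "amount of kernel log to record per kmsg dump");
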
* | pstore: Improve error reporting in case of backend overlap | Guilherme G. Piccoli | 2022-10-17 | 1 | -2/+3

  The pstore infrastructure supports one single backend at a time; trying to load another backend causes an error and displays a message, introduced in commit 0d7cd09a3dbb ("pstore: Improve register_pstore() error reporting").

  It happens that this message is not really clear about the situation, and the current error returned (-EPERM) isn't accurate either, whereas -EBUSY makes more sense. We have another place in the code that relies on the -EBUSY return for a similar check. So, make it consistent here by returning -EBUSY and using a similar message in both scenarios.

  Signed-off-by: Guilherme G. Piccoli <gpiccoli@igalia.com>
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: https://lore.kernel.org/r/20221006224212.569555-2-gpiccoli@igalia.com

* | pstore/zone: Use GFP_ATOMIC to allocate zone buffer | Qiujun Huang | 2022-10-17 | 1 | -1/+1

  There is a case, found when triggering a panic_on_oom, where pstore fails to dump kmsg because psz_kmsg_write_record() can't get a new buffer. Handle this by using GFP_ATOMIC to allocate a buffer at a lower watermark.

  Signed-off-by: Qiujun Huang <hqjagain@gmail.com>
  Fixes: 335426c6dcdd ("pstore/zone: Provide way to skip "broken" zone for MTD devices")
  Cc: WeiXiong Liao <gmpy.liaowx@gmail.com>
  Cc: stable@vger.kernel.org
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Link: https://lore.kernel.org/r/CAJRQjofRCF7wjrYmw3D7zd5QZnwHQq+F8U-mJDJ6NZ4bddYdLA@mail.gmail.com

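  A sketch of the allocation choice (an illustrative helper, not the actual psz_kmsg_write_record() code): in the panic/OOM path the allocator must not sleep or wait for reclaim, so GFP_ATOMIC is used and the allocation may dip below the normal watermarks.

    #include <linux/slab.h>

    static void *alloc_panic_scratch(size_t len)
    {
        /* GFP_ATOMIC: no sleeping, may use emergency reserves; can still fail. */
        return kmalloc(len, GFP_ATOMIC);
    }
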
* Revert "pstore: migrate to crypto acomp interface"Guilherme G. Piccoli2022-09-301-51/+12
| | | | | | | | | | | | | | | | | | | | | | | This reverts commit e4f0a7ec586b7644107839f5394fb685cf1aadcc. When using this new interface, both efi_pstore and ramoops backends are unable to properly decompress dmesg if using zstd, lz4 and lzo algorithms (and maybe more). It does succeed with deflate though. The message observed in the kernel log is: [2.328828] pstore: crypto_acomp_decompress failed, ret = -22! The pstore infrastructure is able to collect the dmesg with both backends tested, but since decompression fails it's unreadable. With this revert everything is back to normal. Fixes: e4f0a7ec586b ("pstore: migrate to crypto acomp interface") Cc: Ard Biesheuvel <ardb@kernel.org> Signed-off-by: Guilherme G. Piccoli <gpiccoli@igalia.com> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20220929215515.276486-1-gpiccoli@igalia.com
* Merge tag 'efi-next-for-v5.20' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi | Linus Torvalds | 2022-08-03 | 2 | -0/+2

  Pull EFI updates from Ard Biesheuvel:

   - Enable mirrored memory for arm64

   - Fix up several abuses of the efivar API

   - Refactor the efivar API in preparation for moving the 'business logic' part of it into efivarfs

   - Enable ACPI PRM on arm64

  * tag 'efi-next-for-v5.20' of git://git.kernel.org/pub/scm/linux/kernel/git/efi/efi: (24 commits)
    ACPI: Move PRM config option under the main ACPI config
    ACPI: Enable Platform Runtime Mechanism(PRM) support on ARM64
    ACPI: PRM: Change handler_addr type to void pointer
    efi: Simplify arch_efi_call_virt() macro
    drivers: fix typo in firmware/efi/memmap.c
    efi: vars: Drop __efivar_entry_iter() helper which is no longer used
    efi: vars: Use locking version to iterate over efivars linked lists
    efi: pstore: Omit efivars caching EFI varstore access layer
    efi: vars: Add thin wrapper around EFI get/set variable interface
    efi: vars: Don't drop lock in the middle of efivar_init()
    pstore: Add priv field to pstore_record for backend specific use
    Input: applespi - avoid efivars API and invoke EFI services directly
    selftests/kexec: remove broken EFI_VARS secure boot fallback check
    brcmfmac: Switch to appropriate helper to load EFI variable contents
    iwlwifi: Switch to proper EFI variable store interface
    media: atomisp_gmin_platform: stop abusing efivar API
    efi: efibc: avoid efivar API for setting variables
    efi: avoid efivars layer when loading SSDTs from variables
    efi: Correct comment on efi_memmap_alloc
    memblock: Disable mirror feature if kernelcore is not specified
    ...

| * pstore: Add priv field to pstore_record for backend specific use | Ard Biesheuvel | 2022-06-24 | 2 | -0/+2

  The EFI pstore backend will need to store per-record variable name data when we switch away from the efivars layer. Add a priv field to struct pstore_record, and document it as holding a backend specific pointer that is assumed to be a kmalloc()d buffer, and will be kfree()d when the entire record is freed.

  Acked-by: Kees Cook <keescook@chromium.org>
  Signed-off-by: Ard Biesheuvel <ardb@kernel.org>

* | pstore/zone: cleanup "rcnt" typeDan Carpenter2022-06-231-6/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The info->read() function returns ssize_t. That means that info->read() either returns either negative error codes or a positive number representing the bytes read. The "rcnt" variable should be declared as ssize_t as well. Most places do this correctly but psz_kmsg_recover_meta() needed to be fixed. This code casts the "rcnt" to int. That is unnecessary when "rcnt" is already signed. It's also slightly wrong because if info->read() returned a very high (more than INT_MAX) number of bytes then this might treat that as an error. This bug cannot happen in real life, so it doesn't affect run time, but static checkers correctly complain that it is wrong. fs/pstore/zone.c:366 psz_kmsg_recover_data() warn: casting 'rcnt' truncates high bits Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/YrRtPSFHDVJzV6d+@kili