author     Linus Torvalds <torvalds@linux-foundation.org>    2023-08-30 20:05:42 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>    2023-08-30 20:05:42 -0700
commit     cd99b9eb4b702563c5ac7d26b632a628f5a832a5 (patch)
tree       ff96773806b6bb1efece11d8b7678ae43d71411e /Documentation/mm
parent     f8fd5c24830fbc259ba7d5e72817c9867c01b8e8 (diff)
parent     c63594f2d66690805eb78b75e4b8e8dc9f2672bf (diff)
Merge tag 'docs-6.6' of git://git.lwn.net/linux
Pull documentation updates from Jonathan Corbet:
"Documentation work keeps chugging along; this includes:
- Work from Carlos Bilbao to integrate rustdoc output into the
generated HTML documentation. This took some work to figure out how
to do it without slowing the docs build and without creating problems
for people who don't have Rust installed, but Carlos got there
- Move the loongarch and mips architecture documentation under
Documentation/arch/
- Some more maintainer documentation from Jakub
... plus the usual assortment of updates, translations, and fixes"
* tag 'docs-6.6' of git://git.lwn.net/linux: (56 commits)
Docu: genericirq.rst: fix irq-example
input: docs: pxrc: remove reference to phoenix-sim
Documentation: serial-console: Fix literal block marker
docs/mm: remove references to hmm_mirror ops and clean typos
docs/zh_CN: correct regi_chg(),regi_add() to region_chg(),region_add()
Documentation: Fix typos
Documentation/ABI: Fix typos
scripts: kernel-doc: fix macro handling in enums
scripts: kernel-doc: parse DEFINE_DMA_UNMAP_[ADDR|LEN]
Documentation: riscv: Update boot image header since EFI stub is supported
Documentation: riscv: Add early boot document
Documentation: arm: Add bootargs to the table of added DT parameters
docs: kernel-parameters: Refer to the correct bitmap function
doc: update params of memhp_default_state=
docs: Add book to process/kernel-docs.rst
docs: sparse: fix invalid link addresses
docs: vfs: clean up after the iterate() removal
docs: Add a section on surveys to the researcher guidelines
docs: move mips under arch
docs: move loongarch under arch
...
Diffstat (limited to 'Documentation/mm')
-rw-r--r--   Documentation/mm/highmem.rst           27
-rw-r--r--   Documentation/mm/hmm.rst               13
-rw-r--r--   Documentation/mm/hwpoison.rst           2
-rw-r--r--   Documentation/mm/page_migration.rst     2
-rw-r--r--   Documentation/mm/unevictable-lru.rst    2
-rw-r--r--   Documentation/mm/vmemmap_dedup.rst      5
6 files changed, 23 insertions, 28 deletions
diff --git a/Documentation/mm/highmem.rst b/Documentation/mm/highmem.rst
index aefb03eb386e..9d92e3f2b3d6 100644
--- a/Documentation/mm/highmem.rst
+++ b/Documentation/mm/highmem.rst
@@ -51,11 +51,14 @@ Temporary Virtual Mappings
 The kernel contains several ways of creating temporary mappings. The following
 list shows them in order of preference of use.
 
-* kmap_local_page(). This function is used to require short term mappings.
-  It can be invoked from any context (including interrupts) but the mappings
-  can only be used in the context which acquired them.
-
-  This function should always be used, whereas kmap_atomic() and kmap() have
+* kmap_local_page(), kmap_local_folio() - These functions are used to create
+  short term mappings. They can be invoked from any context (including
+  interrupts) but the mappings can only be used in the context which acquired
+  them. The only differences between them consist in the first taking a pointer
+  to a struct page and the second taking a pointer to struct folio and the byte
+  offset within the folio which identifies the page.
+
+  These functions should always be used, whereas kmap_atomic() and kmap() have
   been deprecated.
 
   These mappings are thread-local and CPU-local, meaning that the mapping
@@ -72,17 +75,17 @@ list shows them in order of preference of use.
   maps of the outgoing task are saved and those of the incoming one are
   restored.
 
-  kmap_local_page() always returns a valid virtual address and it is assumed
-  that kunmap_local() will never fail.
+  kmap_local_page(), as well as kmap_local_folio() always returns valid virtual
+  kernel addresses and it is assumed that kunmap_local() will never fail.
 
-  On CONFIG_HIGHMEM=n kernels and for low memory pages this returns the
+  On CONFIG_HIGHMEM=n kernels and for low memory pages they return the
   virtual address of the direct mapping. Only real highmem pages are
   temporarily mapped. Therefore, users may call a plain page_address()
   for pages which are known to not come from ZONE_HIGHMEM. However, it is
-  always safe to use kmap_local_page() / kunmap_local().
+  always safe to use kmap_local_{page,folio}() / kunmap_local().
 
-  While it is significantly faster than kmap(), for the highmem case it
-  comes with restrictions about the pointers validity. Contrary to kmap()
+  While they are significantly faster than kmap(), for the highmem case they
+  come with restrictions about the pointers validity. Contrary to kmap()
   mappings, the local mappings are only valid in the context of the caller
   and cannot be handed to other contexts. This implies that users must be
   absolutely sure to keep the use of the return address local to the
@@ -91,7 +94,7 @@ list shows them in order of preference of use.
   Most code can be designed to use thread local mappings. User should
   therefore try to design their code to avoid the use of kmap() by
   mapping pages in the same thread the address will be used and prefer
-  kmap_local_page().
+  kmap_local_page() or kmap_local_folio().
 
 Nesting kmap_local_page() and kmap_atomic() mappings is allowed to a certain
 extent (up to KMAP_TYPE_NR) but their invocations have to be strictly ordered
diff --git a/Documentation/mm/hmm.rst b/Documentation/mm/hmm.rst
index 9aa512c3a12c..0595098a74d9 100644
--- a/Documentation/mm/hmm.rst
+++ b/Documentation/mm/hmm.rst
@@ -163,16 +163,7 @@ use::
 
 It will trigger a page fault on missing or read-only entries if write access is
 requested (see below). Page faults use the generic mm page fault code path just
-like a CPU page fault.
-
-Both functions copy CPU page table entries into their pfns array argument. Each
-entry in that array corresponds to an address in the virtual range. HMM
-provides a set of flags to help the driver identify special CPU page table
-entries.
-
-Locking within the sync_cpu_device_pagetables() callback is the most important
-aspect the driver must respect in order to keep things properly synchronized.
-The usage pattern is::
+like a CPU page fault. The usage pattern is::
 
       int driver_populate_range(...)
      {
@@ -417,7 +408,7 @@ entries. Any attempt to access the swap entry results in a fault which is
 resovled by replacing the entry with the original mapping. A driver gets
 notified that the mapping has been changed by MMU notifiers, after which point
 it will no longer have exclusive access to the page. Exclusive access is
-guranteed to last until the driver drops the page lock and page reference, at
+guaranteed to last until the driver drops the page lock and page reference, at
 which point any CPU faults on the page may proceed as described.
 
 Memory cgroup (memcg) and rss accounting
diff --git a/Documentation/mm/hwpoison.rst b/Documentation/mm/hwpoison.rst
index ba48a441feed..483b72aa7c11 100644
--- a/Documentation/mm/hwpoison.rst
+++ b/Documentation/mm/hwpoison.rst
@@ -48,7 +48,7 @@ of applications. KVM support requires a recent qemu-kvm release.
 For the KVM use there was need for a new signal type so that
 KVM can inject the machine check into the guest with the proper
 address. This in theory allows other applications to handle
-memory failures too. The expection is that near all applications
+memory failures too. The expectation is that most applications
 won't do that, but some very specialized ones might.
 
 Failure recovery modes
diff --git a/Documentation/mm/page_migration.rst b/Documentation/mm/page_migration.rst
index e35af7805be5..f1ce67a26615 100644
--- a/Documentation/mm/page_migration.rst
+++ b/Documentation/mm/page_migration.rst
@@ -180,7 +180,7 @@ The following events (counters) can be used to monitor page migration.
 4. THP_MIGRATION_FAIL: A THP could not be migrated nor it could be split.
 
 5. THP_MIGRATION_SPLIT: A THP was migrated, but not as such: first, the THP had
-   to be split. After splitting, a migration retry was used for it's sub-pages.
+   to be split. After splitting, a migration retry was used for its sub-pages.
 
 THP_MIGRATION_* events also update the appropriate PGMIGRATE_SUCCESS or
 PGMIGRATE_FAIL events. For example, a THP migration failure will cause both
diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index d5ac8511eb67..67f1338440a5 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -463,7 +463,7 @@ can request that a region of memory be mlocked by supplying the MAP_LOCKED flag
 to the mmap() call. There is one important and subtle difference here, though.
 mmap() + mlock() will fail if the range cannot be faulted in (e.g. because
 mm_populate fails) and returns with ENOMEM while mmap(MAP_LOCKED) will not fail.
-The mmaped area will still have properties of the locked area - pages will not
+The mmapped area will still have properties of the locked area - pages will not
 get swapped out - but major page faults to fault memory in might still happen.
 
 Furthermore, any mmap() call or brk() call that expands the heap by a task
diff --git a/Documentation/mm/vmemmap_dedup.rst b/Documentation/mm/vmemmap_dedup.rst
index c573e08b5043..59891f72420e 100644
--- a/Documentation/mm/vmemmap_dedup.rst
+++ b/Documentation/mm/vmemmap_dedup.rst
@@ -1,3 +1,4 @@
+
 .. SPDX-License-Identifier: GPL-2.0
 
 =========================================
@@ -10,14 +11,14 @@ HugeTLB
 This section is to explain how HugeTLB Vmemmap Optimization (HVO) works.
 
 The ``struct page`` structures are used to describe a physical page frame. By
-default, there is a one-to-one mapping from a page frame to it's corresponding
+default, there is a one-to-one mapping from a page frame to its corresponding
 ``struct page``.
 
 HugeTLB pages consist of multiple base page size pages and is supported by many
 architectures. See Documentation/admin-guide/mm/hugetlbpage.rst for more
 details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB are
 currently supported. Since the base page size on x86 is 4KB, a 2MB HugeTLB page
-consists of 512 base pages and a 1GB HugeTLB page consists of 4096 base pages.
+consists of 512 base pages and a 1GB HugeTLB page consists of 262144 base pages.
 For each base page, there is a corresponding ``struct page``.
 
 Within the HugeTLB subsystem, only the first 4 ``struct page`` are used to
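The highmem.rst hunks above revolve around the kmap_local_page() / kmap_local_folio() / kunmap_local() interface. As a purely illustrative sketch (not part of the patch; the helper names are invented and error handling is omitted), the short-term, CPU-local mapping pattern the updated text describes looks roughly like this:

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

/* Invented helper: zero one page via a short-term local mapping. */
static void example_zero_page(struct page *page)
{
	void *vaddr;

	/* Map the page; the mapping is only valid in this context. */
	vaddr = kmap_local_page(page);
	memset(vaddr, 0, PAGE_SIZE);
	/* Unmap in the same context that created the mapping. */
	kunmap_local(vaddr);
}

/* Invented helper: zero the page at @offset bytes into @folio. */
static void example_zero_folio_page(struct folio *folio, size_t offset)
{
	/*
	 * kmap_local_folio() takes the folio plus the byte offset that
	 * identifies the page within the folio.
	 */
	void *vaddr = kmap_local_folio(folio, offset);

	memset(vaddr, 0, PAGE_SIZE);
	kunmap_local(vaddr);
}

As the documentation notes, the mapping is created and released in the same context, and nested local mappings must be unmapped in reverse order. For the vmemmap_dedup.rst hunk, the corrected figure follows from 1GB / 4KB = 262144 base pages per 1GB HugeTLB page.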