From e4c9eabc931b2a6c80fcdece5eb5051d8aec92f8 Mon Sep 17 00:00:00 2001
From: "Fabio M. De Francesco"
Date: Sat, 8 Jul 2023 14:16:18 +0200
Subject: Documentation/highmem: Add information about kmap_local_folio()

The only difference between kmap_local_page() and kmap_local_folio() is
that the first takes a pointer to a page, while the second takes two
arguments: a pointer to a folio and the byte offset within the folio
which identifies the page. The two APIs can therefore be explained in
the same place, in the "Temporary Virtual Mappings" section of the
highmem documentation.

Add information about kmap_local_folio() to the same subsection that
explains kmap_local_page().

Cc: Andrew Morton
Reviewed-by: Ira Weiny
Reviewed-by: Mike Rapoport (IBM)
Signed-off-by: Fabio M. De Francesco
Signed-off-by: Jonathan Corbet
Link: https://lore.kernel.org/r/20230708121719.8270-1-fmdefrancesco@gmail.com
---
 Documentation/mm/highmem.rst | 27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

(limited to 'Documentation/mm')

diff --git a/Documentation/mm/highmem.rst b/Documentation/mm/highmem.rst
index c964e0848702..fe68e02fc8ff 100644
--- a/Documentation/mm/highmem.rst
+++ b/Documentation/mm/highmem.rst
@@ -51,11 +51,14 @@ Temporary Virtual Mappings
 The kernel contains several ways of creating temporary mappings. The following
 list shows them in order of preference of use.
 
-* kmap_local_page(). This function is used to require short term mappings.
-  It can be invoked from any context (including interrupts) but the mappings
-  can only be used in the context which acquired them.
-
-  This function should always be used, whereas kmap_atomic() and kmap() have
+* kmap_local_page(), kmap_local_folio() - These functions are used to create
+  short term mappings. They can be invoked from any context (including
+  interrupts) but the mappings can only be used in the context which acquired
+  them. The only differences between them consist in the first taking a pointer
+  to a struct page and the second taking a pointer to struct folio and the byte
+  offset within the folio which identifies the page.
+
+  These functions should always be used, whereas kmap_atomic() and kmap() have
   been deprecated.
 
   These mappings are thread-local and CPU-local, meaning that the mapping
@@ -72,17 +75,17 @@ list shows them in order of preference of use.
   maps of the outgoing task are saved and those of the incoming one are
   restored.
 
-  kmap_local_page() always returns a valid virtual address and it is assumed
-  that kunmap_local() will never fail.
+  kmap_local_page(), as well as kmap_local_folio() always returns valid virtual
+  kernel addresses and it is assumed that kunmap_local() will never fail.
 
-  On CONFIG_HIGHMEM=n kernels and for low memory pages this returns the
+  On CONFIG_HIGHMEM=n kernels and for low memory pages they return the
   virtual address of the direct mapping. Only real highmem pages are
   temporarily mapped. Therefore, users may call a plain page_address()
   for pages which are known to not come from ZONE_HIGHMEM. However, it is
-  always safe to use kmap_local_page() / kunmap_local().
+  always safe to use kmap_local_{page,folio}() / kunmap_local().
 
-  While it is significantly faster than kmap(), for the highmem case it
-  comes with restrictions about the pointers validity. Contrary to kmap()
+  While they are significantly faster than kmap(), for the highmem case they
+  come with restrictions about the pointers validity. Contrary to kmap()
   mappings, the local mappings are only valid in the context of the caller
   and cannot be handed to other contexts. This implies that users must
   be absolutely sure to keep the use of the return address local to the
@@ -91,7 +94,7 @@ list shows them in order of preference of use.
   Most code can be designed to use thread local mappings. User should
   therefore try to design their code to avoid the use of kmap() by mapping
   pages in the same thread the address will be used and prefer
-  kmap_local_page().
+  kmap_local_page() or kmap_local_folio().
 
   Nesting kmap_local_page() and kmap_atomic() mappings is allowed to a certain
   extent (up to KMAP_TYPE_NR) but their invocations have to be strictly ordered
--
cgit v1.2.3
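
For reference, the mapping API the patch above documents can be exercised
like this: a minimal sketch, assuming kernel context. The two copy helpers,
their names and the caller-supplied buffer are invented for illustration;
kmap_local_page(), kmap_local_folio() and kunmap_local() are the interfaces
declared in include/linux/highmem.h::

  #include <linux/highmem.h>
  #include <linux/string.h>

  /* Hypothetical helper: copy len bytes (assumed not to cross a page
   * boundary) out of a possibly-highmem page. */
  static void copy_from_page_example(struct page *page, void *buf, size_t len)
  {
          void *vaddr = kmap_local_page(page);  /* always a valid address */

          memcpy(buf, vaddr, len);  /* pointer only valid in this context */
          kunmap_local(vaddr);      /* assumed never to fail */
  }

  /* Same operation via the folio variant: the second argument is the byte
   * offset within the folio which identifies the page. */
  static void copy_from_folio_example(struct folio *folio, size_t offset,
                                      void *buf, size_t len)
  {
          void *vaddr = kmap_local_folio(folio, offset);

          memcpy(buf, vaddr, len);
          kunmap_local(vaddr);
  }
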
From 17b6fc88eb31a10726b702cf07ea998e9828d478 Mon Sep 17 00:00:00 2001
From: Usama Arif
Date: Tue, 7 Feb 2023 11:44:56 +0000
Subject: docs: mm: Fix number of base pages for 1GB HugeTLB

1GB HugeTLB page consists of 262144 base pages.

Signed-off-by: Usama Arif
Reviewed-by: David Hildenbrand
Acked-by: Mike Rapoport (IBM)
Acked-by: Muchun Song
Signed-off-by: Jonathan Corbet
Link: https://lore.kernel.org/r/20230207114456.2304801-1-usama.arif@bytedance.com
---
 Documentation/mm/vmemmap_dedup.rst | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

(limited to 'Documentation/mm')

diff --git a/Documentation/mm/vmemmap_dedup.rst b/Documentation/mm/vmemmap_dedup.rst
index a4b12ff906c4..689a6907c70b 100644
--- a/Documentation/mm/vmemmap_dedup.rst
+++ b/Documentation/mm/vmemmap_dedup.rst
@@ -1,3 +1,4 @@
+
 .. SPDX-License-Identifier: GPL-2.0
 
 =========================================
@@ -17,7 +18,7 @@ HugeTLB pages consist of multiple base page size pages and is supported by many
 architectures. See Documentation/admin-guide/mm/hugetlbpage.rst for more
 details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB are
 currently supported. Since the base page size on x86 is 4KB, a 2MB HugeTLB page
-consists of 512 base pages and a 1GB HugeTLB page consists of 4096 base pages.
+consists of 512 base pages and a 1GB HugeTLB page consists of 262144 base pages.
 For each base page, there is a corresponding ``struct page``.
 
 Within the HugeTLB subsystem, only the first 4 ``struct page`` are used to
--
cgit v1.2.3
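
The corrected count follows directly from the page sizes involved: with a
4KB base page, a 2MB HugeTLB page spans 2MB / 4KB = 512 base pages, and a
1GB HugeTLB page spans 1GB / 4KB = 262144 base pages; the old figure of
4096 would correspond to a 16MB page. A trivial userspace check of the
arithmetic::

  #include <assert.h>

  int main(void)
  {
          unsigned long base_page = 1UL << 12;   /* 4KB x86 base page */
          unsigned long huge_2m   = 1UL << 21;   /* 2MB HugeTLB page */
          unsigned long huge_1g   = 1UL << 30;   /* 1GB HugeTLB page */

          assert(huge_2m / base_page == 512);
          assert(huge_1g / base_page == 262144); /* the corrected figure */
          return 0;
  }
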
From d56b699d76d1b352f7a3d3a0a3e91c79b8612d94 Mon Sep 17 00:00:00 2001
From: Bjorn Helgaas
Date: Mon, 14 Aug 2023 16:28:22 -0500
Subject: Documentation: Fix typos

Fix typos in Documentation.

Signed-off-by: Bjorn Helgaas
Link: https://lore.kernel.org/r/20230814212822.193684-4-helgaas@kernel.org
Signed-off-by: Jonathan Corbet
---
 Documentation/mm/hmm.rst             | 2 +-
 Documentation/mm/hwpoison.rst        | 2 +-
 Documentation/mm/page_migration.rst  | 2 +-
 Documentation/mm/unevictable-lru.rst | 2 +-
 Documentation/mm/vmemmap_dedup.rst   | 2 +-
 5 files changed, 5 insertions(+), 5 deletions(-)

(limited to 'Documentation/mm')

diff --git a/Documentation/mm/hmm.rst b/Documentation/mm/hmm.rst
index 9aa512c3a12c..fec21e6f2284 100644
--- a/Documentation/mm/hmm.rst
+++ b/Documentation/mm/hmm.rst
@@ -417,7 +417,7 @@ entries. Any attempt to access the swap entry results in a fault which is
 resovled by replacing the entry with the original mapping. A driver gets
 notified that the mapping has been changed by MMU notifiers, after which point
 it will no longer have exclusive access to the page. Exclusive access is
-guranteed to last until the driver drops the page lock and page reference, at
+guaranteed to last until the driver drops the page lock and page reference, at
 which point any CPU faults on the page may proceed as described.
 
 Memory cgroup (memcg) and rss accounting
diff --git a/Documentation/mm/hwpoison.rst b/Documentation/mm/hwpoison.rst
index ba48a441feed..483b72aa7c11 100644
--- a/Documentation/mm/hwpoison.rst
+++ b/Documentation/mm/hwpoison.rst
@@ -48,7 +48,7 @@ of applications. KVM support requires a recent qemu-kvm release.
 For the KVM use there was need for a new signal type so that
 KVM can inject the machine check into the guest with the proper
 address. This in theory allows other applications to handle
-memory failures too. The expection is that near all applications
+memory failures too. The expectation is that most applications
 won't do that, but some very specialized ones might.
 
 Failure recovery modes
diff --git a/Documentation/mm/page_migration.rst b/Documentation/mm/page_migration.rst
index e35af7805be5..f1ce67a26615 100644
--- a/Documentation/mm/page_migration.rst
+++ b/Documentation/mm/page_migration.rst
@@ -180,7 +180,7 @@ The following events (counters) can be used to monitor page migration.
 4. THP_MIGRATION_FAIL: A THP could not be migrated nor it could be split.
 
 5. THP_MIGRATION_SPLIT: A THP was migrated, but not as such: first, the THP had
-   to be split. After splitting, a migration retry was used for it's sub-pages.
+   to be split. After splitting, a migration retry was used for its sub-pages.
 
 THP_MIGRATION_* events also update the appropriate PGMIGRATE_SUCCESS or
 PGMIGRATE_FAIL events. For example, a THP migration failure will cause both
diff --git a/Documentation/mm/unevictable-lru.rst b/Documentation/mm/unevictable-lru.rst
index d5ac8511eb67..67f1338440a5 100644
--- a/Documentation/mm/unevictable-lru.rst
+++ b/Documentation/mm/unevictable-lru.rst
@@ -463,7 +463,7 @@ can request that a region of memory be mlocked by supplying the MAP_LOCKED flag
 to the mmap() call. There is one important and subtle difference here, though.
 mmap() + mlock() will fail if the range cannot be faulted in (e.g. because
 mm_populate fails) and returns with ENOMEM while mmap(MAP_LOCKED) will not fail.
-The mmaped area will still have properties of the locked area - pages will not
+The mmapped area will still have properties of the locked area - pages will not
 get swapped out - but major page faults to fault memory in might still happen.
 
 Furthermore, any mmap() call or brk() call that expands the heap by a task
diff --git a/Documentation/mm/vmemmap_dedup.rst b/Documentation/mm/vmemmap_dedup.rst
index 689a6907c70b..21f159b8afbe 100644
--- a/Documentation/mm/vmemmap_dedup.rst
+++ b/Documentation/mm/vmemmap_dedup.rst
@@ -11,7 +11,7 @@ HugeTLB
 This section is to explain how HugeTLB Vmemmap Optimization (HVO) works.
 
 The ``struct page`` structures are used to describe a physical page frame. By
-default, there is a one-to-one mapping from a page frame to it's corresponding
+default, there is a one-to-one mapping from a page frame to its corresponding
 ``struct page``.
 
 HugeTLB pages consist of multiple base page size pages and is supported by many
--
cgit v1.2.3
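
The unevictable-lru.rst context quoted in the patch above describes a
behavioral difference that is easy to miss. A minimal userspace sketch of
the two variants it contrasts (the region length is arbitrary; mmap() and
mlock() are the standard Linux interfaces)::

  #include <sys/mman.h>
  #include <stdio.h>

  #define LEN (1UL << 20)  /* arbitrary 1MB region */

  int main(void)
  {
          /* Variant 1: mmap() + mlock(). mlock() returns -1 with ENOMEM
           * if the range cannot be faulted in, so the caller learns that
           * population failed. */
          void *a = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
          if (a != MAP_FAILED && mlock(a, LEN) != 0)
                  perror("mlock");

          /* Variant 2: mmap(MAP_LOCKED) does not fail on population
           * errors. Pages still won't be swapped out, but major faults
           * may still occur later. */
          void *b = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_LOCKED, -1, 0);
          if (b == MAP_FAILED)
                  perror("mmap(MAP_LOCKED)");
          return 0;
  }
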
From 090a7f1009b8447565a03b649189e6ff83e8e5e7 Mon Sep 17 00:00:00 2001
From: Marco Pagani
Date: Fri, 25 Aug 2023 15:35:46 +0200
Subject: docs/mm: remove references to hmm_mirror ops and clean typos
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Clean typos and remove the reference to the sync_cpu_device_pagetables()
callback since all hmm_mirror ops have been removed.

Fixes: a22dd506400d ("mm/hmm: remove hmm_mirror and related")
Signed-off-by: Marco Pagani
Reviewed-by: Mika Penttilä
Reviewed-by: Jason Gunthorpe
Signed-off-by: Jonathan Corbet
Link: https://lore.kernel.org/r/20230825133546.249683-1-marpagan@redhat.com
---
 Documentation/mm/hmm.rst | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

(limited to 'Documentation/mm')

diff --git a/Documentation/mm/hmm.rst b/Documentation/mm/hmm.rst
index fec21e6f2284..0595098a74d9 100644
--- a/Documentation/mm/hmm.rst
+++ b/Documentation/mm/hmm.rst
@@ -163,16 +163,7 @@ use::
 
 It will trigger a page fault on missing or read-only entries if write access is
 requested (see below). Page faults use the generic mm page fault code path just
-like a CPU page fault.
-
-Both functions copy CPU page table entries into their pfns array argument. Each
-entry in that array corresponds to an address in the virtual range. HMM
-provides a set of flags to help the driver identify special CPU page table
-entries.
-
-Locking within the sync_cpu_device_pagetables() callback is the most important
-aspect the driver must respect in order to keep things properly synchronized.
-The usage pattern is::
+like a CPU page fault. The usage pattern is::
 
   int driver_populate_range(...)
   {
--
cgit v1.2.3
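
The "usage pattern" the last hunk leads into is the hmm_range_fault() retry
loop documented in hmm.rst. A condensed, annotated sketch follows: struct
driver_dev and its mm/update_lock fields are placeholders for whatever state
a real driver keeps, while hmm_range, hmm_range_fault(), the mmu_interval_*
helpers and mmap_read_lock()/mmap_read_unlock() are the real kernel
interfaces::

  #include <linux/hmm.h>
  #include <linux/mmap_lock.h>
  #include <linux/mmu_notifier.h>
  #include <linux/mutex.h>

  struct driver_dev {                     /* hypothetical driver state */
          struct mm_struct *mm;
          struct mutex update_lock;       /* serializes device PT updates */
  };

  static int driver_populate_range(struct driver_dev *dev,
                                   struct mmu_interval_notifier *sub,
                                   unsigned long start, unsigned long end,
                                   unsigned long *pfns)
  {
          struct hmm_range range = {
                  .notifier = sub,
                  .start    = start,
                  .end      = end,
                  .hmm_pfns = pfns,
          };
          int ret;

  again:
          /* Snapshot the notifier sequence before walking the CPU page
           * tables. */
          range.notifier_seq = mmu_interval_read_begin(sub);
          mmap_read_lock(dev->mm);
          ret = hmm_range_fault(&range);
          mmap_read_unlock(dev->mm);
          if (ret) {
                  if (ret == -EBUSY)
                          goto again;     /* invalidation raced with the walk */
                  return ret;
          }

          mutex_lock(&dev->update_lock);
          /* If an invalidation ran since read_begin(), the pfns are stale. */
          if (mmu_interval_read_retry(sub, range.notifier_seq)) {
                  mutex_unlock(&dev->update_lock);
                  goto again;
          }
          /* Safe point: program the device page tables from pfns[] here. */
          mutex_unlock(&dev->update_lock);
          return 0;
  }
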