path: root/Documentation/mm
Diffstat (limited to 'Documentation/mm')
-rw-r--r--   Documentation/mm/balance.rst                                      2
-rw-r--r--   Documentation/mm/damon/design.rst                               123
-rw-r--r--   Documentation/mm/damon/monitoring_intervals_tuning_example.rst     8
-rw-r--r--   Documentation/mm/hmm.rst                                          2
-rw-r--r--   Documentation/mm/index.rst                                        1
-rw-r--r--   Documentation/mm/physical_memory.rst                            266
-rw-r--r--   Documentation/mm/process_addrs.rst                               44
-rw-r--r--   Documentation/mm/split_page_table_lock.rst                        2
-rw-r--r--   Documentation/mm/transhuge.rst                                   39
-rw-r--r--   Documentation/mm/z3fold.rst                                      28
-rw-r--r--   Documentation/mm/zsmalloc.rst                                     5
11 files changed, 417 insertions, 103 deletions
diff --git a/Documentation/mm/balance.rst b/Documentation/mm/balance.rst
index abaa78561c31..c4962c89a7f5 100644
--- a/Documentation/mm/balance.rst
+++ b/Documentation/mm/balance.rst
@@ -81,7 +81,7 @@ Page stealing from process memory and shm is done if stealing the page would
alleviate memory pressure on any zone in the page's node that has fallen below
its watermark.
-watemark[WMARK_MIN/WMARK_LOW/WMARK_HIGH]/low_on_memory/zone_wake_kswapd: These
+watermark[WMARK_MIN/WMARK_LOW/WMARK_HIGH]/low_on_memory/zone_wake_kswapd: These
are per-zone fields, used to determine when a zone needs to be balanced. When
the number of pages falls below watermark[WMARK_MIN], the hysteric field
low_on_memory gets set. This stays set till the number of free pages becomes
diff --git a/Documentation/mm/damon/design.rst b/Documentation/mm/damon/design.rst
index e28c6a1b40ae..f12d33749329 100644
--- a/Documentation/mm/damon/design.rst
+++ b/Documentation/mm/damon/design.rst
@@ -313,6 +313,10 @@ sufficient for the given purpose, it shouldn't be unnecessarily further
lowered. It is recommended to be set proportional to ``aggregation interval``.
By default, the ratio is set as ``1/20``, and it is still recommended.
+Based on the manual tuning guide, DAMON provides a more intuitive, knob-based
+intervals auto-tuning mechanism. Please refer to :ref:`the design document of
+the feature <damon_design_monitoring_intervals_autotuning>` for details.
+
Refer to below documents for an example tuning based on the above guide.
.. toctree::
@@ -321,6 +325,52 @@ Refer to below documents for an example tuning based on the above guide.
monitoring_intervals_tuning_example
+.. _damon_design_monitoring_intervals_autotuning:
+
+Monitoring Intervals Auto-tuning
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+DAMON provides automatic tuning of the ``sampling interval`` and ``aggregation
+interval`` based on the :ref:`tuning guide idea
+<damon_design_monitoring_params_tuning_guide>`. The tuning mechanism allows
+users to set the desired amount of access events for DAMON to observe within a
+given time interval. The target can be specified by the user as a ratio of
+DAMON-observed access events to the theoretical maximum amount of such events
+(``access_bp``), measured over a given number of aggregations (``aggrs``).
+
+The DAMON-observed access events are calculated at byte granularity, based on
+the DAMON :ref:`region assumption <damon_design_region_based_sampling>`. For
+example, if a region of size ``X`` bytes with ``nr_accesses`` of ``Y`` is
+found, it means ``X * Y`` access events were observed by DAMON. The
+theoretical maximum access events for the region are calculated in the same
+way, but with ``Y`` replaced by the theoretical maximum ``nr_accesses``, which
+is ``aggregation interval / sampling interval``.
+
+The mechanism calculates the ratio of access events over ``aggrs``
+aggregations, and increases or decreases the ``sampling interval`` and
+``aggregation interval`` by the same ratio, if the observed access ratio is
+lower or higher than the target, respectively. The size of the intervals
+change is decided in proportion to the distance between the current access
+events ratio and the target ratio.
+
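+As a purely hypothetical illustration of the idea (the numbers are made up and
+this is not the exact in-kernel arithmetic), assume a 5 ms ``sampling
+interval``, a 100 ms ``aggregation interval``, and a single region of 100 MiB
+that is found with ``nr_accesses`` of 2::
+
+    theoretical maximum nr_accesses:  100 ms / 5 ms  = 20
+    observed access events:           100 MiB * 2    = 200 MiB
+    maximum access events:            100 MiB * 20   = 2000 MiB
+    observed access ratio:            200 / 2000     = 10% (1000 bp)
+
+If the target (``access_bp``) were 400 (4%), the observed ratio is higher than
+the target, so the intervals would be decreased; had the observed ratio been
+below 4%, the intervals would have been increased.
+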
+The user can further set the minimum and maximum ``sampling interval`` that
+the tuning mechanism is allowed to use, via two parameters (``min_sample_us``
+and ``max_sample_us``). Because the tuning mechanism always changes ``sampling
+interval`` and ``aggregation interval`` by the same ratio, these limits
+effectively bound the ``aggregation interval`` resulting from each tuning
+change as well.
+
+The tuning is turned off by default, and needs to be enabled explicitly by the
+user. As a rule of thumb following the Pareto principle, a 4% access samples
+ratio target is recommended. Note that the Pareto principle (80/20 rule) is
+applied twice here. That is, a 4% (20% of 20%) DAMON-observed access events
+ratio (source) is assumed to capture 64% (80% multiplied by 80%) of the real
+access events (outcomes).
+
+To know how user-space can use this feature via :ref:`DAMON sysfs interface
+<sysfs_interface>`, refer to :ref:`intervals_goal <sysfs_scheme>` part of
+the documentation.
+
+
.. _damon_design_damos:
Operation Schemes
@@ -569,11 +619,22 @@ number of filters for each scheme. Each filter specifies
- whether it is to allow (include) or reject (exclude) applying
the scheme's action to the memory (``allow``).
-When multiple filters are installed, each filter is evaluated in the installed
-order. If a part of memory is matched to one of the filter, next filters are
-ignored. If the memory passes through the filters evaluation stage because it
-is not matched to any of the filters, applying the scheme's action to it is
-allowed, same to the behavior when no filter exists.
+For efficient handling of filters, some types of filters are handled by the
+core layer, while others are handled by the operations set. Support of the
+latter filter types therefore depends on the DAMON operations set in use. For
+core layer-handled filters, memory regions that are excluded by a filter are
+not counted as regions the scheme has tried to apply its action to. In
+contrast, if a memory region is filtered out by an operations set
+layer-handled filter, it is counted as tried. This difference affects the
+statistics.
+
+When multiple filters are installed, the group of filters handled by the core
+layer is evaluated first. After that, the group of filters handled by the
+operations layer is evaluated. Filters in each group are evaluated in the
+order they were installed. If a part of memory matches one of the filters,
+the remaining filters are ignored. If the part passes through the filters
+evaluation stage because it matches none of the filters, whether the scheme's
+action is applied to it depends on the last filter's allowance type. If the
+last filter was an allow filter, the part of memory will be rejected, and vice
+versa.
For example, let's assume 1) a filter for allowing anonymous pages and 2)
another filter for rejecting young pages are installed in the order. If a page
@@ -585,39 +646,29 @@ second reject-filter blocks it. If the page is neither anonymous nor young,
the page will pass through the filters evaluation stage since there is no
matching filter, and the action will be applied to the page.
-Note that the action can equally be applied to memory that either explicitly
-filter-allowed or filters evaluation stage passed. It means that installing
-allow-filters at the end of the list makes no practical change but only
-filters-checking overhead.
-
-For efficient handling of filters, some types of filters are handled by the
-core layer, while others are handled by operations set. In the latter case,
-hence, support of the filter types depends on the DAMON operations set. In
-case of the core layer-handled filters, the memory regions that excluded by the
-filter are not counted as the scheme has tried to the region. In contrast, if
-a memory regions is filtered by an operations set layer-handled filter, it is
-counted as the scheme has tried. This difference affects the statistics.
-
Below ``type`` of filters are currently supported.
-- anonymous page
- - Applied to pages that containing data that not stored in files.
- - Handled by operations set layer. Supported by only ``paddr`` set.
-- memory cgroup
- - Applied to pages that belonging to a given cgroup.
- - Handled by operations set layer. Supported by only ``paddr`` set.
-- young page
- - Applied to pages that are accessed after the last access check from the
- scheme.
- - Handled by operations set layer. Supported by only ``paddr`` set.
-- address range
- - Applied to pages that belonging to a given address range.
- - Handled by the core logic.
-- DAMON monitoring target
- - Applied to pages that belonging to a given DAMON monitoring target.
- - Handled by the core logic.
-
-To know how user-space can set the watermarks via :ref:`DAMON sysfs interface
+- Core layer handled
+  - addr
+    - Applied to pages that belong to a given address range.
+  - target
+    - Applied to pages that belong to a given DAMON monitoring target.
+- Operations layer handled, supported only by the ``paddr`` operations set.
+  - anon
+    - Applied to pages containing data that is not stored in files.
+  - active
+    - Applied to active pages.
+  - memcg
+    - Applied to pages that belong to a given cgroup.
+  - young
+    - Applied to pages that have been accessed after the last access check
+      from the scheme.
+  - hugepage_size
+    - Applied to pages that are managed in a given size range.
+  - unmapped
+    - Applied to pages that are unmapped.
+
+To know how user-space can set the filters via :ref:`DAMON sysfs interface
<sysfs_interface>`, refer to :ref:`filters <sysfs_filters>` part of the
documentation.
diff --git a/Documentation/mm/damon/monitoring_intervals_tuning_example.rst b/Documentation/mm/damon/monitoring_intervals_tuning_example.rst
index 334a854efb40..7207cbed591f 100644
--- a/Documentation/mm/damon/monitoring_intervals_tuning_example.rst
+++ b/Documentation/mm/damon/monitoring_intervals_tuning_example.rst
@@ -36,7 +36,7 @@ Then, list the DAMON-found regions of different access patterns, sorted by the
"access temperature". "Access temperature" is a metric representing the
access-hotness of a region. It is calculated as a weighted sum of the access
frequency and the age of the region. If the access frequency is 0 %, the
-temperature is multipled by minus one. That is, if a region is not accessed,
+temperature is multiplied by minus one. That is, if a region is not accessed,
it gets minus temperature and it gets lower as not accessed for longer time.
The sorting is in temperature-ascending order, so the region at the top of the
list is the coldest, and the one at the bottom is the hottest one. ::
@@ -58,11 +58,11 @@ list is the coldest, and the one at the bottom is the hottest one. ::
The list shows no seemingly hot regions, and only minimal access pattern
diversity. Every region has zero access frequency. The number of regions is
10, which is the default ``min_nr_regions`` value. Size of each region is also
-nearly idential. We can suspect this is because “adaptive regions adjustment”
+nearly identical. We can suspect this is because “adaptive regions adjustment”
mechanism was not well working. As the guide suggested, we can get relative
hotness of regions using ``age`` as the recency information. That would be
better than nothing, but given the fact that the longest age is only about 6
-seconds while we waited about ten minuts, it is unclear how useful this will
+seconds while we waited about ten minutes, it is unclear how useful this will
be.
The temperature ranges to total size of regions of each range histogram
@@ -190,7 +190,7 @@ for sampling and aggregation intervals, respectively). ::
The number of regions having different access patterns has significantly
increased. Size of each region is also more varied. Total size of non-zero
access frequency regions is also significantly increased. Maybe this is already
-good enough to make some meaningful memory management efficieny changes.
+good enough to make some meaningful memory management efficiency changes.
800ms/16s intervals: Another bias
=================================
diff --git a/Documentation/mm/hmm.rst b/Documentation/mm/hmm.rst
index f6d53c37a2ca..7d61b7a8b65b 100644
--- a/Documentation/mm/hmm.rst
+++ b/Documentation/mm/hmm.rst
@@ -400,7 +400,7 @@ Exclusive access memory
Some devices have features such as atomic PTE bits that can be used to implement
atomic access to system memory. To support atomic operations to a shared virtual
memory page such a device needs access to that page which is exclusive of any
-userspace access from the CPU. The ``make_device_exclusive_range()`` function
+userspace access from the CPU. The ``make_device_exclusive()`` function
can be used to make a memory range inaccessible from userspace.
This replaces all mappings for pages in the given range with special swap
diff --git a/Documentation/mm/index.rst b/Documentation/mm/index.rst
index 0be1c7503a01..d3ada3e45e10 100644
--- a/Documentation/mm/index.rst
+++ b/Documentation/mm/index.rst
@@ -62,5 +62,4 @@ documentation, or deleted if it has served its purpose.
unevictable-lru
vmalloced-kernel-stacks
vmemmap_dedup
- z3fold
zsmalloc
diff --git a/Documentation/mm/physical_memory.rst b/Documentation/mm/physical_memory.rst
index 71fd4a6acf42..d3ac106e6b14 100644
--- a/Documentation/mm/physical_memory.rst
+++ b/Documentation/mm/physical_memory.rst
@@ -338,10 +338,272 @@ Statistics
Zones
=====
+As we have mentioned, each zone in memory is described by a ``struct zone``
+which is an element of the ``node_zones`` array of the node it belongs to.
+``struct zone`` is the core data structure of the page allocator. A zone
+represents a range of physical memory and may have holes.
+
+The page allocator uses the GFP flags, see :ref:`mm-api-gfp-flags`, specified
+by a memory allocation to determine the highest zone in a node from which the
+allocation can be satisfied. The page allocator first tries to allocate memory
+from that zone; if it can't allocate the requested amount of memory from the
+zone, it falls back to the next lower zone in the node, and the process
+continues down to and including the lowest zone. For example, if a node
+contains ``ZONE_DMA32``, ``ZONE_NORMAL`` and ``ZONE_MOVABLE`` and the highest
+zone of a memory allocation is ``ZONE_MOVABLE``, the order of the zones from
+which the page allocator allocates memory is ``ZONE_MOVABLE`` >
+``ZONE_NORMAL`` > ``ZONE_DMA32``.
+
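+A minimal sketch (kernel context assumed; the helper name is hypothetical) of
+how the GFP flags of an allocation select the highest allowed zone, and of an
+allocation that is then satisfied by walking the zones downwards as described
+above::
+
+    #include <linux/gfp.h>
+    #include <linux/mmzone.h>
+
+    static struct page *movable_alloc_example(void)
+    {
+            /* gfp_zone() maps the GFP flags to the highest allowed zone. */
+            if (gfp_zone(GFP_HIGHUSER_MOVABLE) != ZONE_MOVABLE)
+                    return NULL;
+
+            /*
+             * The page allocator tries the zones of the preferred node from
+             * the highest allowed one downwards, e.g. ZONE_MOVABLE, then
+             * ZONE_NORMAL, then ZONE_DMA32 for the example node above.
+             */
+            return alloc_pages(GFP_HIGHUSER_MOVABLE, 0);
+    }
+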
+At runtime, free pages in a zone are in the Per-CPU Pagesets (PCP) or free areas
+of the zone. The Per-CPU Pagesets are a vital mechanism in the kernel's memory
+management system. By handling most frequent allocations and frees locally on
+each CPU, the Per-CPU Pagesets improve performance and scalability, especially
+on systems with many cores. The page allocator in the kernel employs a two-step
+strategy for memory allocation, starting with the Per-CPU Pagesets before
+falling back to the buddy allocator. Pages are transferred between the Per-CPU
+Pagesets and the global free areas (managed by the buddy allocator) in batches.
+This minimizes the overhead of frequent interactions with the global buddy
+allocator.
+
+Architecture-specific code calls free_area_init() to initialize zones.
+
+Zone structure
+--------------
+The zone structure, ``struct zone``, is defined in ``include/linux/mmzone.h``.
+Here we briefly describe the fields of this structure:
-.. admonition:: Stub
+General
+~~~~~~~
- This section is incomplete. Please list and describe the appropriate fields.
+``_watermark``
+  The watermarks for this zone. When the amount of free pages in a zone is
+  below the min watermark, watermark boosting is ignored, an allocation may
+  trigger direct reclaim and direct compaction, and the min watermark is also
+  used to throttle direct reclaim.
+ When the amount of free pages in a zone is below the low watermark, kswapd is
+ woken up. When the amount of free pages in a zone is above the high watermark,
+ kswapd stops reclaiming (a zone is balanced) when the
+ ``NUMA_BALANCING_MEMORY_TIERING`` bit of ``sysctl_numa_balancing_mode`` is not
+ set. The promo watermark is used for memory tiering and NUMA balancing. When
+ the amount of free pages in a zone is above the promo watermark, kswapd stops
+ reclaiming when the ``NUMA_BALANCING_MEMORY_TIERING`` bit of
+ ``sysctl_numa_balancing_mode`` is set. The watermarks are set by
+ ``__setup_per_zone_wmarks()``. The min watermark is calculated according to
+ ``vm.min_free_kbytes`` sysctl. The other three watermarks are set according
+ to the distance between two watermarks. The distance itself is calculated
+ taking ``vm.watermark_scale_factor`` sysctl into account.
+
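+  A minimal sketch (kernel context assumed; the helper name is hypothetical
+  and ``alloc_flags`` details are omitted) of the kind of watermark check the
+  allocator performs, here asking whether a zone is above its low watermark
+  for an order-0 allocation::
+
+      #include <linux/mmzone.h>
+
+      static bool zone_above_low_wmark_example(struct zone *zone)
+      {
+              /*
+               * zone_watermark_ok() compares the zone's free pages against
+               * the given mark, also accounting for the lowmem_reserve of
+               * the requested highest zone index.
+               */
+              return zone_watermark_ok(zone, 0, low_wmark_pages(zone),
+                                       ZONE_NORMAL, 0);
+      }
+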
+``watermark_boost``
+  The number of pages used to boost watermarks in order to increase reclaim
+  pressure and reduce the likelihood of future fallbacks, and to wake kswapd
+  now, as the node may be balanced overall and kswapd would not otherwise wake
+  naturally.
+
+``nr_reserved_highatomic``
+ The number of pages which are reserved for high-order atomic allocations.
+
+``nr_free_highatomic``
+  The number of free pages in reserved highatomic pageblocks.
+
+``lowmem_reserve``
+ The array of the amounts of the memory reserved in this zone for memory
+ allocations. For example, if the highest zone a memory allocation can
+ allocate memory from is ``ZONE_MOVABLE``, the amount of memory reserved in
+ this zone for this allocation is ``lowmem_reserve[ZONE_MOVABLE]`` when
+ attempting to allocate memory from this zone. This is a mechanism the page
+ allocator uses to prevent allocations which could use ``highmem`` from using
+ too much ``lowmem``. For some specialised workloads on ``highmem`` machines,
+ it is dangerous for the kernel to allow process memory to be allocated from
+ the ``lowmem`` zone. This is because that memory could then be pinned via the
+ ``mlock()`` system call, or by unavailability of swapspace.
+ ``vm.lowmem_reserve_ratio`` sysctl determines how aggressive the kernel is in
+ defending these lower zones. This array is recalculated by
+ ``setup_per_zone_lowmem_reserve()`` at runtime if ``vm.lowmem_reserve_ratio``
+ sysctl changes.
+
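+  As a rough, hypothetical illustration (the numbers are made up): if
+  ``ZONE_NORMAL`` has ``lowmem_reserve[ZONE_MOVABLE] == 1024``, an allocation
+  whose highest allowed zone is ``ZONE_MOVABLE`` is satisfied from
+  ``ZONE_NORMAL`` only while the zone's free pages stay about 1024 pages above
+  the watermark being checked, while an allocation whose highest zone is
+  ``ZONE_NORMAL`` itself is checked against the watermark alone.
+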
+``node``
+ The index of the node this zone belongs to. Available only when
+  ``CONFIG_NUMA`` is enabled, because there is only one node in a UMA system.
+
+``zone_pgdat``
+ Pointer to the ``struct pglist_data`` of the node this zone belongs to.
+
+``per_cpu_pageset``
+ Pointer to the Per-CPU Pagesets (PCP) allocated and initialized by
+ ``setup_zone_pageset()``. By handling most frequent allocations and frees
+ locally on each CPU, PCP improves performance and scalability on systems with
+ many cores.
+
+``pageset_high_min``
+ Copied to the ``high_min`` of the Per-CPU Pagesets for faster access.
+
+``pageset_high_max``
+ Copied to the ``high_max`` of the Per-CPU Pagesets for faster access.
+
+``pageset_batch``
+ Copied to the ``batch`` of the Per-CPU Pagesets for faster access. The
+ ``batch``, ``high_min`` and ``high_max`` of the Per-CPU Pagesets are used to
+ calculate the number of elements the Per-CPU Pagesets obtain from the buddy
+ allocator under a single hold of the lock for efficiency. They are also used
+ to decide if the Per-CPU Pagesets return pages to the buddy allocator in page
+ free process.
+
+``pageblock_flags``
+ The pointer to the flags for the pageblocks in the zone (see
+ ``include/linux/pageblock-flags.h`` for flags list). The memory is allocated
+ in ``setup_usemap()``. Each pageblock occupies ``NR_PAGEBLOCK_BITS`` bits.
+  Defined only when ``CONFIG_FLATMEM`` is enabled. The flags are stored in
+  ``mem_section`` when ``CONFIG_SPARSEMEM`` is enabled.
+
+``zone_start_pfn``
+ The start pfn of the zone. It is initialized by
+ ``calculate_node_totalpages()``.
+
+``managed_pages``
+ The present pages managed by the buddy system, which is calculated as:
+ ``managed_pages`` = ``present_pages`` - ``reserved_pages``, ``reserved_pages``
+ includes pages allocated by the memblock allocator. It should be used by page
+ allocator and vm scanner to calculate all kinds of watermarks and thresholds.
+ It is accessed using ``atomic_long_xxx()`` functions. It is initialized in
+ ``free_area_init_core()`` and then is reinitialized when memblock allocator
+ frees pages into buddy system.
+
+``spanned_pages``
+ The total pages spanned by the zone, including holes, which is calculated as:
+ ``spanned_pages`` = ``zone_end_pfn`` - ``zone_start_pfn``. It is initialized
+ by ``calculate_node_totalpages()``.
+
+``present_pages``
+ The physical pages existing within the zone, which is calculated as:
+ ``present_pages`` = ``spanned_pages`` - ``absent_pages`` (pages in holes). It
+ may be used by memory hotplug or memory power management logic to figure out
+ unmanaged pages by checking (``present_pages`` - ``managed_pages``). Write
+ access to ``present_pages`` at runtime should be protected by
+  ``mem_hotplug_begin/done()``. Any reader who can't tolerate drift of
+ ``present_pages`` should use ``get_online_mems()`` to get a stable value. It
+ is initialized by ``calculate_node_totalpages()``.
+
+``present_early_pages``
+ The present pages existing within the zone located on memory available since
+ early boot, excluding hotplugged memory. Defined only when
+ ``CONFIG_MEMORY_HOTPLUG`` is enabled and initialized by
+ ``calculate_node_totalpages()``.
+
+``cma_pages``
+ The pages reserved for CMA use. These pages behave like ``ZONE_MOVABLE`` when
+ they are not used for CMA. Defined only when ``CONFIG_CMA`` is enabled.
+
+``name``
+ The name of the zone. It is a pointer to the corresponding element of
+ the ``zone_names`` array.
+
+``nr_isolate_pageblock``
+  Number of isolated pageblocks. It is used to solve an incorrect freepage
+  counting problem caused by racy retrieval of a pageblock's migratetype.
+  Protected by
+ ``zone->lock``. Defined only when ``CONFIG_MEMORY_ISOLATION`` is enabled.
+
+``span_seqlock``
+ The seqlock to protect ``zone_start_pfn`` and ``spanned_pages``. It is a
+ seqlock because it has to be read outside of ``zone->lock``, and it is done in
+ the main allocator path. However, the seqlock is written quite infrequently.
+ Defined only when ``CONFIG_MEMORY_HOTPLUG`` is enabled.
+
+``initialized``
+ The flag indicating if the zone is initialized. Set by
+ ``init_currently_empty_zone()`` during boot.
+
+``free_area``
+  The array of free areas, where each element corresponds to a specific order,
+  i.e. a power-of-two number of pages. The buddy allocator uses this structure
+  to manage free memory efficiently. When allocating, it tries to find the
+  smallest sufficient block; if that block is larger than the requested size,
+  it is recursively split into the next smaller blocks until the required size
+  is reached. When a page is freed, it may be merged
+ with its buddy to form a larger block. It is initialized by
+ ``zone_init_free_lists()``.
+
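+  As a rough, hypothetical illustration of the splitting described above,
+  assume an order-2 block (4 pages) is requested while the smallest free block
+  available is order-4 (16 pages)::
+
+      order-4 block (16 pages)
+        -> split into two order-3 buddies; one is kept on free_area[3]
+        -> one order-3 buddy is split into two order-2 buddies
+        -> one order-2 block (4 pages) is returned, the other stays on
+           free_area[2]
+
+  When the returned block is freed later, it can merge with its order-2 buddy
+  back into an order-3 block, and potentially further up, if the buddies are
+  also free.
+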
+``unaccepted_pages``
+  The list of pages to be accepted. All pages on the list are of order
+  ``MAX_PAGE_ORDER``.
+ Defined only when ``CONFIG_UNACCEPTED_MEMORY`` is enabled.
+
+``flags``
+  The zone flags. The lowest three bits are used and are defined by
+ ``enum zone_flags``. ``ZONE_BOOSTED_WATERMARK`` (bit 0): zone recently boosted
+ watermarks. Cleared when kswapd is woken. ``ZONE_RECLAIM_ACTIVE`` (bit 1):
+ kswapd may be scanning the zone. ``ZONE_BELOW_HIGH`` (bit 2): zone is below
+ high watermark.
+
+``lock``
+ The main lock that protects the internal data structures of the page allocator
+ specific to the zone, especially protects ``free_area``.
+
+``percpu_drift_mark``
+ When free pages are below this point, additional steps are taken when reading
+ the number of free pages to avoid per-cpu counter drift allowing watermarks
+ to be breached. It is updated in ``refresh_zone_stat_thresholds()``.
+
+Compaction control
+~~~~~~~~~~~~~~~~~~
+
+``compact_cached_free_pfn``
+ The PFN where compaction free scanner should start in the next scan.
+
+``compact_cached_migrate_pfn``
+ The PFNs where compaction migration scanner should start in the next scan.
+ This array has two elements: the first one is used in ``MIGRATE_ASYNC`` mode,
+ and the other one is used in ``MIGRATE_SYNC`` mode.
+
+``compact_init_migrate_pfn``
+ The initial migration PFN which is initialized to 0 at boot time, and to the
+ first pageblock with migratable pages in the zone after a full compaction
+ finishes. It is used to check if a scan is a whole zone scan or not.
+
+``compact_init_free_pfn``
+ The initial free PFN which is initialized to 0 at boot time and to the last
+ pageblock with free ``MIGRATE_MOVABLE`` pages in the zone. It is used to check
+ if it is the start of a scan.
+
+``compact_considered``
+  The number of compactions attempted since the last failure. It is reset in
+  ``defer_compaction()`` when a compaction fails to result in a page
+  allocation success. It is increased by 1 in ``compaction_deferred()`` when a
+  compaction should be skipped. ``compaction_deferred()`` is called before
+  ``compact_zone()`` is called; ``compaction_defer_reset()`` is called when
+  ``compact_zone()`` returns ``COMPACT_SUCCESS``; ``defer_compaction()`` is
+  called when ``compact_zone()`` returns ``COMPACT_PARTIAL_SKIPPED`` or
+  ``COMPACT_COMPLETE``.
+
+``compact_defer_shift``
+ The number of compactions skipped before trying again is
+ ``1<<compact_defer_shift``. It is increased by 1 in ``defer_compaction()``.
+ It is reset in ``compaction_defer_reset()`` when a direct compaction results
+ in a page allocation success. Its maximum value is ``COMPACT_MAX_DEFER_SHIFT``.
+
+``compact_order_failed``
+ The minimum compaction failed order. It is set in ``compaction_defer_reset()``
+ when a compaction succeeds and in ``defer_compaction()`` when a compaction
+ fails to result in a page allocation success.
+
+``compact_blockskip_flush``
+ Set to true when compaction migration scanner and free scanner meet, which
+ means the ``PB_migrate_skip`` bits should be cleared.
+
+``contiguous``
+  Set to true when the zone is contiguous (in other words, has no holes).
+
+Statistics
+~~~~~~~~~~
+
+``vm_stat``
+ VM statistics for the zone. The items tracked are defined by
+ ``enum zone_stat_item``.
+
+``vm_numa_event``
+ VM NUMA event statistics for the zone. The items tracked are defined by
+ ``enum numa_stat_item``.
+
+``per_cpu_zonestats``
+ Per-CPU VM statistics for the zone. It records VM statistics and VM NUMA event
+ statistics on a per-CPU basis. It reduces updates to the global ``vm_stat``
+ and ``vm_numa_event`` fields of the zone to improve performance.
.. _pages:
diff --git a/Documentation/mm/process_addrs.rst b/Documentation/mm/process_addrs.rst
index 81417fa2ed20..e6756e78b476 100644
--- a/Documentation/mm/process_addrs.rst
+++ b/Documentation/mm/process_addrs.rst
@@ -716,9 +716,14 @@ calls :c:func:`!rcu_read_lock` to ensure that the VMA is looked up in an RCU
critical section, then attempts to VMA lock it via :c:func:`!vma_start_read`,
before releasing the RCU lock via :c:func:`!rcu_read_unlock`.
-VMA read locks hold the read lock on the :c:member:`!vma->vm_lock` semaphore for
-their duration and the caller of :c:func:`!lock_vma_under_rcu` must release it
-via :c:func:`!vma_end_read`.
+In cases where the user already holds the mmap read lock,
+:c:func:`!vma_start_read_locked` and :c:func:`!vma_start_read_locked_nested`
+can be used. These functions do not fail due to lock contention, but the
+caller should still check their return values in case they fail for other
+reasons.
+
+VMA read locks increment the :c:member:`!vma.vm_refcnt` reference counter for
+their duration and the caller of :c:func:`!lock_vma_under_rcu` must drop it
+via :c:func:`!vma_end_read`.
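+
+A minimal usage sketch of this API (the helper is hypothetical, kernel context
+and ``CONFIG_PER_VMA_LOCK`` assumed; not taken from the kernel sources)::
+
+    #include <linux/mm.h>
+
+    static void per_vma_lookup_example(struct mm_struct *mm, unsigned long addr)
+    {
+            struct vm_area_struct *vma;
+
+            vma = lock_vma_under_rcu(mm, addr);
+            if (!vma) {
+                    /* No VMA, or lock contention: fall back to mmap_read_lock(). */
+                    return;
+            }
+
+            /* vma.vm_refcnt is elevated here; a writer must wait for vma_end_read(). */
+
+            vma_end_read(vma);
+    }
+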
VMA **write** locks are acquired via :c:func:`!vma_start_write` in instances where a
VMA is about to be modified, unlike :c:func:`!vma_start_read` the lock is always
@@ -726,9 +731,9 @@ acquired. An mmap write lock **must** be held for the duration of the VMA write
lock, releasing or downgrading the mmap write lock also releases the VMA write
lock so there is no :c:func:`!vma_end_write` function.
-Note that a semaphore write lock is not held across a VMA lock. Rather, a
-sequence number is used for serialisation, and the write semaphore is only
-acquired at the point of write lock to update this.
+Note that when write-locking a VMA, :c:member:`!vma.vm_refcnt` is temporarily
+modified so that readers can detect the presence of a writer. The reference
+counter is restored once the VMA sequence number used for serialisation is
+updated.
This ensures the semantics we require - VMA write locks provide exclusive write
access to the VMA.
@@ -738,7 +743,7 @@ Implementation details
The VMA lock mechanism is designed to be a lightweight means of avoiding the use
of the heavily contended mmap lock. It is implemented using a combination of a
-read/write semaphore and sequence numbers belonging to the containing
+reference counter and sequence numbers belonging to the containing
:c:struct:`!struct mm_struct` and the VMA.
Read locks are acquired via :c:func:`!vma_start_read`, which is an optimistic
@@ -779,28 +784,31 @@ release of any VMA locks on its release makes sense, as you would never want to
keep VMAs locked across entirely separate write operations. It also maintains
correct lock ordering.
-Each time a VMA read lock is acquired, we acquire a read lock on the
-:c:member:`!vma->vm_lock` read/write semaphore and hold it, while checking that
-the sequence count of the VMA does not match that of the mm.
+Each time a VMA read lock is acquired, we increment the
+:c:member:`!vma.vm_refcnt` reference counter and check that the sequence count
+of the VMA does not match that of the mm.
-If it does, the read lock fails. If it does not, we hold the lock, excluding
-writers, but permitting other readers, who will also obtain this lock under RCU.
+If it does, the read lock fails and :c:member:`!vma.vm_refcnt` is dropped.
+If it does not, we keep the reference counter raised, excluding writers, but
+permitting other readers, who can also obtain this lock under RCU.
Importantly, maple tree operations performed in :c:func:`!lock_vma_under_rcu`
are also RCU safe, so the whole read lock operation is guaranteed to function
correctly.
-On the write side, we acquire a write lock on the :c:member:`!vma->vm_lock`
-read/write semaphore, before setting the VMA's sequence number under this lock,
-also simultaneously holding the mmap write lock.
+On the write side, we set a bit in :c:member:`!vma.vm_refcnt` which can't be
+modified by readers and wait for all readers to drop their reference count.
+Once there are no readers, the VMA's sequence number is set to match that of
+the mm. During this entire operation the mmap write lock is held.
This way, if any read locks are in effect, :c:func:`!vma_start_write` will sleep
until these are finished and mutual exclusion is achieved.
-After setting the VMA's sequence number, the lock is released, avoiding
-complexity with a long-term held write lock.
+After setting the VMA's sequence number, the bit in :c:member:`!vma.vm_refcnt`
+indicating a writer is cleared. From this point on, the VMA's sequence number
+will indicate the VMA's write-locked state until the mmap write lock is
+dropped or downgraded.
-This clever combination of a read/write semaphore and sequence count allows for
+This clever combination of a reference counter and sequence count allows for
fast RCU-based per-VMA lock acquisition (especially on page fault, though
utilised elsewhere) with minimal complexity around lock ordering.
diff --git a/Documentation/mm/split_page_table_lock.rst b/Documentation/mm/split_page_table_lock.rst
index 8e1ceb0a6619..cc3cd46abd1b 100644
--- a/Documentation/mm/split_page_table_lock.rst
+++ b/Documentation/mm/split_page_table_lock.rst
@@ -4,7 +4,7 @@ Split page table lock
Originally, mm->page_table_lock spinlock protected all page tables of the
mm_struct. But this approach leads to poor page fault scalability of
-multi-threaded applications due high contention on the lock. To improve
+multi-threaded applications due to high contention on the lock. To improve
scalability, split page table lock was introduced.
With split page table lock we have separate per-table lock to serialize
diff --git a/Documentation/mm/transhuge.rst b/Documentation/mm/transhuge.rst
index a2cd8800d527..0e7f8e4cd2e3 100644
--- a/Documentation/mm/transhuge.rst
+++ b/Documentation/mm/transhuge.rst
@@ -116,14 +116,27 @@ pages:
succeeds on tail pages.
- map/unmap of a PMD entry for the whole THP increment/decrement
- folio->_entire_mapcount, increment/decrement folio->_large_mapcount
- and also increment/decrement folio->_nr_pages_mapped by ENTIRELY_MAPPED
- when _entire_mapcount goes from -1 to 0 or 0 to -1.
+ folio->_entire_mapcount and folio->_large_mapcount.
+
+ We also maintain the two slots for tracking MM owners (MM ID and
+ corresponding mapcount), and the current status ("maybe mapped shared" vs.
+ "mapped exclusively").
+
+ With CONFIG_PAGE_MAPCOUNT, we also increment/decrement
+ folio->_nr_pages_mapped by ENTIRELY_MAPPED when _entire_mapcount goes
+ from -1 to 0 or 0 to -1.
- map/unmap of individual pages with PTE entry increment/decrement
- page->_mapcount, increment/decrement folio->_large_mapcount and also
- increment/decrement folio->_nr_pages_mapped when page->_mapcount goes
- from -1 to 0 or 0 to -1 as this counts the number of pages mapped by PTE.
+ folio->_large_mapcount.
+
+ We also maintain the two slots for tracking MM owners (MM ID and
+ corresponding mapcount), and the current status ("maybe mapped shared" vs.
+ "mapped exclusively").
+
+ With CONFIG_PAGE_MAPCOUNT, we also increment/decrement
+ page->_mapcount and increment/decrement folio->_nr_pages_mapped when
+ page->_mapcount goes from -1 to 0 or 0 to -1 as this counts the number
+ of pages mapped by PTE.
split_huge_page internally has to distribute the refcounts in the head
page to the tail pages before clearing all PG_head/tail bits from the page
@@ -151,8 +164,8 @@ clear where references should go after split: it will stay on the head page.
Note that split_huge_pmd() doesn't have any limitations on refcounting:
pmd can be split at any point and never fails.
-Partial unmap and deferred_split_folio()
-========================================
+Partial unmap and deferred_split_folio() (anon THP only)
+========================================================
Unmapping part of THP (with munmap() or other way) is not going to free
memory immediately. Instead, we detect that a subpage of THP is not in use
@@ -167,3 +180,13 @@ a THP crosses a VMA boundary.
The function deferred_split_folio() is used to queue a folio for splitting.
The splitting itself will happen when we get memory pressure via shrinker
interface.
+
+With CONFIG_PAGE_MAPCOUNT, we reliably detect partial mappings based on
+folio->_nr_pages_mapped.
+
+With CONFIG_NO_PAGE_MAPCOUNT, we detect partial mappings based on the
+average per-page mapcount in a THP: if the average is < 1, an anon THP is
+certainly partially mapped. As long as only a single process maps a THP,
+this detection is reliable. With long-running child processes, there can
+be scenarios where partial mappings currently cannot be detected; these
+might require asynchronous detection during memory reclaim in the future.
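+
+As a hypothetical worked example of the average-based detection: consider an
+anon THP of 512 pages that a single process maps completely, so the average
+per-page mapcount is 512 / 512 = 1. If the process then unmaps 256 of those
+pages, the average drops to 256 / 512 = 0.5 < 1, and the folio is detected as
+partially mapped.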
diff --git a/Documentation/mm/z3fold.rst b/Documentation/mm/z3fold.rst
deleted file mode 100644
index 25b5935d06c7..000000000000
--- a/Documentation/mm/z3fold.rst
+++ /dev/null
@@ -1,28 +0,0 @@
-======
-z3fold
-======
-
-z3fold is a special purpose allocator for storing compressed pages.
-It is designed to store up to three compressed pages per physical page.
-It is a zbud derivative which allows for higher compression
-ratio keeping the simplicity and determinism of its predecessor.
-
-The main differences between z3fold and zbud are:
-
-* unlike zbud, z3fold allows for up to PAGE_SIZE allocations
-* z3fold can hold up to 3 compressed pages in its page
-* z3fold doesn't export any API itself and is thus intended to be used
- via the zpool API.
-
-To keep the determinism and simplicity, z3fold, just like zbud, always
-stores an integral number of compressed pages per page, but it can store
-up to 3 pages unlike zbud which can store at most 2. Therefore the
-compression ratio goes to around 2.7x while zbud's one is around 1.7x.
-
-Unlike zbud (but like zsmalloc for that matter) z3fold_alloc() does not
-return a dereferenceable pointer. Instead, it returns an unsigned long
-handle which encodes actual location of the allocated object.
-
-Keeping effective compression ratio close to zsmalloc's, z3fold doesn't
-depend on MMU enabled and provides more predictable reclaim behavior
-which makes it a better fit for small and response-critical systems.
diff --git a/Documentation/mm/zsmalloc.rst b/Documentation/mm/zsmalloc.rst
index 76902835e68e..d2bbecd78e14 100644
--- a/Documentation/mm/zsmalloc.rst
+++ b/Documentation/mm/zsmalloc.rst
@@ -27,9 +27,8 @@ Instead, it returns an opaque handle (unsigned long) which encodes actual
location of the allocated object. The reason for this indirection is that
zsmalloc does not keep zspages permanently mapped since that would cause
issues on 32-bit systems where the VA region for kernel space mappings
-is very small. So, before using the allocating memory, the object has to
-be mapped using zs_map_object() to get a usable pointer and subsequently
-unmapped using zs_unmap_object().
+is very small. So, access to the allocated memory has to go through the
+proper handle-based APIs.
stat
====