path: root/kernel/dma/pool.c
* dma-pool: no need to check return value of debugfs_create functions (Tiezhu Yang, 2020-11-27; 1 file, -3/+0)
  When calling debugfs functions, there is no need to ever check the return value. The function can work or not, but the code logic should never do something different based on this.
  Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
  Reviewed-by: Robin Murphy <robin.murphy@arm.com>
  Signed-off-by: Christoph Hellwig <hch@lst.de>
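  A minimal sketch of the pattern after this change (the counter variable and the exact removed check are illustrative, not a quote of the diff):

    struct dentry *root = debugfs_create_dir("dma_pools", NULL);

    /*
     * A check along the lines of "if (!root) return;" was dropped here:
     * debugfs_create_dir() never returns NULL (it returns an ERR_PTR on
     * failure), and the debugfs_create_*() helpers accept that error
     * pointer, so the caller simply proceeds either way.
     */
    debugfs_create_ulong("pool_size_dma", 0400, root, &pool_size_dma);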
* dma-mapping: merge <linux/dma-noncoherent.h> into <linux/dma-map-ops.h> (Christoph Hellwig, 2020-10-06; 1 file, -1/+0)
  Move more nitty-gritty DMA implementation details into the common internal header.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
* dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h> (Christoph Hellwig, 2020-10-06; 1 file, -1/+1)
  Merge dma-contiguous.h into dma-map-ops.h, after moving the comment describing the contiguous allocator into kernel/dma/contiguous.c.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
* dma-direct: remove dma_direct_{alloc,free}_pages (Christoph Hellwig, 2020-09-11; 1 file, -1/+1)
  Just merge these helpers into the main dma_direct_{alloc,free} routines, as the additional checks are always false for the two callers.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Reviewed-by: Robin Murphy <robin.murphy@arm.com>
* dma-pool: Fix an uninitialized variable bug in atomic_pool_expand() (Dan Carpenter, 2020-08-27; 1 file, -1/+1)
  The "page" pointer can be used without being initialized.
  Fixes: d7e673ec2c8e ("dma-pool: Only allocate from CMA when in same memory zone")
  Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
  Signed-off-by: Christoph Hellwig <hch@lst.de>
* dma-pool: Only allocate from CMA when in same memory zone (Nicolas Saenz Julienne, 2020-08-14; 1 file, -1/+30)
  There is no guarantee of CMA's placement, so allocating a zone-specific atomic pool from CMA might return memory from a completely different memory zone. To get around this, double-check CMA's placement before allocating from it.
  Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
  Signed-off-by: Christoph Hellwig <hch@lst.de>
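  A simplified sketch of the placement check (the helper name and the zone-limit comparison are illustrative; cma_get_base()/cma_get_size() are the real CMA accessors):

    #include <linux/cma.h>
    #include <linux/dma-direct.h>   /* zone_dma_bits */
    #include <linux/dma-mapping.h>  /* DMA_BIT_MASK() */

    /* only use CMA for a zone-specific pool if the whole area fits that zone */
    static bool cma_fits_zone(struct cma *cma, gfp_t gfp)
    {
        phys_addr_t end = cma_get_base(cma) + cma_get_size(cma) - 1;

        if (gfp & GFP_DMA)
            return end <= DMA_BIT_MASK(zone_dma_bits);
        if (gfp & GFP_DMA32)
            return end <= DMA_BIT_MASK(32);
        return true;    /* the plain GFP_KERNEL pool can take memory from anywhere */
    }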
* dma-pool: fix coherent pool allocations for IOMMU mappings (Christoph Hellwig, 2020-08-14; 1 file, -66/+48)
  When allocating coherent pool memory for an IOMMU mapping, we don't care about the DMA mask. Move the guess for the initial GFP mask into dma_direct_alloc_pages and pass dma_coherent_ok as a function pointer argument so that it doesn't get applied to the IOMMU case.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Tested-by: Amit Pundir <amit.pundir@linaro.org>
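  The resulting shape of the helper, plus an illustrative caller (alloc_example and the direct_mapped flag are made up for the sketch; dma_coherent_ok() is the real dma-direct helper):

    struct page *dma_alloc_from_pool(struct device *dev, size_t size,
            void **cpu_addr, gfp_t gfp,
            bool (*phys_addr_ok)(struct device *, phys_addr_t, size_t));

    static void *alloc_example(struct device *dev, size_t size, gfp_t gfp,
                               bool direct_mapped)
    {
        void *vaddr;
        struct page *page;

        /*
         * dma-direct passes dma_coherent_ok so the returned page is checked
         * against the device's DMA mask; the IOMMU path passes NULL because
         * the IOMMU provides the DMA address and the physical placement of
         * the memory does not matter.
         */
        page = dma_alloc_from_pool(dev, size, &vaddr, gfp,
                                   direct_mapped ? dma_coherent_ok : NULL);
        return page ? vaddr : NULL;
    }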
* dma-pool: do not allocate pool memory from CMA (Nicolas Saenz Julienne, 2020-07-14; 1 file, -9/+2)
  There is no guarantee of CMA's placement, so allocating a zone-specific atomic pool from CMA might return memory from a completely different memory zone. So stop using it.
  Fixes: c84dc6e68a1d ("dma-pool: add additional coherent pools to map to gfp mask")
  Reported-by: Jeremy Linton <jeremy.linton@arm.com>
  Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
  Tested-by: Jeremy Linton <jeremy.linton@arm.com>
  Acked-by: David Rientjes <rientjes@google.com>
  Signed-off-by: Christoph Hellwig <hch@lst.de>
* dma-pool: make sure atomic pool suits device (Nicolas Saenz Julienne, 2020-07-14; 1 file, -20/+37)
  When allocating DMA memory from a pool, the core can only guess which atomic pool will fit a device's constraints. If the guess turns out to be wrong, fall back to a safer atomic pool and try again.
  Fixes: c84dc6e68a1d ("dma-pool: add additional coherent pools to map to gfp mask")
  Reported-by: Jeremy Linton <jeremy.linton@arm.com>
  Suggested-by: Robin Murphy <robin.murphy@arm.com>
  Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
  Signed-off-by: Christoph Hellwig <hch@lst.de>
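  A sketch of the resulting retry loop (dma_guess_pool()'s signature here is hypothetical; gen_pool_*() and dma_coherent_ok() are the real helpers):

    /* hypothetical signature for the sketch; see the dma_guess_pool() entry below */
    static struct gen_pool *dma_guess_pool(struct device *dev,
                                           struct gen_pool *bad_pool);

    static void *pool_alloc_suitable(struct device *dev, size_t size)
    {
        struct gen_pool *pool = NULL;
        unsigned long addr;

        while ((pool = dma_guess_pool(dev, pool)) != NULL) {
            addr = gen_pool_alloc(pool, size);
            if (!addr)
                continue;   /* pool depleted, try a safer one */
            if (!dma_coherent_ok(dev, gen_pool_virt_to_phys(pool, addr),
                                 size)) {
                gen_pool_free(pool, addr, size);
                continue;   /* memory not reachable by this device */
            }
            return (void *)addr;
        }
        return NULL;
    }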
* dma-pool: introduce dma_guess_pool() (Nicolas Saenz Julienne, 2020-07-14; 1 file, -3/+23)
  dma-pool's dev_to_pool() creates the false impression that there is a way to guarantee a mapping between a device's DMA constraints and an atomic pool. It turns out it's just a guess, and the device might need to use an atomic pool containing memory from a 'safer' (or lower) memory zone. To help mitigate this, introduce dma_guess_pool(), which can be fed a device's DMA constraints and atomic pools already known to be faulty, in order for it to provide a better guess on which pool to use.
  Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
  Signed-off-by: Christoph Hellwig <hch@lst.de>
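  The fallback half of the guess, sketched (the function name is invented here; the pool variables match kernel/dma/pool.c). "Safer" simply means a lower memory zone, which any device can reach:

    static struct gen_pool *atomic_pool_dma;    /* ZONE_DMA backed */
    static struct gen_pool *atomic_pool_dma32;  /* ZONE_DMA32 backed */
    static struct gen_pool *atomic_pool_kernel; /* no zone restriction */

    static struct gen_pool *next_safer_pool(struct gen_pool *bad_pool)
    {
        if (bad_pool == atomic_pool_kernel)
            return atomic_pool_dma32;
        if (bad_pool == atomic_pool_dma32)
            return atomic_pool_dma;
        return NULL;    /* already at the lowest zone, nothing left to try */
    }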
* dma-pool: get rid of dma_in_atomic_pool() (Nicolas Saenz Julienne, 2020-07-14; 1 file, -10/+1)
  The function is only used once and can be simplified to a one-liner.
  Signed-off-by: Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
  Signed-off-by: Christoph Hellwig <hch@lst.de>
* dma-mapping: warn when coherent pool is depleted (David Rientjes, 2020-06-29; 1 file, -1/+5)
  When a DMA coherent pool is depleted, allocation failures may or may not get reported in the kernel log depending on the allocator. The admin does have a workaround, however, by using coherent_pool= on the kernel command line. Provide some guidance on the failure and a recommended minimum size for the pools (double the size).
  Signed-off-by: David Rientjes <rientjes@google.com>
  Tested-by: Guenter Roeck <linux@roeck-us.net>
  Signed-off-by: Christoph Hellwig <hch@lst.de>
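  A sketch of the failure-path guidance (the wrapper and the message text are illustrative, not the exact string that was added; gen_pool_alloc()/gen_pool_size() are the real genalloc helpers):

    static void *pool_alloc_or_warn(struct gen_pool *pool, size_t size)
    {
        unsigned long addr = gen_pool_alloc(pool, size);

        if (!addr) {
            /* tell the admin how big the pool is and suggest doubling it */
            pr_warn_ratelimited("DMA coherent pool (%zu KiB) depleted, consider coherent_pool=%zuK\n",
                                gen_pool_size(pool) / SZ_1K,
                                gen_pool_size(pool) * 2 / SZ_1K);
            return NULL;
        }
        return (void *)addr;
    }

  The admin can then act on it, e.g. by booting with coherent_pool=512K to double a 256 KiB pool (the parameter accepts the usual K/M size suffixes).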
* dma-pool: fix too large DMA pools on medium memory size systems (Geert Uytterhoeven, 2020-06-09; 1 file, -4/+3)
  On systems with at least 32 MiB, but less than 32 GiB of RAM, the DMA memory pools are much larger than intended (e.g. 2 MiB instead of 128 KiB on a 256 MiB system). Fix this by correcting the calculation of the number of GiBs of RAM in the system. Invert the order of the min/max operations, to keep on calculating in pages until the last step, which aids readability.
  Fixes: 1d659236fb43c4d2 ("dma-pool: scale the default DMA coherent pool size with memory capacity")
  Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
  Acked-by: David Rientjes <rientjes@google.com>
  Signed-off-by: Christoph Hellwig <hch@lst.de>
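  The corrected sizing, close to how it reads after this fix (the wrapper function is illustrative): never below 128 KiB, never above a single MAX_ORDER-1 allocation, and computed in pages until the final step.

    static size_t atomic_pool_size;  /* set from coherent_pool= when given */

    static void __init set_default_pool_size(void)
    {
        if (!atomic_pool_size) {
            /* 128 KiB of pool per 1 GiB of RAM, kept in pages until the end */
            unsigned long pages = totalram_pages() / (SZ_1G / SZ_128K);

            pages = min_t(unsigned long, pages, MAX_ORDER_NR_PAGES);
            atomic_pool_size = max_t(size_t, pages << PAGE_SHIFT, SZ_128K);
        }
    }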
* dma-pool: scale the default DMA coherent pool size with memory capacity (David Rientjes, 2020-04-25; 1 file, -2/+12)
  When AMD memory encryption is enabled, some devices may use more than 256KB/sec from the atomic pools. It would be more appropriate to scale the default size based on memory capacity unless the coherent_pool option is used on the kernel command line. This provides a slight optimization on initial expansion and is deemed appropriate due to the increased reliance on the atomic pools. Note that the default size of 128KB per pool will normally be larger than the single coherent pool implementation since there are now up to three coherent pools (DMA, DMA32, and kernel).
  Note that even prior to this patch, coherent_pool= for sizes larger than 1 << (PAGE_SHIFT + MAX_ORDER-1) can fail. With new dynamic expansion support, this would be trivially extensible to allow even larger initial sizes.
  Signed-off-by: David Rientjes <rientjes@google.com>
  Signed-off-by: Christoph Hellwig <hch@lst.de>
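  Worked out with the corrected formula from the fix above, the intended scaling is roughly 128 KiB of pool per 1 GiB of RAM: a 4 GiB machine ends up with 512 KiB per pool, and with up to three pools (DMA, DMA32, kernel) at most about 1.5 MiB reserved at boot, while anything at or below 1 GiB stays at the 128 KiB-per-pool floor.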
* dma-pool: add pool sizes to debugfs (David Rientjes, 2020-04-25; 1 file, -0/+30)
  The atomic DMA pools can dynamically expand based on non-blocking allocations that need to use them. Export the sizes of each of these pools, in bytes, through debugfs for measurement.
  Suggested-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: David Rientjes <rientjes@google.com>
  [hch: remove the !CONFIG_DEBUG_FS stubs]
  Signed-off-by: Christoph Hellwig <hch@lst.de>
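  A sketch of the debugfs wiring (directory and file names follow kernel/dma/pool.c; this sketch already omits the return-value check that the top entry of this log later removed):

    static unsigned long pool_size_dma;
    static unsigned long pool_size_dma32;
    static unsigned long pool_size_kernel;

    static int __init dma_atomic_pool_debugfs_init(void)
    {
        struct dentry *root = debugfs_create_dir("dma_pools", NULL);

        /* one read-only counter per pool, in bytes, bumped on every expansion */
        debugfs_create_ulong("pool_size_dma", 0400, root, &pool_size_dma);
        debugfs_create_ulong("pool_size_dma32", 0400, root, &pool_size_dma32);
        debugfs_create_ulong("pool_size_kernel", 0400, root, &pool_size_kernel);
        return 0;
    }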
* dma-direct: atomic allocations must come from atomic coherent pools (David Rientjes, 2020-04-25; 1 file, -3/+24)
  When a device requires unencrypted memory and the context does not allow blocking, memory must be returned from the atomic coherent pools. This avoids the remap when CONFIG_DMA_DIRECT_REMAP is not enabled and the config only requires CONFIG_DMA_COHERENT_POOL. This will be used for CONFIG_AMD_MEM_ENCRYPT in a subsequent patch.
  Keep all memory in these pools unencrypted. When set_memory_decrypted() fails, the memory is not added to the pool. If adding memory to the genpool fails, and set_memory_encrypted() subsequently fails, there is no alternative other than leaking the memory.
  Signed-off-by: David Rientjes <rientjes@google.com>
  Signed-off-by: Christoph Hellwig <hch@lst.de>
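  A simplified sketch of the unencrypted-add path and its error handling (the function name and labels are invented for the sketch; set_memory_decrypted()/set_memory_encrypted(), gen_pool_add_virt() and __free_pages() are the real interfaces):

    static int pool_add_unencrypted(struct gen_pool *pool, struct page *page,
                                    unsigned int order)
    {
        unsigned long addr = (unsigned long)page_to_virt(page);
        int ret;

        /* the pool must only ever hand out unencrypted memory */
        ret = set_memory_decrypted(addr, 1 << order);
        if (ret)
            goto free_page;     /* nothing was added, safe to free */

        ret = gen_pool_add_virt(pool, addr, page_to_phys(page),
                                PAGE_SIZE << order, NUMA_NO_NODE);
        if (ret) {
            /* must re-encrypt before the page goes back to the allocator */
            if (set_memory_encrypted(addr, 1 << order))
                return ret;     /* both failed: leak rather than expose */
            goto free_page;
        }
        return 0;

    free_page:
        __free_pages(page, order);
        return ret;
    }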
* dma-pool: dynamically expanding atomic pools (David Rientjes, 2020-04-25; 1 file, -38/+84)
  When an atomic pool becomes fully depleted because it is now relied upon for all non-blocking allocations through the DMA API, allow background expansion of each pool by a kworker. When an atomic pool has less than the default size of memory left, kick off a kworker to dynamically expand the pool in the background. The pool is doubled in size, up to MAX_ORDER-1. If memory cannot be allocated at the requested order, smaller allocation(s) are attempted.
  This allows the default size to be kept quite low when one or more of the atomic pools is not used. Allocations for lowmem should also use GFP_KERNEL for the benefits of reclaim, so use GFP_KERNEL | GFP_DMA and GFP_KERNEL | GFP_DMA32 for lowmem allocations.
  This also allows __dma_atomic_pool_init() to return a pointer to the pool to make initialization cleaner. Also switch over some node ids to the more appropriate NUMA_NO_NODE.
  Signed-off-by: David Rientjes <rientjes@google.com>
  Signed-off-by: Christoph Hellwig <hch@lst.de>
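  A sketch of the expansion mechanism, showing the trigger, the worker, and the order fallback (names are illustrative, and the remapping/encryption handling of the real code is left out):

    static struct gen_pool *atomic_pool_kernel;  /* the GFP_KERNEL pool, as above */
    static size_t atomic_pool_size = SZ_128K;    /* per-pool default size */

    /* grow @pool by roughly its current size, retrying at smaller orders */
    static void atomic_pool_expand_sketch(struct gen_pool *pool, gfp_t gfp)
    {
        unsigned int order = min(get_order(gen_pool_size(pool)), MAX_ORDER - 1);
        struct page *page;

        do {
            page = alloc_pages(gfp, order);
        } while (!page && order-- > 0);
        if (!page)
            return;

        gen_pool_add_virt(pool, (unsigned long)page_to_virt(page),
                          page_to_phys(page), PAGE_SIZE << order,
                          NUMA_NO_NODE);
    }

    static void atomic_pool_work_fn(struct work_struct *work)
    {
        /* the real worker walks all configured pools; one shown for brevity */
        atomic_pool_expand_sketch(atomic_pool_kernel, GFP_KERNEL);
    }
    static DECLARE_WORK(atomic_pool_work, atomic_pool_work_fn);

    /* allocation path: kick the worker once a pool runs below the default size */
    static void maybe_expand(struct gen_pool *pool)
    {
        if (gen_pool_avail(pool) < atomic_pool_size)
            schedule_work(&atomic_pool_work);
    }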
* dma-pool: add additional coherent pools to map to gfp mask (David Rientjes, 2020-04-20; 1 file, -43/+77)
  The single atomic pool is allocated from the lowest zone possible since it is guaranteed to be applicable for any DMA allocation. Devices may allocate through the DMA API but not have a strict reliance on GFP_DMA memory. Since the atomic pool will be used for all non-blockable allocations, returning all memory from ZONE_DMA may unnecessarily deplete the zone.
  Provision for multiple atomic pools that will map to the optimal gfp mask of the device. When allocating non-blockable memory, determine the optimal gfp mask of the device and use the appropriate atomic pool. The coherent DMA mask will remain the same between allocation and free and, thus, memory will be freed to the same atomic pool it was allocated from.
  __dma_atomic_pool_init() will be changed to return struct gen_pool * later once dynamic expansion is added.
  Signed-off-by: David Rientjes <rientjes@google.com>
  Signed-off-by: Christoph Hellwig <hch@lst.de>
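  A sketch of the per-device pool selection (the pool variables are the ones sketched above; the mapping is simplified): the gfp zone flag derived from the device's coherent DMA mask picks the pool, and because that mask is unchanged at free time the memory goes back to the pool it came from.

    static struct gen_pool *dev_to_pool_sketch(gfp_t gfp)
    {
        if (gfp & GFP_DMA)
            return atomic_pool_dma;     /* mask only reaches ZONE_DMA */
        if (gfp & GFP_DMA32)
            return atomic_pool_dma32;   /* mask reaches ZONE_DMA32 */
        return atomic_pool_kernel;      /* no zone restriction needed */
    }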
* dma-remap: separate DMA atomic pools from direct remap code (David Rientjes, 2020-04-20; 1 file, -0/+123)
  DMA atomic pools will be needed beyond only CONFIG_DMA_DIRECT_REMAP, so separate them out into their own file. This also adds a new Kconfig option that can be subsequently used for options, such as CONFIG_AMD_MEM_ENCRYPT, that will utilize the coherent pools but do not have a dependency on direct remapping.
  For this patch alone, there is no functional change introduced.
  Reviewed-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: David Rientjes <rientjes@google.com>
  [hch: fixup copyrights and remove unused includes]
  Signed-off-by: Christoph Hellwig <hch@lst.de>