author    Jakub Kicinski <kuba@kernel.org>    2023-04-17 08:28:05 -0700
committer Jakub Kicinski <kuba@kernel.org>    2023-04-19 11:28:15 -0700
commit    8e4c62c7d980eaf0f64c1c0ef0c80f5685af0fb6 (patch)
tree      253e6b917ff666b2af82bfbbd12b4ba35d6e1604 /net/core
parent    4e195166624851d1da1a4e42110e66ee4faca4a3 (diff)
download  linux-stable-8e4c62c7d980eaf0f64c1c0ef0c80f5685af0fb6.tar.gz
          linux-stable-8e4c62c7d980eaf0f64c1c0ef0c80f5685af0fb6.tar.bz2
          linux-stable-8e4c62c7d980eaf0f64c1c0ef0c80f5685af0fb6.zip
page_pool: add DMA_ATTR_WEAK_ORDERING on all mappings
Commit c519fe9a4f0d ("bnxt: add dma mapping attributes") added DMA_ATTR_WEAK_ORDERING to DMA attrs on bnxt. It has since spread to a few more drivers (possibly as a copy'n'paste). DMA_ATTR_WEAK_ORDERING only seems to matter on Sparc and PowerPC/cell; the rarity of these platforms is likely why we never bothered adding the attribute in the page pool, even though it should be safe to add.

To make the page pool migration less of a risk for drivers which set this flag (of regressing the precious sparc database workloads or whatever needed this), let's add DMA_ATTR_WEAK_ORDERING on all page pool DMA mappings.

We could make this a driver opt-in, but frankly I don't think it's worth complicating the API. I can't think of a reason why device accesses to packet memory would have to be ordered.

Acked-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Alexander Lobakin <aleksander.lobakin@intel.com>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
Link: https://lore.kernel.org/r/20230417152805.331865-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
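For context, the driver-side pattern the commit message refers to looks roughly like the sketch below. This is illustrative only (the helper name example_map_rx_page is made up, and it is not the literal bnxt code): a driver that maps its own rx pages passes DMA_ATTR_WEAK_ORDERING explicitly, which is the attribute the page pool now applies on the driver's behalf.

#include <linux/dma-mapping.h>

/* Illustrative sketch, not the literal bnxt code: a driver mapping its own
 * rx page requests weakly ordered DMA explicitly.  After this patch, a
 * driver converted to page_pool gets the same attribute without asking.
 */
static int example_map_rx_page(struct device *dev, struct page *page,
			       dma_addr_t *mapping)
{
	*mapping = dma_map_page_attrs(dev, page, 0, PAGE_SIZE,
				      DMA_FROM_DEVICE,
				      DMA_ATTR_WEAK_ORDERING);
	if (dma_mapping_error(dev, *mapping))
		return -ENOMEM;
	return 0;
}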
Diffstat (limited to 'net/core')
-rw-r--r--  net/core/page_pool.c  |  5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/net/core/page_pool.c b/net/core/page_pool.c
index 2f6bf422ed30..97f20f7ff4fc 100644
--- a/net/core/page_pool.c
+++ b/net/core/page_pool.c
@@ -316,7 +316,8 @@ static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
 	 */
 	dma = dma_map_page_attrs(pool->p.dev, page, 0,
 				 (PAGE_SIZE << pool->p.order),
-				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
+				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC |
+						  DMA_ATTR_WEAK_ORDERING);
 	if (dma_mapping_error(pool->p.dev, dma))
 		return false;
@@ -484,7 +485,7 @@ void page_pool_release_page(struct page_pool *pool, struct page *page)
 	/* When page is unmapped, it cannot be returned to our pool */
 	dma_unmap_page_attrs(pool->p.dev, dma,
 			     PAGE_SIZE << pool->p.order, pool->p.dma_dir,
-			     DMA_ATTR_SKIP_CPU_SYNC);
+			     DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING);
 	page_pool_set_dma_addr(page, 0);
 skip_dma_unmap:
 	page_pool_clear_pp_info(page);
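For completeness, a hedged sketch of the consumer side after this change: a driver that lets the page pool own the mapping via PP_FLAG_DMA_MAP now gets DMA_ATTR_WEAK_ORDERING (together with DMA_ATTR_SKIP_CPU_SYNC) on every page the pool maps, with no per-driver knob. The helper name example_create_rx_pool and the sizing values are illustrative, not taken from any in-tree driver; pages returned by page_pool_dev_alloc_pages() on such a pool come back already mapped.

#include <net/page_pool.h>

/* Sketch only: create a pool that performs DMA mapping itself.  With the
 * hunks above, every mapping it creates carries DMA_ATTR_WEAK_ORDERING;
 * the driver sets no DMA attribute of its own.
 */
static struct page_pool *example_create_rx_pool(struct device *dev)
{
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
		.order		= 0,
		.pool_size	= 1024,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
		.max_len	= PAGE_SIZE,
		.offset		= 0,
	};

	return page_pool_create(&pp_params);	/* ERR_PTR() on failure */
}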