author	Dave Jiang <dave.jiang@intel.com>	2016-07-25 10:34:19 -0700
committer	Vinod Koul <vinod.koul@intel.com>	2016-08-08 08:11:43 +0530
commit	fd3c69bd19244aa8cbf859561fd1b9f4ebc1d1c3 (patch)
tree	b761ec1abdbe8929e210a46c5b41dc3b2c410b35 /drivers/dma
parent	ed9f2c5896baf277959ed91f6b77b03c5de2db0f (diff)
dmaengine: xgene-dma: move unmap to before callback
The completion callback should run after dma_descriptor_unmap() has been called. This allows the cache invalidation to take place and ensures that the data the upper layer accesses is the memory written by the DMA rather than stale data. On some architectures this is done by the hardware; however, we should make the code consistent so as not to cause confusion.

Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Cc: Rameshwar Prasad Sahu <rsahu@apm.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
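For context on why the ordering matters: a typical dmaengine client reads the destination buffer from inside its completion callback, so the unmap (which is where the cache invalidate happens on non-coherent architectures) must already have run. The sketch below is a hypothetical client, not part of this patch; the struct name, consume_data() helper, and field names are made up for illustration, while tx->callback, tx->callback_param, and dmaengine_submit() are the standard dmaengine client interface.

#include <linux/dmaengine.h>

/* Hypothetical client state; names are illustrative only. */
struct my_xfer {
	void	*dst_buf;	/* CPU address of the DMA destination */
	size_t	len;
};

static void my_xfer_done(void *param)
{
	struct my_xfer *x = param;

	/*
	 * By the time this callback runs, dma_descriptor_unmap() has
	 * already been called, so any cache invalidate tied to the
	 * unmap has happened and dst_buf holds the data written by the
	 * DMA engine rather than stale cache lines.
	 */
	consume_data(x->dst_buf, x->len);	/* illustrative consumer */
}

static dma_cookie_t my_xfer_issue(struct dma_async_tx_descriptor *tx,
				  struct my_xfer *x)
{
	/* Register the completion callback before submitting. */
	tx->callback = my_xfer_done;
	tx->callback_param = x;
	return dmaengine_submit(tx);
}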
Diffstat (limited to 'drivers/dma')
-rw-r--r--	drivers/dma/xgene-dma.c	3
1 file changed, 1 insertion, 2 deletions
diff --git a/drivers/dma/xgene-dma.c b/drivers/dma/xgene-dma.c
index d66ed11baaec..8b693b712d0f 100644
--- a/drivers/dma/xgene-dma.c
+++ b/drivers/dma/xgene-dma.c
@@ -606,12 +606,11 @@ static void xgene_dma_run_tx_complete_actions(struct xgene_dma_chan *chan,
 		return;
 
 	dma_cookie_complete(tx);
+	dma_descriptor_unmap(tx);
 
 	/* Run the link descriptor callback function */
 	dmaengine_desc_get_callback_invoke(tx, NULL);
 
-	dma_descriptor_unmap(tx);
-
 	/* Run any dependencies */
 	dma_run_dependencies(tx);
 }
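For reference, dma_descriptor_unmap() is a small helper in include/linux/dmaengine.h. Around this point in the kernel's history it reads roughly as below; this is a paraphrase from memory, so treat it as a sketch and check the tree at this commit for the authoritative definition. Dropping the descriptor's unmap data is what ultimately triggers the dma_unmap_*() calls, and with them the cache maintenance, for the transfer.

/* Sketch of the helper this patch reorders (see include/linux/dmaengine.h). */
static inline void dma_descriptor_unmap(struct dma_async_tx_descriptor *tx)
{
	if (tx->unmap) {
		dmaengine_unmap_put(tx->unmap);
		tx->unmap = NULL;
	}
}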