Commit message | Author | Age | Files | Lines
* Merge tag 'dmaengine-4.5-rc1' of git://git.infradead.org/users/vkoul/slave-dma | Linus Torvalds | 2016-01-13 | 44 | -1181/+2088
|\
    Pull dmaengine updates from Vinod Koul:
     "This round we have a few new features, a new driver and updates to a few drivers.

      The new features to the dmaengine core are:

       - Synchronized transfer termination API to terminate the dmaengine
         transfers in synchronized and async fashion as required by users.
         We have its user now in the ALSA dmaengine lib, img, at_xdma,
         axi_dmac drivers.

       - Universal API for channel request and start consolidation of
         request flows. Its user is the omap-dma driver.

       - Introduce reuse of descriptors and use in the pxa_dma driver

      Add/Remove:

       - New STM32 DMA driver

       - Removal of unused R-Car HPB-DMAC driver

      Updates:

       - ti-dma-crossbar updates for supporting eDMA
       - tegra-apb pm updates
       - idma64
       - mv_xor updates
       - ste_dma updates"

    * tag 'dmaengine-4.5-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (54 commits)
        dmaengine: mv_xor: add suspend/resume support
        dmaengine: mv_xor: de-duplicate mv_chan_set_mode*()
        dmaengine: mv_xor: remove mv_xor_chan->current_type field
        dmaengine: omap-dma: Add support for DMA filter mapping to slave devices
        dmaengine: edma: Add support for DMA filter mapping to slave devices
        dmaengine: core: Introduce new, universal API to request a channel
        dmaengine: core: Move and merge the code paths using private_candidate
        dmaengine: core: Skip mask matching when it is not provided to private_candidate
        dmaengine: mdc: Correct terminate_all handling
        dmaengine: edma: Add probe callback to edma_tptc_driver
        dmaengine: dw: fix potential memory leak in dw_dma_parse_dt()
        dmaengine: stm32-dma: Fix unchecked dereference of chan->desc
        dmaengine: sh: Remove unused R-Car HPB-DMAC driver
        dmaengine: usb-dmac: Document SoC specific compatibility strings
        ste_dma40: Delete an unnecessary variable initialisation in d40_probe()
        ste_dma40: Delete another unnecessary check in d40_probe()
        ste_dma40: Delete an unnecessary check before the function call "kmem_cache_destroy"
        dmaengine: tegra-apb: Free interrupts before killing tasklets
        dmaengine: tegra-apb: Update driver to use GFP_NOWAIT
        dmaengine: tegra-apb: Only save channel state for those in use
        ...
| * dmaengine: mv_xor: add suspend/resume support | Thomas Petazzoni | 2016-01-06 | 2 | -0/+54

    This commit adds suspend/resume support to the mv_xor driver. The config and
    interrupt mask registers must be saved and restored, and upon resume, the MBus
    windows configuration must also be done again.

    Tested on Armada 388 GP, with a RAID 5 array, accessed before and after a
    suspend to RAM cycle.

    Based on work from Ofer Heifetz and Lior Amsalem.

    Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * dmaengine: mv_xor: de-duplicate mv_chan_set_mode*() | Thomas Petazzoni | 2016-01-06 | 1 | -38/+3

    When commit 6f166312c6ea2 ("dmaengine: mv_xor: add support for a38x command
    in descriptor mode") added support for the descriptor mode available in
    Marvell Armada 38x and later SoCs, it added a new function
    mv_chan_set_mode_to_desc() which allows an XOR channel to be configured to
    take the specific operation to be done from each individual DMA descriptor.
    However, this function was mainly a duplicate of the existing
    mv_chan_set_mode(), with just the operation being different.

    This commit re-organizes the code into a single mv_chan_set_mode() function,
    which takes the operation mode as an argument, and the mv_xor_channel_add()
    function decides whether to use XOR_OPERATION_MODE_IN_DESC or
    XOR_OPERATION_MODE_XOR.

    Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
    Reviewed-by: Maxime Ripard <maxime.ripard@free-electrons.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * dmaengine: mv_xor: remove mv_xor_chan->current_type field | Thomas Petazzoni | 2016-01-06 | 2 | -2/+0

    Since commit 3e4f52e2da9f6 ("dma: mv_xor: Simplify the DMA_MEMCPY operation"),
    this field is no longer used, so get rid of it.

    Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
    Reviewed-by: Maxime Ripard <maxime.ripard@free-electrons.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * Merge branch 'topic/ti-xbar' into for-linus | Vinod Koul | 2016-01-06 | 2 | -11/+76
| |\
| | * dmaengine: ti-dma-crossbar: dra7: Support for reserving DMA event ranges | Peter Ujfalusi | 2015-11-30 | 2 | -4/+49

    In eDMA the events are directly mapped to a DMA channel (for example DMA
    event 14 can only be handled by DMA channel 14). If memcpy is enabled on the
    eDMA, there is a possibility that the crossbar driver would assign a DMA
    event number already allocated in eDMA for memcpy. Furthermore the eDMA can
    be shared with a DSP, in which case the crossbar driver should also avoid
    mapping xbar events to event numbers (or channels) used by the DSP.

    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * dmaengine: ti-dma-crossbar: dra7: Use bitops instead of idr | Peter Ujfalusi | 2015-11-30 | 1 | -7/+23

    The use of idr was nice, but it was a bit heavy and we did not need the
    features it provides. Using a simple bitmap to track allocated DMA channels
    is adequate here and it will make it easier to add support for reserving
    channels later on.

    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
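    A minimal sketch of the bitmap-based tracking described above; the map size,
    function and variable names are illustrative and not taken from the actual
    crossbar driver:

        #include <linux/bitmap.h>
        #include <linux/bitops.h>
        #include <linux/errno.h>

        #define XBAR_DMA_REQUESTS	127	/* assumed number of DMA requests */

        static DECLARE_BITMAP(dma_inuse, XBAR_DMA_REQUESTS);

        /* Find the first free DMA event/channel number and mark it as used. */
        static int xbar_alloc_request(void)
        {
        	int bit = find_first_zero_bit(dma_inuse, XBAR_DMA_REQUESTS);

        	if (bit == XBAR_DMA_REQUESTS)
        		return -ENOMEM;
        	set_bit(bit, dma_inuse);
        	return bit;
        }

        /* Reserving a range (e.g. events owned by eDMA memcpy or a DSP) is just
         * pre-setting bits so they are never handed out. */
        static void xbar_reserve_request(int bit)
        {
        	set_bit(bit, dma_inuse);
        }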
| | * dmaengine: ti-dma-crossbar: dra7: Support for eDMA with new bindings | Peter Ujfalusi | 2015-11-30 | 1 | -0/+4

    Allow the crossbar driver to be used with the eDMA node with the non-legacy
    binding.

    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * | Merge branch 'topic/tegra' into for-linus | Vinod Koul | 2016-01-06 | 1 | -33/+40
| |\ \
| | * | dmaengine: tegra-apb: Free interrupts before killing tasklets | Jon Hunter | 2015-12-05 | 1 | -2/+4

    On probe failure or driver removal, free the channel interrupt before killing
    any tasklets, so that another channel interrupt cannot occur and schedule the
    tasklet again.

    Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
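    The ordering this commit describes, as a hedged sketch (struct and field
    names are hypothetical, not the driver's own):

        #include <linux/interrupt.h>

        struct my_dma_channel {
        	int irq;
        	struct tasklet_struct tasklet;
        };

        static void my_dma_channel_teardown(struct my_dma_channel *ch)
        {
        	/* Free the interrupt first so it can no longer fire and
        	 * re-schedule the tasklet behind our back ... */
        	free_irq(ch->irq, ch);

        	/* ... only then is it safe to kill the tasklet for good. */
        	tasklet_kill(&ch->tasklet);
        }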
| | * | dmaengine: tegra-apb: Update driver to use GFP_NOWAIT | Jon Hunter | 2015-12-05 | 1 | -2/+2

    The tegra20-apb-dma driver currently uses the flag GFP_ATOMIC when allocating
    memory for structures used in conjunction with the DMA descriptors. It is
    preferred that dmaengine drivers use GFP_NOWAIT instead, so that these
    drivers do not dip into the emergency memory pool.

    Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
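    A fragment-level sketch of the flag change described above (the variable and
    descriptor type are hypothetical):

        /* before */
        async_td = kzalloc(sizeof(*async_td), GFP_ATOMIC);

        /* after: still safe in atomic context, but never dips into the
         * emergency memory pools reserved for GFP_ATOMIC users */
        async_td = kzalloc(sizeof(*async_td), GFP_NOWAIT);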
| | * | dmaengine: tegra-apb: Only save channel state for those in use | Jon Hunter | 2015-12-05 | 1 | -0/+8

    Currently the tegra-apb DMA driver's suspend/resume helpers save and restore
    the registers for all channels regardless of whether they are in use or not.
    Change this so that only channels that have been allocated and configured are
    saved and restored.

    Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: tegra-apb: Save and restore word count | Jon Hunter | 2015-12-05 | 1 | -0/+6

    Newer tegra devices have a separate word count register per channel that
    contains the number of words to be transferred. This register is not saved or
    restored by the suspend/resume helpers for these newer devices, so ensure
    that it is.

    Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: tegra-apb: Use dev_get_drvdata() | Jon Hunter | 2015-12-05 | 1 | -4/+2

    In the tegra_dma_runtime_suspend/resume functions, the pdev structure is not
    needed, and so just call dev_get_drvdata() to get the device data structure.

    Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: tegra-apb: Correct runtime-pm usage | Jon Hunter | 2015-12-05 | 1 | -25/+18

    The tegra-apb DMA driver enables runtime-pm but never calls
    pm_runtime_get/put, and hence the runtime-pm callbacks are never invoked. The
    driver manages the clocks by directly calling clk_prepare_enable() and
    clk_disable_unprepare().

    Fix this by replacing the clk_prepare_enable() and clk_disable_unprepare()
    calls with pm_runtime_get_sync() and pm_runtime_put(), respectively.

    Note that the consequence of this is that if runtime-pm is disabled, then the
    clocks will remain on the entire time the driver is loaded. However, if
    runtime-pm is disabled, then power is most likely not a concern.

    Signed-off-by: Jon Hunter <jonathanh@nvidia.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
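    A rough before/after sketch of the substitution described above, under the
    assumption of a driver-private structure with a device and clock pointer
    (names are hypothetical; the real driver wires this through its channel
    alloc/free paths):

        #include <linux/clk.h>
        #include <linux/pm_runtime.h>

        struct my_dma {
        	struct device *dev;
        	struct clk *dma_clk;
        };

        /* before: clock handled directly, so the runtime-pm callbacks never run */
        static int my_dma_power_up_old(struct my_dma *tdma)
        {
        	return clk_prepare_enable(tdma->dma_clk);
        }

        /* after: let the runtime-pm suspend/resume callbacks manage the clock;
         * if runtime-pm is disabled the clock simply stays on. */
        static int my_dma_power_up_new(struct my_dma *tdma)
        {
        	return pm_runtime_get_sync(tdma->dev);
        }

        static void my_dma_power_down_new(struct my_dma *tdma)
        {
        	pm_runtime_put(tdma->dev);
        }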
| * | Merge branch 'topic/stm32' into for-linus | Vinod Koul | 2016-01-06 | 5 | -0/+1238
| |\ \
| | * | dmaengine: stm32-dma: Fix unchecked dereference of chan->desc | M'boumba Cedric Madianga | 2015-12-10 | 1 | -2/+2

    commit d8b468394fb7 ("dmaengine: Add STM32 DMA driver") leads to the
    following Smatch complaint:

        drivers/dma/stm32-dma.c:562 stm32_dma_issue_pending()
        error: we previously assumed 'chan->desc' could be null (see line 560)

    So, this patch fixes the unchecked dereference of chan->desc by returning an
    "operation not permitted" error when stm32_dma_start_transfer() fails to
    allocate a virtual channel descriptor.

    Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
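    A hedged sketch of the kind of check described (struct, helper and field
    names are hypothetical, not the driver's own):

        /* Bail out with -EPERM ("operation not permitted") instead of
         * dereferencing a NULL descriptor pointer. */
        static int my_dma_start_transfer(struct my_dma_chan *chan)
        {
        	chan->desc = my_dma_get_next_desc(chan);	/* hypothetical helper */
        	if (!chan->desc)
        		return -EPERM;

        	/* ... program the hardware from chan->desc ... */
        	return 0;
        }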
| | * | ARM: configs: Add STM32 DMA support in STM32 defconfig | M'boumba Cedric Madianga | 2015-11-16 | 1 | -0/+2

    This patch adds STM32 DMA support in the stm32_defconfig file.

    Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: Add STM32 DMA driver | M'boumba Cedric Madianga | 2015-11-16 | 3 | -0/+1154

    This patch adds support for the STM32 DMA controller.

    Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dt-bindings: Document the STM32 DMA bindings | M'boumba Cedric Madianga | 2015-11-16 | 1 | -0/+82

    This patch adds documentation of device tree bindings for the STM32 DMA
    controller.

    Signed-off-by: M'boumba Cedric Madianga <cedric.madianga@gmail.com>
    Acked-by: Rob Herring <robh@kernel.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * | Merge branch 'topic/ste' into for-linus | Vinod Koul | 2016-01-06 | 1 | -45/+42
| |\ \
| | * | ste_dma40: Delete an unnecessary variable initialisation in d40_probe() | Markus Elfring | 2015-12-10 | 1 | -1/+1

    The variable "res" will eventually be set to a resource pointer from a call
    of the d40_hw_detect_init() function. Thus let us omit the explicit
    initialisation at the beginning.

    Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | ste_dma40: Delete another unnecessary check in d40_probe() | Markus Elfring | 2015-12-10 | 1 | -42/+40

    A single jump label was used by the d40_probe() function in several cases for
    error handling, which was a bit inefficient here.

    * This implementation detail could be improved by the introduction of
      another jump label.
    * Remove an extra check for the variable "base".
    * Omit its explicit initialisation at the beginning then.

    Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | ste_dma40: Delete an unnecessary check before the function call "kmem_cache_destroy" | Markus Elfring | 2015-12-10 | 1 | -2/+1

    The kmem_cache_destroy() function tests whether its argument is NULL and then
    returns immediately. Thus the test around the call is not needed.

    This issue was detected by using the Coccinelle software.

    Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
    Acked-by: Linus Walleij <linus.walleij@linaro.org>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
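    The pattern being removed, sketched with an illustrative slab-cache field
    (the exact field name in the driver may differ):

        /* before: redundant NULL test around the call */
        if (base->desc_slab)
        	kmem_cache_destroy(base->desc_slab);

        /* after: kmem_cache_destroy() already returns immediately for NULL */
        kmem_cache_destroy(base->desc_slab);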
| * | Merge branch 'topic/rcar' into for-linus | Vinod Koul | 2016-01-06 | 5 | -781/+8
| |\ \
| | * | dmaengine: sh: Remove unused R-Car HPB-DMAC driver | Geert Uytterhoeven | 2015-12-10 | 4 | -779/+0

    As of commit 4baadb9e05c68962 ("ARM: shmobile: r8a7778: remove obsolete setup
    code"), the Renesas R-Car HPB-DMAC driver is no longer used. In theory it
    could still be used on R-Car Gen1 SoCs, but that requires adding DT support
    to the driver, which is not planned.

    Remove the driver; it can be resurrected from git history when needed.

    Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
    Acked-by: Simon Horman <horms+renesas@verge.net.au>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: usb-dmac: Document SoC specific compatibility strings | Simon Horman | 2015-12-10 | 1 | -2/+8

    In general Renesas hardware is not documented to the extent where the
    relationship between IP blocks on different SoCs can be assumed, although
    they may appear to operate the same way. Furthermore the documentation
    typically does not specify a version for individual IP blocks. For these
    reasons a convention of using the SoC name in place of a version and
    providing SoC-specific compatibility strings has been adopted.

    Although not universally liked, this convention is used in the bindings for
    most drivers for Renesas hardware. The purpose of this patch is to update the
    Renesas USB DMA Controller driver to follow this convention.

    Cc: devicetree@vger.kernel.org
    Acked-by: Rob Herring <robh@kernel.org>
    Acked-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
    Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
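    An illustrative driver-side match table following the convention described:
    SoC-named entries act as version stand-ins, with a family-wide fallback last.
    The compatible strings below show the naming scheme only and are not copied
    from the actual binding document:

        #include <linux/mod_devicetable.h>

        static const struct of_device_id usb_dmac_of_ids[] = {
        	{ .compatible = "renesas,r8a7790-usb-dmac" },	/* SoC specific */
        	{ .compatible = "renesas,r8a7791-usb-dmac" },	/* SoC specific */
        	{ .compatible = "renesas,usb-dmac" },		/* generic fallback */
        	{ /* sentinel */ },
        };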
| * | Merge branch 'topic/omap' into for-linus | Vinod Koul | 2016-01-06 | 1 | -64/+14
| |\ \
| | * | dmaengine: omap-dma: Handle cases when the channel is polled for completion | Peter Ujfalusi | 2015-12-05 | 1 | -0/+6

    When a DMA client driver does not provide a callback for completion of a
    transfer (and/or does not set DMA_PREP_INTERRUPT) but instead polls the
    status of the transfer (in case of a short memcpy, for example), we will not
    get an interrupt for the completion of the transfer and will not mark the
    transaction as done.

    Check the channel enable bit in the CCR when the status is queried and, if
    the channel is no longer active, call omap_dma_callback() to handle the
    transfer completion.

    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: omap-dma: Remove tasklet to start the transfers | Peter Ujfalusi | 2015-12-05 | 1 | -57/+2

    The use of a tasklet to actually start the DMA transfer slightly decreases
    the DMA throughput since it adds a small scheduling delay when the transfer
    is started. In normal use, even with high I/O load, the tasklet would start
    one transaction at a time; however running the DMAtest for memcpy on all
    available channels will cause the tasklet to start about 15 transfers.

    The performance numbers on OMAP4 PandaBoard-es (test_buf_size = 6553):

    With the tasklet:
    dmatest: dma0chan30-copy: summary 5000 tests, 0 failures 186 iops 593 KB/s (0)
    dmatest: dma0chan8-copy0: summary 5000 tests, 0 failures 184 iops 584 KB/s (0)
    dmatest: dma0chan13-copy: summary 5000 tests, 0 failures 184 iops 585 KB/s (0)
    dmatest: dma0chan12-copy: summary 5000 tests, 0 failures 184 iops 585 KB/s (0)
    dmatest: dma0chan7-copy0: summary 5000 tests, 0 failures 183 iops 581 KB/s (0)

    With this patch (no tasklet):
    dmatest: dma0chan4-copy0: summary 5000 tests, 0 failures 199 iops 644 KB/s (0)
    dmatest: dma0chan5-copy0: summary 5000 tests, 0 failures 199 iops 645 KB/s (0)
    dmatest: dma0chan6-copy0: summary 5000 tests, 0 failures 199 iops 637 KB/s (0)
    dmatest: dma0chan24-copy: summary 5000 tests, 0 failures 199 iops 638 KB/s (0)
    dmatest: dma0chan16-copy: summary 5000 tests, 0 failures 199 iops 638 KB/s (0)

    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: omap-dma: Clean up the prep_slave_sg sg list walk code | Peter Ujfalusi | 2015-12-05 | 1 | -6/+5

    The for_each_sg() macro's last parameter is intended to be used as a counter.
    We can use 'i' instead of 'j' within the loop for indexes.

    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: omap-dma: Correct status reporting for memcpy | Peter Ujfalusi | 2015-12-05 | 1 | -1/+1

    During memcpy both the src and dst positions move at the same pace. Check the
    dst position for progress reporting.

    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Tested-by: Tomi Valkeinen <tomi.valkeinen@ti.com>
    Signed-off-by: Jyri Sarha <jsarha@ti.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * | Merge branch 'topic/ioatdma' into for-linus | Vinod Koul | 2016-01-06 | 5 | -50/+10
| |\ \
| | * | dmaengine: ioatdma: constify dca_ops structures | Julia Lawall | 2015-11-16 | 3 | -4/+6

    The dca_ops structure is never modified, so declare it as const.

    Done with the help of Coccinelle.

    Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
    Acked-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: IOATDMA: Cleanup pre v3.0 chansts register reads | Dave Jiang | 2015-11-16 | 2 | -46/+4

    Remove pre-3.0 channel status reads. For 3.0 and later the chansts register
    is 64-bit and can be read as a single 64-bit read. This was clarified with
    the hardware architects, and since the driver now only supports 3.0+ we don't
    need the legacy support.

    Signed-off-by: Dave Jiang <dave.jiang@intel.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * | Merge branch 'topic/idma' into for-linus | Vinod Koul | 2016-01-06 | 2 | -15/+10
| |\ \
| | * | dmaengine: idma64: use local variable to index descriptor | Andy Shevchenko | 2015-12-05 | 1 | -3/+3

    Since a local variable contains the number of hardware descriptors at the
    beginning of idma64_desc_fill(), we may use it to index the last descriptor
    as well.

    Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: idma64: convert idma64_hw_desc_fill() to return void | Andy Shevchenko | 2015-12-05 | 1 | -3/+3

    Explicitly show in idma64_desc_fill() how we link the hardware descriptors.

    Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: idma64: set maximum allowed segment size for DMA | Andy Shevchenko | 2015-12-05 | 2 | -1/+4

    This tells, for example, the IOMMU the maximum size of a segment the DMA
    controller can send.

    Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
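    A sketch of how such a limit is typically advertised; the 2 MiB value and
    the wrapper function are assumptions for illustration only:

        #include <linux/dma-mapping.h>
        #include <linux/sizes.h>

        static int my_dma_set_seg_size(struct device *dev)
        {
        	/* the device's dma_parms must point at valid storage for the
        	 * limit to stick; IOMMU/mapping code then never builds larger
        	 * segments than this */
        	return dma_set_max_seg_size(dev, SZ_2M);
        }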
| | * | dmaengine: idma64: drop IRQ enable / disable in handler | Andy Shevchenko | 2015-12-05 | 1 | -8/+0

    There is no need to disable interrupts in the IRQ handler. The driver
    guarantees that at one time only one descriptor is active, besides the fact
    that each call to the same channel will be serialized in the
    idma64_chan_irq() handler anyway.

    Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| * | Merge branch 'topic/async' into for-linus | Vinod Koul | 2016-01-06 | 7 | -8/+177
| |\ \
| | * | dmaengine: Add might_sleep() to dmaengine_synchronize() | Lars-Peter Clausen | 2015-12-05 | 1 | -0/+2

    Implementations of dmaengine_synchronize() are allowed to sleep, hence the
    function must not be called from atomic context. Add might_sleep() to
    dmaengine_synchronize() to make it easier to detect non-compliant callers.

    Suggested-by: Andy Shevchenko <andy.shevchenko@gmail.com>
    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | ALSA: pcm_dmaengine: Properly synchronize DMA on shutdown | Lars-Peter Clausen | 2015-11-16 | 1 | -3/+6

    Use the new dmaengine_synchronize() function to make sure that all complete
    callbacks have finished running before the runtime data, which is accessed in
    the complete callback, is freed.

    This fixes a long-standing use-after-free race condition that has been
    observed on some systems.

    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Reviewed-by: Takashi Iwai <tiwai@suse.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: axi_dmac: Add synchronization support | Lars-Peter Clausen | 2015-11-16 | 1 | -0/+8

    Implement the new device_synchronize() callback to allow proper
    synchronization when stopping a channel. Since the driver already makes sure
    that no new complete callbacks are scheduled after the device_terminate_all()
    callback has been called, all that is left to do in the device_synchronize()
    callback is to wait for all currently running complete callbacks to finish.

    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: virt-dma: Add synchronization helper function | Lars-Peter Clausen | 2015-11-16 | 1 | -0/+13

    Add a synchronize helper function for the virt-dma library. The function
    makes sure that any scheduled descriptor complete callbacks have finished
    running before the function returns.

    This needs to be called by drivers using virt-dma in their
    device_synchronize() callback. Depending on the driver, additional operations
    might be necessary in addition to calling vchan_synchronize() to ensure
    proper synchronization.

    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
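    A hedged sketch of a driver-side device_synchronize() callback built on the
    new helper; foo_chan, to_foo_chan() and the vc field are hypothetical driver
    names, not taken from any in-tree driver:

        static void foo_dma_synchronize(struct dma_chan *chan)
        {
        	struct foo_chan *fc = to_foo_chan(chan);

        	/* wait for any scheduled descriptor complete callbacks to finish */
        	vchan_synchronize(&fc->vc);
        }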
| | * | dmaengine: Add transfer termination synchronization support | Lars-Peter Clausen | 2015-11-16 | 4 | -5/+148

    The DMAengine API has a long-standing race condition that is inherent to the
    API itself. Calling dmaengine_terminate_all() is supposed to stop and abort
    any pending or active transfers that have previously been submitted.
    Unfortunately it is possible that this operation races against a currently
    running (or with some drivers also scheduled) completion callback.

    Since the API allows dmaengine_terminate_all() to be called from atomic
    context as well as from within a completion callback, it is not possible to
    synchronize to the execution of the completion callback from within
    dmaengine_terminate_all() itself. This means that a user of the DMAengine API
    does not know when it is safe to free resources used in the completion
    callback, which can result in a use-after-free race condition.

    This patch addresses the issue by introducing an explicit synchronization
    primitive to the DMAengine API called dmaengine_synchronize().

    The existing dmaengine_terminate_all() is deprecated in favor of
    dmaengine_terminate_sync() and dmaengine_terminate_async(). The former aborts
    all pending and active transfers and synchronizes to the current context,
    meaning it will wait until all running completion callbacks have finished.
    This means it is only possible to call this function from non-atomic context.
    The latter function does not synchronize, but can still be used in atomic
    context or from within a complete callback. It has to be followed up by
    dmaengine_synchronize() before a client can free the resources used in a
    completion callback.

    In addition to this, the semantics of the device_terminate_all() callback are
    slightly relaxed by this patch. It is now OK for a driver to only schedule
    the termination of the active transfer; it does not necessarily have to wait
    until the DMA controller has completely stopped. The driver must ensure
    though that the controller has stopped and no longer accesses any memory when
    the device_synchronize() callback returns.

    This was in part done since most drivers do not pay attention to this anyway
    at the moment, and to emphasize that this needs to be done when the
    device_synchronize() callback is implemented. But it also helps with
    implementing support for devices where stopping the controller can require
    operations that may sleep.

    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
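    A hedged client-side usage sketch of the API split described above; the
    callback-data pointer is a stand-in for whatever the driver's complete
    callback touches:

        #include <linux/dmaengine.h>
        #include <linux/slab.h>

        static void stop_dma_and_free(struct dma_chan *chan, void *callback_data)
        {
        	/* abort pending/active transfers; usable in atomic context, but
        	 * may still race with a completion callback already running */
        	dmaengine_terminate_async(chan);

        	/* wait until every running completion callback has finished;
        	 * may sleep, so only call this from non-atomic context */
        	dmaengine_synchronize(chan);

        	/* only now is it safe to free what the callback uses */
        	kfree(callback_data);
        }

        /* dmaengine_terminate_sync(chan) combines the two steps for callers
         * that are already in sleepable context. */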
| * | Merge branch 'topic/univ_api' into for-linus | Vinod Koul | 2016-01-06 | 7 | -76/+191
| |\ \
| | * | dmaengine: omap-dma: Add support for DMA filter mapping to slave devices | Peter Ujfalusi | 2015-12-18 | 2 | -0/+10

    Add support for providing a device to filter_fn mapping so client drivers can
    switch to using the dma_request_chan() API.

    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Reviewed-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: edma: Add support for DMA filter mapping to slave devices | Peter Ujfalusi | 2015-12-18 | 2 | -0/+11

    Add support for providing a device to filter_fn mapping so client drivers can
    switch to using the dma_request_chan() API.

    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Reviewed-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| | * | dmaengine: core: Introduce new, universal API to request a channel | Peter Ujfalusi | 2015-12-18 | 3 | -36/+127

    The two API functions can cover most, if not all, current APIs used to
    request a channel. With minimal effort dmaengine drivers, platforms and
    dmaengine user drivers can be converted to use the two functions.

    struct dma_chan *dma_request_chan_by_mask(const dma_cap_mask_t *mask);

    Request any channel matching the requested capabilities; can be used to
    request a channel for memcpy, memset, xor, etc. where no hardware
    synchronization is needed.

    struct dma_chan *dma_request_chan(struct device *dev, const char *name);

    Request a slave channel. dma_request_chan() will try to find the channel via
    DT or ACPI; in case the kernel booted in non DT/ACPI mode it will use a
    filter lookup table and retrieve the needed information from the
    dma_slave_map provided by the DMA drivers. This legacy mode needs changes in
    platform code and in dmaengine drivers, and finally the dmaengine user
    drivers can be converted:

    For each dmaengine driver an array of DMA device, slave and the parameter
    for the filter function needs to be added:

    static const struct dma_slave_map da830_edma_map[] = {
    	{ "davinci-mcasp.0", "rx", EDMA_FILTER_PARAM(0, 0) },
    	{ "davinci-mcasp.0", "tx", EDMA_FILTER_PARAM(0, 1) },
    	{ "davinci-mcasp.1", "rx", EDMA_FILTER_PARAM(0, 2) },
    	{ "davinci-mcasp.1", "tx", EDMA_FILTER_PARAM(0, 3) },
    	{ "davinci-mcasp.2", "rx", EDMA_FILTER_PARAM(0, 4) },
    	{ "davinci-mcasp.2", "tx", EDMA_FILTER_PARAM(0, 5) },
    	{ "spi_davinci.0", "rx", EDMA_FILTER_PARAM(0, 14) },
    	{ "spi_davinci.0", "tx", EDMA_FILTER_PARAM(0, 15) },
    	{ "da830-mmc.0", "rx", EDMA_FILTER_PARAM(0, 16) },
    	{ "da830-mmc.0", "tx", EDMA_FILTER_PARAM(0, 17) },
    	{ "spi_davinci.1", "rx", EDMA_FILTER_PARAM(0, 18) },
    	{ "spi_davinci.1", "tx", EDMA_FILTER_PARAM(0, 19) },
    };

    This information is going to be needed by the dmaengine driver, so
    modification to the platform_data is needed, and the driver map should be
    added to the pdata of the DMA driver:

    da8xx_edma0_pdata.slave_map = da830_edma_map;
    da8xx_edma0_pdata.slavecnt = ARRAY_SIZE(da830_edma_map);

    The DMA driver then needs to configure the needed device -> filter_fn
    mapping before it registers with dma_async_device_register():

    ecc->dma_slave.filter_map.map = info->slave_map;
    ecc->dma_slave.filter_map.mapcnt = info->slavecnt;
    ecc->dma_slave.filter_map.fn = edma_filter_fn;

    When neither DT nor ACPI lookup is available, dma_request_chan() will try to
    match the requester's device name with the filter_map's list of device names;
    when a match is found it will use the information from the dma_slave_map to
    get the channel with the dma_get_channel() internal function.

    Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
    Reviewed-by: Arnd Bergmann <arnd@arndb.de>
    Signed-off-by: Vinod Koul <vinod.koul@intel.com>
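    A hedged client-side sketch of the two request calls described above (the
    wrapper functions and the "rx" channel name are illustrative):

        #include <linux/dmaengine.h>
        #include <linux/err.h>

        static struct dma_chan *my_get_rx_channel(struct device *dev)
        {
        	/* slave channel: resolved via DT, ACPI or the dma_slave_map table */
        	return dma_request_chan(dev, "rx");
        }

        static struct dma_chan *my_get_memcpy_channel(void)
        {
        	dma_cap_mask_t mask;

        	/* any channel with the requested capability, no HW synchronization */
        	dma_cap_zero(mask);
        	dma_cap_set(DMA_MEMCPY, mask);
        	return dma_request_chan_by_mask(&mask);
        }

        /* Both calls return an ERR_PTR() on failure, so check with IS_ERR(). */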
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The two API function can cover most, if not all current APIs used to request a channel. With minimal effort dmaengine drivers, platforms and dmaengine user drivers can be converted to use the two function. struct dma_chan *dma_request_chan_by_mask(const dma_cap_mask_t *mask); To request any channel matching with the requested capabilities, can be used to request channel for memcpy, memset, xor, etc where no hardware synchronization is needed. struct dma_chan *dma_request_chan(struct device *dev, const char *name); To request a slave channel. The dma_request_chan() will try to find the channel via DT, ACPI or in case if the kernel booted in non DT/ACPI mode it will use a filter lookup table and retrieves the needed information from the dma_slave_map provided by the DMA drivers. This legacy mode needs changes in platform code, in dmaengine drivers and finally the dmaengine user drivers can be converted: For each dmaengine driver an array of DMA device, slave and the parameter for the filter function needs to be added: static const struct dma_slave_map da830_edma_map[] = { { "davinci-mcasp.0", "rx", EDMA_FILTER_PARAM(0, 0) }, { "davinci-mcasp.0", "tx", EDMA_FILTER_PARAM(0, 1) }, { "davinci-mcasp.1", "rx", EDMA_FILTER_PARAM(0, 2) }, { "davinci-mcasp.1", "tx", EDMA_FILTER_PARAM(0, 3) }, { "davinci-mcasp.2", "rx", EDMA_FILTER_PARAM(0, 4) }, { "davinci-mcasp.2", "tx", EDMA_FILTER_PARAM(0, 5) }, { "spi_davinci.0", "rx", EDMA_FILTER_PARAM(0, 14) }, { "spi_davinci.0", "tx", EDMA_FILTER_PARAM(0, 15) }, { "da830-mmc.0", "rx", EDMA_FILTER_PARAM(0, 16) }, { "da830-mmc.0", "tx", EDMA_FILTER_PARAM(0, 17) }, { "spi_davinci.1", "rx", EDMA_FILTER_PARAM(0, 18) }, { "spi_davinci.1", "tx", EDMA_FILTER_PARAM(0, 19) }, }; This information is going to be needed by the dmaengine driver, so modification to the platform_data is needed, and the driver map should be added to the pdata of the DMA driver: da8xx_edma0_pdata.slave_map = da830_edma_map; da8xx_edma0_pdata.slavecnt = ARRAY_SIZE(da830_edma_map); The DMA driver then needs to configure the needed device -> filter_fn mapping before it registers with dma_async_device_register() : ecc->dma_slave.filter_map.map = info->slave_map; ecc->dma_slave.filter_map.mapcnt = info->slavecnt; ecc->dma_slave.filter_map.fn = edma_filter_fn; When neither DT or ACPI lookup is available the dma_request_chan() will try to match the requester's device name with the filter_map's list of device names, when a match found it will use the information from the dma_slave_map to get the channel with the dma_get_channel() internal function. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com> Reviewed-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>