path: root/drivers/dma
Commit message | Author | Age | Files | Lines
* Merge tag 'dmaengine-5.2-rc1' of git://git.infradead.org/users/vkoul/slave-dma (Linus Torvalds, 2019-05-09, 17 files, -156/+542)

    Pull dmaengine updates from Vinod Koul:

     - Updates to stm32 dma residue calculations
     - Interleave dma capability to axi-dmac and support for ZynqMP arch
     - Rework of channel assignment for rcar dma
     - Debugfs for pl330 driver
     - Support for Tegra186/Tegra194, refactoring for new chips and support
       for pause/resume
     - Updates to axi-dmac, bcm2835, fsl-edma, idma64, imx-sdma, rcar-dmac,
       stm32-dma etc
     - dev_get_drvdata() updates on few drivers

    * tag 'dmaengine-5.2-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (34 commits)
      dmaengine: tegra210-adma: restore channel status
      dmaengine: tegra210-dma: free dma controller in remove()
      dmaengine: tegra210-adma: add pause/resume support
      dmaengine: tegra210-adma: add support for Tegra186/Tegra194
      Documentation: DT: Add compatibility binding for Tegra186
      dmaengine: tegra210-adma: prepare for supporting newer Tegra chips
      dmaengine: at_xdmac: remove a stray bottom half unlock
      dmaengine: fsl-edma: Adjust indentation
      dmaengine: fsl-edma: Fix typo in Vybrid name
      dmaengine: stm32-dma: fix residue calculation in stm32-dma
      dmaengine: nbpfaxi: Use dev_get_drvdata()
      dmaengine: bcm-sba-raid: Use dev_get_drvdata()
      dmaengine: stm32-dma: Fix unsigned variable compared with zero
      dmaengine: stm32-dma: use platform_get_irq()
      dmaengine: rcar-dmac: Update copyright information
      dmaengine: imx-sdma: Only check ratio on parts that support 1:1
      dmaengine: xgene-dma: fix spelling mistake "descripto" -> "descriptor"
      dmaengine: idma64: Move driver name to the header
      dmaengine: bcm2835: Drop duplicate capability setting.
      dmaengine: pl330: _stop: clear interrupt status
      ...

| * dmaengine: tegra210-adma: restore channel status (Sameer Pujar, 2019-05-04, 1 file, -1/+45)

    The state of the ADMA channel registers is not saved and restored across
    system suspend. If the system enters suspend during active playback, the
    channel registers hold wrong values on system resume and playback fails
    to resume properly. Fix this by saving the following channel registers
    in runtime suspend and restoring them in runtime resume:

      * ADMA_CH_LOWER_SRC_ADDR
      * ADMA_CH_LOWER_TRG_ADDR
      * ADMA_CH_FIFO_CTRL
      * ADMA_CH_CONFIG
      * ADMA_CH_CTRL
      * ADMA_CH_CMD
      * ADMA_CH_TC

    Runtime PM calls are invoked in the system resume path if a playback or
    capture needs to be resumed, so the above changes also cover the system
    suspend case.

    Fixes: f46b195799b5 ("dmaengine: tegra-adma: Add support for Tegra210 ADMA")
    Signed-off-by: Sameer Pujar <spujar@nvidia.com>
    Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

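    A minimal sketch of the save half of this idea; the context struct, the
    tdma_ch_read() accessor and the 'ctx' field are assumptions for
    illustration, only the register names come from the commit text. The
    restore path is symmetric:

        /* Illustrative shadow of the channel registers listed above. */
        struct tegra_adma_chan_ctx {
                u32 src_addr, trg_addr, fifo_ctrl, config, ctrl, cmd, tc;
        };

        static void tegra_adma_chan_save(struct tegra_adma_chan *tdc)
        {
                struct tegra_adma_chan_ctx *ctx = &tdc->ctx;   /* assumed field */

                /* Called from runtime suspend: snapshot the channel state. */
                ctx->src_addr  = tdma_ch_read(tdc, ADMA_CH_LOWER_SRC_ADDR);
                ctx->trg_addr  = tdma_ch_read(tdc, ADMA_CH_LOWER_TRG_ADDR);
                ctx->fifo_ctrl = tdma_ch_read(tdc, ADMA_CH_FIFO_CTRL);
                ctx->config    = tdma_ch_read(tdc, ADMA_CH_CONFIG);
                ctx->ctrl      = tdma_ch_read(tdc, ADMA_CH_CTRL);
                ctx->cmd       = tdma_ch_read(tdc, ADMA_CH_CMD);
                ctx->tc        = tdma_ch_read(tdc, ADMA_CH_TC);
        }
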
| * dmaengine: tegra210-dma: free dma controller in remove() (Sameer Pujar, 2019-05-04, 1 file, -0/+1)

    The following kernel panic is seen during a DMA driver unload->load
    sequence:

      ==========================================================================
      Unable to handle kernel paging request at virtual address ffffff8001198880
      Internal error: Oops: 86000007 [#1] PREEMPT SMP
      CPU: 0 PID: 5907 Comm: HwBinder:4123_1 Tainted: G C 4.9.128-tegra-g065839f
      Hardware name: galen (DT)
      task: ffffffc3590d1a80 task.stack: ffffffc3d0678000
      PC is at 0xffffff8001198880
      LR is at of_dma_request_slave_channel+0xd8/0x1f8
      pc : [<ffffff8001198880>] lr : [<ffffff8008746f30>] pstate: 60400045
      sp : ffffffc3d067b710
      x29: ffffffc3d067b710 x28: 000000000000002f
      x27: ffffff800949e000 x26: ffffff800949e750
      x25: ffffff800949e000 x24: ffffffbefe817d84
      x23: ffffff8009f77cb0 x22: 0000000000000028
      x21: ffffffc3ffda49c8 x20: 0000000000000029
      x19: 0000000000000001 x18: ffffffffffffffff
      x17: 0000000000000000 x16: ffffff80082b66a0
      x15: ffffff8009e78250 x14: 000000000000000a
      x13: 0000000000000038 x12: 0101010101010101
      x11: 0000000000000030 x10: 0101010101010101
      x9 : fffffffffffffffc x8 : 7f7f7f7f7f7f7f7f
      x7 : 62ff726b6b64622c x6 : 0000000000008064
      x5 : 6400000000000000 x4 : ffffffbefe817c44
      x3 : ffffffc3ffda3e08 x2 : ffffff8001198880
      x1 : ffffffc3d48323c0 x0 : ffffffc3d067b788
      Process HwBinder:4123_1 (pid: 5907, stack limit = 0xffffffc3d0678028)
      Call trace:
      [<ffffff8001198880>] 0xffffff8001198880
      [<ffffff80087459f8>] dma_request_chan+0x50/0x1f0
      [<ffffff8008745bc0>] dma_request_slave_channel+0x28/0x40
      [<ffffff8001552c44>] tegra_alt_pcm_open+0x114/0x170
      [<ffffff8008d65fa4>] soc_pcm_open+0x10c/0x878
      [<ffffff8008d18618>] snd_pcm_open_substream+0xc0/0x170
      [<ffffff8008d1878c>] snd_pcm_open+0xc4/0x240
      [<ffffff8008d189e0>] snd_pcm_playback_open+0x58/0x80
      [<ffffff8008cfc6d4>] snd_open+0xb4/0x178
      [<ffffff8008250628>] chrdev_open+0xb8/0x1d0
      [<ffffff8008246fdc>] do_dentry_open+0x214/0x318
      [<ffffff80082485d0>] vfs_open+0x58/0x88
      [<ffffff800825bce0>] do_last+0x450/0xde0
      [<ffffff800825c718>] path_openat+0xa8/0x368
      [<ffffff800825dd84>] do_filp_open+0x8c/0x110
      [<ffffff8008248a74>] do_sys_open+0x164/0x220
      [<ffffff80082b66dc>] compat_SyS_openat+0x3c/0x50
      [<ffffff8008083040>] el0_svc_naked+0x34/0x38
      ---[ end trace 67e6d544e65b5145 ]---
      Kernel panic - not syncing: Fatal exception
      ==========================================================================

    In device probe(), of_dma_controller_register() registers the DMA
    controller, but it is never freed when the driver is removed. During a
    driver reload this results in a data abort and kernel panic. Add
    of_dma_controller_free() to the driver remove path to fix the issue.

    Fixes: f46b195799b5 ("dmaengine: tegra-adma: Add support for Tegra210 ADMA")
    Signed-off-by: Sameer Pujar <spujar@nvidia.com>
    Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

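    A minimal sketch of the fix; the remove() shape and the tegra_adma
    struct are illustrative, while of_dma_controller_free() and
    dma_async_device_unregister() are the real OF/dmaengine APIs:

        static int tegra_adma_remove(struct platform_device *pdev)
        {
                struct tegra_adma *tdma = platform_get_drvdata(pdev);

                /*
                 * Undo of_dma_controller_register() done in probe() so a
                 * later of_dma_request_slave_channel() cannot jump into
                 * freed module text, as in the panic above.
                 */
                of_dma_controller_free(pdev->dev.of_node);
                dma_async_device_unregister(&tdma->dma_dev);

                return 0;
        }
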
| * dmaengine: tegra210-adma: add pause/resume support (Sameer Pujar, 2019-05-04, 1 file, -0/+51)

    During an audio playback session it is observed that audio goes silent
    after a few seconds of repeated pause and play, and no audio is heard
    even when playback is resumed. The reason is that the ADMA driver
    currently does not handle DMA_PAUSE/DMA_RESUME: the relevant callbacks
    for the dma_device are not implemented.

    This patch implements the device_pause and device_resume callbacks for
    the device. During pause the TRANSFER_PAUSE bit of the DMA channel
    control register is set, and it is cleared again on resume.

    Signed-off-by: Sameer Pujar <spujar@nvidia.com>
    Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

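    A sketch of what such callbacks can look like; the to_tegra_adma_chan(),
    tdma_ch_read() and tdma_ch_write() helpers are assumptions, only the
    ADMA_CH_CTRL/TRANSFER_PAUSE names come from the commit text:

        static int tegra_adma_pause(struct dma_chan *dc)
        {
                struct tegra_adma_chan *tdc = to_tegra_adma_chan(dc);
                u32 ctrl = tdma_ch_read(tdc, ADMA_CH_CTRL);

                /* Pause: set TRANSFER_PAUSE in the channel control register. */
                tdma_ch_write(tdc, ADMA_CH_CTRL, ctrl | TRANSFER_PAUSE);
                return 0;
        }

        static int tegra_adma_resume(struct dma_chan *dc)
        {
                struct tegra_adma_chan *tdc = to_tegra_adma_chan(dc);
                u32 ctrl = tdma_ch_read(tdc, ADMA_CH_CTRL);

                /* Resume: clear the same bit again. */
                tdma_ch_write(tdc, ADMA_CH_CTRL, ctrl & ~TRANSFER_PAUSE);
                return 0;
        }

        /* Registered in probe():
         *   tdma->dma_dev.device_pause  = tegra_adma_pause;
         *   tdma->dma_dev.device_resume = tegra_adma_resume;
         */
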
| * dmaengine: tegra210-adma: add support for Tegra186/Tegra194 (Sameer Pujar, 2019-05-04, 1 file, -8/+37)

    Add Tegra186-specific macro defines and a chip_data structure holding
    chip-specific information. A new compatible string is added to select
    the relevant chip details. There is no major change for Tegra194, hence
    it can use the same chip data.

    The bits in the BURST_SIZE field of the ADMA CH_CONFIG register are
    encoded differently on Tegra186 and Tegra194 compared with Tegra210.
    On Tegra210 the bits are encoded as follows:

      1 = WORD_1
      2 = WORDS_2
      3 = WORDS_4
      4 = WORDS_8
      5 = WORDS_16

    Whereas on Tegra186 and Tegra194 the bits are encoded as:

      0 = WORD_1
      1 = WORDS_2
      2 = WORDS_3
      3 = WORDS_4
      4 = WORDS_5
      ...
      15 = WORDS_16

    Add helper functions for generating the correct burst size.

    Signed-off-by: Sameer Pujar <spujar@nvidia.com>
    Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

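    The two encodings can be captured by per-chip helpers along these lines
    (a sketch only; the real helpers hang off chip_data and may clamp or
    round differently):

        #include <linux/log2.h>
        #include <linux/kernel.h>

        /* Tegra210: 1, 2, 4, 8, 16 words map to field values 1..5. */
        static unsigned int tegra210_burst_config(unsigned int burst_words)
        {
                burst_words = clamp(burst_words, 1U, 16U);
                return ilog2(burst_words) + 1;  /* 1->1, 2->2, 4->3, 8->4, 16->5 */
        }

        /* Tegra186/Tegra194: 1..16 words map linearly to 0..15. */
        static unsigned int tegra186_burst_config(unsigned int burst_words)
        {
                burst_words = clamp(burst_words, 1U, 16U);
                return burst_words - 1;
        }
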
| * dmaengine: tegra210-adma: prepare for supporting newer Tegra chips (Sameer Pujar, 2019-05-04, 1 file, -34/+57)

    This is a preparatory patch to add support for the Tegra186 and Tegra194
    chips. The following changes are necessary to make the driver code
    generic:

      * the chip_data structure is enhanced to carry chip-specific details:
          * offset addresses for ADMA global and channel registers
          * offset values for Tx and Rx channel selection
          * maximum supported Tx and Rx channels
          * Tx and Rx channel request mask
          * ADMA channel register space size
      * the above chip_data is then used to generalise the driver code

    Support for Tegra186 and Tegra194 will be added in subsequent patches of
    the series.

    Signed-off-by: Sameer Pujar <spujar@nvidia.com>
    Reviewed-by: Jon Hunter <jonathanh@nvidia.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: at_xdmac: remove a stray bottom half unlock (Dan Carpenter, 2019-05-04, 1 file, -1/+1)

    We switched this code from spin_lock_bh() to vanilla spin_lock() but
    there was one stray spin_unlock_bh() that was overlooked. This patch
    converts it to spin_unlock() as well.

    Fixes: d8570d018f69 ("dmaengine: at_xdmac: move spin_lock_bh to spin_lock in tasklet")
    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: fsl-edma: Adjust indentation (Krzysztof Kozlowski, 2019-05-04, 1 file, -3/+3)

    Fix indentation and remove unneeded space after 'return' keyword. This
    fixes the checkpatch warning:

      WARNING: Statements should start on a tabstop

    Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: fsl-edma: Fix typo in Vybrid name (Krzysztof Kozlowski, 2019-05-04, 1 file, -1/+1)

    Fix typo in comment for Vybrid SoC family.

    Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: stm32-dma: fix residue calculation in stm32-dma (Arnaud Pouliquen, 2019-05-04, 1 file, -13/+77)

    In double buffer mode, the DMA can automatically switch to the next
    transfer while the residue is being calculated. In other words, the CT
    bit, which gives the current position in the double buffer, may have
    been updated by the hardware during the calculation; in that case the
    SxNDTR register value cannot be trusted. If such a transition is
    detected, we consider that the DMA has switched to the beginning of the
    next sg.

    Signed-off-by: Arnaud Pouliquen <arnaud.pouliquen@st.com>
    Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

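    A hedged sketch of the transition check described above; the helper and
    macro names here (stm32_dma_read(), STM32_DMA_SCR(), STM32_DMA_SCR_CT,
    STM32_DMA_SNDTR(), next_sg_period_len, bus_width) are assumptions, not
    necessarily the driver's identifiers:

        u32 ct_before, ct_after, ndtr, residue;

        /*
         * Sample the CT bit before and after reading SxNDTR.  If the
         * hardware flipped buffers in between, the SxNDTR value cannot be
         * trusted, so assume the DMA is at the start of the next sg.
         */
        ct_before = stm32_dma_read(dmadev, STM32_DMA_SCR(id)) & STM32_DMA_SCR_CT;
        ndtr      = stm32_dma_read(dmadev, STM32_DMA_SNDTR(id));
        ct_after  = stm32_dma_read(dmadev, STM32_DMA_SCR(id)) & STM32_DMA_SCR_CT;

        if (ct_before != ct_after)
                residue = next_sg_period_len;   /* start of the next sg */
        else
                residue = ndtr * bus_width;     /* remaining units * unit size */
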
| * dmaengine: nbpfaxi: Use dev_get_drvdata() (Kefeng Wang, 2019-04-29, 1 file, -2/+2)

    Using dev_get_drvdata directly.

    Cc: Vinod Koul <vinod.koul@intel.com>
    Cc: dmaengine@vger.kernel.org
    Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: bcm-sba-raid: Use dev_get_drvdata() (Kefeng Wang, 2019-04-29, 1 file, -2/+1)

    Using dev_get_drvdata directly.

    Cc: Vinod Koul <vinod.koul@intel.com>
    Cc: dmaengine@vger.kernel.org
    Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: stm32-dma: Fix unsigned variable compared with zero (Vinod Koul, 2019-04-29, 1 file, -2/+4)

    Commit f4fd2ec08f17 ("dmaengine: stm32-dma: use platform_get_irq()") used
    the unsigned variable 'irq' to store the result and later checked it for
    negative errors, so update the code to use a signed variable for this.

    Fixes: f4fd2ec08f17 ("dmaengine: stm32-dma: use platform_get_irq()")
    Reported-by: kbuild test robot <lkp@intel.com>
    Reported-by: Julia Lawall <julia.lawall@lip6.fr>
    Acked-by: Julia Lawall <julia.lawall@lip6.fr>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

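    The pattern after the fix, in sketch form; platform_get_irq() returns a
    negative errno on failure, so the result must land in a signed int
    before the check:

        int irq;        /* signed on purpose: may hold -EPROBE_DEFER, -ENXIO, ... */

        irq = platform_get_irq(pdev, i);
        if (irq < 0)
                return irq;     /* with an unsigned 'irq' this test is never true */
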
| * dmaengine: stm32-dma: use platform_get_irq() (Fabien Dessenne, 2019-04-26, 1 file, -5/+6)

    platform_get_resource(pdev, IORESOURCE_IRQ) is not recommended for
    requesting IRQ resources, as they may not be ready yet. Using
    platform_get_irq() instead is preferred for getting an IRQ even if it
    was not retrieved earlier.

    Signed-off-by: Fabien Dessenne <fabien.dessenne@st.com>
    Reviewed-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: rcar-dmac: Update copyright information (Hiroyuki Yokoyama, 2019-04-26, 1 file, -2/+2)

    Update copyright and string for Gen3.

    Signed-off-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
    Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
    Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: imx-sdma: Only check ratio on parts that support 1:1 (Angus Ainslie (Purism), 2019-04-26, 1 file, -1/+14)

    On the imx8mq B0 chip, an AHB/SDMA clock ratio of 2:1 can't be
    supported: since the SDMA clock has to be increased to 250 MHz, AHB
    can't reach 500 MHz, so a 1:1 ratio is used instead. To limit this
    change to the imx8mq for now, this patch also adds an imx8mq-sdma
    compatible string.

    Signed-off-by: Angus Ainslie (Purism) <angus@akkea.ca>
    Acked-by: Robin Gong <yibin.gong@nxp.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: xgene-dma: fix spelling mistake "descripto" -> "descriptor" (Colin Ian King, 2019-04-26, 1 file, -1/+1)

    There is a spelling mistake in a chan_dbg message, fix it.

    Signed-off-by: Colin Ian King <colin.king@canonical.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: idma64: Move driver name to the header (Andy Shevchenko, 2019-04-26, 1 file, -5/+4)

    There are two drivers that rely on the iDMA 64-bit driver name to match.
    Instead of duplicating the string in both of them, dedicate a header
    file and share it between the users.

    Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: bcm2835: Drop duplicate capability setting. (Michal Suchanek, 2019-04-26, 1 file, -1/+0)

    Signed-off-by: Michal Suchanek <msuchanek@suse.de>
    Acked-by: Stefan Wahren <stefan.wahren@i2se.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: pl330: _stop: clear interrupt status (Sugar Zhang, 2019-04-26, 1 file, -3/+7)

    In _stop, first issue DMAKILL to instruct the DMAC to immediately
    terminate execution of the thread, then clear the interrupt status and,
    finally, stop generating interrupts for DMA_SEV, so that the next DMA
    start is guaranteed to be clean. Otherwise an interrupt may be left
    pending across the next start and cause spurious behaviour.

    The problem can be reproduced as follows. DMASEV modifies the
    event-interrupt resource: if INTEN configures it to function as an
    interrupt, the DMAC sets irq<event_num> HIGH to generate an interrupt,
    and INTCLR has to be written to clear it.

      DMA EXECUTING INSTRUCTIONS          DMA TERMINATE
                |                               |
                |                               |
               ...                            _stop
                |                               |
                |                      spin_lock_irqsave
              DMASEV                            |
                |                               |
                |                          mask INTEN
                |                               |
                |                            DMAKILL
                |                               |
                |                    spin_unlock_irqrestore

    In the above case an interrupt is left behind, and as soon as INTEN is
    unmasked again the DMAC drives irq<event_num> HIGH and generates a
    spurious interrupt. To fix this, do the following instead:

      DMA EXECUTING INSTRUCTIONS          DMA TERMINATE
                |                               |
                |                               |
               ...                            _stop
                |                               |
                |                      spin_lock_irqsave
              DMASEV                            |
                |                               |
                |                            DMAKILL
                |                               |
                |                        clear INTCLR
                |                          mask INTEN
                |                               |
                |                    spin_unlock_irqrestore

    Signed-off-by: Sugar Zhang <sugar.zhang@rock-chips.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: axi-dmac: Enable DMA_INTERLEAVE capability (Dragos Bogdan, 2019-04-24, 1 file, -0/+1)

    Since device_prep_interleaved_dma() is already implemented, the
    DMA_INTERLEAVE capability should be set.

    Signed-off-by: Dragos Bogdan <dragos.bogdan@analog.com>
    Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: axi-dmac: Don't check the number of frames for alignment (Alexandru Ardelean, 2019-04-24, 1 file, -1/+1)

    In 2D transfers (for the AXI DMAC), the number of frames (numf)
    represents Y_LENGTH, and the length of a frame is X_LENGTH. 2D transfers
    are useful for video transfers where screen resolutions (X * Y) are
    typically aligned for X, but not for Y.

    There is no requirement for Y_LENGTH to be aligned to the bus width (or
    anything), and this is also true for the AXI DMAC. Checking Y_LENGTH for
    alignment causes false errors when initiating DMA transfers. This change
    fixes this by checking only that Y_LENGTH is non-zero.

    Fixes: 0e3b67b348b8 ("dmaengine: Add support for the Analog Devices AXI-DMAC DMA controller")
    Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: axi-dmac: Infer synthesis configuration parameters hardware (Lars-Peter Clausen, 2019-04-24, 1 file, -12/+20)

    Some synthesis-time configuration parameters of the DMA controller can
    be inferred from the hardware itself. Use this information, as it is
    more reliable than the information specified in the devicetree, which
    might be outdated if the HDL project got changed.

    Deprecate the devicetree properties that can be inferred from the
    hardware itself.

    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: at_xdmac: only monitor overflow errors for peripheral xfer (Nicolas Ferre, 2019-04-23, 1 file, -1/+12)

    The overflow error flag (ROI: Request Overflow Error) is only relevant
    when the channel handles a peripheral-synchronized transfer, not in the
    memory-to-memory case where there is no hardware request signal. Remove
    the use of this interrupt source in such a case. The decision is based
    on the first descriptor, which holds the configuration for the whole
    linked-list transfer.

    Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
    Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: at_xdmac: enhance channel errors handling in tasklet (Nicolas Ferre, 2019-04-23, 1 file, -6/+42)

    Complement the identification of errors with stopping the channel and
    dumping the descriptor that led to the error case.

    Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
    Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: at_xdmac: remove BUG_ON macro in tasklet (Nicolas Ferre, 2019-04-23, 1 file, -1/+5)

    Even if this case shouldn't happen when the controller is properly
    programmed, it's still better to avoid dumping a kernel Oops for this.
    As the sequence may happen only for debugging purposes, log the error
    and just finish the tasklet call.

    Signed-off-by: Nicolas Ferre <nicolas.ferre@microchip.com>
    Acked-by: Ludovic Desroches <ludovic.desroches@microchip.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: axi-dmac: extend support for ZynqMP arch (Michael Hennerich, 2019-03-25, 1 file, -1/+1)

    The AXI DMAC is also used on the Xilinx ZynqMP architecture. This change
    allows the driver to be enabled and used on it as well.

    Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
    Signed-off-by: Alexandru Ardelean <alexandru.ardelean@analog.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: xgene-dma: move spin_lock_bh to spin_lock in tasklet (Jeff Xie, 2019-03-25, 1 file, -2/+2)

    It is unnecessary to call spin_lock_bh in a tasklet.

    Signed-off-by: Jeff Xie <chongguiguzi@gmail.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: pl08x: be fair when re-assigning physical channel (Jean-Nicolas Graux, 2019-03-25, 1 file, -6/+16)

    The current way of finding a waiting virtual channel for the next
    transfer, at the time a physical channel becomes free, is not really
    fair. In case more than one channel is waiting at a time, simply walking
    the arrays of memcpy and slave channels and stopping at the first one in
    the waiting state penalizes channels with high indexes. Whenever the DMA
    engine is substantially overloaded, so that several channels are
    constantly waiting, channels with the highest indexes might not be
    served for a substantial time, which in the worst case might hang tasks
    that wait for their DMA transfers to complete.

    This patch makes physical channel re-assignment fairer by storing the
    time in jiffies when a channel is put in the waiting state. Whenever a
    physical channel has to be re-assigned, this time is used to select the
    channel that has been waiting the longest.

    Signed-off-by: Jean-Nicolas Graux <jean-nicolas.graux@st.com>
    Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
    Reviewed-by: Nicolas Guion <nicolas.guion@st.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

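    A sketch of the selection rule, with illustrative structure and field
    names (waiting_at being the recorded jiffies timestamp); the real driver
    walks its own memcpy and slave channel arrays instead:

        #include <linux/jiffies.h>
        #include <linux/list.h>

        struct vchan {
                struct list_head node;
                bool waiting;
                unsigned long waiting_at;   /* jiffies when put in the WAITING state */
        };

        /* Pick the virtual channel that has been waiting the longest. */
        static struct vchan *pick_longest_waiting(struct list_head *channels)
        {
                struct vchan *ch, *best = NULL;

                list_for_each_entry(ch, channels, node) {
                        if (!ch->waiting)
                                continue;
                        if (!best || time_before(ch->waiting_at, best->waiting_at))
                                best = ch;
                }
                return best;
        }
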
| * dmaengine: axi-dmac: Split too large segments (Lars-Peter Clausen, 2019-03-25, 1 file, -20/+61)

    The axi-dmac driver currently rejects transfers with segments that are
    larger than what the hardware can handle. Re-work the driver so that
    such large segments are split into multiple segments instead, where each
    segment is smaller than or equal to the maximum segment size. This
    allows the driver to handle transfers with segments of arbitrary size.

    Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
    Signed-off-by: Bogdan Togorean <bogdan.togorean@analog.com>
    Signed-off-by: Alexandru Ardelean <alex.ardelean@analog.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

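    The core of such splitting is a simple chunking loop, sketched here with
    an assumed per-segment programming helper:

        static void emit_segments(dma_addr_t addr, size_t len, size_t max_len)
        {
                /*
                 * Split one requested segment into hardware segments of at
                 * most max_len bytes each.
                 */
                while (len) {
                        size_t chunk = min(len, max_len);

                        axi_dmac_program_segment(addr, chunk); /* hypothetical helper */

                        addr += chunk;
                        len -= chunk;
                }
        }
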
| * dmaengine: pl330: introduce debugfs interface (Katsuhiro Suzuki, 2019-03-25, 1 file, -0/+51)

    This patch adds a debugfs interface to show the relationship between DMA
    threads (the hardware resources for transferring data) and the DMA
    channel IDs of the DMA slaves.

    Typically the PL330 has more slaves than DMA threads, so sometimes the
    PL330 cannot allocate DMA threads for all slaves even if a user
    specifies a DMA channel ID in the devicetree. This interface is useful
    for checking whether DMA threads have been allocated or not. Below is an
    output sample:

      $ sudo cat /sys/kernel/debug/ff1f0000.dmac
      PL330 physical channels:
      THREAD:         CHANNEL:
      --------        --------
             0               8
             1               9
             2              11
             3              12
             4              14
             5              15
             6              10
             7              --

    Signed-off-by: Katsuhiro Suzuki <katsuhiro@katsuster.net>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: tegra210-adma: update system sleep callbacks (Sameer Pujar, 2019-03-25, 1 file, -8/+2)

    If the driver is active until late suspend, where runtime PM cannot run,
    a forced suspend is essential to put the device into a low-power state.
    Thus pm_runtime_force_suspend and pm_runtime_force_resume are used as
    system sleep callbacks during system-wide PM transitions. Late system
    sleep callbacks are used to ensure, for instance, that the sound core
    has suspended any on-going activity, including stopping the ADMA if
    active, before we attempt to suspend the ADMA.

    Suggested-by: Jonathan Hunter <jonathanh@nvidia.com>
    Signed-off-by: Sameer Pujar <spujar@nvidia.com>
    Acked-by: Jon Hunter <jonathanh@nvidia.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

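    In sketch form, the dev_pm_ops wiring this describes; the runtime
    callback names stand in for the driver's own handlers:

        static const struct dev_pm_ops tegra_adma_dev_pm_ops = {
                SET_RUNTIME_PM_OPS(tegra_adma_runtime_suspend,
                                   tegra_adma_runtime_resume, NULL)
                /* Late system sleep: fall back to the runtime PM handlers. */
                SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
                                             pm_runtime_force_resume)
        };
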
| * dmaengine: tegra210-adma: use devm_clk_*() helpers (Sameer Pujar, 2019-03-25, 1 file, -15/+12)

    The adma driver is using the pm_clk_*() interface for managing clock
    resources, and with this it is observed that the clocks always remain
    ON. This happens on Tegra devices which use the BPMP co-processor to
    manage clock resources, where clocks are enabled during the prepare
    phase. This is necessary because clock requests to the BPMP are always
    blocking. When the pm_clk_*() interface is used on such Tegra devices,
    the clock prepare count is not balanced until the driver's remove call
    happens, and hence the clocks are always seen ON. Thus this patch
    replaces pm_clk_*() with the devm_clk_*() framework.

    Suggested-by: Mohan Kumar D <mkumard@nvidia.com>
    Reviewed-by: Jonathan Hunter <jonathanh@nvidia.com>
    Signed-off-by: Sameer Pujar <spujar@nvidia.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * dmaengine: idma64: Use actual device for DMA transfers (Andy Shevchenko, 2019-03-21, 2 files, -2/+6)

    The Intel IOMMU, when enabled, tries to find the domain of the device,
    assuming it's a PCI one, during DMA operations such as mapping or
    unmapping. Since we are splitting the actual PCI device into several
    children via the MFD framework (see drivers/mfd/intel-lpss.c for
    details), the DMA device appears to be a platform one, and thus not the
    actual device that performs DMA. In such a situation the IOMMU can't
    find or allocate a proper domain for its operations, and as a result all
    DMA operations fail.

    In order to fix this, supply the parent of the platform device to the
    DMA engine framework and fix the filter functions accordingly. We may
    rely on the fact that the parent is a real PCI device, because no other
    configuration is present in the wild.

    Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Acked-by: Mark Brown <broonie@kernel.org>
    Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> [for tty parts]
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

* | Merge tag 'arm64-mmiowb' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux (Linus Torvalds, 2019-05-06, 1 file, -3/+0)

    Pull mmiowb removal from Will Deacon:
     "Remove Mysterious Macro Intended to Obscure Weird Behaviours (mmiowb())

      Remove mmiowb() from the kernel memory barrier API and instead, for
      architectures that need it, hide the barrier inside spin_unlock() when
      MMIO has been performed inside the critical section.

      The only relatively recent changes have been addressing review
      comments on the documentation, which is in a much better shape thanks
      to the efforts of Ben and Ingo.

      I was initially planning to split this into two pull requests so that
      you could run the coccinelle script yourself, however it's been plain
      sailing in linux-next so I've just included the whole lot here to keep
      things simple"

    * tag 'arm64-mmiowb' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (23 commits)
      docs/memory-barriers.txt: Update I/O section to be clearer about CPU vs thread
      docs/memory-barriers.txt: Fix style, spacing and grammar in I/O section
      arch: Remove dummy mmiowb() definitions from arch code
      net/ethernet/silan/sc92031: Remove stale comment about mmiowb()
      i40iw: Redefine i40iw_mmiowb() to do nothing
      scsi/qla1280: Remove stale comment about mmiowb()
      drivers: Remove explicit invocations of mmiowb()
      drivers: Remove useless trailing comments from mmiowb() invocations
      Documentation: Kill all references to mmiowb()
      riscv/mmiowb: Hook up mmwiob() implementation to asm-generic code
      powerpc/mmiowb: Hook up mmwiob() implementation to asm-generic code
      ia64/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()
      mips/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()
      sh/mmiowb: Add unconditional mmiowb() to arch_spin_unlock()
      m68k/io: Remove useless definition of mmiowb()
      nds32/io: Remove useless definition of mmiowb()
      x86/io: Remove useless definition of mmiowb()
      arm64/io: Remove useless definition of mmiowb()
      ARM/io: Remove useless definition of mmiowb()
      mmiowb: Hook up mmiowb helpers to spinlocks and generic I/O accessors
      ...

| * | drivers: Remove explicit invocations of mmiowb() (Will Deacon, 2019-04-08, 1 file, -3/+0)

    mmiowb() is now implied by spin_unlock() on architectures that require
    it, so there is no reason to call it from driver code. This patch was
    generated using coccinelle:

      @mmiowb@
      @@
      - mmiowb();

    and invoked as:

      $ for d in drivers include/linux/qed sound; do \
        spatch --include-headers --sp-file mmiowb.cocci --dir $d --in-place; done

    NOTE: mmiowb() has only ever guaranteed ordering in conjunction with
    spin_unlock(). However, pairing each mmiowb() removal in this patch with
    the corresponding call to spin_unlock() is not at all trivial, so there
    is a small chance that this change may regress any drivers incorrectly
    relying on mmiowb() to order MMIO writes between CPUs using lock-free
    synchronisation. If you've ended up bisecting to this commit, you can
    reintroduce the mmiowb() calls using wmb() instead, which should restore
    the old behaviour on all architectures other than some esoteric ia64
    systems.

    Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

* | | dmaengine: mediatek-cqdma: fix wrong register usage in mtk_cqdma_start (Shun-Chih Yu, 2019-04-26, 1 file, -1/+1)

    This patch fixes wrong register usage in mtk_cqdma_start. The
    destination register should be MTK_CQDMA_DST2 instead.

    Fixes: b1f01e48df5a ("dmaengine: mediatek: Add MediaTek Command-Queue DMA controller for MT6765 SoC")
    Signed-off-by: Shun-Chih Yu <shun-chih.yu@mediatek.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

* | | dmaengine: sh: rcar-dmac: Fix glitch in dmaengine_tx_status (Achim Dahlhoff, 2019-04-23, 1 file, -3/+23)

    The tx_status poll in the rcar_dmac driver reads the status register
    which indicates which chunk is busy (DMACHCRB). Afterwards the position
    inside the chunk is read from DMATCRB. It is possible that the chunk has
    changed between the two reads. The result is a non-monotonic increase of
    the residue. Fix this by introducing a 'safe read' logic.

    Fixes: 73a47bd0da66 ("dmaengine: rcar-dmac: use TCRB instead of TCR for residue")
    Signed-off-by: Achim Dahlhoff <Achim.Dahlhoff@de.bosch.com>
    Signed-off-by: Dirk Behme <dirk.behme@de.bosch.com>
    Reviewed-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
    Cc: <stable@vger.kernel.org> # v4.16+
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

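    One way to read the two registers coherently, sketched with assumed
    helper and macro names; the real fix also bounds the number of retries:

        u32 chcrb_before, chcrb_after, tcrb;

        /*
         * Re-read DMACHCRB after DMATCRB and retry if the active chunk
         * changed in between, so the chunk index and the progress inside
         * the chunk always belong to the same chunk.
         */
        do {
                chcrb_before = rcar_dmac_chan_read(chan, RCAR_DMACHCRB);
                tcrb         = rcar_dmac_chan_read(chan, RCAR_DMATCRB);
                chcrb_after  = rcar_dmac_chan_read(chan, RCAR_DMACHCRB);
        } while (chcrb_before != chcrb_after);
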
* | | dmaengine: sh: rcar-dmac: With cyclic DMA residue 0 is valid (Dirk Behme, 2019-04-23, 1 file, -1/+3)

    With cyclic DMA, a residue of 0 is not an indication of a completed DMA.
    In the cyclic DMA case, make sure that dma_set_residue() is called, so
    that a residue of 0 is forwarded correctly to the caller.

    Fixes: 3544d2878817 ("dmaengine: rcar-dmac: use result of updated get_residue in tx_status")
    Signed-off-by: Dirk Behme <dirk.behme@de.bosch.com>
    Signed-off-by: Achim Dahlhoff <Achim.Dahlhoff@de.bosch.com>
    Signed-off-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
    Signed-off-by: Yao Lihua <ylhuajnu@outlook.com>
    Reviewed-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
    Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
    Cc: <stable@vger.kernel.org> # v4.8+
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

* | | dmaengine: bcm2835: Avoid GFP_KERNEL in device_prep_slave_sg (Stefan Wahren, 2019-04-23, 1 file, -1/+1)

    The commit af19b7ce76ba ("mmc: bcm2835: Avoid possible races on data
    requests") introduces a possible circular locking dependency, which is
    triggered by swapping to the sdhost interface. So instead of
    reintroducing the race condition, we can also avoid this situation by
    using GFP_NOWAIT for the allocation of the DMA buffer descriptors.

    Reported-by: Aaro Koskinen <aaro.koskinen@iki.fi>
    Signed-off-by: Stefan Wahren <stefan.wahren@i2se.com>
    Fixes: af19b7ce76ba ("mmc: bcm2835: Avoid possible races on data requests")
    Link: http://lists.infradead.org/pipermail/linux-rpi-kernel/2019-March/008615.html
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

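    In sketch form, descriptor allocation in the prep path then looks like
    this (the descriptor variable is illustrative); GFP_NOWAIT never sleeps,
    so it cannot pull the MMC/sdhost locking back into this path:

        /* device_prep_slave_sg(): allocate without sleeping. */
        d = kzalloc(sizeof(*d), GFP_NOWAIT);
        if (!d)
                return NULL;    /* the caller treats this as a failed prep */
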
* / dmaengine: stm32-mdma: Revert "dmaengine: stm32-mdma: Add a check on read_u32_array" (Pierre-Yves MORDRET, 2019-03-25, 1 file, -3/+1)

    This reverts commit 906b40b246b0 ("dmaengine: stm32-mdma: Add a check on
    read_u32_array").

    As stated by the bindings, "st,ahb-addr-masks" is optional. The check
    inserted by that commit makes this property mandatory and prevents the
    MDMA from being probed when the property is not present.

    Signed-off-by: Pierre-Yves MORDRET <pierre-yves.mordret@st.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

* Merge tag 'dmaengine-5.1-rc1' of git://git.infradead.org/users/vkoul/slave-dma (Linus Torvalds, 2019-03-14, 48 files, -630/+2453)

    Pull dmaengine updates from Vinod Koul:

     - dmatest updates for modularizing common struct and code
     - remove SG support for VDMA xilinx IP and updates to driver
     - Update to dw driver to support Intel iDMA controllers multi-block
       support
     - tegra updates for proper reporting of residue
     - Add Snow Ridge ioatdma device id and support for IOATDMA v3.4
     - struct_size() usage and useless LIST_HEAD cleanups in subsystem.
     - qDMA controller driver for Layerscape SoCs
     - stm32-dma PM Runtime support
     - And usual updates to imx-sdma, sprd, Documentation, fsl-edma,
       bcm2835, qcom_hidma etc

    * tag 'dmaengine-5.1-rc1' of git://git.infradead.org/users/vkoul/slave-dma: (81 commits)
      dmaengine: imx-sdma: fix consistent dma test failures
      dmaengine: imx-sdma: add a test for imx8mq multi sdma devices
      dmaengine: imx-sdma: add clock ratio 1:1 check
      dmaengine: dmatest: move test data alloc & free into functions
      dmaengine: dmatest: add short-hand `buf_size` var in dmatest_func()
      dmaengine: dmatest: wrap src & dst data into a struct
      dmaengine: ioatdma: support latency tolerance report (LTR) for v3.4
      dmaengine: ioatdma: add descriptor pre-fetch support for v3.4
      dmaengine: ioatdma: disable DCA enabling on IOATDMA v3.4
      dmaengine: ioatdma: Add Snow Ridge ioatdma device id
      dmaengine: sprd: Change channel id to slave id for DMA cell specifier
      dt-bindings: dmaengine: sprd: Change channel id to slave id for DMA cell specifier
      dmaengine: mv_xor: Use correct device for DMA API
      Documentation :dmaengine: clarify DMA desc. pointer after submission
      Documentation: dmaengine: fix dmatest.rst warning
      dmaengine: k3dma: Add support for dma-channel-mask
      dmaengine: k3dma: Delete axi_config
      dmaengine: k3dma: Upgrade k3dma driver to support hisi_asp_dma hardware
      Documentation: bindings: dma: Add binding for dma-channel-mask
      Documentation: bindings: k3dma: Extend the k3dma driver binding to support hisi-asp
      ...

| * Merge branch 'topic/xilinx' into for-linus (Vinod Koul, 2019-03-12, 1 file, -70/+100)

| | * dmaengine: xilinx_dma: remove set but not used variable 'tail_segment' (YueHaibing, 2019-01-20, 1 file, -4/+0)

    Fixes the gcc '-Wunused-but-set-variable' warning:

      drivers/dma/xilinx/xilinx_dma.c: In function 'xilinx_vdma_start_transfer':
      drivers/dma/xilinx/xilinx_dma.c:1104:33: warning: variable 'tail_segment' set but not used [-Wunused-but-set-variable]

    It has not been used since commit b8349172b400 ("dmaengine: xilinx_dma:
    Drop SG support for VDMA IP").

    Signed-off-by: YueHaibing <yuehaibing@huawei.com>
    Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| | * dmaengine: xilinx_dma: Drop SG support for VDMA IP (Andrea Merello, 2019-01-07, 1 file, -52/+32)

    xilinx_vdma_start_transfer() is used only for the VDMA IP, yet it still
    contains code conditional on the has_sg variable. has_sg is set only
    when the HW supports SG mode, which is never true for the VDMA IP. This
    patch drops the never-taken branches.

    Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
    Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| | * dmaengine: xilinx_dma: autodetect whether the HW supports scatter-gather (Andrea Merello, 2019-01-07, 1 file, -4/+10)

    The AXIDMA and CDMA HW can be either the direct-access or the
    scatter-gather version. These are SW incompatible. The driver can handle
    both versions: a DT property was used to tell the driver whether to
    assume the HW is in scatter-gather mode. This patch makes the driver
    autodetect this information; the DT property is not required anymore.
    No changes for VDMA.

    Cc: Rob Herring <robh+dt@kernel.org>
    Cc: Mark Rutland <mark.rutland@arm.com>
    Cc: devicetree@vger.kernel.org
    Cc: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
    Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
    Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| | * dmaengine: xilinx_dma: program hardware supported buffer length (Radhey Shyam Pandey, 2019-01-07, 1 file, -4/+20)

    The AXI-DMA IP supports a configurable (c_sg_length_width) buffer length
    register width, hence read the buffer length width from the
    'xlnx,sg-length-width' DT property and ensure that the driver doesn't
    program a buffer length exceeding the supported limit. For VDMA and CDMA
    there is no change.

    Cc: Rob Herring <robh+dt@kernel.org>
    Cc: Mark Rutland <mark.rutland@arm.com>
    Cc: devicetree@vger.kernel.org
    Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
    Signed-off-by: Michal Simek <michal.simek@xilinx.com>
    Signed-off-by: Andrea Merello <andrea.merello@gmail.com> [rebase, reword]
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

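    A sketch of reading the property and deriving the largest programmable
    length; the property name comes from the commit, while the default width
    and the surrounding code are illustrative:

        u32 len_width = 23;     /* illustrative default width, in bits */
        u32 max_buffer_len;

        /* Optional DT property mirroring the IP's c_sg_length_width. */
        of_property_read_u32(node, "xlnx,sg-length-width", &len_width);

        /* Never program a transfer length wider than the hardware field. */
        max_buffer_len = GENMASK(len_width - 1, 0);
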
| | * dmaengine: xilinx_dma: in axidma slave_sg and dma_cyclic mode align split descriptors (Andrea Merello, 2019-01-07, 1 file, -0/+9)

    Whenever a single or cyclic transaction is prepared, the driver could
    eventually split it over several SG descriptors in order to deal with
    the HW maximum transfer length. This could end up in DMA operations
    starting from a misaligned address. This seems fatal for the HW if DRE
    (Data Realignment Engine) is not enabled. This patch eventually adjusts
    the transfer size in order to make sure all operations start from an
    aligned address.

    Cc: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
    Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
    Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

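    The adjustment can be sketched as a copy-size clamp (illustrative; the
    driver wraps this in its own copy-size helper):

        static size_t copy_size_aligned(size_t remaining, size_t max_len,
                                        u32 addr_align)
        {
                size_t copy = min(remaining, max_len);

                /*
                 * If this is not the last chunk, shrink it so that the next
                 * chunk starts on an address aligned to 'addr_align' (needed
                 * when the IP has no Data Realignment Engine).
                 */
                if (copy < remaining && addr_align && copy > addr_align)
                        copy = rounddown(copy, addr_align);

                return copy;
        }
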
| | * dmaengine: xilinx_dma: commonize DMA copy size calculation (Andrea Merello, 2019-01-07, 1 file, -8/+31)

    This patch removes a bit of duplicated code by introducing a new
    function that implements the calculation of the DMA copy size, and
    prepares for changes to the copy size calculation that will happen in
    following patches.

    Suggested-by: Vinod Koul <vkoul@kernel.org>
    Signed-off-by: Andrea Merello <andrea.merello@gmail.com>
    Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
    Signed-off-by: Vinod Koul <vkoul@kernel.org>

| * | Merge branch 'topic/tegra' into for-linus (Vinod Koul, 2019-03-12, 2 files, -19/+31)
