path: root/drivers/dma
Commit log for drivers/dma; each entry lists the commit subject, author, date, files touched and lines changed (-removed/+added), followed by the full commit message.

* dmaengine: stm32-mdma: correct desc prep when channel running (Alain Volmat, 2023-11-28, 1 file, -2/+2)

  commit 03f25d53b145bc2f7ccc82fc04e4482ed734f524 upstream.

  In case of the prep descriptor while the channel is already running, the
  CCR register value stored into the channel could already have its EN bit
  set. This would lead to a bad transfer since, at transfer start time, the
  channel would be enabled while the other registers are not yet properly
  set. To avoid this, ensure to mask the CCR_EN bit when storing the ccr
  value into the mdma channel structure.

  Fixes: a4ffb13c8946 ("dmaengine: Add STM32 MDMA driver")
  Signed-off-by: Alain Volmat <alain.volmat@foss.st.com>
  Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
  Cc: stable@vger.kernel.org
  Tested-by: Alain Volmat <alain.volmat@foss.st.com>
  Link: https://lore.kernel.org/r/20231009082450.452877-1-amelie.delaunay@foss.st.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

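  A minimal sketch of the pattern this fix describes, not the actual driver
  change (the accessor, macro and field names follow the commit text and are
  assumptions here):

      /* The hardware CCR may already have its enable bit set while the
       * channel is running; cache the value with EN masked so a later
       * transfer start cannot enable the channel before the other
       * registers are programmed.
       */
      u32 ccr = stm32_mdma_read(dmadev, STM32_MDMA_CCR(chan->id));

      chan->ccr = ccr & ~STM32_MDMA_CCR_EN;   /* never store the EN bit */
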
* dmaengine: pxa_dma: Remove an erroneous BUG_ON() in pxad_free_desc() (Christophe JAILLET, 2023-11-20, 1 file, -1/+0)

  [ Upstream commit 83c761f568733277ce1f7eb9dc9e890649c29a8c ]

  If pxad_alloc_desc() fails on the first dma_pool_alloc() call, then
  sw_desc->nb_desc is zero. In such a case pxad_free_desc() is called and
  it will BUG_ON().

  Remove this erroneous BUG_ON(). It is also useless, because if
  "sw_desc->nb_desc == 0", then, on the first iteration of the for loop,
  i is -1 and the loop will not be executed. (both i and sw_desc->nb_desc
  are 'int')

  Fixes: a57e16cf0333 ("dmaengine: pxa: add pxa dmaengine driver")
  Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Link: https://lore.kernel.org/r/c8fc5563c9593c914fde41f0f7d1489a21b45a9a.1696676782.git.christophe.jaillet@wanadoo.fr
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Sasha Levin <sashal@kernel.org>

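  A sketch of why the BUG_ON() was unnecessary (struct and field names are
  illustrative, not the exact pxa_dma code): with a signed index counting
  down from nb_desc - 1, a zero nb_desc simply skips the loop.

      static void pxad_free_desc(struct pxad_desc_sw *sw_desc)
      {
              int i;  /* signed, so nb_desc == 0 gives i = -1 and no iteration */

              for (i = sw_desc->nb_desc - 1; i >= 0; i--)
                      dma_pool_free(sw_desc->desc_pool, sw_desc->hw_desc[i],
                                    sw_desc->hw_desc_dma[i]);
              kfree(sw_desc);
      }
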
* dmaengine: ti: edma: handle irq_of_parse_and_map() errors (Dan Carpenter, 2023-11-20, 1 file, -2/+2)

  [ Upstream commit 14f6d317913f634920a640e9047aa2e66f5bdcb7 ]

  Zero is not a valid IRQ for in-kernel code and the irq_of_parse_and_map()
  function returns zero on error. So this check for valid IRQs should only
  accept values > 0.

  Fixes: 2b6b3b742019 ("ARM/dmaengine: edma: Merge the two drivers under drivers/dma/")
  Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
  Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
  Link: https://lore.kernel.org/r/f15cb6a7-8449-4f79-98b6-34072f04edbc@moroto.mountain
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Sasha Levin <sashal@kernel.org>

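  A short sketch of the corrected check (the handler, device node and cookie
  names are illustrative): irq_of_parse_and_map() returns 0 when no mapping
  can be created, and 0 is not a valid IRQ for in-kernel use, so only
  strictly positive values may be passed on to request_irq().

      irq = irq_of_parse_and_map(node, 0);
      if (irq > 0)
              ret = devm_request_irq(dev, irq, edma_err_handler, 0,
                                     "edma_error", ecc);
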
* dmaengine: idxd: Register dsa_bus_type before registering idxd sub-drivers (Fenghua Yu, 2023-11-20, 1 file, -3/+3)

  [ Upstream commit 88928addeec577386e8c83b48b5bc24d28ba97fd ]

  idxd sub-drivers belong to bus dsa_bus_type. Thus, dsa_bus_type must be
  registered in dsa bus init before idxd drivers can be registered.

  But the order is wrong when both idxd and idxd_bus are builtin drivers.
  In this case, idxd driver is compiled and linked before idxd_bus driver.
  Since the initcall order is determined by the link order, idxd sub-drivers
  are registered in idxd initcall before dsa_bus_type is registered in
  idxd_bus initcall. idxd initcall fails:

  [ 21.562803] calling idxd_init_module+0x0/0x110 @ 1
  [ 21.570761] Driver 'idxd' was unable to register with bus_type 'dsa' because the bus was not initialized.
  [ 21.586475] initcall idxd_init_module+0x0/0x110 returned -22 after 15717 usecs
  [ 21.597178] calling dsa_bus_init+0x0/0x20 @ 1

  To fix the issue, compile and link idxd_bus driver before idxd driver to
  ensure the right registration order.

  Fixes: d9e5481fca74 ("dmaengine: dsa: move dsa_bus_type out of idxd driver to standalone")
  Reported-by: Michael Prinke <michael.prinke@intel.com>
  Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Reviewed-by: Lijun Pan <lijun.pan@intel.com>
  Tested-by: Lijun Pan <lijun.pan@intel.com>
  Link: https://lore.kernel.org/r/20230924162232.1409454-1-fenghua.yu@intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Sasha Levin <sashal@kernel.org>

* dmaengine: ste_dma40: Fix PM disable depth imbalance in d40_probe (Zhang Shurong, 2023-11-08, 1 file, -0/+1)

  [ Upstream commit 0618c077a8c20e8c81e367988f70f7e32bb5a717 ]

  The pm_runtime_enable will increase power disable depth. Thus a pairing
  decrement is needed on the error handling path to keep it balanced
  according to context. We fix it by calling pm_runtime_disable when error
  returns.

  Signed-off-by: Zhang Shurong <zhang_shurong@foxmail.com>
  Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
  Link: https://lore.kernel.org/r/tencent_DD2D371DB5925B4B602B1E1D0A5FA88F1208@qq.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Sasha Levin <sashal@kernel.org>

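  A minimal sketch of the balanced error path the fix adds (function names
  other than the pm_runtime_*() calls are illustrative):

      static int foo_probe(struct platform_device *pdev)
      {
              int ret;

              pm_runtime_enable(&pdev->dev);

              ret = foo_register_dma_device(pdev);    /* may fail */
              if (ret)
                      goto err_pm;

              return 0;

      err_pm:
              pm_runtime_disable(&pdev->dev); /* undo the enable, keep depth balanced */
              return ret;
      }
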
* dmaengine: mediatek: Fix deadlock caused by synchronize_irq() (Duoming Zhou, 2023-10-19, 1 file, -2/+1)

  [ Upstream commit 01f1ae2733e2bb4de92fefcea5fda847d92aede1 ]

  The synchronize_irq(c->irq) will not return until the IRQ handler
  mtk_uart_apdma_irq_handler() is completed. If the synchronize_irq() holds
  a spin_lock and waits for the IRQ handler to complete, but the IRQ handler
  also needs the same spin_lock, the deadlock will happen. The process is
  shown below:

            cpu0                                  cpu1
  mtk_uart_apdma_device_pause()    |    mtk_uart_apdma_irq_handler()
    spin_lock_irqsave()            |
                                   |      spin_lock_irqsave()
    //hold the lock to wait        |
    synchronize_irq()              |

  This patch reorders the synchronize_irq(c->irq) outside the spin_lock in
  order to mitigate the bug.

  Fixes: 9135408c3ace ("dmaengine: mediatek: Add MediaTek UART APDMA support")
  Signed-off-by: Duoming Zhou <duoming@zju.edu.cn>
  Reviewed-by: Eugen Hristev <eugen.hristev@collabora.com>
  Link: https://lore.kernel.org/r/20230806032511.45263-1-duoming@zju.edu.cn
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Sasha Levin <sashal@kernel.org>

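  A sketch of the reordering (lock and field names are illustrative): the
  IRQ handler takes the same lock, so synchronize_irq() must only be called
  once the lock has been released.

      spin_lock_irqsave(&c->vc.lock, flags);
      /* ... stop/flush the channel while holding the lock ... */
      spin_unlock_irqrestore(&c->vc.lock, flags);

      synchronize_irq(c->irq);        /* now safe: the handler can make progress */
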
* dmaengine: idxd: use spin_lock_irqsave before wait_event_lock_irq (Rex Zhang, 2023-10-19, 1 file, -2/+3)

  [ Upstream commit c0409dd3d151f661e7e57b901a81a02565df163c ]

  In idxd_cmd_exec(), wait_event_lock_irq() explicitly calls
  spin_unlock_irq()/spin_lock_irq(). If the interrupt is on before entering
  wait_event_lock_irq(), it will become off status after
  wait_event_lock_irq() is called. Later, wait_for_completion() may go to
  sleep but irq is disabled. The scenario is warned in might_sleep().

  Fix it by using spin_lock_irqsave() instead of the primitive spin_lock()
  to save the irq status before entering wait_event_lock_irq() and using
  spin_unlock_irqrestore() instead of the primitive spin_unlock() to
  restore the irq status before entering wait_for_completion().

  Before the change:

  idxd_cmd_exec() {
    interrupt is on
    spin_lock()                // interrupt is on
      wait_event_lock_irq()
        spin_unlock_irq()      // interrupt is enabled
        ...
        spin_lock_irq()        // interrupt is disabled
    spin_unlock()              // interrupt is still disabled
    wait_for_completion()      // report "BUG: sleeping function
                               // called from invalid context...
                               // in_atomic() irqs_disabled()"
  }

  After applying spin_lock_irqsave():

  idxd_cmd_exec() {
    interrupt is on
    spin_lock_irqsave()        // save the on state
                               // interrupt is disabled
      wait_event_lock_irq()
        spin_unlock_irq()      // interrupt is enabled
        ...
        spin_lock_irq()        // interrupt is disabled
    spin_unlock_irqrestore()   // interrupt is restored to on
    wait_for_completion()      // No Call trace
  }

  Fixes: f9f4082dbc56 ("dmaengine: idxd: remove interrupt disable for cmd_lock")
  Signed-off-by: Rex Zhang <rex.zhang@intel.com>
  Signed-off-by: Lijun Pan <lijun.pan@intel.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Reviewed-by: Fenghua Yu <fenghua.yu@intel.com>
  Link: https://lore.kernel.org/r/20230916060619.3744220-1-rex.zhang@intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Sasha Levin <sashal@kernel.org>

* dmaengine: stm32-mdma: set in_flight_bytes in case CRQA flag is set (Amelie Delaunay, 2023-10-19, 1 file, -5/+9)

  commit 584970421725b7805db84714b857851fdf7203a9 upstream.

  CRQA flag is set by hardware when the channel request becomes active and
  the channel is enabled. It is cleared by hardware when the channel request
  is completed. So when it is set, it means MDMA is transferring bytes.

  This information is useful in case of STM32 DMA and MDMA chaining,
  especially when the user pauses DMA before stopping it, to trigger one
  last MDMA transfer to get the latest bytes of the SRAM buffer to the
  destination buffer.

  STM32 DCMI driver can then use this to know if the last MDMA transfer in
  case of chaining is done.

  Fixes: 696874322771 ("dmaengine: stm32-mdma: add support to be triggered by STM32 DMA")
  Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
  Cc: stable@vger.kernel.org
  Link: https://lore.kernel.org/r/20231004163531.2864160-3-amelie.delaunay@foss.st.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

* dmaengine: stm32-mdma: use Link Address Register to compute residue (Amelie Delaunay, 2023-10-19, 1 file, -4/+11)

  commit a4b306eb83579c07b63dc65cd5bae53b7b4019d0 upstream.

  Current implementation relies on curr_hwdesc index. But to keep this
  index up to date, Block Transfer interrupt (BTIE) has to be enabled. If
  it is not, curr_hwdesc is not updated, and then residue is not reliable.
  Rely on Link Address Register instead. And disable BTIE interrupt in
  stm32_mdma_setup_xfer() because it is no longer needed in case of
  _prep_slave_sg() to maintain curr_hwdesc up to date. It avoids extra
  interrupts and also ensures a reliable residue. These improvements are
  required for the STM32 DCMI camera capture use case, which needs STM32
  DMA and MDMA chaining for good performance.

  Fixes: 696874322771 ("dmaengine: stm32-mdma: add support to be triggered by STM32 DMA")
  Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
  Cc: stable@vger.kernel.org
  Link: https://lore.kernel.org/r/20231004163531.2864160-2-amelie.delaunay@foss.st.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

* dmaengine: stm32-dma: fix residue in case of MDMA chaining (Amelie Delaunay, 2023-10-19, 1 file, -3/+4)

  commit 67e13e89742c3b21ce177f612bf9ef32caae6047 upstream.

  In case of MDMA chaining, DMA is configured in Double-Buffer Mode (DBM)
  with two periods, but if transfer has been prepared with
  _prep_slave_sg(), the transfer is not marked cyclic (=!chan->desc->cyclic).
  However, as DBM is activated for MDMA chaining, residue computation must
  take into account cyclic constraints.

  With only two periods in MDMA chaining, and no update due to Transfer
  Complete interrupt masked, n_sg is always 0. If DMA current memory
  address (depending on SxCR.CT and SxM0AR/SxM1AR) does not correspond, it
  means n_sg should be increased. Then, the residue of the current period
  is the one read from SxNDTR and should not be overwritten with the full
  period length.

  Fixes: 723795173ce1 ("dmaengine: stm32-dma: add support to trigger STM32 MDMA")
  Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
  Cc: stable@vger.kernel.org
  Link: https://lore.kernel.org/r/20231004155024.2609531-2-amelie.delaunay@foss.st.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

* dmaengine: stm32-dma: fix stm32_dma_prep_slave_sg in case of MDMA chaining (Amelie Delaunay, 2023-10-19, 1 file, -1/+3)

  commit 2df467e908ce463cff1431ca1b00f650f7a514b4 upstream.

  Current Target (CT) has to be reset when starting an MDMA chaining use
  case, as Double Buffer mode is activated. It ensures the DMA will start
  processing the first memory target (pointed with SxM0AR).

  Fixes: 723795173ce1 ("dmaengine: stm32-dma: add support to trigger STM32 MDMA")
  Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
  Cc: stable@vger.kernel.org
  Link: https://lore.kernel.org/r/20231004155024.2609531-1-amelie.delaunay@foss.st.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

* dmaengine: stm32-mdma: abort resume if no ongoing transfer (Amelie Delaunay, 2023-10-19, 1 file, -0/+4)

  commit 81337b9a72dc58a5fa0ae8a042e8cb59f9bdec4a upstream.

  chan->desc can be null, if transfer is terminated when resume is called,
  leading to a NULL pointer when retrieving the hwdesc. To avoid this case,
  check that chan->desc is not null and channel is disabled (transfer
  previously paused or terminated).

  Fixes: a4ffb13c8946 ("dmaengine: Add STM32 MDMA driver")
  Signed-off-by: Amelie Delaunay <amelie.delaunay@foss.st.com>
  Cc: stable@vger.kernel.org
  Link: https://lore.kernel.org/r/20231004163531.2864160-1-amelie.delaunay@foss.st.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

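  A rough sketch of the added guard (the return value, accessor and register
  names are assumptions, not the actual stm32-mdma code): resume only makes
  sense when a descriptor exists and the channel is currently disabled.

      if (!chan->desc || (stm32_mdma_read(dmadev, STM32_MDMA_CCR(chan->id)) &
                          STM32_MDMA_CCR_EN))
              return -EPERM;  /* nothing paused to resume */
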
* dmaengine: sh: rz-dmac: Fix destination and source data size setting (Hien Huynh, 2023-09-19, 1 file, -4/+7)

  commit c6ec8c83a29fb3aec3efa6fabbf5344498f57c7f upstream.

  Before setting DDS and SDS values, we need to clear their current value
  first; otherwise, we get incorrect results when we change/update the DMA
  bus width several times, due to the 'OR' expression.

  Fixes: 5000d37042a6 ("dmaengine: sh: Add DMAC driver for RZ/G2L SoC")
  Cc: stable@kernel.org
  Signed-off-by: Hien Huynh <hien.huynh.px@renesas.com>
  Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
  Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
  Link: https://lore.kernel.org/r/20230706112150.198941-3-biju.das.jz@bp.renesas.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

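  A sketch of the read-modify-write pattern the fix introduces (register and
  mask names are illustrative; FIELD_PREP() comes from linux/bitfield.h):
  clear the destination/source data size fields before OR-ing in the new
  width, so stale bits from a previous configuration cannot survive.

      u32 chcfg = readl(channel->chan_base + CHCFG);

      chcfg &= ~(CHCFG_DDS_MASK | CHCFG_SDS_MASK);
      chcfg |= FIELD_PREP(CHCFG_DDS_MASK, dst_size) |
               FIELD_PREP(CHCFG_SDS_MASK, src_size);
      writel(chcfg, channel->chan_base + CHCFG);
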
* dmaengine: idxd: Fix issues with PRS disable sysfs knob (Fenghua Yu, 2023-09-13, 1 file, -2/+2)

  [ Upstream commit 8cae66574398326134a41513b419e00ad4e380ca ]

  There are two issues in the current PRS disable sysfs store function
  wq_prs_disable_store():

  1. Since the PRS disable knob is invisible if PRS disable is not
     supported in the WQ, it's redundant to check PRS support again in the
     store function. Remove the redundant PRS support check.

  2. Since PRS disable is read-only when the device is not configurable,
     PRS disable cannot be changed on the device. Add a device configurable
     check in the store function.

  Fixes: f2dc327131b5 ("dmaengine: idxd: add per wq PRS disable")
  Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/20230811012635.535413-2-fenghua.yu@intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Sasha Levin <sashal@kernel.org>

* dmaengine: idxd: Allow ATS disable update only for configurable devices (Fenghua Yu, 2023-09-13, 1 file, -0/+4)

  [ Upstream commit 0056a7f07b0a63e6cee815a789eabba6f3a710f0 ]

  ATS disable status in a WQ is read-only if the device is not
  configurable. This change ensures that the ATS disable attribute can be
  modified via sysfs only on configurable devices.

  Fixes: 92de5fa2dc39 ("dmaengine: idxd: add ATS disable knob for work queues")
  Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/20230811012635.535413-1-fenghua.yu@intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Sasha Levin <sashal@kernel.org>

* dmaengine: idxd: Expose ATS disable knob only when WQ ATS is supported (Fenghua Yu, 2023-09-13, 1 file, -4/+3)

  [ Upstream commit 62b41b656666d2d35890124df5ef0881fe6d6769 ]

  WQ Advanced Translation Service (ATS) can be controlled only when WQ ATS
  is supported. The sysfs ATS disable knob should be visible only when the
  feature is supported.

  Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/20230712174436.3435088-2-fenghua.yu@intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Stable-dep-of: 0056a7f07b0a ("dmaengine: idxd: Allow ATS disable update only for configurable devices")
  Signed-off-by: Sasha Levin <sashal@kernel.org>

* dmaengine: idxd: Simplify WQ attribute visibility checks (Fenghua Yu, 2023-09-13, 1 file, -15/+5)

  [ Upstream commit 97b1185fe54c8ce94104e3c7fa4ee0bbedd85920 ]

  The functions that check if WQ attributes are invisible are almost
  duplicates. Define a helper to simplify these functions and future WQ
  attribute visibility checks as well.

  Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/20230712174436.3435088-1-fenghua.yu@intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Stable-dep-of: 0056a7f07b0a ("dmaengine: idxd: Allow ATS disable update only for configurable devices")
  Signed-off-by: Sasha Levin <sashal@kernel.org>

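  Illustrative only (the helper name, its signature and the capability field
  are assumptions, not the real idxd code): the idea is a single predicate
  shared by the sysfs is_visible() callback instead of several
  near-identical checks.

      static bool wq_attr_invisible(struct attribute *lookup,
                                    struct attribute *attr, bool supported)
      {
              return attr == lookup && !supported;
      }

      /* e.g. inside the attribute group's is_visible() callback: */
      if (wq_attr_invisible(&dev_attr_wq_ats_disable.attr, attr,
                            idxd->hw.wq_cap.wq_ats_support))
              return 0;       /* hide the attribute */
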
* dmaengine: ste_dma40: Add missing IRQ check in d40_probe (ruanjinjie, 2023-09-13, 1 file, -0/+4)

  [ Upstream commit c05ce6907b3d6e148b70f0bb5eafd61dcef1ddc1 ]

  Check for the return value of platform_get_irq(): if no interrupt is
  specified, it wouldn't make sense to call request_irq().

  Fixes: 8d318a50b3d7 ("DMAENGINE: Support for ST-Ericssons DMA40 block v3")
  Signed-off-by: Ruan Jinjie <ruanjinjie@huawei.com>
  Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
  Link: https://lore.kernel.org/r/20230724144108.2582917-1-ruanjinjie@huawei.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Sasha Levin <sashal@kernel.org>

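  A minimal sketch of the added check (identifiers other than
  platform_get_irq() and request_irq() are illustrative):
  platform_get_irq() returns a negative errno on failure, so the value must
  be checked before it is handed to request_irq().

      ret = platform_get_irq(pdev, 0);
      if (ret < 0)
              return ret;
      base->irq = ret;

      ret = request_irq(base->irq, d40_handle_interrupt, 0, "dma40", base);
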
* dmaengine: idxd: Modify the dependence of attribute pasid_enabled (Rex Zhang, 2023-09-13, 1 file, -1/+1)

  [ Upstream commit 50c5e6f41d5ad7c731c31135a30d0e4f0e4fea26 ]

  Kernel PASID and user PASID are separately enabled. User needs to know
  the user PASID enabling status to decide how to use IDXD device in user
  space. This is done via the attribute
  /sys/bus/dsa/devices/dsa0/pasid_enabled. It's unnecessary for user to
  know the kernel PASID enabling status because user won't use the kernel
  PASID. But instead of showing the user PASID enabling status, the
  attribute shows the kernel PASID enabling status. Fix the issue by
  showing the user PASID enabling status in the attribute.

  Fixes: 42a1b73852c4 ("dmaengine: idxd: Separate user and kernel pasid enabling")
  Signed-off-by: Rex Zhang <rex.zhang@intel.com>
  Acked-by: Fenghua Yu <fenghua.yu@intel.com>
  Acked-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/20230614062706.1743078-1-rex.zhang@intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Signed-off-by: Sasha Levin <sashal@kernel.org>

* dmaengine: xilinx: xdma: Fix typo (Miquel Raynal, 2023-08-07, 1 file, -1/+1)

  Probably a copy/paste error with the previous block, here we are actually
  managing C2H IRQs.

  Fixes: 17ce252266c7 ("dmaengine: xilinx: xdma: Add xilinx xdma driver")
  Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
  Link: https://lore.kernel.org/r/20230731101442.792514-3-miquel.raynal@bootlin.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: xilinx: xdma: Fix interrupt vector setting (Miquel Raynal, 2023-08-07, 1 file, -0/+2)

  A couple of hardware registers need to be set to reflect which interrupts
  have been allocated to the device. Each register is 32-bit wide and can
  receive four 8-bit values. If we provide a number of interrupts other
  than four, the irq_num variable will never be 0 within the while check
  and the while block will loop forever.

  There is an easy way to prevent this: just break the for loop when we
  reach "irq_num == 0", which anyway means all interrupts have been
  processed.

  Cc: stable@vger.kernel.org
  Fixes: 17ce252266c7 ("dmaengine: xilinx: xdma: Add xilinx xdma driver")
  Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
  Acked-by: Lizhi Hou <lizhi.hou@amd.com>
  Link: https://lore.kernel.org/r/20230731101442.792514-2-miquel.raynal@bootlin.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

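  A sketch of the packing described above (register names, base values and
  loop bounds are assumptions): four 8-bit vector numbers go into each
  32-bit register, and the loops stop as soon as irq_num reaches zero, so
  any interrupt count works, not only multiples of four.

      u32 val;
      int i = 0, j;

      while (irq_num > 0) {
              val = 0;
              for (j = 0; j < 4; j++) {
                      val |= ((vector_base + i * 4 + j) & 0xff) << (j * 8);
                      if (--irq_num == 0)
                              break;  /* all interrupts processed */
              }
              writel(val, dev_base + XDMA_IRQ_VECTOR_REG(i));
              i++;
      }
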
* dmaengine: owl-dma: Modify mismatched function name (Zhang Jianhua, 2023-08-07, 1 file, -1/+1)

  No functional modification involved.

  drivers/dma/owl-dma.c:208: warning: expecting prototype for struct owl_dma_pchan. Prototype was for struct owl_dma_vchan instead
  HDRTEST usr/include/sound/asequencer.h

  Fixes: 47e20577c24d ("dmaengine: Add Actions Semi Owl family S900 DMA driver")
  Signed-off-by: Zhang Jianhua <chris.zjh@huawei.com>
  Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
  Link: https://lore.kernel.org/r/20230722153244.2086949-1-chris.zjh@huawei.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: idxd: Clear PRS disable flag when disabling IDXD device (Fenghua Yu, 2023-08-07, 1 file, -3/+1)

  Disabling the IDXD device doesn't reset the Page Request Service (PRS)
  disable flag to its initial value 0. This may cause user confusion
  because once PRS is disabled the user will see PRS still remains the
  previous setting (i.e. disabled) via the sysfs interface even after the
  device is disabled.

  To eliminate user confusion, reset the PRS disable flag to ensure that
  the PRS flag bit reflects the correct state after the device is disabled.

  Additionally, simplify the code by setting wq->flags to 0, which clears
  all flag bits, including any future additions.

  Fixes: f2dc327131b5 ("dmaengine: idxd: add per wq PRS disable")
  Tested-by: Tony Zhu <tony.zhu@intel.com>
  Signed-off-by: Fenghua Yu <fenghua.yu@intel.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/20230712193505.3440752-1-fenghua.yu@intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: pl330: Return DMA_PAUSED when transaction is paused (Ilpo Järvinen, 2023-08-07, 1 file, -2/+16)

  pl330_pause() does not set anything to indicate paused condition which
  causes pl330_tx_status() to return DMA_IN_PROGRESS. This breaks 8250 DMA
  flush after the fix in commit 57e9af7831dc ("serial: 8250_dma: Fix DMA Rx
  rearm race"). The function comment for pl330_pause() claims pause is
  supported but resume is not, which is enough for 8250 DMA flush to work
  as long as DMA status reports DMA_PAUSED when appropriate.

  Add PAUSED state for descriptor and mark BUSY descriptors with PAUSED in
  pl330_pause(). Return DMA_PAUSED from pl330_tx_status() when the
  descriptor is PAUSED.

  Reported-by: Richard Tresidder <rtresidd@electromag.com.au>
  Tested-by: Richard Tresidder <rtresidd@electromag.com.au>
  Fixes: 88987d2c7534 ("dmaengine: pl330: add DMA_PAUSE feature")
  Cc: stable@vger.kernel.org
  Link: https://lore.kernel.org/linux-serial/f8a86ecd-64b1-573f-c2fa-59f541083f1a@electromag.com.au/
  Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Link: https://lore.kernel.org/r/20230526105434.14959-1-ilpo.jarvinen@linux.intel.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

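  A sketch of how the paused state can be reported (the descriptor lookup
  helper is an assumption; dma_cookie_status(), DMA_COMPLETE and DMA_PAUSED
  are the regular dmaengine interfaces):

      static enum dma_status foo_tx_status(struct dma_chan *chan,
                                           dma_cookie_t cookie,
                                           struct dma_tx_state *txstate)
      {
              enum dma_status ret = dma_cookie_status(chan, cookie, txstate);

              if (ret != DMA_COMPLETE && foo_desc_is_paused(chan, cookie))
                      return DMA_PAUSED;      /* descriptor was marked in pause() */

              return ret;
      }
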
* dmaengine: mcf-edma: Fix a potential un-allocated memory access (Christophe JAILLET, 2023-08-07, 1 file, -6/+7)

  When 'mcf_edma' is allocated, some space is allocated for a flexible
  array at the end of the struct. 'chans' items are allocated, that is to
  say 'pdata->dma_channels' of them. Then, this number of items is stored
  in 'mcf_edma->n_chans'.

  A few lines later, if 'mcf_edma->n_chans' is 0, then a default value of
  64 is set. This ends up with no space allocated by devm_kzalloc() because
  chans was 0, but 64 items are read and/or written in some not allocated
  memory.

  Change the logic to define a default value before allocating the memory.

  Fixes: e7a3ff92eaf1 ("dmaengine: fsl-edma: add ColdFire mcf5441x edma support")
  Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Link: https://lore.kernel.org/r/f55d914407c900828f6fad3ea5fa791a5f17b9a4.1685172449.git.christophe.jaillet@wanadoo.fr
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

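  A sketch of the reordered logic (type and field names are illustrative):
  choose the channel count before sizing the flexible array, so the default
  of 64 is also what actually gets allocated.

      chans = pdata->dma_channels;
      if (!chans)
              chans = 64;     /* default must be known before the allocation */

      mcf_edma = devm_kzalloc(&pdev->dev,
                              struct_size(mcf_edma, chans, chans), GFP_KERNEL);
      if (!mcf_edma)
              return -ENOMEM;

      mcf_edma->n_chans = chans;
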
* dmaengine: xilinx: xdma: Fix Judgment of the return value (Minjie Du, 2023-07-12, 1 file, -1/+1)

  Make IS_ERR() check the return value of the devm_ioremap_resource() call.

  Fixes: 17ce252266c7 ("dmaengine: xilinx: xdma: Add xilinx xdma driver")
  Signed-off-by: Minjie Du <duminjie@vivo.com>
  Acked-by: Michal Simek <michal.simek@amd.com>
  Link: https://lore.kernel.org/r/20230705113912.16247-1-duminjie@vivo.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

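  The corrected pattern in short (variable names are illustrative):
  devm_ioremap_resource() returns an ERR_PTR() on failure, never NULL, so
  the result must be tested with IS_ERR().

      reg_base = devm_ioremap_resource(&pdev->dev, res);
      if (IS_ERR(reg_base))
              return PTR_ERR(reg_base);
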
* idmaengine: make FSL_EDMA and INTEL_IDMA64 depends on HAS_IOMEM (Baoquan He, 2023-07-12, 1 file, -0/+2)

  On s390 systems (aka mainframes), it has classic channel devices for
  networking and permanent storage that are currently even more common than
  PCI devices. Hence it could have a fully functional s390 kernel with
  CONFIG_PCI=n, then the relevant iomem mapping functions [including
  ioremap(), devm_ioremap(), etc.] are not available.

  Here let FSL_EDMA and INTEL_IDMA64 depend on HAS_IOMEM so that they won't
  be built to cause the below compiling error if PCI is unset:

  --------
  ERROR: modpost: "devm_platform_ioremap_resource" [drivers/dma/fsl-edma.ko] undefined!
  ERROR: modpost: "devm_platform_ioremap_resource" [drivers/dma/idma64.ko] undefined!
  --------

  Reported-by: kernel test robot <lkp@intel.com>
  Closes: https://lore.kernel.org/oe-kbuild-all/202306211329.ticOJCSv-lkp@intel.com/
  Signed-off-by: Baoquan He <bhe@redhat.com>
  Cc: Vinod Koul <vkoul@kernel.org>
  Cc: dmaengine@vger.kernel.org
  Link: https://lore.kernel.org/r/20230707135852.24292-2-bhe@redhat.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* Merge tag 'dmaengine-6.5-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine (Linus Torvalds, 2023-07-06, 24 files, -288/+1113)

  Pull dmaengine updates from Vinod Koul:

   "New support:
     - TI J721S2 CSI BCDMA support

    Updates:
     - Native HDMA support for dw edma driver
     - ste dma40 updates for supporting proper SRAM handle in DT
     - removal of dma device chancnt setting in drivers"

  * tag 'dmaengine-6.5-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vkoul/dmaengine: (28 commits)
    dmaengine: sprd: Don't set chancnt
    dmaengine: hidma: Don't set chancnt
    dmaengine: plx_dma: Don't set chancnt
    dmaengine: axi-dmac: Don't set chancnt
    dmaengine: dw-axi-dmac: Don't set chancnt
    dmaengine: qcom: bam_dma: allow omitting num-{channels,ees}
    dmaengine: dw-edma: Add HDMA DebugFS support
    dmaengine: dw-edma: Add support for native HDMA
    dmaengine: dw-edma: Create a new dw_edma_core_ops structure to abstract controller operation
    dmaengine: dw-edma: Rename dw_edma_core_ops structure to dw_edma_plat_ops
    dmaengine: ste_dma40: use proper format string for resource_size_t
    dmaengine: make QCOM_HIDMA depend on HAS_IOMEM
    dmaengine: ste_dma40: fix typo in enum documentation
    dmaengine: ste_dma40: use correct print specfier for resource_size_t
    MAINTAINERS: Add myself as the DW eDMA driver reviewer
    MAINTAINERS: Add Manivannan to DW eDMA driver maintainers list
    MAINTAINERS: Demote Gustavo Pimentel to DW EDMA driver reviewer
    dmaengine: ti: k3-udma: Add support for J721S2 CSI BCDMA instance
    dt-bindings: dma: ti: Add J721S2 BCDMA
    dmaengine: ti: k3-psil-j721s2: Add PSI-L thread map for main CPSW2G
    ...

  The individual commits pulled in by this merge are listed below.

* dmaengine: sprd: Don't set chancnt (Jisheng Zhang, 2023-05-24, 1 file, -1/+0)

  The dma framework will calculate the dma channels chancnt, setting it
  ourselves is wrong.

  Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
  Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
  Link: https://lore.kernel.org/r/20230521100252.3197-6-jszhang@kernel.org
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

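  A sketch of why setting chancnt by hand is wrong (the channel setup names
  are illustrative): each channel is linked into the dma_device's channel
  list, and dma_async_device_register() derives chancnt from that list.

      for (i = 0; i < nr_channels; i++)
              vchan_init(&sdev->channels[i].vc, dma_dev);     /* adds to dma_dev->channels */

      /* do not assign dma_dev->chancnt here; registration counts the list */
      ret = dma_async_device_register(dma_dev);
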
* dmaengine: hidma: Don't set chancnt (Jisheng Zhang, 2023-05-24, 1 file, -1/+0)

  The dma framework will calculate the dma channels chancnt, setting it
  ourselves is wrong.

  Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
  Link: https://lore.kernel.org/r/20230521100252.3197-5-jszhang@kernel.org
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: plx_dma: Don't set chancnt (Jisheng Zhang, 2023-05-24, 1 file, -1/+0)

  The dma framework will calculate the dma channels chancnt, setting it
  ourselves is wrong.

  Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
  Acked-by: Logan Gunthorpe <logang@deltatee.com>
  Link: https://lore.kernel.org/r/20230521100252.3197-4-jszhang@kernel.org
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: axi-dmac: Don't set chancnt (Jisheng Zhang, 2023-05-24, 1 file, -1/+0)

  The dma framework will calculate the dma channels chancnt, setting it
  ourselves is wrong.

  Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
  Acked-by: Lars-Peter Clausen <lars@metafoo.de>
  Link: https://lore.kernel.org/r/20230521100252.3197-3-jszhang@kernel.org
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-axi-dmac: Don't set chancnt (Jisheng Zhang, 2023-05-24, 1 file, -1/+0)

  The dma framework will calculate the dma channels chancnt, setting it
  ourselves is wrong.

  Signed-off-by: Jisheng Zhang <jszhang@kernel.org>
  Link: https://lore.kernel.org/r/20230521100252.3197-2-jszhang@kernel.org
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: qcom: bam_dma: allow omitting num-{channels,ees} (Stephan Gerhold, 2023-05-24, 1 file, -9/+9)

  The bam_dma driver needs to know the number of channels and execution
  environments (EEs) at probe time. If we are in full control of the BAM
  controller this information can be obtained from the BAM identification
  registers (BAM_REVISION/BAM_NUM_PIPES).

  When the BAM is "controlled remotely" it is more complicated. The BAM
  might not be on at probe time, so reading the registers could fail. This
  is why the information must be added to the device tree in this case,
  using "num-channels" and "qcom,num-ees".

  However, there are also some BAM instances that are initialized by
  something else but we still have a clock that allows to turn it on when
  needed. This can be set up in the DT with "qcom,controlled-remotely" and
  "clocks" and is already supported by the bam_dma driver. Examples for
  this are the typical BLSP BAM instances on older SoCs, QPIC BAM (for
  NAND) and the crypto BAM on some SoCs.

  In this case, there is no need to read "num-channels" and "qcom,num-ees"
  from the DT. The BAM can be turned on using the clock so we can just read
  it from the BAM registers like in the normal case.

  Check for the BAM clock earlier and skip reading "num-channels" and
  "qcom,num-ees" if it is present to allow simplifying the DT description
  a bit.

  Signed-off-by: Stephan Gerhold <stephan@gerhold.net>
  Reviewed-by: Bhupesh Sharma <bhupesh.sharma@linaro.org>
  Link: https://lore.kernel.org/r/20230518-bamclk-dt-v2-1-a1a857b966ca@gerhold.net
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Add HDMA DebugFS support (Cai Huoqing, 2023-05-24, 4 files, -1/+196)

  Add HDMA DebugFS support to show registers content.

  Signed-off-by: Cai Huoqing <cai.huoqing@linux.dev>
  Reviewed-by: Serge Semin <fancer.lancer@gmail.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Tested-by: Serge Semin <fancer.lancer@gmail.com>
  Link: https://lore.kernel.org/r/20230520050854.73160-5-cai.huoqing@linux.dev
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Add support for native HDMA (Cai Huoqing, 2023-05-24, 5 files, -3/+448)

  Add support for HDMA NATIVE, as long as the IP design has set the
  compatible register map parameter HDMA_NATIVE, which allows compatibility
  for native HDMA register configuration.

  The HDMA Hyper-DMA IP is an enhancement of the eDMA embedded-DMA IP. The
  native HDMA registers are different from eDMA, so this patch adds support
  for HDMA NATIVE mode.

  HDMA write and read channels operate independently to maximize the
  performance of the HDMA read and write data transfer over the link. When
  you configure the HDMA with multiple read channels, it uses a round robin
  (RR) arbitration scheme to select the next read channel to be serviced.
  The same applies when you have multiple write channels.

  The native HDMA driver also supports a maximum of 16 independent channels
  (8 write + 8 read), which can run simultaneously. Both SAR (Source
  Address Register) and DAR (Destination Address Register) are aligned to
  byte.

  Signed-off-by: Cai Huoqing <cai.huoqing@linux.dev>
  Reviewed-by: Serge Semin <fancer.lancer@gmail.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Tested-by: Serge Semin <fancer.lancer@gmail.com>
  Link: https://lore.kernel.org/r/20230520050854.73160-4-cai.huoqing@linux.dev
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Create a new dw_edma_core_ops structure to abstract controller operation (Cai Huoqing, 2023-05-24, 4 files, -82/+157)

  The structure dw_edma_core_ops has a set of the pointers abstracting out
  the DW eDMA vX and DW HDMA Native controllers. And use
  dw_edma_v0_core_register to set up operation.

  Signed-off-by: Cai Huoqing <cai.huoqing@linux.dev>
  Reviewed-by: Serge Semin <fancer.lancer@gmail.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Tested-by: Serge Semin <fancer.lancer@gmail.com>
  Link: https://lore.kernel.org/r/20230520050854.73160-3-cai.huoqing@linux.dev
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Rename dw_edma_core_ops structure to dw_edma_plat_ops (Cai Huoqing, 2023-05-24, 1 file, -2/+2)

  The dw_edma_core_ops structure contains a set of the operations: device
  IRQ numbers getter, CPU/PCI address translation. Based on the functions
  semantics the structure name "dw_edma_plat_ops" looks more descriptive
  since indeed the operations are platform-specific. The "dw_edma_core_ops"
  name shall be used for a structure with the IP-core specific set of
  callbacks in order to abstract out DW eDMA and DW HDMA setups. Such a
  structure will be added in one of the next commits in the framework of
  the set of changes adding the DW HDMA device support.

  Anyway the renaming was necessary to distinguish two types of the
  implementation callbacks:

  1. DW eDMA/hDMA IP-core specific operations: device-specific CSR setups
     in one or another aspect of the DMA-engine initialization.

  2. DW eDMA/hDMA platform specific operations: the DMA device environment
     configs like IRQs, address translation, etc.

  Signed-off-by: Cai Huoqing <cai.huoqing@linux.dev>
  Reviewed-by: Serge Semin <fancer.lancer@gmail.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Tested-by: Serge Semin <fancer.lancer@gmail.com>
  Link: https://lore.kernel.org/r/20230520050854.73160-2-cai.huoqing@linux.dev
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: ste_dma40: use proper format string for resource_size_t (Arnd Bergmann, 2023-05-19, 1 file, -2/+2)

  A fixup for a printk format string warning causes an out-of-bounds
  variable access as the %pR string expects a struct resource instead of a
  plain resource_size_t. Change both to the special %pap and %pap helpers
  for these types.

  Fixes: 5a1a3b9c19dd ("dmaengine: ste_dma40: Get LCPA SRAM from SRAM node")
  Fixes: ef1e1c41a11d ("dmaengine: ste_dma40: use correct print specfier for resource_size_t")
  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  Link: https://lore.kernel.org/r/20230519093447.4097040-1-arnd@kernel.org
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: make QCOM_HIDMA depend on HAS_IOMEM (Baoquan He, 2023-05-18, 1 file, -0/+1)

  On s390 systems (aka mainframes), it has classic channel devices for
  networking and permanent storage that are currently even more common than
  PCI devices. Hence it could have a fully functional s390 kernel with
  CONFIG_PCI=n, then the relevant iomem mapping functions [including
  ioremap(), devm_ioremap(), etc.] are not available.

  Here let QCOM_HIDMA depend on HAS_IOMEM so that it won't be built to
  cause the below compiling error if PCI is unset:

  --------------------------------------------------------
  ld: drivers/dma/qcom/hidma.o: in function `hidma_probe':
  hidma.c:(.text+0x4b46): undefined reference to `devm_ioremap_resource'
  ld: hidma.c:(.text+0x4b9e): undefined reference to `devm_ioremap_resource'
  make[1]: *** [scripts/Makefile.vmlinux:35: vmlinux] Error 1
  make: *** [Makefile:1264: vmlinux] Error 2

  Signed-off-by: Baoquan He <bhe@redhat.com>
  Reviewed-by: Niklas Schnelle <schnelle@linux.ibm.com>
  Cc: Andy Gross <agross@kernel.org>
  Cc: Bjorn Andersson <andersson@kernel.org>
  Cc: Konrad Dybcio <konrad.dybcio@linaro.org>
  Cc: Vinod Koul <vkoul@kernel.org>
  Cc: linux-arm-msm@vger.kernel.org
  Cc: dmaengine@vger.kernel.org
  Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@gmail.com>
  Link: https://lore.kernel.org/r/20230506111628.712316-3-bhe@redhat.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: ste_dma40: fix typo in enum documentation (Vinod Koul, 2023-05-18, 1 file, -1/+1)

  s/40_command/d40_command to fix the below warning reported:

  drivers/dma/ste_dma40.c:151: warning: expecting prototype for enum 40_command. Prototype was for enum d40_command instead

  Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Link: https://lore.kernel.org/r/20230517064434.141091-2-vkoul@kernel.org
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: ste_dma40: use correct print specfier for resource_size_t (Vinod Koul, 2023-05-18, 1 file, -2/+2)

  We should use %pR for printing resource_size_t, so update that, fixing
  the warning:

  drivers/dma/ste_dma40.c:3556:25: warning: format specifies type 'unsigned int' but the argument has type 'resource_size_t' (aka 'unsigned long long') [-Wformat]

  Reported-by: kernel test robot <lkp@intel.com>
  Fixes: 5a1a3b9c19dd ("dmaengine: ste_dma40: Get LCPA SRAM from SRAM node")
  Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
  Link: https://lore.kernel.org/r/20230517064434.141091-1-vkoul@kernel.org
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: ti: k3-udma: Add support for J721S2 CSI BCDMA instance (Vaishnav Achath, 2023-05-16, 1 file, -0/+25)

  J721S2 has dedicated BCDMA instance for Camera Serial Interface RX and
  TX. The BCDMA instance supports RX and TX channels but block copy
  channels are not present, add support for the same.

  Signed-off-by: Vaishnav Achath <vaishnav.a@ti.com>
  Link: https://lore.kernel.org/r/20230505143929.28131-3-vaishnav.a@ti.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: ti: k3-psil-j721s2: Add PSI-L thread map for main CPSW2G (Kishon Vijay Abraham I, 2023-05-16, 1 file, -0/+11)

  Add PSI-L thread map for main CPSW2G.

  Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
  Signed-off-by: Siddharth Vadapalli <s-vadapalli@ti.com>
  Acked-by: Peter Ujfalusi <peter.ujfalusi@gmail.com>
  Link: https://lore.kernel.org/r/20230511034704.656155-1-s-vadapalli@ti.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: ste_dma40: Return error codes properly (Linus Walleij, 2023-05-16, 1 file, -22/+24)

  This makes the probe() and its subfunction d40_hw_detect_init() return
  proper error codes. One effect of this is that deferred probe, e.g. from
  the clock, will start to work, would it happen. Also it is better design.

  Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
  Link: https://lore.kernel.org/r/20230417-ux500-dma40-cleanup-v3-7-60bfa6785968@linaro.org
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: ste_dma40: Use managed resources (Linus Walleij, 2023-05-16, 1 file, -119/+61)

  This switches the DMA40 driver to use a bunch of managed resources and
  strip down the errorpath. The result is pretty neat and makes the driver
  way more readable.

  Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
  Link: https://lore.kernel.org/r/20230417-ux500-dma40-cleanup-v3-6-60bfa6785968@linaro.org
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: ste_dma40: Pass dev to OF function (Linus Walleij, 2023-05-16, 1 file, -7/+6)

  The OF platform data population function only wants to use struct device
  *dev, so pass that instead. This change makes the compiler realize that
  the local platform data variable is unused, so drop that too.

  Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
  Link: https://lore.kernel.org/r/20230417-ux500-dma40-cleanup-v3-5-60bfa6785968@linaro.org
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: ste_dma40: Remove platform data (Linus Walleij, 2023-05-16, 3 files, -19/+150)

  The Ux500 is device tree-only since ages. Delete the platform data header
  and push it into or next to the driver instead. Drop the non-DT probe
  path since this will not happen.

  Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
  Link: https://lore.kernel.org/r/20230417-ux500-dma40-cleanup-v3-4-60bfa6785968@linaro.org
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: ste_dma40: Add dev helper variable (Linus Walleij, 2023-05-16, 1 file, -24/+26)

  The &pdev->dev device pointer is used so many times in the probe() and
  d40_hw_detect_init() functions that a local *dev variable makes the code
  way easier to read.

  Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
  Link: https://lore.kernel.org/r/20230417-ux500-dma40-cleanup-v3-3-60bfa6785968@linaro.org
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: ste_dma40: Get LCPA SRAM from SRAM node (Linus Walleij, 2023-05-16, 2 files, -23/+25)

  Instead of passing the reserved SRAM as a "reg" field, look for a phandle
  to the LCPA SRAM memory so we can use the proper SRAM device tree
  bindings for the SRAM.

  Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
  Link: https://lore.kernel.org/r/20230417-ux500-dma40-cleanup-v3-2-60bfa6785968@linaro.org
  Signed-off-by: Vinod Koul <vkoul@kernel.org>
