path: root/drivers/dma/dw-edma/dw-edma-core.c
* dmaengine: dw-edma: Add support for native HDMA  (Cai Huoqing, 2023-05-24; 1 file changed, -1/+5)

  Add support for HDMA NATIVE, as long as the IP design has set the compatible
  register map parameter HDMA_NATIVE, which allows compatibility for the native
  HDMA register configuration.

  The HDMA (Hyper-DMA) IP is an enhancement of the eDMA (embedded-DMA) IP, and
  the native HDMA registers are different from eDMA, so this patch adds support
  for HDMA NATIVE mode.

  HDMA write and read channels operate independently to maximize the performance
  of HDMA read and write data transfers over the link. When the HDMA is
  configured with multiple read channels, it uses a round-robin (RR) arbitration
  scheme to select the next read channel to be serviced. The same applies when
  there are multiple write channels.

  The native HDMA driver also supports a maximum of 16 independent channels
  (8 write + 8 read), which can run simultaneously. Both SAR (Source Address
  Register) and DAR (Destination Address Register) are byte-aligned.

  Signed-off-by: Cai Huoqing <cai.huoqing@linux.dev>
  Reviewed-by: Serge Semin <fancer.lancer@gmail.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Tested-by: Serge Semin <fancer.lancer@gmail.com>
  Link: https://lore.kernel.org/r/20230520050854.73160-4-cai.huoqing@linux.dev
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Create a new dw_edma_core_ops structure to abstract controller operation  (Cai Huoqing, 2023-05-24; 1 file changed, -57/+25)

  The structure dw_edma_core_ops holds a set of pointers abstracting out the
  DW eDMA vX and DW HDMA Native controllers. Use dw_edma_v0_core_register()
  to set up the operations.

  Signed-off-by: Cai Huoqing <cai.huoqing@linux.dev>
  Reviewed-by: Serge Semin <fancer.lancer@gmail.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Tested-by: Serge Semin <fancer.lancer@gmail.com>
  Link: https://lore.kernel.org/r/20230520050854.73160-3-cai.huoqing@linux.dev
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

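  The ops-table approach above is a classic kernel pattern: the core calls the
  controller only through a set of function pointers, so it no longer needs to
  know whether the back end is eDMA v0 or native HDMA. The sketch below is
  purely illustrative; the demo_* names and the member list are assumptions and
  do not reflect the real dw_edma_core_ops layout.

      #include <linux/types.h>

      struct demo_edma_chip;

      /* Hypothetical controller-specific callbacks the core dispatches through. */
      struct demo_edma_core_ops {
          void (*core_off)(struct demo_edma_chip *chip);
          u16  (*ch_count)(struct demo_edma_chip *chip, int dir);
          void (*start)(struct demo_edma_chip *chip, int ch, int dir);
          u32  (*status)(struct demo_edma_chip *chip, int ch, int dir);
      };

      struct demo_edma_chip {
          const struct demo_edma_core_ops *ops;   /* set once at registration */
      };

      /* Each back end (eDMA v0, native HDMA) provides its own table ... */
      static const struct demo_edma_core_ops demo_v0_ops = {
          /* .core_off = demo_v0_core_off, and so on for the v0 callbacks */
      };

      /* ... and a registration helper analogous to dw_edma_v0_core_register(). */
      static void demo_v0_core_register(struct demo_edma_chip *chip)
      {
          chip->ops = &demo_v0_ops;
      }
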
* dmaengine: dw-edma: Fix to enable to issue dma request on DMA processing  (Shunsuke Mie, 2023-04-12; 1 file changed, -2/+5)

  The issue_pending request is ignored while the driver is processing a DMA
  request. Fix this by issuing the pending requests regardless of the DMA
  channel status.

  Fixes: e63d79d1ffcd ("dmaengine: Add Synopsys eDMA IP core driver")
  Signed-off-by: Shunsuke Mie <mie@igel.co.jp>
  Link: https://lore.kernel.org/r/20230411101758.438472-2-mie@igel.co.jp
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Fix to change for continuous transfer  (Shunsuke Mie, 2023-04-12; 1 file changed, -9/+11)

  The dw-edma driver stops after processing a DMA request even if a request
  remains in the issued queue, which is not the expected behavior. The DMA
  engine API requires continuous processing. Add a trigger to start the next
  transfer after one finishes if requests remain.

  Fixes: e63d79d1ffcd ("dmaengine: Add Synopsys eDMA IP core driver")
  Signed-off-by: Shunsuke Mie <mie@igel.co.jp>
  Link: https://lore.kernel.org/r/20230411101758.438472-1-mie@igel.co.jp
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

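  Both of these fixes concern how the driver drains its issued queue. From a
  client's point of view the contract is the standard dmaengine flow sketched
  below: descriptors are submitted, then kicked with dma_async_issue_pending(),
  and the provider is expected to keep processing queued descriptors even while
  one is already in flight. This is a hedged, generic client sketch (the demo_*
  function and the MEM_TO_DEV direction are arbitrary), not dw-edma internal
  code.

      #include <linux/dmaengine.h>

      static int demo_submit_one(struct dma_chan *chan, dma_addr_t buf, size_t len)
      {
          struct dma_async_tx_descriptor *desc;
          dma_cookie_t cookie;

          desc = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
                                             DMA_PREP_INTERRUPT);
          if (!desc)
              return -ENOMEM;

          cookie = dmaengine_submit(desc);    /* lands on the issued queue */
          if (dma_submit_error(cookie))
              return -EIO;

          /* Must take effect even if a previous transfer is still running. */
          dma_async_issue_pending(chan);
          return 0;
      }
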
* dmaengine: dw-edma: Skip cleanup procedure if no private data found  (Serge Semin, 2023-02-10; 1 file changed, -0/+4)

  DW eDMA driver private data is preserved in the passed DW eDMA chip info
  structure. If the probe fails or for some reason the passed info object
  doesn't have the private data pointer initialized, halt the DMA device
  cleanup procedure to prevent system crashes.

  Link: https://lore.kernel.org/r/20230113171409.30470-23-Sergey.Semin@baikalelectronics.ru
  Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org>
  Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Acked-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Replace chip ID number with device name  (Serge Semin, 2023-02-10; 1 file changed, -1/+2)

  Using an abstract number as the DW eDMA chip identifier isn't practical
  because there can be more than one DW eDMA controller on the platform. Some
  may be detected as the PCIe Endpoints, and others may be embedded in DW PCIe
  Root Port/Endpoint controllers. An abstract number in, for instance, the IRQ
  handlers list, doesn't give a notion regarding their reference to the
  particular DMA controller. To preserve the code simplicity and support
  multi-eDMA platforms, use the parental device name to create the DW eDMA
  controller name.

  Link: https://lore.kernel.org/r/20230113171409.30470-22-Sergey.Semin@baikalelectronics.ru
  Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org>
  Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Acked-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Drop DT-region allocation  (Serge Semin, 2023-02-10; 1 file changed, -17/+4)

  There is no point in allocating additional memory for the data target regions
  passed to the client drivers. Use the already available structures defined in
  the dw_edma_chip instance.

  Note: these regions are unused in normal circumstances since they are specific
  to the case of eDMA being embedded into the DW PCIe Endpoint and having its
  CSRs accessible via an Endpoint BAR. This case is only known to be implemented
  as a part of the Synopsys PCIe Endpoint IP prototype kit.

  Link: https://lore.kernel.org/r/20230113171409.30470-21-Sergey.Semin@baikalelectronics.ru
  Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org>
  Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Acked-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Use DMA engine device debugfs subdirectory  (Serge Semin, 2023-02-10; 1 file changed, -3/+0)

  Since all DW eDMA read and write channels are now installed in a framework of
  a single DMA engine device, move all the DW eDMA-specific debugfs nodes into a
  ready-to-use DMA-engine debugfs subdirectory. It's created during the
  DMA-device registration and can be found in the dma_device.dbg_dev_root field.

  Link: https://lore.kernel.org/r/20230113171409.30470-19-Sergey.Semin@baikalelectronics.ru
  Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org>
  Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Acked-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Join read/write channels into a single device  (Serge Semin, 2023-02-10; 1 file changed, -57/+59)

  There is no point in splitting read/write channels. First of all, eDMA read
  and write channels belong to one physical controller. Secondly, channel
  differentiation can be done by filtering and dma_get_slave_caps(). Finally,
  having these channels handled separately needlessly complicates the code and
  causes this debugfs warning:

      debugfs: Directory '1f052000.pcie' with parent 'dmaengine' already present!

  Join the read/write channels into a single DMA device. Client drivers can
  choose the correct channel via the DMA slave direction setting. The default
  value is overridden by the dw_edma_device_caps() callback in accordance with
  the channel type.

  Link: https://lore.kernel.org/r/20230113171409.30470-18-Sergey.Semin@baikalelectronics.ru
  Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org>
  Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Acked-by: Vinod Koul <vkoul@kernel.org>

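  With read and write channels living under one DMA device, a client can no
  longer pick a device by type; it distinguishes channels by their supported
  transfer directions instead. A hedged sketch of that check using standard
  dmaengine calls (demo_* is an illustrative helper name):

      #include <linux/bits.h>
      #include <linux/dmaengine.h>

      static bool demo_chan_supports_dir(struct dma_chan *chan,
                                         enum dma_transfer_direction dir)
      {
          struct dma_slave_caps caps;

          if (dma_get_slave_caps(chan, &caps))
              return false;

          /* dw_edma_device_caps() reports DEV_TO_MEM or MEM_TO_DEV per channel type. */
          return caps.directions & BIT(dir);
      }
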
* dmaengine: dw-edma: Drop chancnt initialization  (Serge Semin, 2023-01-27; 1 file changed, -1/+0)

  The DMA engine core manages dma_device.chancnt itself, e.g., in
  dma_async_device_register(). DMA device drivers should not initialize chancnt
  because it causes the wrong number of channels printed in the device summary.
  Drop the dw-edma chancnt initialization.

  Link: https://lore.kernel.org/r/20230113171409.30470-10-Sergey.Semin@baikalelectronics.ru
  Fixes: e63d79d1ffcd ("dmaengine: Add Synopsys eDMA IP core driver")
  Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org>
  Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Acked-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Add CPU to PCI bus address translation  (Serge Semin, 2023-01-27; 1 file changed, -1/+17)

  Since 9575632052ba ("dmaengine: make slave address physical"), the source and
  destination addresses of the DMA slave device have been converted to physical
  addresses in the CPU address space. It's the DMA device driver's
  responsibility to convert them to the DMA bus address space. In case of the
  DW eDMA device, the source or destination peripheral (slave) devices reside
  in PCI bus space. Thus we need to perform the PCI Host/Endpoint windows-based
  (i.e. DT "ranges" property) address translation; otherwise the eDMA
  transactions won't work as expected (or can be even harmful) if the CPU and
  PCI address spaces don't match.

  Note 1: Even though the DMA interleaved template has both source and
  destination addresses declared as dma_addr_t, only the CPU memory range
  should be mapped to be seen by the DMA device since it's a subject of the DMA
  getting towards the system side. The device part must not be mapped since the
  slave device resides in the PCI bus space, which isn't affected by IOMMUs or
  iATU translations. DW PCIe eDMA generates corresponding MWr/MRd TLPs on its
  own.

  Note 2: This functionality is mainly required for the remote eDMA setup since
  the CPU address must be manually translated into the PCI bus space before
  being written to LLI.{SAR,DAR}. If eDMA is embedded in the locally accessible
  DW PCIe Root Port/Endpoint, software-based translation isn't required since
  hardware will translate it via the Outbound iATU as long as the DMA_BYPASS
  flag is cleared. If DMA_BYPASS is set or there is no Outbound iATU entry that
  contains the SAR or DAR (for Read and Write channel respectively), there
  won't be any translation performed but DMA will proceed with the
  corresponding source/destination address as-is.

  Link: https://lore.kernel.org/r/20230113171409.30470-8-Sergey.Semin@baikalelectronics.ru
  Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org>
  Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Acked-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Fix invalid interleaved xfers semantics  (Serge Semin, 2023-01-27; 1 file changed, -11/+7)

  The interleaved DMA transfer support added by 85e7518f42c8 ("dmaengine:
  dw-edma: Add device_prep_interleave_dma() support") seems contradictory to
  what the DMA engine defines. The next conditional statements:

      if (!xfer->xfer.il->numf)
              return NULL;
      if (xfer->xfer.il->numf > 0 && xfer->xfer.il->frame_size > 0)
              return NULL;

  mean that numf can't be zero and frame_size must always be zero, otherwise
  the transfer won't be executed. Furthermore, the transfer execution method
  takes the frame size from the dma_interleaved_template.sgl[] array for each
  frame. That array, in accordance with [1], is supposed to be of
  dma_interleaved_template.frame_size size, which, as we discovered above, the
  code expects to be zero. So judging by the dw_edma_device_transfer()
  implementation, the method implies the dma_interleaved_template.sgl[] array
  being of dma_interleaved_template.numf size, which is wrong. Since the
  dw_edma_device_transfer() method doesn't permit
  dma_interleaved_template.frame_size being non-zero, the multi-chunk
  interleaved transfer turns out to be unsupported even though the code implies
  having it supported.

  Add fully functioning support of interleaved DMA transfers. First of all,
  dma_interleaved_template.frame_size is supposed to be greater than or equal
  to one, thus having at least simple linear chunked frames. Secondly, we can
  create a walk-through over all the chunks and frames by initializing the
  number of the eDMA burst transactions as a multiple of
  dma_interleaved_template.numf and dma_interleaved_template.frame_size and
  getting the frame_size-modulo of the iteration step as an index of the
  dma_interleaved_template.sgl[] array.

  [1] include/linux/dmaengine.h: doc struct dma_interleaved_template

  Link: https://lore.kernel.org/r/20230113171409.30470-7-Sergey.Semin@baikalelectronics.ru
  Fixes: 85e7518f42c8 ("dmaengine: dw-edma: Add device_prep_interleave_dma() support")
  Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org>
  Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Acked-by: Vinod Koul <vkoul@kernel.org>

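  After this rework, a client describes a multi-chunk interleaved transfer with
  numf frames of frame_size chunks each. The sketch below shows one plausible
  way to build such a template with standard dmaengine calls; the sizes, the
  direction, and the assumption that the template can be freed right after the
  prep call are illustrative, not taken from the commit.

      #include <linux/dmaengine.h>
      #include <linux/overflow.h>
      #include <linux/sizes.h>
      #include <linux/slab.h>

      static struct dma_async_tx_descriptor *
      demo_prep_interleaved(struct dma_chan *chan, dma_addr_t src, dma_addr_t dst)
      {
          const size_t numf = 4, frame_size = 2;   /* 4 frames x 2 chunks */
          struct dma_async_tx_descriptor *desc;
          struct dma_interleaved_template *xt;
          size_t i;

          xt = kzalloc(struct_size(xt, sgl, frame_size), GFP_KERNEL);
          if (!xt)
              return NULL;

          xt->src_start = src;
          xt->dst_start = dst;
          xt->dir = DMA_MEM_TO_DEV;
          xt->src_inc = true;          /* dw-edma only increments addresses */
          xt->dst_inc = true;
          xt->src_sgl = true;
          xt->dst_sgl = true;
          xt->numf = numf;             /* number of frames */
          xt->frame_size = frame_size; /* chunks per frame, now >= 1 */

          for (i = 0; i < frame_size; i++) {
              xt->sgl[i].size = SZ_4K; /* chunk payload */
              xt->sgl[i].icg = 0;      /* inter-chunk gap */
          }

          desc = dmaengine_prep_interleaved_dma(chan, xt, DMA_PREP_INTERRUPT);
          kfree(xt);                   /* assuming the provider consumed it */
          return desc;
      }
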
* dmaengine: dw-edma: Don't permit non-inc interleaved xfers  (Serge Semin, 2023-01-27; 1 file changed, -6/+6)

  The DW eDMA controller always increments both source and destination
  addresses. Permitting DMA interleaved transfers with no src_inc/dst_inc flags
  set may lead to unexpected behaviour for the device users. Terminate
  interleaved transfers if at least one of the
  dma_interleaved_template.{src_inc,dst_inc} flags is initialized to "false".
  Note that in addition, we need to increase the source and destination
  addresses after each iteration.

  Link: https://lore.kernel.org/r/20230113171409.30470-6-Sergey.Semin@baikalelectronics.ru
  Fixes: 85e7518f42c8 ("dmaengine: dw-edma: Add device_prep_interleave_dma() support")
  Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org>
  Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Acked-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Fix missing src/dst address of interleaved xfers  (Serge Semin, 2023-01-27; 1 file changed, -0/+4)

  Interleaved DMA transfer support was added by 85e7518f42c8 ("dmaengine:
  dw-edma: Add device_prep_interleave_dma() support"), but depending on the
  selected channel, either the source or the destination address is left
  uninitialized, which is obviously wrong. Initialize the destination address
  of the eDMA burst descriptors for DEV_TO_MEM interleaved operations and the
  source address for MEM_TO_DEV operations.

  Link: https://lore.kernel.org/r/20230113171409.30470-5-Sergey.Semin@baikalelectronics.ru
  Fixes: 85e7518f42c8 ("dmaengine: dw-edma: Add device_prep_interleave_dma() support")
  Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org>
  Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Acked-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Release requested IRQs on failure  (Serge Semin, 2023-01-27; 1 file changed, -4/+10)

  If dw_edma_irq_request() fails to initialize an IRQ handler, any previously
  requested IRQs are left unreleased. Release the previously requested IRQs in
  the cleanup-on-error path of dw_edma_irq_request().

  Link: https://lore.kernel.org/r/20230113171409.30470-3-Sergey.Semin@baikalelectronics.ru
  Fixes: e63d79d1ffcd ("dmaengine: Add Synopsys eDMA IP core driver")
  Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Signed-off-by: Lorenzo Pieralisi <lpieralisi@kernel.org>
  Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Acked-by: Vinod Koul <vkoul@kernel.org>

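  The fix follows the usual kernel cleanup-on-error idiom: on the first failing
  request, walk backwards and release everything obtained so far. A generic,
  hedged sketch of that pattern (not the dw-edma code itself):

      #include <linux/interrupt.h>

      static int demo_request_irqs(int *irqs, int nr, irq_handler_t handler,
                                   void *data)
      {
          int i, ret;

          for (i = 0; i < nr; i++) {
              ret = request_irq(irqs[i], handler, IRQF_SHARED, "demo-edma", data);
              if (ret)
                  goto err_free_irqs;
          }

          return 0;

      err_free_irqs:
          while (--i >= 0)
              free_irq(irqs[i], data);  /* release what was already requested */

          return ret;
      }
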
* dmaengine: dw-edma: Remove runtime PM support  (Manivannan Sadhasivam, 2022-09-29; 1 file changed, -12/+0)

  Currently, the dw-edma driver enables the runtime_pm for the parent device
  (chip->dev) and increments/decrements the refcount during the alloc/free chan
  resources callbacks. This leads to a problem when the eDMA driver has been
  probed, but the channels were not used. This scenario can happen when the DW
  PCIe driver probes the eDMA driver successfully, but the PCI EPF driver
  decides not to use eDMA channels and uses iATU instead for PCI transfers.

  In this case, the underlying device would be runtime suspended due to
  pm_runtime_enable() in dw_edma_probe() and the PCI EPF driver would have no
  knowledge of it. Ideally, the eDMA driver should not be the one doing the
  runtime PM of the parent device. The responsibility should instead belong to
  the client drivers like PCI EPF. So let's remove the runtime PM support from
  the eDMA driver.

  Cc: Serge Semin <fancer.lancer@gmail.com>
  Cc: Frank Li <Frank.Li@nxp.com>
  Reviewed-by: Serge Semin <fancer.lancer@gmail.com>
  Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Link: https://lore.kernel.org/r/20220910054700.12205-1-manivannan.sadhasivam@linaro.org
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Fix eDMA Rd/Wr-channels and DMA-direction semantics  (Serge Semin, 2022-06-23; 1 file changed, -1/+1)

  In accordance with [1, 2] the DW eDMA controller has been created to be part
  of the DW PCIe Root Port and DW PCIe End-point controllers and to offload the
  transferring of large blocks of data between application and remote PCIe
  domains, leaving the system CPU free for other tasks.

  In the first case (eDMA being part of the DW PCIe Root Port) the eDMA
  controller is always accessible via the CPU DBI interface and never over the
  PCIe wire. The latter case is more complex. Depending on the DW PCIe
  End-Point IP-core synthesis parameters, it's possible to have the eDMA
  registers accessible not only from the application CPU side, but also via
  mapping the eDMA CSRs over a dedicated endpoint BAR. So based on the
  specifics denoted above, the eDMA driver is supposed to support two types of
  DMA controller setups:

    1) eDMA embedded into the DW PCIe Root Port/End-point and accessible over
       the local CPU from the application side.
    2) eDMA embedded into the DW PCIe End-point and accessible via the PCIe
       wire with MWr/MRd TLPs generated by the CPU PCIe host controller.

  Since the CPU memory resides on different sides in these cases, the semantics
  of the MEM_TO_DEV and DEV_TO_MEM operations are flipped with respect to the
  Tx and Rx DMA channels. So MEM_TO_DEV/DEV_TO_MEM corresponds to the Tx/Rx
  channels in setup 1) and to the Rx/Tx channels in case of setup 2).

  The DW eDMA driver has supported case 2) since e63d79d1ffcd ("dmaengine: Add
  Synopsys eDMA IP core driver") in the framework of the
  drivers/dma/dw-edma/dw-edma-pcie.c driver. The case 1) support was added
  later by bd96f1b2f43a ("dmaengine: dw-edma: support local dma device transfer
  semantics"). Afterwards the driver was supposed to cover both possible eDMA
  setups, but the latter commit turned out to be not fully correct.

  The problem was that the commit, together with the new functionality support,
  also changed the channel direction semantics, so the eDMA Read-channel
  (corresponding to the DMA_DEV_TO_MEM direction for case 1) now uses the
  sgl/cyclic base addresses as the source addresses of the DMA transfers and
  dma_slave_config.dst_addr as the destination address of the DMA transfers.
  Similarly, the eDMA Write-channel (corresponding to the DMA_MEM_TO_DEV
  direction for case 1) now uses dma_slave_config.src_addr as the source
  address of the DMA transfers and the sgl/cyclic base address as the
  destination address of the DMA transfers.

  This contradicts the logic of the DMA interface, which implies that the DEV
  side is supposed to belong to the PCIe device memory and MEM to the
  CPU/Application memory. Indeed, it seems irrational to have the SG-list
  defined in the PCIe bus space while expecting a contiguous buffer allocated
  in the CPU memory. Moreover, the passed SG-list and cyclic DMA buffers are
  supposed to be mapped in a way so as to be seen by the DW eDMA Application
  (CPU) interface.

  So in order to have the correct DW eDMA interface, we need to invert the eDMA
  Rd/Wr-channels and DMA-slave direction semantics by selecting the src/dst
  addresses based on the DMA transfer direction instead of using the channel
  direction capability.

  [1] DesignWare Cores PCI Express Controller Databook - DWC PCIe Root Port, v.5.40a, March 2019, p.1092
  [2] DesignWare Cores PCI Express Controller Databook - DWC PCIe Endpoint, v.5.40a, March 2019, p.1189

  Co-developed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Fixes: bd96f1b2f43a ("dmaengine: dw-edma: support local dma device transfer semantics")
  Link: https://lore.kernel.org/r/20220524152159.2370739-7-Frank.Li@nxp.com
  Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Signed-off-by: Frank Li <Frank.Li@nxp.com>
  Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  Acked-By: Vinod Koul <vkoul@kernel.org>

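  In short, the source/destination selection must follow the transfer
  direction, not the channel's direction capability. A hedged sketch of that
  selection for a locally accessed eDMA; the demo_* types and fields are
  placeholders, not the driver's structures:

      #include <linux/dmaengine.h>
      #include <linux/types.h>

      struct demo_burst { u64 sar, dar; };

      static void demo_fill_burst(struct demo_burst *b,
                                  enum dma_transfer_direction dir,
                                  dma_addr_t mem_addr, /* CPU/application memory */
                                  const struct dma_slave_config *cfg)
      {
          if (dir == DMA_MEM_TO_DEV) {
              b->sar = mem_addr;       /* source: local (SG/cyclic) memory */
              b->dar = cfg->dst_addr;  /* destination: PCIe bus address */
          } else {                     /* DMA_DEV_TO_MEM */
              b->sar = cfg->src_addr;  /* source: PCIe bus address */
              b->dar = mem_addr;       /* destination: local memory */
          }
      }
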
* dmaengine: dw-edma: Drop dma_slave_config.direction field usage  (Serge Semin, 2022-06-23; 1 file changed, -15/+34)

  The dma_slave_config.direction field usage in the DW eDMA driver was
  introduced by bd96f1b2f43a ("dmaengine: dw-edma: support local dma device
  transfer semantics"). Mainly the change introduced there was correct (indeed
  DEV_TO_MEM means using the RD-channel and MEM_TO_DEV the WR-channel for the
  case of having eDMA accessed locally from the CPU/Application side), but
  providing an additional MEM_TO_MEM/DEV_TO_DEV-based semantics was quite
  redundant, if not to say potentially harmful (when it comes to removing the
  denoted field).

  First of all, since the dma_slave_config.direction field has been marked as
  obsolete (see [1] and the struct dma_slave_config [2]) and will be discarded
  in the future, using it, especially in a non-standard way, is discouraged.
  Secondly, in accordance with the commit denoted above, the default
  dw_edma_device_transfer() semantics has been changed despite what its message
  said. So claiming that the method was left backward compatible was wrong.

  Fix the problems denoted above and simplify the dw_edma_device_transfer()
  method by dropping the parsing of the DMA-channel direction field. Instead of
  having that implicit dma_slave_config.direction field semantic, use the
  recently added DW_EDMA_CHIP_LOCAL flag to distinguish between the local and
  remote DW eDMA setups, thus preserving support for both cases. Add an ASCII
  figure to clarify the situation.

  [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/driver-api/dmaengine/provider.rst?id=v5.18#n478
  [2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/include/linux/dmaengine.h?id=v5.18#n389

  [bhelgaas: convert references to specific URLs]
  Co-developed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Link: https://lore.kernel.org/r/20220524152159.2370739-6-Frank.Li@nxp.com
  Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
  Signed-off-by: Frank Li <Frank.Li@nxp.com>
  Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  Acked-By: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Rename wr(rd)_ch_cnt to ll_wr(rd)_cnt in struct dw_edma_chip  (Frank Li, 2022-06-23; 1 file changed, -2/+2)

  The struct dw_edma contains wr(rd)_ch_cnt fields. The eDMA driver gets the
  write (read) channel number from a register, then saves it into dw_edma. The
  wr(rd)_ch_cnt fields in dw_edma_chip actually mean how many linked-list
  memory regions are available in ll_region_wr(rd)[EDMA_MAX_WR_CH]. Rename them
  to ll_wr(rd)_cnt to indicate the actual usage.

  Link: https://lore.kernel.org/r/20220524152159.2370739-5-Frank.Li@nxp.com
  Tested-by: Serge Semin <fancer.lancer@gmail.com>
  Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Signed-off-by: Frank Li <Frank.Li@nxp.com>
  Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  Reviewed-by: Serge Semin <fancer.lancer@gmail.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Acked-By: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Detach the private data and chip info structures  (Frank Li, 2022-06-23; 1 file changed, -41/+49)

  "struct dw_edma_chip" contains an internal structure "struct dw_edma" that is
  used by the eDMA core internally and should not be touched by the eDMA
  controller drivers themselves. But currently, the eDMA controller drivers
  like "dw-edma-pci" allocate and populate this internal structure before
  passing it on to the eDMA core. The eDMA core further populates the structure
  and uses it. This is wrong!

  Hence, move all the "struct dw_edma" specifics from controller drivers to the
  eDMA core.

  Link: https://lore.kernel.org/r/20220524152159.2370739-3-Frank.Li@nxp.com
  Tested-by: Serge Semin <fancer.lancer@gmail.com>
  Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Signed-off-by: Frank Li <Frank.Li@nxp.com>
  Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
  Reviewed-by: Serge Semin <fancer.lancer@gmail.com>
  Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Acked-By: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Remove an unused variable  (Christophe JAILLET, 2021-10-18; 1 file changed, -1/+0)

  'head' is unused, remove it.

  Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Link: https://lore.kernel.org/r/46e071be21fbc5ac5c35d4796a7e4249e94c3a77.1633847306.git.christophe.jaillet@wanadoo.fr
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Revert fix scatter-gather address calculation  (Gustavo Pimentel, 2021-03-16; 1 file changed, -4/+4)

  Revert the applied patch because it caused a regression on the ARC700
  platform (32-bit).

  Fixes: 05655541c950 ("dmaengine: dw-edma: Fix scatter-gather address calculation")
  Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Link: https://lore.kernel.org/r/1778422e389fe40032e216b59b1b992c61ec9887.1613674948.git.gustavo.pimentel@synopsys.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Change DMA abbreviation from lower into upper case  (Gustavo Pimentel, 2021-03-16; 1 file changed, -3/+3)

  To keep the code consistent, some comments with the DMA keyword written in
  lower case are now in upper case.

  Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Link: https://lore.kernel.org/r/8c4b3db90767972a2b4cbb6fa818cf0e9c3d6fe3.1613674948.git.gustavo.pimentel@synopsys.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Fix crash on loading/unloading driver  (Gustavo Pimentel, 2021-03-16; 1 file changed, -6/+5)

  When the driver is compiled as a module, loaded, and then unloaded, the
  kernel shows a crash log. The crash is caused by dma_async_device_unregister()
  being called after the channels have been deleted; this patch fixes the
  issue.

  Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Link: https://lore.kernel.org/r/4aa850c035cf7ee488f1d3fb6dee0e37be0dce0a.1613674948.git.gustavo.pimentel@synopsys.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Move struct dentry variable from static definition into dw_edma struct  (Gustavo Pimentel, 2021-03-16; 1 file changed, -1/+1)

  Move struct dentry variable from static definition (dw-edma-v0-debugfs.c)
  into the dw_edma struct (dw-edma-core.h). Also the variable was renamed from
  base_dir to debugfs.

  Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Link: https://lore.kernel.org/r/07c1167b671e7b175700e2e7061cf0b3dd8c6adb.1613674948.git.gustavo.pimentel@synopsys.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Improve the linked list and data blocks definition  (Gustavo Pimentel, 2021-03-16; 1 file changed, -27/+24)

  In the previous implementation, the driver assumed that there existed only
  two memory spaces, equally distributed among the read/write channels. This
  might not be the case on some other implementations, therefore this patch
  changes this requirement so that each write/read channel has its own
  well-defined linked list and data space, which allows different sizes and
  locations.

  Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Link: https://lore.kernel.org/r/2e316cb983f8a1e09ce929029f87619dc92a52de.1613674948.git.gustavo.pimentel@synopsys.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Improve number of channels check  (Gustavo Pimentel, 2021-03-16; 1 file changed, -12/+9)

  Add some extra checks to ensure that the driver doesn't try to use more DMA
  channels than are actually available in the hardware.

  Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Link: https://lore.kernel.org/r/cfb2b0a4f97ae9dc83ebe5ea59d6a51d69ea3654.1613674948.git.gustavo.pimentel@synopsys.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Add device_prep_interleave_dma() support  (Gustavo Pimentel, 2021-03-16; 1 file changed, -17/+68)

  Add device_prep_interleave_dma() support to the Synopsys DMA driver. This
  feature implements a data transfer mechanism similar to the scatter-gather
  implementation.

  Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Link: https://lore.kernel.org/r/73dc36264910654e266ae25814d892a0476e4427.1613674948.git.gustavo.pimentel@synopsys.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Add PCIe VSEC data retrieval support  (Gustavo Pimentel, 2021-03-16; 1 file changed, -8/+12)

  The latest eDMA IP development implements a Vendor-Specific Extended
  Capability that contains the eDMA BAR, offset, map format, and the number of
  read/write channels available.

  Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Link: https://lore.kernel.org/r/0b880b8893ff457ffc1b5071a1c7f47e61ceea1c.1613674948.git.gustavo.pimentel@synopsys.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

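  Locating such a Vendor-Specific Extended Capability uses the ordinary PCI
  core helpers. Below is a hedged sketch of the capability walk; the wanted_id
  value and whatever follows the VSEC header (BAR, offset, map format, channel
  counts) are left out because their layout is specific to the eDMA VSEC and
  not described here.

      #include <linux/pci.h>

      static u16 demo_find_vsec(struct pci_dev *pdev, u16 wanted_id)
      {
          u16 pos = 0;
          u32 hdr;

          while ((pos = pci_find_next_ext_capability(pdev, pos,
                                                     PCI_EXT_CAP_ID_VNDR))) {
              pci_read_config_dword(pdev, pos + PCI_VNDR_HEADER, &hdr);
              if (PCI_VNDR_HEADER_ID(hdr) == wanted_id)
                  return pos;          /* VSEC-specific registers follow */
          }

          return 0;
      }
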
* dmaengine: dw-edma: Fix use after free in dw_edma_alloc_chunk()  (Dan Carpenter, 2020-12-29; 1 file changed, -2/+2)

  If the dw_edma_alloc_burst() function fails then we free "chunk" but it's
  still on the "desc->chunk->list" list so it will lead to a use after free.
  Also the "->chunks_alloc" count is incremented when it shouldn't be.

  In current kernels small allocations are guaranteed to succeed and
  dw_edma_alloc_burst() can't fail, so this will not actually affect runtime.

  Fixes: e63d79d1ffcd ("dmaengine: Add Synopsys eDMA IP core driver")
  Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
  Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Link: https://lore.kernel.org/r/X9dTBFrUPEvvW7qc@mwanda
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: Fix scatter-gather address calculation  (Gustavo Pimentel, 2020-08-25; 1 file changed, -5/+6)

  Fix the source and destination physical address calculation of a peripheral
  device in the scatter-gather implementation. This issue manifested during
  tests using a 64-bit architecture system. The abnormal behavior wasn't
  visible before because all previous tests were done using 32-bit architecture
  systems, which masked the effect.

  Fixes: e63d79d1ffcd ("dmaengine: Add Synopsys eDMA IP core driver")
  Cc: stable@vger.kernel.org
  Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Link: https://lore.kernel.org/r/8d3ab7e2ba96563fe3495b32f60077fffb85307d.1597327623.git.gustavo.pimentel@synopsys.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: support local dma device transfer semantics  (Alan Mikhak, 2020-05-04; 1 file changed, -7/+20)

  Modify dw_edma_device_transfer() to also support the semantics of dma device
  transfer for additional use cases involving the pcitest utility as a local
  initiator.

  For its original use case, dw-edma supported the semantics of dma device
  transfer from the perspective of a remote initiator who is located across the
  PCIe bus from the dma channel hardware. To a remote initiator, DMA_DEV_TO_MEM
  means using a remote dma WRITE channel to transfer from remote memory to
  local memory. A WRITE channel would be employed on the remote device in order
  to move the contents of remote memory to the bus, destined for local memory.
  To a remote initiator, DMA_MEM_TO_DEV means using a remote dma READ channel
  to transfer from local memory to remote memory. A READ channel would be
  employed on the remote device in order to move the contents of local memory
  to the bus, destined for remote memory.

  From the perspective of a local dma initiator who is co-located on the same
  side of the PCIe bus as the dma channel hardware, the semantics of dma device
  transfer are flipped. To a local initiator, DMA_DEV_TO_MEM means using a
  local dma READ channel to transfer from remote memory to local memory. A READ
  channel would be employed on the local device in order to move the contents
  of remote memory to the bus, destined for local memory. To a local initiator,
  DMA_MEM_TO_DEV means using a local dma WRITE channel to transfer from local
  memory to remote memory. A WRITE channel would be employed on the local
  device in order to move the contents of local memory to the bus, destined for
  remote memory.

  To support local dma initiators, dw_edma_device_transfer() is modified to now
  examine the direction field of struct dma_slave_config for the channel, which
  initiators can configure by calling dmaengine_slave_config(). If direction is
  configured as either DMA_DEV_TO_MEM or DMA_MEM_TO_DEV, local initiator
  semantics are used. If direction is a value other than DMA_DEV_TO_MEM or
  DMA_MEM_TO_DEV, then remote initiator semantics are used. This should
  maintain backward compatibility with the original use case of dw-edma.

  The dw-edma-test utility is an example of a remote initiator. From reading
  its patch, dw-edma-test does not specifically set the direction field of
  struct dma_slave_config. Since dw_edma_device_transfer() also did not check
  the direction field of struct dma_slave_config, it seems safe to use this
  convention in dw-edma to support both local and remote initiator semantics.

  Signed-off-by: Alan Mikhak <alan.mikhak@sifive.com>
  Link: https://lore.kernel.org/r/1588122633-1552-1-git-send-email-alan.mikhak@sifive.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

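  For a local initiator the direction is conveyed through dma_slave_config. A
  hedged sketch of an endpoint-side read from remote memory using standard
  dmaengine client calls (the addresses and the demo_* helper name are
  placeholders):

      #include <linux/dmaengine.h>

      static int demo_local_read(struct dma_chan *chan, dma_addr_t pci_bus_addr,
                                 dma_addr_t local_buf, size_t len)
      {
          struct dma_slave_config cfg = {
              .direction = DMA_DEV_TO_MEM,  /* local READ channel semantics */
              .src_addr = pci_bus_addr,     /* remote memory across PCIe */
          };
          struct dma_async_tx_descriptor *desc;
          int ret;

          ret = dmaengine_slave_config(chan, &cfg);
          if (ret)
              return ret;

          desc = dmaengine_prep_slave_single(chan, local_buf, len,
                                             DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
          if (!desc)
              return -ENOMEM;

          dmaengine_submit(desc);
          dma_async_issue_pending(chan);
          return 0;
      }
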
* dmaengine: dw-edma: Check MSI descriptor before copying  (Alan Mikhak, 2020-04-27; 1 file changed, -7/+10)

  Modify dw_edma_irq_request() to check whether a struct msi_desc entry exists
  before copying the contents of its struct msi_msg pointer. Without this
  sanity check, __get_cached_msi_msg() crashes when invoked by
  dw_edma_irq_request() running on a Linux-based PCIe endpoint device. MSI
  interrupts are not received by PCIe endpoint devices. If irq_get_msi_desc()
  returns NULL, then there is no cached struct msi_msg to be copied.

  Reported-by: kbuild test robot <lkp@intel.com>
  Signed-off-by: Alan Mikhak <alan.mikhak@sifive.com>
  Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Link: https://lore.kernel.org/r/1587607101-31914-1-git-send-email-alan.mikhak@sifive.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

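  The guard amounts to checking for an MSI descriptor before touching the
  cached message. A minimal hedged sketch (the demo_* name is illustrative):

      #include <linux/irqdesc.h>
      #include <linux/msi.h>

      static bool demo_copy_cached_msi(unsigned int irq, struct msi_msg *msg)
      {
          struct msi_desc *desc = irq_get_msi_desc(irq);

          if (!desc)
              return false;             /* nothing cached to copy */

          get_cached_msi_msg(irq, msg); /* copies the cached struct msi_msg */
          return true;
      }
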
* dmaengine: dw-edma: Decouple dw-edma-core.c from struct pci_dev  (Alan Mikhak, 2020-04-17; 1 file changed, -9/+20)

  Decouple dw-edma-core.c from struct pci_dev as a step toward integration of
  dw-edma with pci-epf-test, so the latter can initiate dma operations locally
  from the endpoint side. A barrier to such integration is the dependency of
  dw_edma_probe() and other functions in dw-edma-core.c on struct pci_dev.

  The Synopsys DesignWare dw-edma driver was designed to run on the host side
  of the PCIe link to initiate DMA operations remotely using eDMA channels of
  the PCIe controller on the endpoint side. This can be inferred from seeing
  that dw-edma uses struct pci_dev and accesses hardware registers of dma
  channels across the bus using BAR0 and BAR2.

  The ops field of struct dw_edma in dw-edma-core.h is currently undefined:

      const struct dw_edma_core_ops *ops;

  However, the kernel builds without failure even when the dw-edma driver is
  enabled. Instead of removing the currently undefined and unused ops field,
  define struct dw_edma_core_ops and use the ops field to decouple
  dw-edma-core.c from struct pci_dev.

  Signed-off-by: Alan Mikhak <alan.mikhak@sifive.com>
  Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Link: https://lore.kernel.org/r/1586971629-30196-1-git-send-email-alan.mikhak@sifive.com
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: dw-edma: fix semicolon.cocci warnings  (kbuild test robot, 2019-06-25; 1 file changed, -1/+1)

  drivers/dma/dw-edma/dw-edma-core.c:617:2-3: Unneeded semicolon

  Remove unneeded semicolon.

  Generated by: scripts/coccinelle/misc/semicolon.cocci

  Fixes: e63d79d1ffcd ("dmaengine: Add Synopsys eDMA IP core driver")
  CC: Gustavo Pimentel <Gustavo.Pimentel@synopsys.com>
  Signed-off-by: kbuild test robot <lkp@intel.com>
  Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: Add Synopsys eDMA IP version 0 support  (Gustavo Pimentel, 2019-06-10; 1 file changed, -0/+1)

  Add support for the eDMA IP version 0 driver for both register maps (legacy
  and unroll).

  The legacy register mapping was the initial implementation, which consisted
  of having all registers belonging to channels multiplexed, and which could be
  changed at any time (potentially leading to a race condition) via the
  view-port register (with access to only one channel available at a time).
  This register mapping is not very effective and efficient in a multithreaded
  environment, which led to the development of the unroll register mapping,
  which consists of having all channel registers accessible at any time by
  spreading all channel registers with an offset between them.

  This version supports a maximum of 16 independent channels (8 write +
  8 read), which can run simultaneously.

  It implements a scatter-gather transfer through a linked list, where the size
  of the linked list depends on the allocated memory divided equally among all
  channels. Each linked-list descriptor can transfer from 1 byte to 4 Gbytes
  and is aligned to DWORD. Both SAR (Source Address Register) and DAR
  (Destination Address Register) are byte-aligned.

  Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Cc: Vinod Koul <vkoul@kernel.org>
  Cc: Dan Williams <dan.j.williams@intel.com>
  Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Cc: Russell King <rmk+kernel@armlinux.org.uk>
  Cc: Joao Pinto <jpinto@synopsys.com>
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

* dmaengine: Add Synopsys eDMA IP core driver  (Gustavo Pimentel, 2019-06-10; 1 file changed, -0/+936)

  Add the Synopsys PCIe Endpoint eDMA IP core driver to the kernel. This IP is
  generally distributed with the Synopsys PCIe Endpoint IP (depending on the
  use and licensing agreement).

  This core driver initializes and configures the eDMA IP using vma-helpers
  functions and the dma-engine subsystem. This driver can be compiled as
  built-in or as an external module in the kernel. To enable this driver just
  select the DW_EDMA option in the kernel configuration; it requires and
  automatically selects the DMA_ENGINE and DMA_VIRTUAL_CHANNELS options too.

  In order to transfer data from point A to B as fast as possible this IP
  requires a dedicated memory space containing a linked list of elements. All
  elements of this linked list are contiguous and each one describes a data
  transfer (source and destination addresses, length and a control variable).
  For the sake of simplicity, let's assume a memory space for channel write 0
  which allows about 42 elements.

      +---------+
      | Desc #0 |-+
      +---------+ |
                  V
            +----------+
            | Chunk #0 |-+
            |  CB = 1  | |  +----------+  +-----+  +-----------+  +-----+
            +----------+ +->| Burst #0 |->| ... |->| Burst #41 |->| llp |
                  |         +----------+  +-----+  +-----------+  +-----+
                  V
            +----------+
            | Chunk #1 |-+
            |  CB = 0  | |  +-----------+  +-----+  +-----------+  +-----+
            +----------+ +->| Burst #42 |->| ... |->| Burst #83 |->| llp |
                  |         +-----------+  +-----+  +-----------+  +-----+
                  V
            +----------+
            | Chunk #2 |-+
            |  CB = 1  | |  +-----------+  +-----+  +------------+  +-----+
            +----------+ +->| Burst #84 |->| ... |->| Burst #125 |->| llp |
                  |         +-----------+  +-----+  +------------+  +-----+
                  V
            +----------+
            | Chunk #3 |-+
            |  CB = 0  | |  +------------+  +-----+  +------------+  +-----+
            +----------+ +->| Burst #126 |->| ... |->| Burst #129 |->| llp |
                            +------------+  +-----+  +------------+  +-----+

  Legend:
  - Linked list, also known as Chunk
  - Linked list element, also known as Burst
  - CB, also known as Change Bit, a control bit (typically toggled) that allows
    one to easily identify and differentiate between the current linked list
    and the previous or the next one.
  - LLP, a special element that indicates the end of the linked-list element
    stream and also informs that the next CB should be toggled.

  On the last Burst of every Chunk (Burst #41, Burst #83, Burst #125 or even
  Burst #129) some flags are set in its control variable (the RIE and LIE bits)
  that trigger the "done" interrupt. In the interrupt callback it is decided
  whether to recycle the linked-list memory space by writing a new set of Burst
  elements (if there are still Chunks to transfer) or whether the transfer is
  considered completed (if there are no Chunks left to transfer).

  In scatter-gather transfer mode, the client will submit a scatter-gather list
  of n (in this case 130) elements, which will be divided into multiple Chunks;
  each Chunk will have a limited number of Bursts (in this case 42) and after
  transferring all Bursts, an interrupt will be triggered, which allows
  recycling the whole dedicated linked-list memory again with the new
  information relative to the next Chunk and its associated Bursts, and the
  whole cycle repeats. In cyclic transfer mode, the client will submit a buffer
  pointer, its length and the number of repetitions; in this case each Burst
  corresponds directly to one repetition.

  Each Burst can describe a data transfer from point A (source) to point B
  (destination) with a length that can be from 1 byte up to 4 GB.

  Since the dedicated memory space where the linked list resides is limited,
  the whole set of n Burst elements will be organized in several Chunks, which
  will be used later to recycle the dedicated memory space to initiate a new
  sequence of data transfers. The whole transfer is considered complete when
  all Bursts have been transferred.

  Currently this IP has a well-known register map, which includes support for
  legacy and unroll modes. Legacy mode is the version of this register map that
  has a multiplexer register which allows switching registers between all write
  and read channels, while the unroll mode repeats all write and read channel
  registers with an offset between them. This register map is called v0.

  The IP team is creating a new register map more suitable to the latest PCIe
  features, which will very likely change the register map; that version will
  be called v1. As soon as this new version is released by the IP team, support
  for it will be included in this driver.

  According to the logic, patches 1, 2 and 3 should be squashed into 1 unique
  patch, but for the sake of simplicity of review, it was divided into these
  3 patches.

  Signed-off-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Cc: Vinod Koul <vkoul@kernel.org>
  Cc: Dan Williams <dan.j.williams@intel.com>
  Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Cc: Russell King <rmk+kernel@armlinux.org.uk>
  Cc: Joao Pinto <jpinto@synopsys.com>
  Signed-off-by: Vinod Koul <vkoul@kernel.org>

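  The recycling scheme described above boils down to splitting the burst stream
  into chunks of at most bursts-per-chunk elements and toggling the Change Bit
  per chunk. A hedged sketch of that bookkeeping (illustrative names, not the
  driver's own code):

      #include <linux/kernel.h>

      static unsigned int demo_split_into_chunks(unsigned int nr_bursts,
                                                 unsigned int bursts_per_chunk)
      {
          unsigned int nr_chunks = DIV_ROUND_UP(nr_bursts, bursts_per_chunk);
          unsigned int i;
          bool cb = true;    /* Chunk #0 starts with CB = 1, as in the figure */

          for (i = 0; i < nr_chunks; i++) {
              pr_debug("chunk %u: CB=%d, up to %u bursts, then an LLP element\n",
                       i, cb, bursts_per_chunk);
              cb = !cb;      /* the LLP tells the hardware the next CB toggles */
          }

          return nr_chunks;
      }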