path: root/drivers/cxl

* cxl/acpi: Annotate struct cxl_cxims_data with __counted_by (Kees Cook, 2023-09-22, 1 file, -2/+2)

  Prepare for the coming implementation by GCC and Clang of the __counted_by
  attribute. Flexible array members annotated with __counted_by can have their
  accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS (for array
  indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family functions).

  As found with Coccinelle[1], add __counted_by for struct cxl_cxims_data.
  Additionally, since the element count member must be set before accessing the
  annotated flexible array member, move its initialization earlier.

  [1] https://github.com/kees/kernel-tools/blob/trunk/coccinelle/examples/counted_by.cocci

  Cc: Davidlohr Bueso <dave@stgolabs.net>
  Cc: Jonathan Cameron <jonathan.cameron@huawei.com>
  Cc: Dave Jiang <dave.jiang@intel.com>
  Cc: Alison Schofield <alison.schofield@intel.com>
  Cc: Vishal Verma <vishal.l.verma@intel.com>
  Cc: Ira Weiny <ira.weiny@intel.com>
  Cc: Dan Williams <dan.j.williams@intel.com>
  Cc: linux-cxl@vger.kernel.org
  Signed-off-by: Kees Cook <keescook@chromium.org>
  Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/20230922175319.work.096-kees@kernel.org
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

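  For illustration, a minimal sketch of the annotation pattern this change applies
  (kernel-context C; the member names and the fill helper are assumptions for
  illustration, the real struct lives in drivers/cxl/acpi.c):

      struct cxl_cxims_data {
              int nr_maps;
              u64 xormaps[] __counted_by(nr_maps);
      };

      static void cxims_fill(struct cxl_cxims_data *cximsd, u64 *maps, int nr_maps)
      {
              int i;

              /* Set the count first: __counted_by bounds checks key off nr_maps. */
              cximsd->nr_maps = nr_maps;
              for (i = 0; i < nr_maps; i++)
                      cximsd->xormaps[i] = maps[i];
      }
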
* cxl/port: Fix cxl_test register enumeration regression (Dan Williams, 2023-09-22, 1 file, -4/+9)

  The cxl_test unit test environment models a CXL topology for sysfs/user-ABI
  regression testing. It uses interface mocking via the "--wrap=" linker option
  to redirect cxl_core routines that parse hardware registers with versions that
  just publish objects, like devm_cxl_enumerate_decoders().

  Starting with:

  Commit 19ab69a60e3b ("cxl/port: Store the port's Component Register mappings in struct cxl_port")

  ...port register enumeration is moved into devm_cxl_add_port(). This conflicts
  with the "cxl_test avoids emulating registers" stance, so either the port code
  needs to be refactored (too violent), or modified so that register enumeration
  is skipped on "fake" cxl_test ports (annoying, but straightforward).

  This conflict has happened previously and the "check for platform device"
  workaround to avoid intrusive refactoring was deployed in those scenarios. In
  general, refactoring should only benefit production code; test code needs to
  remain minimally intrusive to the greatest extent possible.

  This was missed previously because it may sometimes just cause warning messages
  to be emitted, but it can also cause test failures. The backport to -stable is
  only nice to have for clean cxl_test runs.

  Fixes: 19ab69a60e3b ("cxl/port: Store the port's Component Register mappings in struct cxl_port")
  Cc: stable@vger.kernel.org
  Reported-by: Alison Schofield <alison.schofield@intel.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Tested-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/169476525052.1013896.6235102957693675187.stgit@dwillia2-xfh.jf.intel.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* cxl/region: Refactor granularity select in cxl_port_setup_targets() (Alison Schofield, 2023-09-14, 1 file, -9/+8)

  In cxl_port_setup_targets() the region driver validates the configuration of
  auto-discovered region decoders, as well as decoders the driver is preparing to
  program. The existing calculations use the encoded interleave granularity value
  to create an interleave granularity that properly fans out when routing an x1
  interleave to a greater than x1 interleave.

  That all worked well, until this config came along:

  Host Bridge: 2 way at 256 granularity
    Switch Decoder_A: 1 way at 512
      Endpoint_X: 2 way at 256
    Switch Decoder_B: 1 way at 512
      Endpoint_Y: 2 way at 256

  When the Host Bridge interleave is greater than 1 and the root decoder
  interleave is exactly 1, the region driver needs to consider the number of
  targets in the region when calculating the expected granularity.

  While examining the existing logic, and trying to cover the case above, a
  couple of simplifications appeared, hence this proposed refactoring.

  The first simplification is to apply the logic to the nominal values and use
  the existing helper function granularity_to_eig() to translate the desired
  granularity to the encoded form. This means the comment and code regarding
  setting address bits is discarded. Although that logic is not wrong, it adds a
  level of complexity that is not required in the granularity selection. The eig
  and eiw are indeed part of the routing instructions programmed into the
  decoders. Up-level the discussion to nominal ways and granularity for clearer
  analysis.

  The second simplification reduces the logic to a single granularity calculation
  that works for all cases. The new calculation doesn't care whether
  parent_iw >= 1 because parent_iw is used as a multiplier.

  The refactor cleans up a useless assignment of eiw made after the iw is already
  calculated.

  Regression testing included an examination of all of the ways and granularity
  selections made during a run of the cxl_test unit tests. There were no
  differences in selections before and after this patch.

  Fixes: 27b3f8d13830 ("cxl/region: Program target lists")
  Signed-off-by: Alison Schofield <alison.schofield@intel.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/20230822180928.117596-1-alison.schofield@intel.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* cxl/region: Match auto-discovered region decoders by HPA range (Alison Schofield, 2023-09-14, 1 file, -1/+23)

  Currently, when the region driver attaches a region to a port, it selects the
  port's next available decoder to program. With the addition of auto-discovered
  regions, a port decoder has already been programmed, so grabbing the next
  available decoder can be a mismatch when there is more than one region using
  the port.

  The failure appears like this with CXL DEBUG enabled:

  [] cxl_core:alloc_region_ref:754: cxl region0: endpoint9: HPA order violation region0:[mem 0x14780000000-0x1478fffffff flags 0x200] vs [mem 0x880000000-0x185fffffff flags 0x200]
  [] cxl_core:cxl_port_attach_region:972: cxl region0: endpoint9: failed to allocate region reference

  When CXL DEBUG is not enabled, there is no failure message. The region just
  never materializes. Users can suspect this issue if they know their firmware
  has programmed decoders so that more than one region is using a port. Note that
  the problem may appear intermittently, i.e. not on every reboot.

  Add a matching method for auto-discovered regions that finds a decoder based on
  an HPA range. The decoder range must exactly match the region resource
  parameter.

  Fixes: a32320b71f08 ("cxl/region: Add region autodiscovery")
  Signed-off-by: Alison Schofield <alison.schofield@intel.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230905211007.256385-1-alison.schofield@intel.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* cxl/mbox: Fix CEL logic for poison and security commands (Ira Weiny, 2023-09-14, 1 file, -11/+12)

  The following debug output was observed while testing CXL:

  cxl_core:cxl_walk_cel:721: cxl_mock_mem cxl_mem.0: Opcode 0x4300 unsupported by driver

  Opcode 0x4300 (Get Poison) is supported by the driver and the mock device
  supports it. The logic should be checking that the opcode is both not poison
  and not security.

  Fix the logic to allow poison and security commands.

  Fixes: ad64f5952ce3 ("cxl/memdev: Only show sanitize sysfs files when supported")
  Cc: <stable@vger.kernel.org>
  Signed-off-by: Ira Weiny <ira.weiny@intel.com>
  Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
  Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230903-cxl-cel-fix-v1-1-e260c9467be3@intel.com
  [cleanup cxl_walk_cel() to centralized "enabled" checks]
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

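  For illustration, a standalone sketch of the boolean fix described above; the
  opcode classifiers here are stand-ins, not the driver's real helpers:

      #include <stdbool.h>
      #include <stdio.h>

      /* Illustrative stand-ins for the driver's opcode classification. */
      static bool is_poison(unsigned int op)   { return op == 0x4300; }
      static bool is_security(unsigned int op) { return op == 0x4500; }

      int main(void)
      {
              unsigned int op = 0x4300; /* Get Poison */

              /* Buggy: "!poison || !security" is true for every opcode, so even
               * supported poison/security commands were flagged unsupported. */
              bool skip_buggy = !is_poison(op) || !is_security(op);

              /* Fixed: skip only opcodes that are neither poison nor security. */
              bool skip_fixed = !is_poison(op) && !is_security(op);

              printf("buggy skips Get Poison: %d, fixed skips it: %d\n",
                     skip_buggy, skip_fixed); /* prints 1 then 0 */
              return 0;
      }
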
* cxl/pci: Replace host_bridge->native_aer with pcie_aer_is_native() (Smita Koralahalli, 2023-09-11, 1 file, -2/+1)

  Use pcie_aer_is_native() to determine the native AER ownership, as the usage of
  host_bridge->native_aer does not cover command line override of AER ownership.

  Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
  Reviewed-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
  Reviewed-by: Robert Richter <rrichter@amd.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/20230823234305.27333-4-Smita.KoralahalliChannabasappa@amd.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* cxl/pci: Fix appropriate checking for _OSC while handling CXL RAS registers (Smita Koralahalli, 2023-09-11, 1 file, -3/+3)

  cxl_pci fails to unmask CXL protocol errors when CXL memory error reporting is
  not granted native control. Given that CXL memory error reporting uses the
  event interface and protocol errors use AER, unmask protocol errors based only
  on the native AER setting.

  Without this change end user deployments will fail to report protocol errors in
  the case where native memory error handling is not granted to Linux.

  Also, return zero instead of an error code to not block the communication with
  the cxl device when in native memory error reporting mode.

  Fixes: 248529edc86f ("cxl: add RAS status unmasking for CXL")
  Cc: <stable@vger.kernel.org>
  Signed-off-by: Smita Koralahalli <Smita.KoralahalliChannabasappa@amd.com>
  Reviewed-by: Robert Richter <rrichter@amd.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/20230823234305.27333-2-Smita.KoralahalliChannabasappa@amd.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* cxl/memdev: Only show sanitize sysfs files when supported (Davidlohr Bueso, 2023-07-28, 3 files, -1/+78)

  If the device does not support Sanitize or Secure Erase commands, hide the
  respective sysfs interfaces such that the operation can never be attempted. In
  order to be generic, keep track of the enabled security commands found in the
  CEL - the driver does not support Security Passthrough.

  Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
  Link: https://lore.kernel.org/r/20230726051940.3570-4-dave@stgolabs.net
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>

* cxl/memdev: Document security state in kern-doc (Davidlohr Bueso, 2023-07-28, 1 file, -0/+1)

  ... as is the case with all members of struct cxl_memdev_state.

  Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
  Link: https://lore.kernel.org/r/20230726051940.3570-3-dave@stgolabs.net
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>

* cxl/acpi: Return 'rc' instead of '0' in cxl_parse_cfmws() (Breno Leitao, 2023-07-18, 1 file, -1/+1)

  Driver initialization returned success (return 0) even if the initialization
  (cxl_decoder_add() or acpi_table_parse_cedt()) failed. Return the error instead
  of swallowing it.

  Fixes: f4ce1f766f1e ("cxl/acpi: Convert CFMWS parsing to ACPI sub-table helpers")
  Signed-off-by: Breno Leitao <leitao@debian.org>
  Link: https://lore.kernel.org/r/20230714093146.2253438-2-leitao@debian.org
  Reviewed-by: Alison Schofield <alison.schofield@intel.com>
  Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>

* cxl/acpi: Fix a use-after-free in cxl_parse_cfmws() (Breno Leitao, 2023-07-18, 1 file, -2/+1)

  KASAN and KFENCE detected a use-after-free in the CXL driver. This happens in
  the cxl_decoder_add() fail path. KASAN prints the following error:

  BUG: KASAN: slab-use-after-free in cxl_parse_cfmws (drivers/cxl/acpi.c:299)

  This happens in cxl_parse_cfmws(), where put_device() is called, releasing
  cxld, which is accessed later.

  Use the local variables in the dev_err() instead of pointing to the released
  memory. Since the dev_err() is printing a resource, change the open coded print
  format to use the %pr format specifier.

  Fixes: e50fe01e1f2a ("cxl/core: Drop ->platform_res attribute for root decoders")
  Signed-off-by: Breno Leitao <leitao@debian.org>
  Link: https://lore.kernel.org/r/20230714093146.2253438-1-leitao@debian.org
  Reviewed-by: Alison Schofield <alison.schofield@intel.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>

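  A hedged sketch of the error-path pattern described above; the exact variable
  names, messages, and surrounding code in drivers/cxl/acpi.c are assumptions for
  illustration:

      /* Before (sketch): cxld is released, then dereferenced for the log. */
      rc = cxl_decoder_add(cxld, target_map);
      if (rc) {
              put_device(&cxld->dev);
              dev_err(dev, "Failed to add decode range [%#llx - %#llx]\n",
                      cxld->hpa_range.start, cxld->hpa_range.end); /* UAF */
      }

      /* After (sketch): log from the local resource via %pr, so nothing
       * reaches back into the released decoder. */
      rc = cxl_decoder_add(cxld, target_map);
      if (rc) {
              put_device(&cxld->dev);
              dev_err(dev, "Failed to add decode range: %pr", res);
      }
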
* cxl/mem: Fix a double shift bug (Dan Carpenter, 2023-07-14, 1 file, -1/+1)

  The CXL_FW_CANCEL macro is used with set/test_bit() so it should be a bit
  number and not the shifted value. The original code is the equivalent of using
  BIT(BIT(0)) so it's 0x2 instead of 0x1. This has no effect on runtime because
  it's done consistently and nothing else was using the 0x2 bit.

  Fixes: 9521875bbe00 ("cxl: add a firmware update mechanism using the sysfs firmware loader")
  Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
  Link: https://lore.kernel.org/r/a11b0c78-4717-4f4e-90be-f47f300d607c@moroto.mountain
  Reviewed-by: Vishal Verma <vishal.l.verma@intel.com>
  Reviewed-by: Davidlohr Bueso <dave@stgolabs.net>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>

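  A small standalone illustration of the arithmetic (BIT() below is the standard
  kernel definition, reproduced only to make the double shift visible; the
  variable names are not from the driver):

      #include <stdio.h>

      #define BIT(n)  (1UL << (n))

      int main(void)
      {
              /* Buggy definition: already a mask, then used as a bit number. */
              unsigned long cancel_as_mask = BIT(0);          /* 0x1 */
              unsigned long effective = BIT(cancel_as_mask);  /* BIT(BIT(0)) == 0x2 */

              /* Fixed definition: a plain bit number, as set_bit()/test_bit() expect. */
              unsigned long cancel_as_bitnr = 0;
              unsigned long intended = BIT(cancel_as_bitnr);  /* 0x1 */

              printf("%#lx vs %#lx\n", effective, intended);  /* 0x2 vs 0x1 */
              return 0;
      }
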
* cxl: fix CONFIG_FW_LOADER dependency (Arnd Bergmann, 2023-07-14, 1 file, -1/+2)

  When FW_LOADER is disabled, cxl fails to link:

  arm-linux-gnueabi-ld: drivers/cxl/core/memdev.o: in function `cxl_memdev_setup_fw_upload':
  memdev.c:(.text+0x90e): undefined reference to `firmware_upload_register'
  memdev.c:(.text+0x93c): undefined reference to `firmware_upload_unregister'

  In order to use the firmware_upload_register() function, both FW_LOADER and
  FW_UPLOAD have to be enabled, which is a bit confusing. In addition, the
  dependency is on the wrong symbol, as the caller is part of the cxl_core.ko
  module, not the cxl_mem.ko module.

  Fixes: 9521875bbe005 ("cxl: add a firmware update mechanism using the sysfs firmware loader")
  Signed-off-by: Arnd Bergmann <arnd@arndb.de>
  Link: https://lore.kernel.org/r/20230703112928.332321-1-arnd@kernel.org
  Reviewed-by: Xiao Yang <yangx.jy@fujitsu.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>

* cxl: Fix one kernel-doc comment (Yang Li, 2023-06-29, 1 file, -1/+1)

  Fix a merge error that updated the argument to cxl_mem_get_fw_info() but not
  the kernel-doc.

  drivers/cxl/core/memdev.c:678: warning: Function parameter or member 'mds' not described in 'cxl_mem_get_fw_info'
  drivers/cxl/core/memdev.c:678: warning: Excess function parameter 'cxlds' description in 'cxl_mem_get_fw_info'

  Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
  Link: https://lore.kernel.org/r/20230629021118.102744-1-yang.lee@linux.alibaba.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* cxl/pci: Use correct flag for sanitize polling (Davidlohr Bueso, 2023-06-27, 1 file, -1/+1)

  This is a bogus value, left behind from a previous version.

  Fixes: 0c36b6ad436a ("cxl/mbox: Add sanitization handling machinery")
  Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
  Link: https://lore.kernel.org/r/7q3vcjqidtmxmys4n34g6b3mygvhaen7yikzxanpz56lw43fz7@7subbtbfkmyx
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* Merge branch 'for-6.5/cxl-rch-eh' into for-6.5/cxl (Dan Williams, 2023-06-25, 12 files, -291/+443)

  Pick up the first half of the RCH error handling series. The back half needs
  some fixups for test regressions. Small conflicts with the PMU work around
  register enumeration and setup helpers.

| * cxl/port: Store the downstream port's Component Register mappings in struct cxl_dport (Robert Richter, 2023-06-25, 2 files, -0/+13)

  Same as for ports, also store the downstream port's Component Register
  mappings, use struct cxl_dport for that.

  Signed-off-by: Robert Richter <rrichter@amd.com>
  Signed-off-by: Terry Bowman <terry.bowman@amd.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230622205523.85375-16-terry.bowman@amd.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * cxl/port: Store the port's Component Register mappings in struct cxl_port (Robert Richter, 2023-06-25, 2 files, -0/+29)

  CXL capabilities are stored in the Component Registers. To use them, the
  specific I/O ranges of the capabilities must be determined by probing the
  registers. For this, the whole Component Register range needs to be mapped
  temporarily to detect the offset and length of a capability range.

  In order to use more than one capability of a component (e.g. RAS and HDM) the
  Component Registers are probed and their mappings created multiple times. This
  also causes overlapping I/O ranges as the whole Component Register range must
  be mapped again while a capability's I/O range is already mapped.

  Different capabilities cannot be set up at the same time. E.g. the RAS
  capability must be made available as soon as the PCI driver is bound, while the
  HDM decoder is set up later during port enumeration. Moreover, during early
  setup it is still unknown if a certain capability is needed. A central
  capability setup is therefore not possible; capabilities must be individually
  enabled once needed during initialization.

  To avoid a duplicate register probe and overlapping I/O mappings, only probe
  the Component Registers one time and store the Component Register mapping in
  struct port. The stored mappings can be used later to iomap the capability
  register range when enabling the capability, which will be implemented in a
  follow-on patch.

  Signed-off-by: Robert Richter <rrichter@amd.com>
  Signed-off-by: Terry Bowman <terry.bowman@amd.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230622205523.85375-15-terry.bowman@amd.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * cxl/pci: Early setup RCH dport component registers from RCRB (Robert Richter, 2023-06-25, 4 files, -18/+57)

  CXL RAS capabilities must be enabled and accessible as soon as the CXL endpoint
  is detected in the PCI hierarchy and bound to the cxl_pci driver. This needs to
  be independent of other modules such as cxl_port or cxl_mem.

  CXL RAS capabilities reside in the Component Registers. For an RCH this is
  determined by probing the RCRB, which is implemented very late once the CXL
  Memory Device is created.

  Change this by moving the RCRB probe to the cxl_pci driver. Do this by using a
  newly introduced function cxl_pci_find_port(), similar to cxl_mem_find_port(),
  to determine the involved dport by the endpoint's PCI handle. Plug this into
  the existing cxl_pci_setup_regs() function to set up Component Registers. Probe
  the RCRB in case the Component Registers cannot be located through the CXL
  Register Locator capability.

  This unifies the code and sets up the Component Registers early and at the same
  time for both VH and RCH mode. Only the cxl_pci driver is involved for this.
  This allows an early mapping of the CXL RAS capability registers.

  Signed-off-by: Robert Richter <rrichter@amd.com>
  Signed-off-by: Terry Bowman <terry.bowman@amd.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230622205523.85375-14-terry.bowman@amd.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * cxl/mem: Prepare for early RCH dport component register setup (Robert Richter, 2023-06-25, 1 file, -5/+4)

  In order to move the RCH dport component register setup to cxl_pci the base
  address must be stored in CXL device state (cxlds) for both modes, RCH and VH.
  Store it in cxlds->component_reg_phys and use it for endpoint creation.

  Signed-off-by: Robert Richter <rrichter@amd.com>
  Signed-off-by: Terry Bowman <terry.bowman@amd.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230622205523.85375-13-terry.bowman@amd.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * cxl/regs: Remove early capability checks in Component Register setup (Robert Richter, 2023-06-25, 3 files, -9/+6)

  When probing the Component Registers in function cxl_probe_regs() there are
  also checks for the existence of the HDM and RAS capabilities. The checks may
  fail for components that do not implement the HDM capability, causing the
  Component Registers setup to fail too.

  Remove the checks for a generalized use of cxl_probe_regs() and check them
  directly before mapping the RAS or HDM capabilities. This allows setting up
  other Component Registers, especially of an RCH Downstream Port, which will be
  implemented in a follow-on patch.

  Signed-off-by: Robert Richter <rrichter@amd.com>
  Signed-off-by: Terry Bowman <terry.bowman@amd.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230622205523.85375-12-terry.bowman@amd.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * cxl/port: Remove Component Register base address from struct cxl_dport (Robert Richter, 2023-06-25, 2 files, -3/+0)

  The Component Register base address @component_reg_phys is no longer used after
  the rework of the Component Register setup which now uses struct member
  @comp_map instead. Remove the base address.

  Signed-off-by: Robert Richter <rrichter@amd.com>
  Signed-off-by: Terry Bowman <terry.bowman@amd.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230622205523.85375-11-terry.bowman@amd.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * cxl/acpi: Directly bind the CEDT detected CHBCR to the Host Bridge's port (Robert Richter, 2023-06-25, 1 file, -28/+63)

  During a Host Bridge's downstream port enumeration the CHBS entries in the CEDT
  table are parsed, their Component Register base address extracted and then
  stored in struct cxl_dport. The CHBS may contain either the RCRB (RCH mode) or
  the Host Bridge's Component Registers (CHBCR, VH mode). The RCRB further
  contains the CXL downstream port register base address, while in VH mode the
  CXL Downstream Switch Ports are visible in the PCI hierarchy and the DP's
  component regs are discovered using the CXL DVSEC register locator capability.
  The Component Registers derived from the CHBS for both modes are different and
  thus also must be treated differently. That is, in RCH mode, the component regs
  base should be bound to the dport, but in VH mode to the CXL host bridge's port
  object.

  The current implementation stores the CHBCR in addition in struct cxl_dport and
  copies it later from there to struct cxl_port. As a result, the dport contains
  the wrong Component Registers base address and, e.g., the RAS capability of a
  CXL Root Port cannot be detected.

  To fix the CHBCR binding, attach it directly to the Host Bridge's @cxl_port
  structure. Do this during port creation of the Host Bridge in
  add_host_bridge_uport(). Factor out CHBS parsing code in
  add_host_bridge_dport() and use it in both functions.

  Co-developed-by: Dan Williams <dan.j.williams@intel.com>
  Signed-off-by: Robert Richter <rrichter@amd.com>
  Signed-off-by: Terry Bowman <terry.bowman@amd.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230622205523.85375-10-terry.bowman@amd.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * cxl/acpi: Move add_host_bridge_uport() after cxl_get_chbs() (Robert Richter, 2023-06-25, 1 file, -45/+45)

  Just move code to reorder functions so that cxl_get_chbs() can later be shared
  with add_host_bridge_uport(). This makes the changes in the next patch visible.
  No other changes at all.

  Signed-off-by: Robert Richter <rrichter@amd.com>
  Signed-off-by: Terry Bowman <terry.bowman@amd.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230622205523.85375-9-terry.bowman@amd.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * cxl/pci: Refactor component register discovery for reuse (Terry Bowman, 2023-06-25, 3 files, -74/+83)

  The endpoint implements component register setup code. Refactor it for reuse
  with RCRB, downstream port, and upstream port setup.

  Move PCI specifics from cxl_setup_regs() into cxl_pci_setup_regs(). Move
  cxl_setup_regs() into cxl/core/regs.c and export it. This also includes
  supporting static functions cxl_map_registerblock(), cxl_unmap_register_block()
  and cxl_probe_regs().

  Co-developed-by: Robert Richter <rrichter@amd.com>
  Signed-off-by: Robert Richter <rrichter@amd.com>
  Signed-off-by: Terry Bowman <terry.bowman@amd.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230622205523.85375-8-terry.bowman@amd.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * cxl/core/regs: Add @dev to cxl_register_map (Robert Richter, 2023-06-25, 4 files, -24/+31)

  The corresponding device of a register mapping is used for devm operations and
  logging. For operations with struct cxl_register_map the device needs to be
  tracked separately. To simplify the involved function interfaces, add @dev to
  cxl_register_map.

  While at it also reorder the function arguments of cxl_map_device_regs() and
  cxl_map_component_regs() to have the object @cxl_register_map first. As a
  result a bunch of functions are available to be used with a @cxl_register_map
  object.

  This patch is in preparation of reworking the component register setup code.

  Signed-off-by: Robert Richter <rrichter@amd.com>
  Signed-off-by: Terry Bowman <terry.bowman@amd.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230622205523.85375-7-terry.bowman@amd.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * cxl: Rename 'uport' to 'uport_dev' (Dan Williams, 2023-06-25, 7 files, -63/+71)

  For symmetry with the recent rename of ->dport_dev for a 'struct cxl_dport',
  add the "_dev" suffix to the ->uport property of a 'struct cxl_port'. These
  devices represent the downstream-port-device and upstream-port-device
  respectively in the CXL/PCIe topology.

  Signed-off-by: Terry Bowman <terry.bowman@amd.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230622205523.85375-6-terry.bowman@amd.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * cxl: Rename member @dport of struct cxl_dport to @dport_dev (Robert Richter, 2023-06-25, 3 files, -14/+14)

  Reading code like dport->dport does not immediately suggest that this points to
  the corresponding device structure of the dport. Rename struct member @dport to
  @dport_dev.

  While at it, also rename the @new argument of add_dport() to @dport. This
  better describes the variable as a dport (e.g. new->dport becomes
  dport->dport_dev).

  Co-developed-by: Terry Bowman <terry.bowman@amd.com>
  Signed-off-by: Terry Bowman <terry.bowman@amd.com>
  Signed-off-by: Robert Richter <rrichter@amd.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230622205523.85375-5-terry.bowman@amd.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * cxl/rch: Prepare for caching the MMIO mapped PCIe AER capability (Dan Williams, 2023-06-25, 4 files, -7/+15)

  Prepare cxl_probe_rcrb() for retrieving more than just the component register
  block. The RCH AER handling code wants to get back to the AER capability that
  happens to be MMIO mapped rather than accessed via configuration cycles.

  Move RCRB specific downstream port data, like the RCRB base and the AER
  capability offset, into its own data structure ('struct cxl_rcrb_info') for
  cxl_probe_rcrb() to fill. Extend 'struct cxl_dport' to include a 'struct
  cxl_rcrb_info' attribute.

  This centralizes all RCRB scanning in one routine.

  Co-developed-by: Robert Richter <rrichter@amd.com>
  Signed-off-by: Robert Richter <rrichter@amd.com>
  Signed-off-by: Terry Bowman <terry.bowman@amd.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230622205523.85375-4-terry.bowman@amd.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * cxl/acpi: Probe RCRB later during RCH downstream port creation (Robert Richter, 2023-06-25, 6 files, -50/+61)

  The RCRB is already extracted during ACPI CEDT table parsing, while its data is
  not needed until dport creation. This implementation comes with drawbacks:
  During ACPI table scan there is already MMIO access including mapping and
  unmapping, but only ACPI data should be collected here. The collected data must
  be transferred through a couple of interfaces until it is finally consumed when
  creating the dport. This causes complex data structures and function
  interfaces. Additionally, RCRB parsing will be extended to also extract AER
  data; it would be much easier to do this at a later point during port and dport
  creation when the data structures are available to hold that data.

  To simplify all that, probe the RCRB at a later point during RCH downstream
  port creation. Change the ACPI table parser to only extract the base address of
  either the component registers or the RCRB. Parse and extract the RCRB in
  devm_cxl_add_rch_dport().

  This is in preparation to centralize all RCRB scanning.

  Signed-off-by: Robert Richter <rrichter@amd.com>
  Signed-off-by: Terry Bowman <terry.bowman@amd.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230622205523.85375-2-terry.bowman@amd.com
  Co-developed-by: Dan Williams <dan.j.williams@intel.com>
  Link: https://lore.kernel.org/r/20230622205523.85375-3-terry.bowman@amd.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* | Merge branch 'for-6.5/cxl-perf' into for-6.5/cxl (Dan Williams, 2023-06-25, 10 files, -7/+224)

  Pick up initial support for the CXL 3.0 performance monitoring definition.
  Small conflicts with the firmware update work as they both placed their init
  code in the same location.

| * | perf: CXL Performance Monitoring Unit driver (Jonathan Cameron, 2023-06-25, 1 file, -0/+13)

  CXL rev 3.0 introduces a standard performance monitoring hardware block to CXL.
  Instances are discovered using CXL Register Locator DVSEC entries. Each CXL
  component may have multiple PMUs.

  This initial driver supports a subset of counter types. It supports counters
  that are either fixed or configurable, but requires that they support the
  ability to freeze and write a value whilst frozen.

  Development done with a QEMU model which will be posted shortly.

  Example:

  $ perf stat -a -e cxl_pmu_mem0.0/h2d_req_snpcur/ -e cxl_pmu_mem0.0/h2d_req_snpdata/ -e cxl_pmu_mem0.0/clock_ticks/ sleep 1

  Performance counter stats for 'system wide':

      96,757,023,244,321      cxl_pmu_mem0.0/h2d_req_snpcur/
      96,757,023,244,365      cxl_pmu_mem0.0/h2d_req_snpdata/
     193,514,046,488,653      cxl_pmu_mem0.0/clock_ticks/

         1.090539600 seconds time elapsed

  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Reviewed-by: Kan Liang <kan.liang@linux.intel.com>
  Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230526095824.16336-5-Jonathan.Cameron@huawei.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * | cxl/pci: Find and register CXL PMU devices (Jonathan Cameron, 2023-05-30, 9 files, -1/+155)

  CXL PMU devices can be found from entries in the Register Locator DVSEC.

  Reviewed-by: Dan Williams <dan.j.williams@intel.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Link: https://lore.kernel.org/r/20230526095824.16336-4-Jonathan.Cameron@huawei.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * | cxl: Add functions to get an instance of / count regblocks of a given type (Jonathan Cameron, 2023-05-30, 2 files, -6/+56)

  Until the recently released CXL 3.0 specification, there was only ever one
  instance of any given register block pointed to by the Register Block Locator
  DVSEC. Now, the specification allows for multiple CXL PMU instances, each with
  their own register block.

  To enable this add cxl_find_regblock_instance() that takes an index parameter
  and use that to implement cxl_count_regblock() and cxl_find_regblock().

  Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/20230526095824.16336-3-Jonathan.Cameron@huawei.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

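  A hedged sketch of how the existing lookup can be layered on the indexed helper
  described above (parameter types and return conventions are assumptions, not
  verified against the tree):

      /* The pre-existing single-instance lookup becomes "instance 0". */
      int cxl_find_regblock(struct pci_dev *pdev, enum cxl_regloc_type type,
                            struct cxl_register_map *map)
      {
              return cxl_find_regblock_instance(pdev, type, map, 0);
      }

      /* Counting walks indices until the indexed lookup fails. */
      int cxl_count_regblock(struct pci_dev *pdev, enum cxl_regloc_type type)
      {
              struct cxl_register_map map;
              int index = 0;

              while (!cxl_find_regblock_instance(pdev, type, &map, index))
                      index++;

              return index;
      }
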
* | Merge branch 'for-6.5/cxl-region-fixes' into for-6.5/cxl (Dan Williams, 2023-06-25, 2 files, -46/+72)

  Pick up the recent fixes to how CPU caches are managed relative to region setup
  / teardown, and make sure that all decoders transition successfully before
  updating the region state from COMMIT => ACTIVE.

| * | cxl/region: Fix state transitions after reset failure (Dan Williams, 2023-06-25, 1 file, -11/+15)

  Jonathan reports that failed attempts to reset a region (teardown its HDM
  decoder configuration) mistakenly advance the state of the region to "not
  committed". Revert to the previous state of the region on reset failure so that
  the reset can be re-attempted.

  Reported-by: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
  Closes: http://lore.kernel.org/r/20230316171441.0000205b@Huawei.com
  Fixes: 176baefb2eb5 ("cxl/hdm: Commit decoder state to hardware")
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/168696507968.3590522.14484000711718573626.stgit@dwillia2-xfh.jf.intel.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * | cxl/region: Flag partially torn down regions as unusable (Dan Williams, 2023-06-25, 2 files, -0/+20)

  cxl_region_decode_reset() walks all the decoders associated with a given region
  and disables them. Due to decoder ordering rules it is possible that a switch
  in the topology notices that a given decoder cannot be shut down before another
  region with a higher HPA is shut down first. That can leave the region in a
  partially committed state.

  Capture that state in a new CXL_REGION_F_NEEDS_RESET flag and require that a
  successful cxl_region_decode_reset() attempt must be completed before
  cxl_region_probe() accepts the region.

  This is a corollary for the bug that Jonathan identified in "CXL/region :
  commit reset of out of order region appears to succeed." [1].

  Cc: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
  Link: http://lore.kernel.org/r/20230316171441.0000205b@Huawei.com [1]
  Fixes: 176baefb2eb5 ("cxl/hdm: Commit decoder state to hardware")
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/168696507423.3590522.16254212607926684429.stgit@dwillia2-xfh.jf.intel.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * | cxl/region: Move cache invalidation before region teardown, and before setup (Dan Williams, 2023-06-25, 2 files, -36/+38)

  Vikram raised a concern with the theoretical case of a CPU sending MemClnEvict
  to a device that is not prepared to receive it. MemClnEvict is a message that
  is sent after a CPU has taken ownership of a cacheline from accelerator memory
  (HDM-DB). In the case of hotplug or HDM decoder reconfiguration it is possible
  that the CPU is holding old contents for a new device that has taken over the
  physical address range being cached by the CPU.

  To avoid this scenario, invalidate caches prior to tearing down an HDM decoder
  configuration.

  Now, this poses another problem in that it is possible for something to
  speculate into that space while the decode configuration is still up, so to
  close that gap also invalidate prior to establishing new contents behind a
  given physical address range.

  With this change the cache invalidation is now explicit and need not be checked
  in cxl_region_probe(), and that obviates the need for CXL_REGION_F_INCOHERENT.

  Cc: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
  Fixes: d18bc74aced6 ("cxl/region: Manage CPU caches relative to DPA invalidation events")
  Reported-by: Vikram Sethi <vsethi@nvidia.com>
  Closes: http://lore.kernel.org/r/BYAPR12MB33364B5EB908BF7239BB996BBD53A@BYAPR12MB3336.namprd12.prod.outlook.com
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/168696506886.3590522.4597053660991916591.stgit@dwillia2-xfh.jf.intel.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

* | Merge branch 'for-6.5/cxl-type-2' into for-6.5/cxl (Dan Williams, 2023-06-25, 16 files, -445/+515)

  Pick up the driver cleanups identified in preparation for CXL "type-2"
  (accelerator) device support. The major change here from a conflict generation
  perspective is the split of 'struct cxl_memdev_state' from the core 'struct
  cxl_dev_state', since an accelerator may not care about all the optional
  features that are standard on a CXL "type-3" (host-only memory expander)
  device.

  A silent conflict also occurs with the move of the endpoint port to be a formal
  property of a 'struct cxl_memdev' rather than drvdata.

| * | Revert "cxl/port: Enable the HDM decoder capability for switch ports" (Dan Williams, 2023-06-25, 3 files, -33/+9)

  commit eb0764b822b9 ("cxl/port: Enable the HDM decoder capability for switch ports")

  ...was added on the observation of CXL memory not being accessible after
  setting up a region on a "cold-plugged" device. A "cold-plugged" CXL device is
  one that was not present at boot, so platform-firmware/BIOS has no chance to
  set it up.

  While it is true that the debug found the enable bit clear in the host-bridge's
  instance of the global control register (CXL 3.0 8.2.4.19.2 CXL HDM Decoder
  Global Control Register), that bit is described as:

  "This bit is only applicable to CXL.mem devices and shall return 0 on CXL Host
  Bridges and Upstream Switch Ports."

  So it is meant to be zero, and further testing confirmed that this "fix" had no
  effect on the failure. Revert it, and be more vigilant about proposed fixes in
  the future. Since the original copied stable@, flag this revert for stable@ as
  well.

  Cc: <stable@vger.kernel.org>
  Fixes: eb0764b822b9 ("cxl/port: Enable the HDM decoder capability for switch ports")
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/168685882012.3475336.16733084892658264991.stgit@dwillia2-xfh.jf.intel.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * | cxl/memdev: Formalize endpoint port linkage (Dan Williams, 2023-06-25, 4 files, -5/+8)

  Move the endpoint port that the cxl_mem driver establishes from drvdata to a
  first class attribute. This is in preparation for device-memory drivers reusing
  the CXL core for memory region management. Those drivers need a type-safe
  method to retrieve their CXL port linkage. Leave drvdata for private usage of
  the cxl_mem driver, not external consumers of a 'struct cxl_memdev' object.

  Reviewed-by: Fan Ni <fan.ni@samsung.com>
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/168679264292.3436160.3901392135863405807.stgit@dwillia2-xfh.jf.intel.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * | cxl/pci: Unconditionally unmask 256B Flit errors (Dan Williams, 2023-06-25, 1 file, -16/+2)

  The current check for 256B Flit mode is incomplete and unnecessary. It is
  incomplete because it fails to consider the link speed, or check for CXL link
  capabilities. It is unnecessary because unconditionally unmasking 256B Flit
  errors is a nop when 256B Flit operation is not available.

  Remove this check in preparation for creating a cxl_probe_link() helper to
  centralize this detection.

  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/168679263124.3436160.6228910132469454346.stgit@dwillia2-xfh.jf.intel.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * | cxl/region: Manage decoder target_type at decoder-attach time (Dan Williams, 2023-06-25, 1 file, -0/+12)

  Switch-level (mid-level) decoders between the platform root and an endpoint can
  dynamically switch modes between HDM-H and HDM-D[B] depending on which region
  they target. Use the region type to fix up each decoder that gets allocated to
  map the given region.

  Note that endpoint decoders are meant to determine the region type, so warn if
  those ever need to be fixed up, but continue since it is possible to do so.

  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/168679262543.3436160.13053831955768440312.stgit@dwillia2-xfh.jf.intel.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * | cxl/hdm: Default CXL_DEVTYPE_DEVMEM decoders to CXL_DECODER_DEVMEM (Dan Williams, 2023-06-25, 2 files, -10/+27)

  In preparation for device-memory region creation, arrange for decoders of
  CXL_DEVTYPE_DEVMEM memdevs to default to CXL_DECODER_DEVMEM for their target
  type.

  Revisit this if a device ever shows up that wants to offer mixed HDM-H
  (Host-Only Memory) and HDM-DB support, or a CXL_DEVTYPE_DEVMEM device that
  supports HDM-H.

  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/168679261945.3436160.11673393474107374595.stgit@dwillia2-xfh.jf.intel.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * | cxl/port: Rename CXL_DECODER_{EXPANDER, ACCELERATOR} => {HOSTONLYMEM, DEVMEM} (Dan Williams, 2023-06-25, 5 files, -12/+13)

  In preparation for supporting HDM-D and HDM-DB configuration (device-memory,
  and device-memory with back-invalidate), rename the current type designators to
  use HOSTONLYMEM and DEVMEM as a suffix.

  HDM-DB can be supported by devices that are not accelerators, so DEVMEM is a
  more generic term for that case.

  Fix up one location where this type value was open coded.

  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/168679261369.3436160.7042443847605280593.stgit@dwillia2-xfh.jf.intel.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * | cxl/memdev: Make mailbox functionality optional (Dan Williams, 2023-06-25, 3 files, -1/+28)

  In support of the Linux CXL core scaling for a wider set of CXL devices, allow
  for the creation of memdevs with some memory device capabilities disabled.
  Specifically, allow for CXL devices outside of those claiming to be compliant
  with the generic CXL memory device class code, like vendor specific Type-2/3
  devices that host CXL.mem. This implies allowing for the creation of memdevs
  that only support component-registers, not necessarily memory-device-registers
  (like mailbox registers).

  A memdev derived from a CXL endpoint that does not support generic class code
  expectations is tagged "CXL_DEVTYPE_DEVMEM", while a memdev derived from a
  class-code compliant endpoint is tagged "CXL_DEVTYPE_CLASSMEM".

  The primary assumption of a CXL_DEVTYPE_DEVMEM memdev is that it optionally may
  not host a mailbox. Disable the command passthrough ioctl for memdevs that are
  not CXL_DEVTYPE_CLASSMEM, and return empty strings from memdev attributes
  associated with data retrieved via the class-device-standard IDENTIFY command.
  Note that empty strings were chosen over attribute visibility to maintain
  compatibility with shipping versions of cxl-cli that expect those attributes to
  always be present. Once cxl-cli has dropped that requirement this workaround
  can be deprecated.

  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/168679260782.3436160.7587293613945445365.stgit@dwillia2-xfh.jf.intel.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * | cxl/mbox: Move mailbox related driver state to its own data structure (Dan Williams, 2023-06-25, 7 files, -271/+312)

  'struct cxl_dev_state' makes too many assumptions about the capabilities of a
  CXL device. In particular it assumes a CXL device has a mailbox and all of the
  infrastructure and state that comes along with that.

  In preparation for supporting accelerator / Type-2 devices that may not have a
  mailbox and in general maintain a minimal core context structure, make mailbox
  functionality a super-set of 'struct cxl_dev_state' with 'struct
  cxl_memdev_state'.

  With this reorganization it allows for CXL devices that support HDM decoder
  mapping, but not other general-expander / Type-3 capabilities, to only enable
  that subset without the rest of the mailbox infrastructure coming along for the
  ride.

  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/168679260240.3436160.15520641540463704524.stgit@dwillia2-xfh.jf.intel.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

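  A hedged sketch of the "super-set" layering described in this and the previous
  entry (member selection is abbreviated for illustration and the helper shape is
  an assumption; the real definitions live in drivers/cxl/cxlmem.h):

      /* Minimal core context usable by accelerator (Type-2) endpoints. */
      struct cxl_dev_state {
              struct device *dev;
              enum cxl_devtype type;  /* CXL_DEVTYPE_DEVMEM or CXL_DEVTYPE_CLASSMEM */
              resource_size_t component_reg_phys;
      };

      /* Mailbox-capable (class-code compliant) state embeds the core state. */
      struct cxl_memdev_state {
              struct cxl_dev_state cxlds;
              size_t payload_size;
              struct mutex mbox_mutex;
      };

      /* Type-safe step from the generic core context to the mailbox super-set. */
      static inline struct cxl_memdev_state *
      to_cxl_memdev_state(struct cxl_dev_state *cxlds)
      {
              if (cxlds->type != CXL_DEVTYPE_CLASSMEM)
                      return NULL;
              return container_of(cxlds, struct cxl_memdev_state, cxlds);
      }
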
| * | cxl: Remove leftover attribute documentation in 'struct cxl_dev_state' (Dan Williams, 2023-06-25, 1 file, -1/+0)

  commit 14d788740774 ("cxl/mem: Consolidate CXL DVSEC Range enumeration in the core")

  ...removed @info from 'struct cxl_dev_state', but neglected to remove the
  corresponding kernel-doc entry. Complete the removal.

  Reported-by: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
  Closes: http://lore.kernel.org/r/20230606121054.000069e1@Huawei.com
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/168679259703.3436160.12583306507362357946.stgit@dwillia2-xfh.jf.intel.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * | cxl: Fix kernel-doc warnings (Dan Williams, 2023-06-25, 1 file, -3/+3)

  After Jonathan noticed [1] that 'struct cxl_dev_state' had a kernel-doc entry
  without a corresponding struct attribute I ran the kernel-doc script to see
  what else might be broken. Fix these warnings:

  drivers/cxl/cxlmem.h:199: warning: This comment starts with '/**', but isn't a kernel-doc comment. Refer Documentation/doc-guide/kernel-doc.rst * Event Interrupt Policy
  drivers/cxl/cxlmem.h:224: warning: Function parameter or member 'buf' not described in 'cxl_event_state'
  drivers/cxl/cxlmem.h:224: warning: Function parameter or member 'log_lock' not described in 'cxl_event_state'

  Note that scripts/kernel-doc only finds missing kernel-doc entries. It does not
  warn on too many kernel-doc entries, i.e. it did not catch the fact that @info
  refers to a not present member.

  Link: http://lore.kernel.org/r/20230606121054.000069e1@Huawei.com [1]
  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/168679259170.3436160.3686460404739136336.stgit@dwillia2-xfh.jf.intel.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>

| * | cxl/regs: Clarify when a 'struct cxl_register_map' is input vs output (Dan Williams, 2023-06-25, 2 files, -6/+6)

  The @map parameter to cxl_probe_X_registers() is filled in with the mapping
  parameters of the register block. The @map parameter to cxl_map_X_registers()
  only reads that information to perform the mapping. Mark @map const for
  cxl_map_X_registers() to clarify that it is only an input to those helpers.

  Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Reviewed-by: Dave Jiang <dave.jiang@intel.com>
  Link: https://lore.kernel.org/r/168679258103.3436160.4941603739448763855.stgit@dwillia2-xfh.jf.intel.com
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>