# SPDX-License-Identifier: GPL-2.0-only
menuconfig CXL_BUS
	tristate "CXL (Compute Express Link) Devices Support"
	depends on PCI
	select FW_LOADER
	select FW_UPLOAD
	select PCI_DOE
	help
	  CXL is a bus that is electrically compatible with PCI Express, but
	  layers three protocols on that signalling (CXL.io, CXL.cache, and
	  CXL.mem). The CXL.cache protocol allows devices to hold cachelines
	  locally, the CXL.mem protocol allows devices to be fully coherent
	  memory targets, and the CXL.io protocol is equivalent to PCI Express.

	  Say 'y' to enable support for the configuration and management of
	  devices supporting these protocols.
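# Illustrative example (not part of the upstream Kconfig): a .config fragment
# that builds the core CXL stack as modules. Exact selections depend on the
# platform and kernel version; treat this as a sketch, not a recommendation.
#
#   CONFIG_CXL_BUS=m
#   CONFIG_CXL_PCI=m
#   CONFIG_CXL_ACPI=m
#   CONFIG_CXL_MEM=m
#   CONFIG_CXL_PMEM=m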
if CXL_BUS

config CXL_PCI
	tristate "PCI manageability"
	default CXL_BUS
	help
	  The CXL specification defines a "CXL memory device" sub-class in the
	  PCI "memory controller" base class of devices. Devices identified by
	  this class code provide support for volatile and / or persistent
	  memory to be mapped into the system address map (Host-managed Device
	  Memory (HDM)).

	  Say 'y/m' to enable a driver that will attach to CXL memory expander
	  devices enumerated by the memory device class code for configuration
	  and management, primarily via the mailbox interface. See Chapter 2.3
	  Type 3 CXL Device in the CXL 2.0 specification for more details.

	  If unsure say 'm'.

config CXL_MEM_RAW_COMMANDS
	bool "RAW Command Interface for Memory Devices"
	depends on CXL_PCI
	help
	  Enable the CXL RAW command interface.

	  The CXL driver ioctl interface may assign a kernel ioctl command
	  number for each specification-defined opcode. At any given point in
	  time, the number of opcodes that the specification defines, and that
	  a device may implement, may exceed the kernel's set of associated
	  ioctl function numbers. The mismatch is either by omission (the
	  specification is newer than the driver) or by design. When
	  prototyping new hardware, or developing / debugging the driver, it
	  is useful to be able to submit any possible command to the hardware,
	  even commands that may crash the kernel due to their potential
	  impact on memory currently in use by the kernel.

	  If developing CXL hardware or the driver say Y, otherwise say N.
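# Illustrative example (not part of the upstream Kconfig): this option is a
# bool gating a debug-only ioctl path, so it would typically appear only in a
# development kernel's .config, e.g.:
#
#   CONFIG_CXL_PCI=m
#   CONFIG_CXL_MEM_RAW_COMMANDS=y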
config CXL_ACPI
	tristate "CXL ACPI: Platform Support"
	depends on ACPI
	default CXL_BUS
	select ACPI_TABLE_LIB
	help
	  Enable support for host managed device memory (HDM) resources
	  published by a platform's ACPI CXL memory layout description. See
	  Chapter 9.14.1 CXL Early Discovery Table (CEDT) in the CXL 2.0
	  specification, and CXL Fixed Memory Window Structures (CEDT.CFMWS)
	  (https://www.computeexpresslink.org/spec-landing). The CXL core
	  consumes these resources to publish the root of a cxl_port decode
	  hierarchy to map regions that represent System RAM, or Persistent
	  Memory regions to be managed by LIBNVDIMM.

	  If unsure say 'm'.
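# Illustrative example (not part of the upstream Kconfig): CEDT-based
# enumeration needs ACPI support in addition to the CXL core, so a platform
# kernel might carry, e.g.:
#
#   CONFIG_ACPI=y
#   CONFIG_CXL_ACPI=m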
config CXL_PMEM
	tristate "CXL PMEM: Persistent Memory Support"
	depends on LIBNVDIMM
	default CXL_BUS
	help
	  In addition to typical memory resources a platform may also advertise
	  support for persistent memory attached via CXL. This support is
	  managed via a bridge driver from CXL to the LIBNVDIMM subsystem.

	  Say 'y/m' to enable support for enumerating and provisioning the
	  persistent memory capacity of CXL memory expanders.

	  If unsure say 'm'.
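# Illustrative example (not part of the upstream Kconfig): CXL_PMEM bridges
# into LIBNVDIMM, so persistent memory provisioning needs both subsystems
# enabled, e.g.:
#
#   CONFIG_LIBNVDIMM=m
#   CONFIG_CXL_PMEM=m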
config CXL_MEM
	tristate "CXL: Memory Expansion"
	depends on CXL_PCI
	default CXL_BUS
	help
	  The CXL.mem protocol allows a device to act as a provider of "System
	  RAM" and/or "Persistent Memory" that is fully coherent as if the
	  memory were attached to the typical CPU memory controller. This is
	  known as Host-managed Device Memory (HDM).

	  Say 'y/m' to enable a driver that will attach to CXL.mem devices for
	  memory expansion and control of HDM. See Chapter 9.13 in the CXL 2.0
	  specification for a detailed description of HDM.

	  If unsure say 'm'.

config CXL_PORT
	default CXL_BUS
	tristate

config CXL_SUSPEND
	def_bool y
	depends on SUSPEND && CXL_MEM

config CXL_REGION
	bool "CXL: Region Support"
	default CXL_BUS
	# For MAX_PHYSMEM_BITS
	depends on SPARSEMEM
	select MEMREGION
	select GET_FREE_REGION
	help
	  Enable the CXL core to enumerate and provision CXL regions. A CXL
	  region is defined by one or more CXL expanders that decode a given
	  system-physical address range. For CXL regions established by
	  platform firmware, this option enables memory error handling to
	  identify the devices participating in a given interleaved memory
	  range. Otherwise, platform-firmware-managed CXL is enabled by being
	  placed in the system address map and does not need a driver.

	  If unsure say 'y'.
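# Illustrative example (not part of the upstream Kconfig): region support is
# a bool and relies on SPARSEMEM for MAX_PHYSMEM_BITS, so an enabling
# .config might contain, e.g.:
#
#   CONFIG_SPARSEMEM=y
#   CONFIG_CXL_REGION=y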
config CXL_REGION_INVALIDATION_TEST
	bool "CXL: Region Cache Management Bypass (TEST)"
	depends on CXL_REGION
	help
	  CXL region management and security operations potentially invalidate
	  the content of CPU caches without notifying those caches to
	  invalidate the affected cachelines. The CXL region driver attempts
	  to invalidate caches when those events occur. If that invalidation
	  fails, the region will fail to enable. Cache invalidation can fail
	  when the CPU does not provide a suitable invalidation mechanism; for
	  example, usage of wbinvd is restricted to bare metal x86. However,
	  for testing purposes, toggling this option disables that data
	  integrity safeguard and proceeds with enabling regions even when
	  there might be conflicting contents in the CPU cache.

	  If unsure, or if this kernel is meant for production environments,
	  say N.

config CXL_PMU
	tristate "CXL Performance Monitoring Unit"
	default CXL_BUS
	depends on PERF_EVENTS
	help
	  Support performance monitoring as defined in CXL rev 3.0
	  section 13.2: Performance Monitoring. CXL components may have
	  one or more CXL Performance Monitoring Units (CPMUs).

	  Say 'y/m' to enable a driver that will attach to performance
	  monitoring units and provide standard perf-based interfaces.

	  If unsure say 'm'.
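# Illustrative example (not part of the upstream Kconfig): the CPMU driver
# builds on the core perf infrastructure, so a typical .config would pair,
# e.g.:
#
#   CONFIG_PERF_EVENTS=y
#   CONFIG_CXL_PMU=m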
endif