author     Michael Kelley <mikelley@microsoft.com>    2021-03-02 13:38:18 -0800
committer  Wei Liu <wei.liu@kernel.org>               2021-03-08 17:33:00 +0000
commit     d608715d4771cf2d63de07a5d7b026b6f52a70a5
tree       73a567821ae3e494a4df3c3f08deffb6b3d8afdc /arch
parent     946f4b8680b8ad177f6489e023a1d95e82d502e2
Drivers: hv: vmbus: Move handling of VMbus interrupts
VMbus interrupts are most naturally modelled as per-cpu IRQs. But
because x86/x64 doesn't have per-cpu IRQs, the core VMbus interrupt
handling machinery is implemented in code under arch/x86, and Linux
IRQs are not used. Adding support for ARM64 would mean adding equivalent
code, using per-cpu IRQs, under arch/arm64.
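For illustration, this is roughly what the standard Linux per-cpu IRQ
pattern looks like with the generic request_percpu_irq()/enable_percpu_irq()
API. This is a minimal sketch only; the example_* names and the device
name string are assumed, not taken from the VMbus driver:

#include <linux/errno.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/percpu.h>

static long __percpu *example_vmbus_evt;	/* per-cpu cookie, name assumed */

static irqreturn_t example_vmbus_percpu_isr(int irq, void *dev_id)
{
	/* Runs on whichever CPU the hypervisor interrupted. */
	return IRQ_HANDLED;
}

static int example_setup_vmbus_irq(int irq)
{
	int ret;

	example_vmbus_evt = alloc_percpu(long);
	if (!example_vmbus_evt)
		return -ENOMEM;

	/* A single request covers every CPU for a per-cpu IRQ. */
	ret = request_percpu_irq(irq, example_vmbus_percpu_isr,
				 "Hyper-V VMbus", example_vmbus_evt);
	if (ret) {
		free_percpu(example_vmbus_evt);
		return ret;
	}

	/* Enables only the calling CPU's copy; other CPUs enable their own. */
	enable_percpu_irq(irq, IRQ_TYPE_NONE);
	return 0;
}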
A better model is to treat per-cpu IRQs as the normal path (which they
are for modern architectures) and the x86/x64 path as the exception. Do this
by incorporating standard Linux per-cpu IRQ allocation into the main VMbus
driver and bypassing it in the x86/x64 exception case (see the sketch
below). For x86/x64, special-case code is retained under arch/x86, but no
VMbus interrupt handling code is needed under arch/arm64.
No functional change.
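A rough sketch of the split described above, simplified and not the literal
drivers/hv code: hv_setup_vmbus_handler() is the arch hook renamed by this
patch (see the diff below), while the -1 sentinel and the example_* names
are assumed for illustration.

#include <asm/mshyperv.h>	/* hv_setup_vmbus_handler() on x86/x64 */

void example_vmbus_isr(void);		/* assumed: arch-independent ISR body */
int example_setup_vmbus_irq(int irq);	/* assumed: the per-cpu IRQ sketch above */

void example_connect_vmbus_interrupt(int vmbus_irq)
{
	if (vmbus_irq == -1) {
		/* x86/x64 exception: hard-coded callback vector, no Linux IRQ. */
		hv_setup_vmbus_handler(example_vmbus_isr);
	} else {
		/* Normal path (e.g. ARM64): a standard per-cpu Linux IRQ. */
		example_setup_vmbus_irq(vmbus_irq);
	}
}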
Signed-off-by: Michael Kelley <mikelley@microsoft.com>
Reviewed-by: Boqun Feng <boqun.feng@gmail.com>
Link: https://lore.kernel.org/r/1614721102-2241-7-git-send-email-mikelley@microsoft.com
Signed-off-by: Wei Liu <wei.liu@kernel.org>
Diffstat (limited to 'arch')
-rw-r--r--   arch/x86/include/asm/mshyperv.h |  1 -
-rw-r--r--   arch/x86/kernel/cpu/mshyperv.c  | 13 ++++---------
2 files changed, 4 insertions(+), 10 deletions(-)
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index a6c608df0217..c10dd1c9ed81 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -32,7 +32,6 @@ static inline u64 hv_get_register(unsigned int reg)
 #define hv_enable_vdso_clocksource() \
 	vclocks_set_used(VDSO_CLOCKMODE_HVCLOCK);
 #define hv_get_raw_timer() rdtsc_ordered()
-#define hv_get_vector() HYPERVISOR_CALLBACK_VECTOR
 
 /*
  * Reference to pv_ops must be inline so objtool
diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index e88bc296afca..41fd84a88783 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -60,23 +60,18 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_hyperv_callback)
 	set_irq_regs(old_regs);
 }
 
-int hv_setup_vmbus_irq(int irq, void (*handler)(void))
+void hv_setup_vmbus_handler(void (*handler)(void))
 {
-	/*
-	 * The 'irq' argument is ignored on x86/x64 because a hard-coded
-	 * interrupt vector is used for Hyper-V interrupts.
-	 */
 	vmbus_handler = handler;
-	return 0;
 }
+EXPORT_SYMBOL_GPL(hv_setup_vmbus_handler);
 
-void hv_remove_vmbus_irq(void)
+void hv_remove_vmbus_handler(void)
 {
 	/* We have no way to deallocate the interrupt gate */
 	vmbus_handler = NULL;
 }
-EXPORT_SYMBOL_GPL(hv_setup_vmbus_irq);
-EXPORT_SYMBOL_GPL(hv_remove_vmbus_irq);
+EXPORT_SYMBOL_GPL(hv_remove_vmbus_handler);
 
 /*
  * Routines to do per-architecture handling of stimer0