path: root/arch/i386/pci/pci.h
author	Rajesh Shah <rajesh.shah@intel.com>	2006-05-03 15:27:47 -0700
committer	Greg Kroah-Hartman <gregkh@suse.de>	2006-06-21 11:59:59 -0700
commit	53e4d30dd666d7f83598957ee4a415eefb47c9a6 (patch)
tree	3fb71e7d79e6290ea7758ae45d297912a4407ae9 /arch/i386/pci/pci.h
parent	9c273b95808c270149e9be9e172e4ef19f5d5c98 (diff)
[PATCH] PCI: i386/x86-64: disable PCI resource decode on device disable
When a PCI device is disabled via pci_disable_device(), it's still left decoding its BAR resource ranges even though its driver will have likely released those regions (and may even have unloaded). pci_enable_device() already explicitly enables BAR resource decode for the device being enabled. This patch disables resource decode for the PCI device being disabled, making it symmetric with the enable call.

I saw this while doing something else, not because of a problem report. Still, it seems to be the correct thing to do.

Signed-off-by: Rajesh Shah <rajesh.shah@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Diffstat (limited to 'arch/i386/pci/pci.h')
-rw-r--r--	arch/i386/pci/pci.h	1
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/arch/i386/pci/pci.h b/arch/i386/pci/pci.h
index 12035e29108b..12bf3d8dda29 100644
--- a/arch/i386/pci/pci.h
+++ b/arch/i386/pci/pci.h
@@ -35,6 +35,7 @@ extern unsigned int pcibios_max_latency;
 
 void pcibios_resource_survey(void);
 int pcibios_enable_resources(struct pci_dev *, int);
+void pcibios_disable_resources(struct pci_dev *);
 
 /* pci-pc.c */