Diffstat (limited to 'Documentation')
75 files changed, 4321 insertions(+), 1159 deletions(-)
diff --git a/Documentation/ABI/obsolete/dv1394 b/Documentation/ABI/obsolete/dv1394 deleted file mode 100644 index 2ee36864ca10..000000000000 --- a/Documentation/ABI/obsolete/dv1394 +++ /dev/null @@ -1,9 +0,0 @@ -What: dv1394 (a.k.a. "OHCI-DV I/O support" for FireWire) -Contact: linux1394-devel@lists.sourceforge.net -Description: - New application development should use raw1394 + userspace libraries - instead, notably libiec61883 which is functionally equivalent. - -Users: - ffmpeg/libavformat (used by a variety of media players) - dvgrab v1.x (replaced by dvgrab2 on top of raw1394 and resp. libraries) diff --git a/Documentation/ABI/removed/dv1394 b/Documentation/ABI/removed/dv1394 new file mode 100644 index 000000000000..c2310b6676f4 --- /dev/null +++ b/Documentation/ABI/removed/dv1394 @@ -0,0 +1,14 @@ +What: dv1394 (a.k.a. "OHCI-DV I/O support" for FireWire) +Date: May 2010 (scheduled), finally removed in kernel v2.6.37 +Contact: linux1394-devel@lists.sourceforge.net +Description: + /dev/dv1394/* were character device files, one for each FireWire + controller and for NTSC and PAL respectively, from which DV data + could be received by read() or transmitted by write(). A few + ioctl()s allowed limited control. + This special-purpose interface has been superseded by libraw1394 + + libiec61883 which are functionally equivalent, support HDV, and + transparently work on top of the newer firewire kernel drivers. + +Users: + ffmpeg/libavformat (if configured for DV1394) diff --git a/Documentation/ABI/removed/raw1394 b/Documentation/ABI/removed/raw1394 new file mode 100644 index 000000000000..490aa1efc4ae --- /dev/null +++ b/Documentation/ABI/removed/raw1394 @@ -0,0 +1,15 @@ +What: raw1394 (a.k.a. "Raw IEEE1394 I/O support" for FireWire) +Date: May 2010 (scheduled), finally removed in kernel v2.6.37 +Contact: linux1394-devel@lists.sourceforge.net +Description: + /dev/raw1394 was a character device file that allowed low-level + access to FireWire buses. Its major drawbacks were its inability + to implement sensible device security policies, and its low level + of abstraction that required userspace clients do duplicate much + of the kernel's ieee1394 core functionality. + Replaced by /dev/fw*, i.e. the <linux/firewire-cdev.h> ABI of + firewire-core. + +Users: + libraw1394 (works with firewire-cdev too, transparent to library ABI + users) diff --git a/Documentation/ABI/removed/raw1394_legacy_isochronous b/Documentation/ABI/removed/raw1394_legacy_isochronous deleted file mode 100644 index 1b629622d883..000000000000 --- a/Documentation/ABI/removed/raw1394_legacy_isochronous +++ /dev/null @@ -1,16 +0,0 @@ -What: legacy isochronous ABI of raw1394 (1st generation iso ABI) -Date: June 2007 (scheduled), removed in kernel v2.6.23 -Contact: linux1394-devel@lists.sourceforge.net -Description: - The two request types RAW1394_REQ_ISO_SEND, RAW1394_REQ_ISO_LISTEN have - been deprecated for quite some time. They are very inefficient as they - come with high interrupt load and several layers of callbacks for each - packet. Because of these deficiencies, the video1394 and dv1394 drivers - and the 3rd-generation isochronous ABI in raw1394 (rawiso) were created. 
- -Users: - libraw1394 users via the long deprecated API raw1394_iso_write, - raw1394_start_iso_write, raw1394_start_iso_rcv, raw1394_stop_iso_rcv - - libdc1394, which optionally uses these old libraw1394 calls - alternatively to the more efficient video1394 ABI diff --git a/Documentation/ABI/removed/video1394 b/Documentation/ABI/removed/video1394 new file mode 100644 index 000000000000..c39c25aee77b --- /dev/null +++ b/Documentation/ABI/removed/video1394 @@ -0,0 +1,16 @@ +What: video1394 (a.k.a. "OHCI-1394 Video support" for FireWire) +Date: May 2010 (scheduled), finally removed in kernel v2.6.37 +Contact: linux1394-devel@lists.sourceforge.net +Description: + /dev/video1394/* were character device files, one for each FireWire + controller, which were used for isochronous I/O. It was added as an + alternative to raw1394's isochronous I/O functionality which had + performance issues in its first generation. Any video1394 user had + to use raw1394 + libraw1394 too because video1394 did not provide + asynchronous I/O for device discovery and configuration. + Replaced by /dev/fw*, i.e. the <linux/firewire-cdev.h> ABI of + firewire-core. + +Users: + libdc1394 (works with firewire-cdev too, transparent to library ABI + users) diff --git a/Documentation/ABI/testing/sysfs-ata b/Documentation/ABI/testing/sysfs-ata new file mode 100644 index 000000000000..0a932155cbba --- /dev/null +++ b/Documentation/ABI/testing/sysfs-ata @@ -0,0 +1,99 @@ +What: /sys/class/ata_... +Date: August 2008 +Contact: Gwendal Grignou<gwendal@google.com> +Description: + +Provide a place in sysfs for storing the ATA topology of the system. This allows +retrieving various information about ATA objects. + +Files under /sys/class/ata_port +------------------------------- + + For each port, a directory ataX is created where X is the ata_port_id of + the port. The device parent is the ata host device. + +idle_irq (read) + + Number of IRQ received by the port while idle [some ata HBA only]. + +nr_pmp_links (read) + + If a SATA Port Multiplier (PM) is connected, number of link behind it. + +Files under /sys/class/ata_link +------------------------------- + + Behind each port, there is a ata_link. If there is a SATA PM in the + topology, 15 ata_link objects are created. + + If a link is behind a port, the directory name is linkX, where X is + ata_port_id of the port. + If a link is behind a PM, its name is linkX.Y where X is ata_port_id + of the parent port and Y the PM port. + +hw_sata_spd_limit + + Maximum speed supported by the connected SATA device. + +sata_spd_limit + + Maximum speed imposed by libata. + +sata_spd + + Current speed of the link [1.5, 3Gps,...]. + +Files under /sys/class/ata_device +--------------------------------- + + Behind each link, up to two ata device are created. + The name of the directory is devX[.Y].Z where: + - X is ata_port_id of the port where the device is connected, + - Y the port of the PM if any, and + - Z the device id: for PATA, there is usually 2 devices [0,1], + only 1 for SATA. + +class + Device class. Can be "ata" for disk, "atapi" for packet device, + "pmp" for PM, or "none" if no device was found behind the link. + +dma_mode + + Transfer modes supported by the device when in DMA mode. + Mostly used by PATA device. + +pio_mode + + Transfer modes supported by the device when in PIO mode. + Mostly used by PATA device. + +xfer_mode + + Current transfer mode. + +id + + Cached result of IDENTIFY command, as described in ATA8 7.16 and 7.17. + Only valid if the device is not a PM. 
+ +gscr + + Cached result of the dump of PM GSCR register. + Valid registers are: + 0: SATA_PMP_GSCR_PROD_ID, + 1: SATA_PMP_GSCR_REV, + 2: SATA_PMP_GSCR_PORT_INFO, + 32: SATA_PMP_GSCR_ERROR, + 33: SATA_PMP_GSCR_ERROR_EN, + 64: SATA_PMP_GSCR_FEAT, + 96: SATA_PMP_GSCR_FEAT_EN, + 130: SATA_PMP_GSCR_SII_GPIO + Only valid if the device is a PM. + +spdn_cnt + + Number of time libata decided to lower the speed of link due to errors. + +ering + + Formatted output of the error ring of the device. diff --git a/Documentation/ABI/testing/sysfs-devices-power b/Documentation/ABI/testing/sysfs-devices-power index 6123c523bfd7..7628cd1bc36a 100644 --- a/Documentation/ABI/testing/sysfs-devices-power +++ b/Documentation/ABI/testing/sysfs-devices-power @@ -77,3 +77,91 @@ Description: devices this attribute is set to "enabled" by bus type code or device drivers and in that cases it should be safe to leave the default value. + +What: /sys/devices/.../power/wakeup_count +Date: September 2010 +Contact: Rafael J. Wysocki <rjw@sisk.pl> +Description: + The /sys/devices/.../wakeup_count attribute contains the number + of signaled wakeup events associated with the device. This + attribute is read-only. If the device is not enabled to wake up + the system from sleep states, this attribute is empty. + +What: /sys/devices/.../power/wakeup_active_count +Date: September 2010 +Contact: Rafael J. Wysocki <rjw@sisk.pl> +Description: + The /sys/devices/.../wakeup_active_count attribute contains the + number of times the processing of wakeup events associated with + the device was completed (at the kernel level). This attribute + is read-only. If the device is not enabled to wake up the + system from sleep states, this attribute is empty. + +What: /sys/devices/.../power/wakeup_hit_count +Date: September 2010 +Contact: Rafael J. Wysocki <rjw@sisk.pl> +Description: + The /sys/devices/.../wakeup_hit_count attribute contains the + number of times the processing of a wakeup event associated with + the device might prevent the system from entering a sleep state. + This attribute is read-only. If the device is not enabled to + wake up the system from sleep states, this attribute is empty. + +What: /sys/devices/.../power/wakeup_active +Date: September 2010 +Contact: Rafael J. Wysocki <rjw@sisk.pl> +Description: + The /sys/devices/.../wakeup_active attribute contains either 1, + or 0, depending on whether or not a wakeup event associated with + the device is being processed (1). This attribute is read-only. + If the device is not enabled to wake up the system from sleep + states, this attribute is empty. + +What: /sys/devices/.../power/wakeup_total_time_ms +Date: September 2010 +Contact: Rafael J. Wysocki <rjw@sisk.pl> +Description: + The /sys/devices/.../wakeup_total_time_ms attribute contains + the total time of processing wakeup events associated with the + device, in milliseconds. This attribute is read-only. If the + device is not enabled to wake up the system from sleep states, + this attribute is empty. + +What: /sys/devices/.../power/wakeup_max_time_ms +Date: September 2010 +Contact: Rafael J. Wysocki <rjw@sisk.pl> +Description: + The /sys/devices/.../wakeup_max_time_ms attribute contains + the maximum time of processing a single wakeup event associated + with the device, in milliseconds. This attribute is read-only. + If the device is not enabled to wake up the system from sleep + states, this attribute is empty. + +What: /sys/devices/.../power/wakeup_last_time_ms +Date: September 2010 +Contact: Rafael J. 
Wysocki <rjw@sisk.pl> +Description: + The /sys/devices/.../wakeup_last_time_ms attribute contains + the value of the monotonic clock corresponding to the time of + signaling the last wakeup event associated with the device, in + milliseconds. This attribute is read-only. If the device is + not enabled to wake up the system from sleep states, this + attribute is empty. + +What: /sys/devices/.../power/autosuspend_delay_ms +Date: September 2010 +Contact: Alan Stern <stern@rowland.harvard.edu> +Description: + The /sys/devices/.../power/autosuspend_delay_ms attribute + contains the autosuspend delay value (in milliseconds). Some + drivers do not want their device to suspend as soon as it + becomes idle at run time; they want the device to remain + inactive for a certain minimum period of time first. That + period is called the autosuspend delay. Negative values will + prevent the device from being suspended at run time (similar + to writing "on" to the power/control attribute). Values >= + 1000 will cause the autosuspend timer expiration to be rounded + up to the nearest second. + + Not all drivers support this attribute. If it isn't supported, + attempts to read or write it will yield I/O errors. diff --git a/Documentation/ABI/testing/sysfs-devices-system-ibm-rtl b/Documentation/ABI/testing/sysfs-devices-system-ibm-rtl new file mode 100644 index 000000000000..b82deeaec314 --- /dev/null +++ b/Documentation/ABI/testing/sysfs-devices-system-ibm-rtl @@ -0,0 +1,22 @@ +What: state +Date: Sep 2010 +KernelVersion: 2.6.37 +Contact: Vernon Mauery <vernux@us.ibm.com> +Description: The state file allows a means by which to change in and + out of Premium Real-Time Mode (PRTM), as well as the + ability to query the current state. + 0 => PRTM off + 1 => PRTM enabled +Users: The ibm-prtm userspace daemon uses this interface. + + +What: version +Date: Sep 2010 +KernelVersion: 2.6.37 +Contact: Vernon Mauery <vernux@us.ibm.com> +Description: The version file provides a means by which to query + the RTL table version that lives in the Extended + BIOS Data Area (EBDA). +Users: The ibm-prtm userspace daemon uses this interface. + + diff --git a/Documentation/ABI/testing/sysfs-driver-hid-roccat-pyra b/Documentation/ABI/testing/sysfs-driver-hid-roccat-pyra new file mode 100644 index 000000000000..ad1125b02ff4 --- /dev/null +++ b/Documentation/ABI/testing/sysfs-driver-hid-roccat-pyra @@ -0,0 +1,98 @@ +What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/actual_cpi +Date: August 2010 +Contact: Stefan Achatz <erazor_de@users.sourceforge.net> +Description: It is possible to switch the cpi setting of the mouse with the + press of a button. + When read, this file returns the raw number of the actual cpi + setting reported by the mouse. This number has to be further + processed to receive the real dpi value. + + VALUE DPI + 1 400 + 2 800 + 4 1600 + + This file is readonly. + +What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/actual_profile +Date: August 2010 +Contact: Stefan Achatz <erazor_de@users.sourceforge.net> +Description: When read, this file returns the number of the actual profile in + range 0-4. + This file is readonly. + +What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/firmware_version +Date: August 2010 +Contact: Stefan Achatz <erazor_de@users.sourceforge.net> +Description: When read, this file returns the raw integer version number of the + firmware reported by the mouse. Using the integer value eases + further usage in other programs. 
To receive the real version + number the decimal point has to be shifted 2 positions to the + left. E.g. a returned value of 138 means 1.38 + This file is readonly. + +What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/profile_settings +Date: August 2010 +Contact: Stefan Achatz <erazor_de@users.sourceforge.net> +Description: The mouse can store 5 profiles which can be switched by the + press of a button. A profile is split in settings and buttons. + profile_settings holds informations like resolution, sensitivity + and light effects. + When written, this file lets one write the respective profile + settings back to the mouse. The data has to be 13 bytes long. + The mouse will reject invalid data. + Which profile to write is determined by the profile number + contained in the data. + This file is writeonly. + +What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/profile[1-5]_settings +Date: August 2010 +Contact: Stefan Achatz <erazor_de@users.sourceforge.net> +Description: The mouse can store 5 profiles which can be switched by the + press of a button. A profile is split in settings and buttons. + profile_settings holds informations like resolution, sensitivity + and light effects. + When read, these files return the respective profile settings. + The returned data is 13 bytes in size. + This file is readonly. + +What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/profile_buttons +Date: August 2010 +Contact: Stefan Achatz <erazor_de@users.sourceforge.net> +Description: The mouse can store 5 profiles which can be switched by the + press of a button. A profile is split in settings and buttons. + profile_buttons holds informations about button layout. + When written, this file lets one write the respective profile + buttons back to the mouse. The data has to be 19 bytes long. + The mouse will reject invalid data. + Which profile to write is determined by the profile number + contained in the data. + This file is writeonly. + +What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/profile[1-5]_buttons +Date: August 2010 +Contact: Stefan Achatz <erazor_de@users.sourceforge.net> +Description: The mouse can store 5 profiles which can be switched by the + press of a button. A profile is split in settings and buttons. + profile_buttons holds informations about button layout. + When read, these files return the respective profile buttons. + The returned data is 19 bytes in size. + This file is readonly. + +What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/startup_profile +Date: August 2010 +Contact: Stefan Achatz <erazor_de@users.sourceforge.net> +Description: The integer value of this attribute ranges from 0-4. + When read, this attribute returns the number of the profile + that's active when the mouse is powered on. + This file is readonly. + +What: /sys/bus/usb/devices/<busnum>-<devnum>:<config num>.<interface num>/settings +Date: August 2010 +Contact: Stefan Achatz <erazor_de@users.sourceforge.net> +Description: When read, this file returns the settings stored in the mouse. + The size of the data is 3 bytes and holds information on the + startup_profile. + When written, this file lets write settings back to the mouse. + The data has to be 3 bytes long. The mouse will reject invalid + data. 
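
The firmware_version conversion described above can be done in a few lines of userspace C. What follows is only an illustrative sketch, not part of this patch, and the sysfs path is a placeholder: the real <busnum>-<devnum>:<config num>.<interface num> component depends on where the mouse enumerates on the USB bus.

	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		/* Placeholder path; substitute the real bus/device/interface. */
		const char *path = "/sys/bus/usb/devices/1-2:1.0/firmware_version";
		FILE *f = fopen(path, "r");
		int raw;

		if (!f) {
			perror(path);
			return EXIT_FAILURE;
		}
		if (fscanf(f, "%d", &raw) != 1) {
			fclose(f);
			fprintf(stderr, "unexpected data in %s\n", path);
			return EXIT_FAILURE;
		}
		fclose(f);

		/* e.g. a raw value of 138 is reported as firmware 1.38 */
		printf("firmware %d.%02d\n", raw / 100, raw % 100);
		return 0;
	}

The same pattern (open, parse a small integer, close) applies to the other read-only attributes such as actual_cpi and actual_profile.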
diff --git a/Documentation/ABI/testing/sysfs-module b/Documentation/ABI/testing/sysfs-module new file mode 100644 index 000000000000..cfcec3bffc0a --- /dev/null +++ b/Documentation/ABI/testing/sysfs-module @@ -0,0 +1,12 @@ +What: /sys/module/pch_phub/drivers/.../pch_mac +Date: August 2010 +KernelVersion: 2.6.35 +Contact: masa-korg@dsn.okisemi.com +Description: Write/read GbE MAC address. + +What: /sys/module/pch_phub/drivers/.../pch_firmware +Date: August 2010 +KernelVersion: 2.6.35 +Contact: masa-korg@dsn.okisemi.com +Description: Write/read Option ROM data. + diff --git a/Documentation/ABI/testing/sysfs-power b/Documentation/ABI/testing/sysfs-power index 2875f1f74a07..194ca446ac28 100644 --- a/Documentation/ABI/testing/sysfs-power +++ b/Documentation/ABI/testing/sysfs-power @@ -99,9 +99,38 @@ Description: dmesg -s 1000000 | grep 'hash matches' + If you do not get any matches (or they appear to be false + positives), it is possible that the last PM event point + referred to a device created by a loadable kernel module. In + this case cat /sys/power/pm_trace_dev_match (see below) after + your system is started up and the kernel modules are loaded. + CAUTION: Using it will cause your machine's real-time (CMOS) clock to be set to a random invalid time after a resume. +What; /sys/power/pm_trace_dev_match +Date: October 2010 +Contact: James Hogan <james@albanarts.com> +Description: + The /sys/power/pm_trace_dev_match file contains the name of the + device associated with the last PM event point saved in the RTC + across reboots when pm_trace has been used. More precisely it + contains the list of current devices (including those + registered by loadable kernel modules since boot) which match + the device hash in the RTC at boot, with a newline after each + one. + + The advantage of this file over the hash matches printed to the + kernel log (see /sys/power/pm_trace), is that it includes + devices created after boot by loadable kernel modules. + + Due to the small hash size necessary to fit in the RTC, it is + possible that more than one device matches the hash, in which + case further investigation is required to determine which + device is causing the problem. Note that genuine RTC clock + values (such as when pm_trace has not been used), can still + match a device and output it's name here. + What: /sys/power/pm_async Date: January 2009 Contact: Rafael J. Wysocki <rjw@sisk.pl> diff --git a/Documentation/DocBook/80211.tmpl b/Documentation/DocBook/80211.tmpl new file mode 100644 index 000000000000..19a1210c2530 --- /dev/null +++ b/Documentation/DocBook/80211.tmpl @@ -0,0 +1,495 @@ +<?xml version="1.0" encoding="UTF-8"?> +<!DOCTYPE set PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN" + "http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" []> +<set> + <setinfo> + <title>The 802.11 subsystems – for kernel developers</title> + <subtitle> + Explaining wireless 802.11 networking in the Linux kernel + </subtitle> + + <copyright> + <year>2007-2009</year> + <holder>Johannes Berg</holder> + </copyright> + + <authorgroup> + <author> + <firstname>Johannes</firstname> + <surname>Berg</surname> + <affiliation> + <address><email>johannes@sipsolutions.net</email></address> + </affiliation> + </author> + </authorgroup> + + <legalnotice> + <para> + This documentation is free software; you can redistribute + it and/or modify it under the terms of the GNU General Public + License version 2 as published by the Free Software Foundation. 
+ </para> + <para> + This documentation is distributed in the hope that it will be + useful, but WITHOUT ANY WARRANTY; without even the implied + warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + See the GNU General Public License for more details. + </para> + <para> + You should have received a copy of the GNU General Public + License along with this documentation; if not, write to the Free + Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, + MA 02111-1307 USA + </para> + <para> + For more details see the file COPYING in the source + distribution of Linux. + </para> + </legalnotice> + + <abstract> + <para> + These books attempt to give a description of the + various subsystems that play a role in 802.11 wireless + networking in Linux. Since these books are for kernel + developers they attempts to document the structures + and functions used in the kernel as well as giving a + higher-level overview. + </para> + <para> + The reader is expected to be familiar with the 802.11 + standard as published by the IEEE in 802.11-2007 (or + possibly later versions). References to this standard + will be given as "802.11-2007 8.1.5". + </para> + </abstract> + </setinfo> + <book id="cfg80211-developers-guide"> + <bookinfo> + <title>The cfg80211 subsystem</title> + + <abstract> +!Pinclude/net/cfg80211.h Introduction + </abstract> + </bookinfo> + <chapter> + <title>Device registration</title> +!Pinclude/net/cfg80211.h Device registration +!Finclude/net/cfg80211.h ieee80211_band +!Finclude/net/cfg80211.h ieee80211_channel_flags +!Finclude/net/cfg80211.h ieee80211_channel +!Finclude/net/cfg80211.h ieee80211_rate_flags +!Finclude/net/cfg80211.h ieee80211_rate +!Finclude/net/cfg80211.h ieee80211_sta_ht_cap +!Finclude/net/cfg80211.h ieee80211_supported_band +!Finclude/net/cfg80211.h cfg80211_signal_type +!Finclude/net/cfg80211.h wiphy_params_flags +!Finclude/net/cfg80211.h wiphy_flags +!Finclude/net/cfg80211.h wiphy +!Finclude/net/cfg80211.h wireless_dev +!Finclude/net/cfg80211.h wiphy_new +!Finclude/net/cfg80211.h wiphy_register +!Finclude/net/cfg80211.h wiphy_unregister +!Finclude/net/cfg80211.h wiphy_free + +!Finclude/net/cfg80211.h wiphy_name +!Finclude/net/cfg80211.h wiphy_dev +!Finclude/net/cfg80211.h wiphy_priv +!Finclude/net/cfg80211.h priv_to_wiphy +!Finclude/net/cfg80211.h set_wiphy_dev +!Finclude/net/cfg80211.h wdev_priv + </chapter> + <chapter> + <title>Actions and configuration</title> +!Pinclude/net/cfg80211.h Actions and configuration +!Finclude/net/cfg80211.h cfg80211_ops +!Finclude/net/cfg80211.h vif_params +!Finclude/net/cfg80211.h key_params +!Finclude/net/cfg80211.h survey_info_flags +!Finclude/net/cfg80211.h survey_info +!Finclude/net/cfg80211.h beacon_parameters +!Finclude/net/cfg80211.h plink_actions +!Finclude/net/cfg80211.h station_parameters +!Finclude/net/cfg80211.h station_info_flags +!Finclude/net/cfg80211.h rate_info_flags +!Finclude/net/cfg80211.h rate_info +!Finclude/net/cfg80211.h station_info +!Finclude/net/cfg80211.h monitor_flags +!Finclude/net/cfg80211.h mpath_info_flags +!Finclude/net/cfg80211.h mpath_info +!Finclude/net/cfg80211.h bss_parameters +!Finclude/net/cfg80211.h ieee80211_txq_params +!Finclude/net/cfg80211.h cfg80211_crypto_settings +!Finclude/net/cfg80211.h cfg80211_auth_request +!Finclude/net/cfg80211.h cfg80211_assoc_request +!Finclude/net/cfg80211.h cfg80211_deauth_request +!Finclude/net/cfg80211.h cfg80211_disassoc_request +!Finclude/net/cfg80211.h cfg80211_ibss_params +!Finclude/net/cfg80211.h cfg80211_connect_params 
+!Finclude/net/cfg80211.h cfg80211_pmksa +!Finclude/net/cfg80211.h cfg80211_send_rx_auth +!Finclude/net/cfg80211.h cfg80211_send_auth_timeout +!Finclude/net/cfg80211.h __cfg80211_auth_canceled +!Finclude/net/cfg80211.h cfg80211_send_rx_assoc +!Finclude/net/cfg80211.h cfg80211_send_assoc_timeout +!Finclude/net/cfg80211.h cfg80211_send_deauth +!Finclude/net/cfg80211.h __cfg80211_send_deauth +!Finclude/net/cfg80211.h cfg80211_send_disassoc +!Finclude/net/cfg80211.h __cfg80211_send_disassoc +!Finclude/net/cfg80211.h cfg80211_ibss_joined +!Finclude/net/cfg80211.h cfg80211_connect_result +!Finclude/net/cfg80211.h cfg80211_roamed +!Finclude/net/cfg80211.h cfg80211_disconnected +!Finclude/net/cfg80211.h cfg80211_ready_on_channel +!Finclude/net/cfg80211.h cfg80211_remain_on_channel_expired +!Finclude/net/cfg80211.h cfg80211_new_sta +!Finclude/net/cfg80211.h cfg80211_rx_mgmt +!Finclude/net/cfg80211.h cfg80211_mgmt_tx_status +!Finclude/net/cfg80211.h cfg80211_cqm_rssi_notify +!Finclude/net/cfg80211.h cfg80211_michael_mic_failure + </chapter> + <chapter> + <title>Scanning and BSS list handling</title> +!Pinclude/net/cfg80211.h Scanning and BSS list handling +!Finclude/net/cfg80211.h cfg80211_ssid +!Finclude/net/cfg80211.h cfg80211_scan_request +!Finclude/net/cfg80211.h cfg80211_scan_done +!Finclude/net/cfg80211.h cfg80211_bss +!Finclude/net/cfg80211.h cfg80211_inform_bss_frame +!Finclude/net/cfg80211.h cfg80211_inform_bss +!Finclude/net/cfg80211.h cfg80211_unlink_bss +!Finclude/net/cfg80211.h cfg80211_find_ie +!Finclude/net/cfg80211.h ieee80211_bss_get_ie + </chapter> + <chapter> + <title>Utility functions</title> +!Pinclude/net/cfg80211.h Utility functions +!Finclude/net/cfg80211.h ieee80211_channel_to_frequency +!Finclude/net/cfg80211.h ieee80211_frequency_to_channel +!Finclude/net/cfg80211.h ieee80211_get_channel +!Finclude/net/cfg80211.h ieee80211_get_response_rate +!Finclude/net/cfg80211.h ieee80211_hdrlen +!Finclude/net/cfg80211.h ieee80211_get_hdrlen_from_skb +!Finclude/net/cfg80211.h ieee80211_radiotap_iterator + </chapter> + <chapter> + <title>Data path helpers</title> +!Pinclude/net/cfg80211.h Data path helpers +!Finclude/net/cfg80211.h ieee80211_data_to_8023 +!Finclude/net/cfg80211.h ieee80211_data_from_8023 +!Finclude/net/cfg80211.h ieee80211_amsdu_to_8023s +!Finclude/net/cfg80211.h cfg80211_classify8021d + </chapter> + <chapter> + <title>Regulatory enforcement infrastructure</title> +!Pinclude/net/cfg80211.h Regulatory enforcement infrastructure +!Finclude/net/cfg80211.h regulatory_hint +!Finclude/net/cfg80211.h wiphy_apply_custom_regulatory +!Finclude/net/cfg80211.h freq_reg_info + </chapter> + <chapter> + <title>RFkill integration</title> +!Pinclude/net/cfg80211.h RFkill integration +!Finclude/net/cfg80211.h wiphy_rfkill_set_hw_state +!Finclude/net/cfg80211.h wiphy_rfkill_start_polling +!Finclude/net/cfg80211.h wiphy_rfkill_stop_polling + </chapter> + <chapter> + <title>Test mode</title> +!Pinclude/net/cfg80211.h Test mode +!Finclude/net/cfg80211.h cfg80211_testmode_alloc_reply_skb +!Finclude/net/cfg80211.h cfg80211_testmode_reply +!Finclude/net/cfg80211.h cfg80211_testmode_alloc_event_skb +!Finclude/net/cfg80211.h cfg80211_testmode_event + </chapter> + </book> + <book id="mac80211-developers-guide"> + <bookinfo> + <title>The mac80211 subsystem</title> + <abstract> +!Pinclude/net/mac80211.h Introduction +!Pinclude/net/mac80211.h Warning + </abstract> + </bookinfo> + + <toc></toc> + + <!-- + Generally, this document shall be ordered by increasing complexity. 
+ It is important to note that readers should be able to read only + the first few sections to get a working driver and only advanced + usage should require reading the full document. + --> + + <part> + <title>The basic mac80211 driver interface</title> + <partintro> + <para> + You should read and understand the information contained + within this part of the book while implementing a driver. + In some chapters, advanced usage is noted, that may be + skipped at first. + </para> + <para> + This part of the book only covers station and monitor mode + functionality, additional information required to implement + the other modes is covered in the second part of the book. + </para> + </partintro> + + <chapter id="basics"> + <title>Basic hardware handling</title> + <para>TBD</para> + <para> + This chapter shall contain information on getting a hw + struct allocated and registered with mac80211. + </para> + <para> + Since it is required to allocate rates/modes before registering + a hw struct, this chapter shall also contain information on setting + up the rate/mode structs. + </para> + <para> + Additionally, some discussion about the callbacks and + the general programming model should be in here, including + the definition of ieee80211_ops which will be referred to + a lot. + </para> + <para> + Finally, a discussion of hardware capabilities should be done + with references to other parts of the book. + </para> + <!-- intentionally multiple !F lines to get proper order --> +!Finclude/net/mac80211.h ieee80211_hw +!Finclude/net/mac80211.h ieee80211_hw_flags +!Finclude/net/mac80211.h SET_IEEE80211_DEV +!Finclude/net/mac80211.h SET_IEEE80211_PERM_ADDR +!Finclude/net/mac80211.h ieee80211_ops +!Finclude/net/mac80211.h ieee80211_alloc_hw +!Finclude/net/mac80211.h ieee80211_register_hw +!Finclude/net/mac80211.h ieee80211_get_tx_led_name +!Finclude/net/mac80211.h ieee80211_get_rx_led_name +!Finclude/net/mac80211.h ieee80211_get_assoc_led_name +!Finclude/net/mac80211.h ieee80211_get_radio_led_name +!Finclude/net/mac80211.h ieee80211_unregister_hw +!Finclude/net/mac80211.h ieee80211_free_hw + </chapter> + + <chapter id="phy-handling"> + <title>PHY configuration</title> + <para>TBD</para> + <para> + This chapter should describe PHY handling including + start/stop callbacks and the various structures used. + </para> +!Finclude/net/mac80211.h ieee80211_conf +!Finclude/net/mac80211.h ieee80211_conf_flags + </chapter> + + <chapter id="iface-handling"> + <title>Virtual interfaces</title> + <para>TBD</para> + <para> + This chapter should describe virtual interface basics + that are relevant to the driver (VLANs, MGMT etc are not.) + It should explain the use of the add_iface/remove_iface + callbacks as well as the interface configuration callbacks. + </para> + <para>Things related to AP mode should be discussed there.</para> + <para> + Things related to supporting multiple interfaces should be + in the appropriate chapter, a BIG FAT note should be here about + this though and the recommendation to allow only a single + interface in STA mode at first! + </para> +!Finclude/net/mac80211.h ieee80211_vif + </chapter> + + <chapter id="rx-tx"> + <title>Receive and transmit processing</title> + <sect1> + <title>what should be here</title> + <para>TBD</para> + <para> + This should describe the receive and transmit + paths in mac80211/the drivers as well as + transmit status handling. 
+ </para> + </sect1> + <sect1> + <title>Frame format</title> +!Pinclude/net/mac80211.h Frame format + </sect1> + <sect1> + <title>Packet alignment</title> +!Pnet/mac80211/rx.c Packet alignment + </sect1> + <sect1> + <title>Calling into mac80211 from interrupts</title> +!Pinclude/net/mac80211.h Calling mac80211 from interrupts + </sect1> + <sect1> + <title>functions/definitions</title> +!Finclude/net/mac80211.h ieee80211_rx_status +!Finclude/net/mac80211.h mac80211_rx_flags +!Finclude/net/mac80211.h ieee80211_tx_info +!Finclude/net/mac80211.h ieee80211_rx +!Finclude/net/mac80211.h ieee80211_rx_irqsafe +!Finclude/net/mac80211.h ieee80211_tx_status +!Finclude/net/mac80211.h ieee80211_tx_status_irqsafe +!Finclude/net/mac80211.h ieee80211_rts_get +!Finclude/net/mac80211.h ieee80211_rts_duration +!Finclude/net/mac80211.h ieee80211_ctstoself_get +!Finclude/net/mac80211.h ieee80211_ctstoself_duration +!Finclude/net/mac80211.h ieee80211_generic_frame_duration +!Finclude/net/mac80211.h ieee80211_wake_queue +!Finclude/net/mac80211.h ieee80211_stop_queue +!Finclude/net/mac80211.h ieee80211_wake_queues +!Finclude/net/mac80211.h ieee80211_stop_queues + </sect1> + </chapter> + + <chapter id="filters"> + <title>Frame filtering</title> +!Pinclude/net/mac80211.h Frame filtering +!Finclude/net/mac80211.h ieee80211_filter_flags + </chapter> + </part> + + <part id="advanced"> + <title>Advanced driver interface</title> + <partintro> + <para> + Information contained within this part of the book is + of interest only for advanced interaction of mac80211 + with drivers to exploit more hardware capabilities and + improve performance. + </para> + </partintro> + + <chapter id="hardware-crypto-offload"> + <title>Hardware crypto acceleration</title> +!Pinclude/net/mac80211.h Hardware crypto acceleration + <!-- intentionally multiple !F lines to get proper order --> +!Finclude/net/mac80211.h set_key_cmd +!Finclude/net/mac80211.h ieee80211_key_conf +!Finclude/net/mac80211.h ieee80211_key_flags + </chapter> + + <chapter id="powersave"> + <title>Powersave support</title> +!Pinclude/net/mac80211.h Powersave support + </chapter> + + <chapter id="beacon-filter"> + <title>Beacon filter support</title> +!Pinclude/net/mac80211.h Beacon filter support +!Finclude/net/mac80211.h ieee80211_beacon_loss + </chapter> + + <chapter id="qos"> + <title>Multiple queues and QoS support</title> + <para>TBD</para> +!Finclude/net/mac80211.h ieee80211_tx_queue_params + </chapter> + + <chapter id="AP"> + <title>Access point mode support</title> + <para>TBD</para> + <para>Some parts of the if_conf should be discussed here instead</para> + <para> + Insert notes about VLAN interfaces with hw crypto here or + in the hw crypto chapter. + </para> +!Finclude/net/mac80211.h ieee80211_get_buffered_bc +!Finclude/net/mac80211.h ieee80211_beacon_get + </chapter> + + <chapter id="multi-iface"> + <title>Supporting multiple virtual interfaces</title> + <para>TBD</para> + <para> + Note: WDS with identical MAC address should almost always be OK + </para> + <para> + Insert notes about having multiple virtual interfaces with + different MAC addresses here, note which configurations are + supported by mac80211, add notes about supporting hw crypto + with it. 
+ </para> + </chapter> + + <chapter id="hardware-scan-offload"> + <title>Hardware scan offload</title> + <para>TBD</para> +!Finclude/net/mac80211.h ieee80211_scan_completed + </chapter> + </part> + + <part id="rate-control"> + <title>Rate control interface</title> + <partintro> + <para>TBD</para> + <para> + This part of the book describes the rate control algorithm + interface and how it relates to mac80211 and drivers. + </para> + </partintro> + <chapter id="dummy"> + <title>dummy chapter</title> + <para>TBD</para> + </chapter> + </part> + + <part id="internal"> + <title>Internals</title> + <partintro> + <para>TBD</para> + <para> + This part of the book describes mac80211 internals. + </para> + </partintro> + + <chapter id="key-handling"> + <title>Key handling</title> + <sect1> + <title>Key handling basics</title> +!Pnet/mac80211/key.c Key handling basics + </sect1> + <sect1> + <title>MORE TBD</title> + <para>TBD</para> + </sect1> + </chapter> + + <chapter id="rx-processing"> + <title>Receive processing</title> + <para>TBD</para> + </chapter> + + <chapter id="tx-processing"> + <title>Transmit processing</title> + <para>TBD</para> + </chapter> + + <chapter id="sta-info"> + <title>Station info handling</title> + <sect1> + <title>Programming information</title> +!Fnet/mac80211/sta_info.h sta_info +!Fnet/mac80211/sta_info.h ieee80211_sta_info_flags + </sect1> + <sect1> + <title>STA information lifetime rules</title> +!Pnet/mac80211/sta_info.c STA information lifetime rules + </sect1> + </chapter> + + <chapter id="synchronisation"> + <title>Synchronisation</title> + <para>TBD</para> + <para>Locking, lots of RCU</para> + </chapter> + </part> + </book> +</set> diff --git a/Documentation/DocBook/Makefile b/Documentation/DocBook/Makefile index 34929f24c284..8b6e00a71034 100644 --- a/Documentation/DocBook/Makefile +++ b/Documentation/DocBook/Makefile @@ -12,7 +12,7 @@ DOCBOOKS := z8530book.xml mcabook.xml device-drivers.xml \ kernel-api.xml filesystems.xml lsm.xml usb.xml kgdb.xml \ gadget.xml libata.xml mtdnand.xml librs.xml rapidio.xml \ genericirq.xml s390-drivers.xml uio-howto.xml scsi.xml \ - mac80211.xml debugobjects.xml sh.xml regulator.xml \ + 80211.xml debugobjects.xml sh.xml regulator.xml \ alsa-driver-api.xml writing-an-alsa-driver.xml \ tracepoint.xml media.xml drm.xml diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl index ecd35e9d4410..feca0758391e 100644 --- a/Documentation/DocBook/device-drivers.tmpl +++ b/Documentation/DocBook/device-drivers.tmpl @@ -46,7 +46,6 @@ <sect1><title>Atomic and pointer manipulation</title> !Iarch/x86/include/asm/atomic.h -!Iarch/x86/include/asm/unaligned.h </sect1> <sect1><title>Delaying, scheduling, and timer routines</title> diff --git a/Documentation/DocBook/drm.tmpl b/Documentation/DocBook/drm.tmpl index 910c923a9b86..2861055afd7a 100644 --- a/Documentation/DocBook/drm.tmpl +++ b/Documentation/DocBook/drm.tmpl @@ -136,6 +136,7 @@ #ifdef CONFIG_COMPAT .compat_ioctl = i915_compat_ioctl, #endif + .llseek = noop_llseek, }, .pci_driver = { .name = DRIVER_NAME, diff --git a/Documentation/DocBook/genericirq.tmpl b/Documentation/DocBook/genericirq.tmpl index 1448b33fd222..fb10fd08c05c 100644 --- a/Documentation/DocBook/genericirq.tmpl +++ b/Documentation/DocBook/genericirq.tmpl @@ -28,7 +28,7 @@ </authorgroup> <copyright> - <year>2005-2006</year> + <year>2005-2010</year> <holder>Thomas Gleixner</holder> </copyright> <copyright> @@ -100,6 +100,10 @@ <listitem><para>Edge type</para></listitem> 
<listitem><para>Simple type</para></listitem> </itemizedlist> + During the implementation we identified another type: + <itemizedlist> + <listitem><para>Fast EOI type</para></listitem> + </itemizedlist> In the SMP world of the __do_IRQ() super-handler another type was identified: <itemizedlist> @@ -153,6 +157,7 @@ is still available. This leads to a kind of duality for the time being. Over time the new model should be used in more and more architectures, as it enables smaller and cleaner IRQ subsystems. + It's deprecated for three years now and about to be removed. </para> </chapter> <chapter id="bugs"> @@ -217,6 +222,7 @@ <itemizedlist> <listitem><para>handle_level_irq</para></listitem> <listitem><para>handle_edge_irq</para></listitem> + <listitem><para>handle_fasteoi_irq</para></listitem> <listitem><para>handle_simple_irq</para></listitem> <listitem><para>handle_percpu_irq</para></listitem> </itemizedlist> @@ -233,33 +239,33 @@ are used by the default flow implementations. The following helper functions are implemented (simplified excerpt): <programlisting> -default_enable(irq) +default_enable(struct irq_data *data) { - desc->chip->unmask(irq); + desc->chip->irq_unmask(data); } -default_disable(irq) +default_disable(struct irq_data *data) { - if (!delay_disable(irq)) - desc->chip->mask(irq); + if (!delay_disable(data)) + desc->chip->irq_mask(data); } -default_ack(irq) +default_ack(struct irq_data *data) { - chip->ack(irq); + chip->irq_ack(data); } -default_mask_ack(irq) +default_mask_ack(struct irq_data *data) { - if (chip->mask_ack) { - chip->mask_ack(irq); + if (chip->irq_mask_ack) { + chip->irq_mask_ack(data); } else { - chip->mask(irq); - chip->ack(irq); + chip->irq_mask(data); + chip->irq_ack(data); } } -noop(irq) +noop(struct irq_data *data)) { } @@ -278,12 +284,27 @@ noop(irq) <para> The following control flow is implemented (simplified excerpt): <programlisting> -desc->chip->start(); +desc->chip->irq_mask(); handle_IRQ_event(desc->action); -desc->chip->end(); +desc->chip->irq_unmask(); </programlisting> </para> - </sect3> + </sect3> + <sect3 id="Default_FASTEOI_IRQ_flow_handler"> + <title>Default Fast EOI IRQ flow handler</title> + <para> + handle_fasteoi_irq provides a generic implementation + for interrupts, which only need an EOI at the end of + the handler + </para> + <para> + The following control flow is implemented (simplified excerpt): + <programlisting> +handle_IRQ_event(desc->action); +desc->chip->irq_eoi(); + </programlisting> + </para> + </sect3> <sect3 id="Default_Edge_IRQ_flow_handler"> <title>Default Edge IRQ flow handler</title> <para> @@ -294,20 +315,19 @@ desc->chip->end(); The following control flow is implemented (simplified excerpt): <programlisting> if (desc->status & running) { - desc->chip->hold(); + desc->chip->irq_mask(); desc->status |= pending | masked; return; } -desc->chip->start(); +desc->chip->irq_ack(); desc->status |= running; do { if (desc->status & masked) - desc->chip->enable(); + desc->chip->irq_unmask(); desc->status &= ~pending; handle_IRQ_event(desc->action); } while (status & pending); desc->status &= ~running; -desc->chip->end(); </programlisting> </para> </sect3> @@ -342,9 +362,9 @@ handle_IRQ_event(desc->action); <para> The following control flow is implemented (simplified excerpt): <programlisting> -desc->chip->start(); handle_IRQ_event(desc->action); -desc->chip->end(); +if (desc->chip->irq_eoi) + desc->chip->irq_eoi(); </programlisting> </para> </sect3> @@ -375,8 +395,7 @@ desc->chip->end(); mechanism. 
(It's necessary to enable CONFIG_HARDIRQS_SW_RESEND when you want to use the delayed interrupt disable feature and your hardware is not capable of retriggering an interrupt.) - The delayed interrupt disable can be runtime enabled, per interrupt, - by setting the IRQ_DELAYED_DISABLE flag in the irq_desc status field. + The delayed interrupt disable is not configurable. </para> </sect2> </sect1> @@ -387,13 +406,13 @@ desc->chip->end(); contains all the direct chip relevant functions, which can be utilized by the irq flow implementations. <itemizedlist> - <listitem><para>ack()</para></listitem> - <listitem><para>mask_ack() - Optional, recommended for performance</para></listitem> - <listitem><para>mask()</para></listitem> - <listitem><para>unmask()</para></listitem> - <listitem><para>retrigger() - Optional</para></listitem> - <listitem><para>set_type() - Optional</para></listitem> - <listitem><para>set_wake() - Optional</para></listitem> + <listitem><para>irq_ack()</para></listitem> + <listitem><para>irq_mask_ack() - Optional, recommended for performance</para></listitem> + <listitem><para>irq_mask()</para></listitem> + <listitem><para>irq_unmask()</para></listitem> + <listitem><para>irq_retrigger() - Optional</para></listitem> + <listitem><para>irq_set_type() - Optional</para></listitem> + <listitem><para>irq_set_wake() - Optional</para></listitem> </itemizedlist> These primitives are strictly intended to mean what they say: ack means ACK, masking means masking of an IRQ line, etc. It is up to the flow @@ -458,6 +477,7 @@ desc->chip->end(); <para> This chapter contains the autogenerated documentation of the internal functions. </para> +!Ikernel/irq/irqdesc.c !Ikernel/irq/handle.c !Ikernel/irq/chip.c </chapter> diff --git a/Documentation/DocBook/kernel-api.tmpl b/Documentation/DocBook/kernel-api.tmpl index a20c6f6fffc3..6b4e07f28b69 100644 --- a/Documentation/DocBook/kernel-api.tmpl +++ b/Documentation/DocBook/kernel-api.tmpl @@ -57,7 +57,6 @@ </para> <sect1><title>String Conversions</title> -!Ilib/vsprintf.c !Elib/vsprintf.c </sect1> <sect1><title>String Manipulation</title> @@ -258,7 +257,8 @@ X!Earch/x86/kernel/mca_32.c !Iblock/blk-sysfs.c !Eblock/blk-settings.c !Eblock/blk-exec.c -!Eblock/blk-barrier.c +!Eblock/blk-flush.c +!Eblock/blk-lib.c !Eblock/blk-tag.c !Iblock/blk-tag.c !Eblock/blk-integrity.c diff --git a/Documentation/DocBook/kernel-locking.tmpl b/Documentation/DocBook/kernel-locking.tmpl index 0b1a3f97f285..f66f4df18690 100644 --- a/Documentation/DocBook/kernel-locking.tmpl +++ b/Documentation/DocBook/kernel-locking.tmpl @@ -1645,7 +1645,9 @@ the amount of locking which needs to be done. all the readers who were traversing the list when we deleted the element are finished. We use <function>call_rcu()</function> to register a callback which will actually destroy the object once - the readers are finished. + all pre-existing readers are finished. Alternatively, + <function>synchronize_rcu()</function> may be used to block until + all pre-existing are finished. </para> <para> But how does Read Copy Update know when the readers are @@ -1714,7 +1716,7 @@ the amount of locking which needs to be done. - object_put(obj); + list_del_rcu(&obj->list); cache_num--; -+ call_rcu(&obj->rcu, cache_delete_rcu, obj); ++ call_rcu(&obj->rcu, cache_delete_rcu); } /* Must be holding cache_lock */ @@ -1725,14 +1727,6 @@ the amount of locking which needs to be done. 
if (++cache_num > MAX_CACHE_SIZE) { struct object *i, *outcast = NULL; list_for_each_entry(i, &cache, list) { -@@ -85,6 +94,7 @@ - obj->popularity = 0; - atomic_set(&obj->refcnt, 1); /* The cache holds a reference */ - spin_lock_init(&obj->lock); -+ INIT_RCU_HEAD(&obj->rcu); - - spin_lock_irqsave(&cache_lock, flags); - __cache_add(obj); @@ -104,12 +114,11 @@ struct object *cache_find(int id) { @@ -1961,6 +1955,12 @@ machines due to caching. </sect1> </chapter> + <chapter id="apiref"> + <title>Mutex API reference</title> +!Iinclude/linux/mutex.h +!Ekernel/mutex.c + </chapter> + <chapter id="references"> <title>Further reading</title> diff --git a/Documentation/DocBook/mac80211.tmpl b/Documentation/DocBook/mac80211.tmpl deleted file mode 100644 index affb15a344a1..000000000000 --- a/Documentation/DocBook/mac80211.tmpl +++ /dev/null @@ -1,337 +0,0 @@ -<?xml version="1.0" encoding="UTF-8"?> -<!DOCTYPE book PUBLIC "-//OASIS//DTD DocBook XML V4.1.2//EN" - "http://www.oasis-open.org/docbook/xml/4.1.2/docbookx.dtd" []> - -<book id="mac80211-developers-guide"> - <bookinfo> - <title>The mac80211 subsystem for kernel developers</title> - - <authorgroup> - <author> - <firstname>Johannes</firstname> - <surname>Berg</surname> - <affiliation> - <address><email>johannes@sipsolutions.net</email></address> - </affiliation> - </author> - </authorgroup> - - <copyright> - <year>2007-2009</year> - <holder>Johannes Berg</holder> - </copyright> - - <legalnotice> - <para> - This documentation is free software; you can redistribute - it and/or modify it under the terms of the GNU General Public - License version 2 as published by the Free Software Foundation. - </para> - - <para> - This documentation is distributed in the hope that it will be - useful, but WITHOUT ANY WARRANTY; without even the implied - warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - See the GNU General Public License for more details. - </para> - - <para> - You should have received a copy of the GNU General Public - License along with this documentation; if not, write to the Free - Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, - MA 02111-1307 USA - </para> - - <para> - For more details see the file COPYING in the source - distribution of Linux. - </para> - </legalnotice> - - <abstract> -!Pinclude/net/mac80211.h Introduction -!Pinclude/net/mac80211.h Warning - </abstract> - </bookinfo> - - <toc></toc> - -<!-- -Generally, this document shall be ordered by increasing complexity. -It is important to note that readers should be able to read only -the first few sections to get a working driver and only advanced -usage should require reading the full document. ---> - - <part> - <title>The basic mac80211 driver interface</title> - <partintro> - <para> - You should read and understand the information contained - within this part of the book while implementing a driver. - In some chapters, advanced usage is noted, that may be - skipped at first. - </para> - <para> - This part of the book only covers station and monitor mode - functionality, additional information required to implement - the other modes is covered in the second part of the book. - </para> - </partintro> - - <chapter id="basics"> - <title>Basic hardware handling</title> - <para>TBD</para> - <para> - This chapter shall contain information on getting a hw - struct allocated and registered with mac80211. 
- </para> - <para> - Since it is required to allocate rates/modes before registering - a hw struct, this chapter shall also contain information on setting - up the rate/mode structs. - </para> - <para> - Additionally, some discussion about the callbacks and - the general programming model should be in here, including - the definition of ieee80211_ops which will be referred to - a lot. - </para> - <para> - Finally, a discussion of hardware capabilities should be done - with references to other parts of the book. - </para> -<!-- intentionally multiple !F lines to get proper order --> -!Finclude/net/mac80211.h ieee80211_hw -!Finclude/net/mac80211.h ieee80211_hw_flags -!Finclude/net/mac80211.h SET_IEEE80211_DEV -!Finclude/net/mac80211.h SET_IEEE80211_PERM_ADDR -!Finclude/net/mac80211.h ieee80211_ops -!Finclude/net/mac80211.h ieee80211_alloc_hw -!Finclude/net/mac80211.h ieee80211_register_hw -!Finclude/net/mac80211.h ieee80211_get_tx_led_name -!Finclude/net/mac80211.h ieee80211_get_rx_led_name -!Finclude/net/mac80211.h ieee80211_get_assoc_led_name -!Finclude/net/mac80211.h ieee80211_get_radio_led_name -!Finclude/net/mac80211.h ieee80211_unregister_hw -!Finclude/net/mac80211.h ieee80211_free_hw - </chapter> - - <chapter id="phy-handling"> - <title>PHY configuration</title> - <para>TBD</para> - <para> - This chapter should describe PHY handling including - start/stop callbacks and the various structures used. - </para> -!Finclude/net/mac80211.h ieee80211_conf -!Finclude/net/mac80211.h ieee80211_conf_flags - </chapter> - - <chapter id="iface-handling"> - <title>Virtual interfaces</title> - <para>TBD</para> - <para> - This chapter should describe virtual interface basics - that are relevant to the driver (VLANs, MGMT etc are not.) - It should explain the use of the add_iface/remove_iface - callbacks as well as the interface configuration callbacks. - </para> - <para>Things related to AP mode should be discussed there.</para> - <para> - Things related to supporting multiple interfaces should be - in the appropriate chapter, a BIG FAT note should be here about - this though and the recommendation to allow only a single - interface in STA mode at first! - </para> -!Finclude/net/mac80211.h ieee80211_vif - </chapter> - - <chapter id="rx-tx"> - <title>Receive and transmit processing</title> - <sect1> - <title>what should be here</title> - <para>TBD</para> - <para> - This should describe the receive and transmit - paths in mac80211/the drivers as well as - transmit status handling. 
- </para> - </sect1> - <sect1> - <title>Frame format</title> -!Pinclude/net/mac80211.h Frame format - </sect1> - <sect1> - <title>Packet alignment</title> -!Pnet/mac80211/rx.c Packet alignment - </sect1> - <sect1> - <title>Calling into mac80211 from interrupts</title> -!Pinclude/net/mac80211.h Calling mac80211 from interrupts - </sect1> - <sect1> - <title>functions/definitions</title> -!Finclude/net/mac80211.h ieee80211_rx_status -!Finclude/net/mac80211.h mac80211_rx_flags -!Finclude/net/mac80211.h ieee80211_tx_info -!Finclude/net/mac80211.h ieee80211_rx -!Finclude/net/mac80211.h ieee80211_rx_irqsafe -!Finclude/net/mac80211.h ieee80211_tx_status -!Finclude/net/mac80211.h ieee80211_tx_status_irqsafe -!Finclude/net/mac80211.h ieee80211_rts_get -!Finclude/net/mac80211.h ieee80211_rts_duration -!Finclude/net/mac80211.h ieee80211_ctstoself_get -!Finclude/net/mac80211.h ieee80211_ctstoself_duration -!Finclude/net/mac80211.h ieee80211_generic_frame_duration -!Finclude/net/mac80211.h ieee80211_wake_queue -!Finclude/net/mac80211.h ieee80211_stop_queue -!Finclude/net/mac80211.h ieee80211_wake_queues -!Finclude/net/mac80211.h ieee80211_stop_queues - </sect1> - </chapter> - - <chapter id="filters"> - <title>Frame filtering</title> -!Pinclude/net/mac80211.h Frame filtering -!Finclude/net/mac80211.h ieee80211_filter_flags - </chapter> - </part> - - <part id="advanced"> - <title>Advanced driver interface</title> - <partintro> - <para> - Information contained within this part of the book is - of interest only for advanced interaction of mac80211 - with drivers to exploit more hardware capabilities and - improve performance. - </para> - </partintro> - - <chapter id="hardware-crypto-offload"> - <title>Hardware crypto acceleration</title> -!Pinclude/net/mac80211.h Hardware crypto acceleration -<!-- intentionally multiple !F lines to get proper order --> -!Finclude/net/mac80211.h set_key_cmd -!Finclude/net/mac80211.h ieee80211_key_conf -!Finclude/net/mac80211.h ieee80211_key_alg -!Finclude/net/mac80211.h ieee80211_key_flags - </chapter> - - <chapter id="powersave"> - <title>Powersave support</title> -!Pinclude/net/mac80211.h Powersave support - </chapter> - - <chapter id="beacon-filter"> - <title>Beacon filter support</title> -!Pinclude/net/mac80211.h Beacon filter support -!Finclude/net/mac80211.h ieee80211_beacon_loss - </chapter> - - <chapter id="qos"> - <title>Multiple queues and QoS support</title> - <para>TBD</para> -!Finclude/net/mac80211.h ieee80211_tx_queue_params - </chapter> - - <chapter id="AP"> - <title>Access point mode support</title> - <para>TBD</para> - <para>Some parts of the if_conf should be discussed here instead</para> - <para> - Insert notes about VLAN interfaces with hw crypto here or - in the hw crypto chapter. - </para> -!Finclude/net/mac80211.h ieee80211_get_buffered_bc -!Finclude/net/mac80211.h ieee80211_beacon_get - </chapter> - - <chapter id="multi-iface"> - <title>Supporting multiple virtual interfaces</title> - <para>TBD</para> - <para> - Note: WDS with identical MAC address should almost always be OK - </para> - <para> - Insert notes about having multiple virtual interfaces with - different MAC addresses here, note which configurations are - supported by mac80211, add notes about supporting hw crypto - with it. 
- </para> - </chapter> - - <chapter id="hardware-scan-offload"> - <title>Hardware scan offload</title> - <para>TBD</para> -!Finclude/net/mac80211.h ieee80211_scan_completed - </chapter> - </part> - - <part id="rate-control"> - <title>Rate control interface</title> - <partintro> - <para>TBD</para> - <para> - This part of the book describes the rate control algorithm - interface and how it relates to mac80211 and drivers. - </para> - </partintro> - <chapter id="dummy"> - <title>dummy chapter</title> - <para>TBD</para> - </chapter> - </part> - - <part id="internal"> - <title>Internals</title> - <partintro> - <para>TBD</para> - <para> - This part of the book describes mac80211 internals. - </para> - </partintro> - - <chapter id="key-handling"> - <title>Key handling</title> - <sect1> - <title>Key handling basics</title> -!Pnet/mac80211/key.c Key handling basics - </sect1> - <sect1> - <title>MORE TBD</title> - <para>TBD</para> - </sect1> - </chapter> - - <chapter id="rx-processing"> - <title>Receive processing</title> - <para>TBD</para> - </chapter> - - <chapter id="tx-processing"> - <title>Transmit processing</title> - <para>TBD</para> - </chapter> - - <chapter id="sta-info"> - <title>Station info handling</title> - <sect1> - <title>Programming information</title> -!Fnet/mac80211/sta_info.h sta_info -!Fnet/mac80211/sta_info.h ieee80211_sta_info_flags - </sect1> - <sect1> - <title>STA information lifetime rules</title> -!Pnet/mac80211/sta_info.c STA information lifetime rules - </sect1> - </chapter> - - <chapter id="synchronisation"> - <title>Synchronisation</title> - <para>TBD</para> - <para>Locking, lots of RCU</para> - </chapter> - </part> -</book> diff --git a/Documentation/DocBook/tracepoint.tmpl b/Documentation/DocBook/tracepoint.tmpl index e8473eae2a20..b57a9ede3224 100644 --- a/Documentation/DocBook/tracepoint.tmpl +++ b/Documentation/DocBook/tracepoint.tmpl @@ -104,4 +104,9 @@ <title>Block IO</title> !Iinclude/trace/events/block.h </chapter> + + <chapter id="workqueue"> + <title>Workqueue</title> +!Iinclude/trace/events/workqueue.h + </chapter> </book> diff --git a/Documentation/RCU/checklist.txt b/Documentation/RCU/checklist.txt index 790d1a812376..0c134f8afc6f 100644 --- a/Documentation/RCU/checklist.txt +++ b/Documentation/RCU/checklist.txt @@ -218,13 +218,22 @@ over a rather long period of time, but improvements are always welcome! include: a. Keeping a count of the number of data-structure elements - used by the RCU-protected data structure, including those - waiting for a grace period to elapse. Enforce a limit - on this number, stalling updates as needed to allow - previously deferred frees to complete. - - Alternatively, limit only the number awaiting deferred - free rather than the total number of elements. + used by the RCU-protected data structure, including + those waiting for a grace period to elapse. Enforce a + limit on this number, stalling updates as needed to allow + previously deferred frees to complete. Alternatively, + limit only the number awaiting deferred free rather than + the total number of elements. + + One way to stall the updates is to acquire the update-side + mutex. (Don't try this with a spinlock -- other CPUs + spinning on the lock could prevent the grace period + from ever ending.) Another way to stall the updates + is for the updates to use a wrapper function around + the memory allocator, so that this wrapper function + simulates OOM when there is too much memory awaiting an + RCU grace period. 
There are of course many other + variations on this theme. b. Limiting update rate. For example, if updates occur only once per hour, then no explicit rate limiting is required, @@ -365,3 +374,26 @@ over a rather long period of time, but improvements are always welcome! and the compiler to freely reorder code into and out of RCU read-side critical sections. It is the responsibility of the RCU update-side primitives to deal with this. + +17. Use CONFIG_PROVE_RCU, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and + the __rcu sparse checks to validate your RCU code. These + can help find problems as follows: + + CONFIG_PROVE_RCU: check that accesses to RCU-protected data + structures are carried out under the proper RCU + read-side critical section, while holding the right + combination of locks, or whatever other conditions + are appropriate. + + CONFIG_DEBUG_OBJECTS_RCU_HEAD: check that you don't pass the + same object to call_rcu() (or friends) before an RCU + grace period has elapsed since the last time that you + passed that same object to call_rcu() (or friends). + + __rcu sparse checks: tag the pointer to the RCU-protected data + structure with __rcu, and sparse will warn you if you + access that pointer without the services of one of the + variants of rcu_dereference(). + + These debugging aids can help you find problems that are + otherwise extremely difficult to spot. diff --git a/Documentation/RCU/stallwarn.txt b/Documentation/RCU/stallwarn.txt index 44c6dcc93d6d..862c08ef1fde 100644 --- a/Documentation/RCU/stallwarn.txt +++ b/Documentation/RCU/stallwarn.txt @@ -80,6 +80,24 @@ o A CPU looping with bottom halves disabled. This condition can o For !CONFIG_PREEMPT kernels, a CPU looping anywhere in the kernel without invoking schedule(). +o A CPU-bound real-time task in a CONFIG_PREEMPT kernel, which might + happen to preempt a low-priority task in the middle of an RCU + read-side critical section. This is especially damaging if + that low-priority task is not permitted to run on any other CPU, + in which case the next RCU grace period can never complete, which + will eventually cause the system to run out of memory and hang. + While the system is in the process of running itself out of + memory, you might see stall-warning messages. + +o A CPU-bound real-time task in a CONFIG_PREEMPT_RT kernel that + is running at a higher priority than the RCU softirq threads. + This will prevent RCU callbacks from ever being invoked, + and in a CONFIG_TREE_PREEMPT_RCU kernel will further prevent + RCU grace periods from ever completing. Either way, the + system will eventually run out of memory and hang. In the + CONFIG_TREE_PREEMPT_RCU case, you might see stall-warning + messages. + o A bug in the RCU implementation. o A hardware failure. This is quite unlikely, but has occurred diff --git a/Documentation/RCU/trace.txt b/Documentation/RCU/trace.txt index efd8cc95c06b..a851118775d8 100644 --- a/Documentation/RCU/trace.txt +++ b/Documentation/RCU/trace.txt @@ -125,6 +125,17 @@ o "b" is the batch limit for this CPU. If more than this number of RCU callbacks is ready to invoke, then the remainder will be deferred. +o "ci" is the number of RCU callbacks that have been invoked for + this CPU. Note that ci+ql is the number of callbacks that have + been registered in absence of CPU-hotplug activity. + +o "co" is the number of RCU callbacks that have been orphaned due to + this CPU going offline. + +o "ca" is the number of RCU callbacks that have been adopted due to + other CPUs going offline. 
Note that ci+co-ca+ql is the number of + RCU callbacks registered on this CPU. + There is also an rcu/rcudata.csv file with the same information in comma-separated-variable spreadsheet format. @@ -180,7 +191,7 @@ o "s" is the "signaled" state that drives force_quiescent_state()'s o "jfq" is the number of jiffies remaining for this grace period before force_quiescent_state() is invoked to help push things - along. Note that CPUs in dyntick-idle mode thoughout the grace + along. Note that CPUs in dyntick-idle mode throughout the grace period will not report on their own, but rather must be check by some other CPU via force_quiescent_state(). diff --git a/Documentation/arm/00-INDEX b/Documentation/arm/00-INDEX index 7f5fc3ba9c91..ecf7d04bca26 100644 --- a/Documentation/arm/00-INDEX +++ b/Documentation/arm/00-INDEX @@ -6,6 +6,8 @@ Interrupts - ARM Interrupt subsystem documentation IXP2000 - Release Notes for Linux on Intel's IXP2000 Network Processor +msm + - MSM specific documentation Netwinder - Netwinder specific documentation Porting diff --git a/Documentation/arm/SA1100/FreeBird b/Documentation/arm/SA1100/FreeBird index fb23b770aaf4..ab9193663b2b 100644 --- a/Documentation/arm/SA1100/FreeBird +++ b/Documentation/arm/SA1100/FreeBird @@ -1,6 +1,6 @@ -Freebird-1.1 is produced by Legned(C) ,Inc. +Freebird-1.1 is produced by Legend(C), Inc. http://web.archive.org/web/*/http://www.legend.com.cn -and software/linux mainatined by Coventive(C),Inc. +and software/linux maintained by Coventive(C), Inc. (http://www.coventive.com) Based on the Nicolas's strongarm kernel tree. diff --git a/Documentation/arm/msm/gpiomux.txt b/Documentation/arm/msm/gpiomux.txt new file mode 100644 index 000000000000..67a81620adf6 --- /dev/null +++ b/Documentation/arm/msm/gpiomux.txt @@ -0,0 +1,176 @@ +This document provides an overview of the msm_gpiomux interface, which +is used to provide gpio pin multiplexing and configuration on mach-msm +targets. + +History +======= + +The first-generation API for gpio configuration & multiplexing on msm +is the function gpio_tlmm_config(). This function has a few notable +shortcomings, which led to its deprecation and replacement by gpiomux: + +The 'disable' parameter: Setting the second parameter to +gpio_tlmm_config to GPIO_CFG_DISABLE tells the peripheral +processor in charge of the subsystem to perform a look-up into a +low-power table and apply the low-power/sleep setting for the pin. +As the msm family evolved this became problematic. Not all pins +have sleep settings, not all peripheral processors will accept requests +to apply said sleep settings, and not all msm targets have their gpio +subsystems managed by a peripheral processor. In order to get consistent +behavior on all targets, drivers are forced to ignore this parameter, +rendering it useless. + +The 'direction' flag: for all mux-settings other than raw-gpio (0), +the output-enable bit of a gpio is hard-wired to a known +input (usually VDD or ground). For those settings, the direction flag +is meaningless at best, and deceptive at worst. In addition, using the +direction flag to change output-enable (OE) directly can cause trouble in +gpiolib, which has no visibility into gpio direction changes made +in this way. Direction control in gpio mode should be made through gpiolib. + +Key Features of gpiomux +======================= + +- A consistent interface across all generations of msm. Drivers can expect +the same results on every target. +- gpiomux plays nicely with gpiolib. 
Functions that should belong to gpiolib +are left to gpiolib and not duplicated here. gpiomux is written with the +intent that gpio_chips will call gpiomux reference-counting methods +from their request() and free() hooks, providing full integration. +- Tabular configuration. Instead of having to call gpio_tlmm_config +hundreds of times, gpio configuration is placed in a single table. +- Per-gpio sleep. Each gpio is individually reference counted, allowing only +those lines which are in use to be put in high-power states. +- 0 means 'do nothing': all flags are designed so that the default memset-zero +equates to a sensible default of 'no configuration', preventing users +from having to provide hundreds of 'no-op' configs for unused or +unwanted lines. + +Usage +===== + +To use gpiomux, provide configuration information for relevant gpio lines +in the msm_gpiomux_configs table. Since a 0 equates to "unconfigured", +only those lines to be managed by gpiomux need to be specified. Here +is a completely fictional example: + +struct msm_gpiomux_config msm_gpiomux_configs[GPIOMUX_NGPIOS] = { + [12] = { + .active = GPIOMUX_VALID | GPIOMUX_DRV_8MA | GPIOMUX_FUNC_1, + .suspended = GPIOMUX_VALID | GPIOMUX_PULL_DOWN, + }, + [34] = { + .suspended = GPIOMUX_VALID | GPIOMUX_PULL_DOWN, + }, +}; + +To indicate that a gpio is in use, call msm_gpiomux_get() to increase +its reference count. To decrease the reference count, call msm_gpiomux_put(). + +The effect of this configuration is as follows: + +When the system boots, gpios 12 and 34 will be initialized with their +'suspended' configurations. All other gpios, which were left unconfigured, +will not be touched. + +When msm_gpiomux_get() is called on gpio 12 to raise its reference count +above 0, its active configuration will be applied. Since no other gpio +line has a valid active configuration, msm_gpiomux_get() will have no +effect on any other line. + +When msm_gpiomux_put() is called on gpio 12 or 34 to drop their reference +count to 0, their suspended configurations will be applied. +Since no other gpio line has a valid suspended configuration, no other +gpio line will be effected by msm_gpiomux_put(). Since gpio 34 has no valid +active configuration, this is effectively a no-op for gpio 34 as well, +with one small caveat, see the section "About Output-Enable Settings". + +All of the GPIOMUX_VALID flags may seem like unnecessary overhead, but +they address some important issues. As unused entries (all those +except 12 and 34) are zero-filled, gpiomux needs a way to distinguish +the used fields from the unused. In addition, the all-zero pattern +is a valid configuration! Therefore, gpiomux defines an additional bit +which is used to indicate when a field is used. This has the pleasant +side-effect of allowing calls to msm_gpiomux_write to use '0' to indicate +that a value should not be changed: + + msm_gpiomux_write(0, GPIOMUX_VALID, 0); + +replaces the active configuration of gpio 0 with an all-zero configuration, +but leaves the suspended configuration as it was. + +Static Configurations +===================== + +To install a static configuration, which is applied at boot and does +not change after that, install a configuration with a suspended component +but no active component, as in the previous example: + + [34] = { + .suspended = GPIOMUX_VALID | GPIOMUX_PULL_DOWN, + }, + +The suspended setting is applied during boot, and the lack of any valid +active setting prevents any other setting from being applied at runtime. 
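
For driver code that is not routed through gpiolib, the reference counting
described in the Usage section above might be used roughly as follows. This is
an illustrative sketch only, not part of this patch: the gpio number, the
header name, and the assumption that msm_gpiomux_get() returns 0 or a negative
errno are all example choices.

#include "gpiomux.h"	/* assumed: private mach-msm header providing the API */

#define EXAMPLE_GPIO	12	/* hypothetical line from the table above */

static int example_start(void)
{
	int rc;

	rc = msm_gpiomux_get(EXAMPLE_GPIO);	/* applies the 'active' config */
	if (rc)
		return rc;

	/* ... use the line ... */
	return 0;
}

static void example_stop(void)
{
	msm_gpiomux_put(EXAMPLE_GPIO);		/* re-applies the 'suspended' config */
}
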
+If other subsystems attempting to access the line is a concern, one could +*really* anchor the configuration down by calling msm_gpiomux_get on the +line at initialization to move the line into active mode. With the line +held, it will never be re-suspended, and with no valid active configuration, +no new configurations will be applied. + +But then, if having other subsystems grabbing for the line is truly a concern, +it should be reserved with gpio_request instead, which carries an implicit +msm_gpiomux_get. + +gpiomux and gpiolib +=================== + +It is expected that msm gpio_chips will call msm_gpiomux_get() and +msm_gpiomux_put() from their request and free hooks, like this fictional +example: + +static int request(struct gpio_chip *chip, unsigned offset) +{ + return msm_gpiomux_get(chip->base + offset); +} + +static void free(struct gpio_chip *chip, unsigned offset) +{ + msm_gpiomux_put(chip->base + offset); +} + + ...somewhere in a gpio_chip declaration... + .request = request, + .free = free, + +This provides important functionality: +- It guarantees that a gpio line will have its 'active' config applied + when the line is requested, and will not be suspended while the line + remains requested; and +- It guarantees that gpio-direction settings from gpiolib behave sensibly. + See "About Output-Enable Settings." + +This mechanism allows for "auto-request" of gpiomux lines via gpiolib +when it is suitable. Drivers wishing more exact control are, of course, +free to also use msm_gpiomux_set and msm_gpiomux_get. + +About Output-Enable Settings +============================ + +Some msm targets do not have the ability to query the current gpio +configuration setting. This means that changes made to the output-enable +(OE) bit by gpiolib cannot be consistently detected and preserved by gpiomux. +Therefore, when gpiomux applies a configuration setting, any direction +settings which may have been applied by gpiolib are lost and the default +input settings are re-applied. + +For this reason, drivers should not assume that gpio direction settings +continue to hold if they free and then re-request a gpio. This seems like +common sense - after all, anybody could have obtained the line in the +meantime - but it needs saying. + +This also means that calls to msm_gpiomux_write will reset the OE bit, +which means that if the gpio line is held by a client of gpiolib and +msm_gpiomux_write is called, the direction setting has been lost and +gpiolib's internal state has been broken. +Release gpio lines before reconfiguring them. diff --git a/Documentation/block/00-INDEX b/Documentation/block/00-INDEX index a406286f6f3e..d111e3b23db0 100644 --- a/Documentation/block/00-INDEX +++ b/Documentation/block/00-INDEX @@ -1,7 +1,5 @@ 00-INDEX - This file -barrier.txt - - I/O Barriers biodoc.txt - Notes on the Generic Block Layer Rewrite in Linux 2.5 capability.txt @@ -16,3 +14,5 @@ stat.txt - Block layer statistics in /sys/block/<dev>/stat switching-sched.txt - Switching I/O schedulers at runtime +writeback_cache_control.txt + - Control of volatile write back caches diff --git a/Documentation/block/barrier.txt b/Documentation/block/barrier.txt deleted file mode 100644 index 2c2f24f634e4..000000000000 --- a/Documentation/block/barrier.txt +++ /dev/null @@ -1,261 +0,0 @@ -I/O Barriers -============ -Tejun Heo <htejun@gmail.com>, July 22 2005 - -I/O barrier requests are used to guarantee ordering around the barrier -requests. 
Unless you're crazy enough to use disk drives for -implementing synchronization constructs (wow, sounds interesting...), -the ordering is meaningful only for write requests for things like -journal checkpoints. All requests queued before a barrier request -must be finished (made it to the physical medium) before the barrier -request is started, and all requests queued after the barrier request -must be started only after the barrier request is finished (again, -made it to the physical medium). - -In other words, I/O barrier requests have the following two properties. - -1. Request ordering - -Requests cannot pass the barrier request. Preceding requests are -processed before the barrier and following requests after. - -Depending on what features a drive supports, this can be done in one -of the following three ways. - -i. For devices which have queue depth greater than 1 (TCQ devices) and -support ordered tags, block layer can just issue the barrier as an -ordered request and the lower level driver, controller and drive -itself are responsible for making sure that the ordering constraint is -met. Most modern SCSI controllers/drives should support this. - -NOTE: SCSI ordered tag isn't currently used due to limitation in the - SCSI midlayer, see the following random notes section. - -ii. For devices which have queue depth greater than 1 but don't -support ordered tags, block layer ensures that the requests preceding -a barrier request finishes before issuing the barrier request. Also, -it defers requests following the barrier until the barrier request is -finished. Older SCSI controllers/drives and SATA drives fall in this -category. - -iii. Devices which have queue depth of 1. This is a degenerate case -of ii. Just keeping issue order suffices. Ancient SCSI -controllers/drives and IDE drives are in this category. - -2. Forced flushing to physical medium - -Again, if you're not gonna do synchronization with disk drives (dang, -it sounds even more appealing now!), the reason you use I/O barriers -is mainly to protect filesystem integrity when power failure or some -other events abruptly stop the drive from operating and possibly make -the drive lose data in its cache. So, I/O barriers need to guarantee -that requests actually get written to non-volatile medium in order. - -There are four cases, - -i. No write-back cache. Keeping requests ordered is enough. - -ii. Write-back cache but no flush operation. There's no way to -guarantee physical-medium commit order. This kind of devices can't to -I/O barriers. - -iii. Write-back cache and flush operation but no FUA (forced unit -access). We need two cache flushes - before and after the barrier -request. - -iv. Write-back cache, flush operation and FUA. We still need one -flush to make sure requests preceding a barrier are written to medium, -but post-barrier flush can be avoided by using FUA write on the -barrier itself. - - -How to support barrier requests in drivers ------------------------------------------- - -All barrier handling is done inside block layer proper. All low level -drivers have to are implementing its prepare_flush_fn and using one -the following two functions to indicate what barrier type it supports -and how to prepare flush requests. Note that the term 'ordered' is -used to indicate the whole sequence of performing barrier requests -including draining and flushing. 
- -typedef void (prepare_flush_fn)(struct request_queue *q, struct request *rq); - -int blk_queue_ordered(struct request_queue *q, unsigned ordered, - prepare_flush_fn *prepare_flush_fn); - -@q : the queue in question -@ordered : the ordered mode the driver/device supports -@prepare_flush_fn : this function should prepare @rq such that it - flushes cache to physical medium when executed - -For example, SCSI disk driver's prepare_flush_fn looks like the -following. - -static void sd_prepare_flush(struct request_queue *q, struct request *rq) -{ - memset(rq->cmd, 0, sizeof(rq->cmd)); - rq->cmd_type = REQ_TYPE_BLOCK_PC; - rq->timeout = SD_TIMEOUT; - rq->cmd[0] = SYNCHRONIZE_CACHE; - rq->cmd_len = 10; -} - -The following seven ordered modes are supported. The following table -shows which mode should be used depending on what features a -device/driver supports. In the leftmost column of table, -QUEUE_ORDERED_ prefix is omitted from the mode names to save space. - -The table is followed by description of each mode. Note that in the -descriptions of QUEUE_ORDERED_DRAIN*, '=>' is used whereas '->' is -used for QUEUE_ORDERED_TAG* descriptions. '=>' indicates that the -preceding step must be complete before proceeding to the next step. -'->' indicates that the next step can start as soon as the previous -step is issued. - - write-back cache ordered tag flush FUA ------------------------------------------------------------------------ -NONE yes/no N/A no N/A -DRAIN no no N/A N/A -DRAIN_FLUSH yes no yes no -DRAIN_FUA yes no yes yes -TAG no yes N/A N/A -TAG_FLUSH yes yes yes no -TAG_FUA yes yes yes yes - - -QUEUE_ORDERED_NONE - I/O barriers are not needed and/or supported. - - Sequence: N/A - -QUEUE_ORDERED_DRAIN - Requests are ordered by draining the request queue and cache - flushing isn't needed. - - Sequence: drain => barrier - -QUEUE_ORDERED_DRAIN_FLUSH - Requests are ordered by draining the request queue and both - pre-barrier and post-barrier cache flushings are needed. - - Sequence: drain => preflush => barrier => postflush - -QUEUE_ORDERED_DRAIN_FUA - Requests are ordered by draining the request queue and - pre-barrier cache flushing is needed. By using FUA on barrier - request, post-barrier flushing can be skipped. - - Sequence: drain => preflush => barrier - -QUEUE_ORDERED_TAG - Requests are ordered by ordered tag and cache flushing isn't - needed. - - Sequence: barrier - -QUEUE_ORDERED_TAG_FLUSH - Requests are ordered by ordered tag and both pre-barrier and - post-barrier cache flushings are needed. - - Sequence: preflush -> barrier -> postflush - -QUEUE_ORDERED_TAG_FUA - Requests are ordered by ordered tag and pre-barrier cache - flushing is needed. By using FUA on barrier request, - post-barrier flushing can be skipped. - - Sequence: preflush -> barrier - - -Random notes/caveats --------------------- - -* SCSI layer currently can't use TAG ordering even if the drive, -controller and driver support it. The problem is that SCSI midlayer -request dispatch function is not atomic. It releases queue lock and -switch to SCSI host lock during issue and it's possible and likely to -happen in time that requests change their relative positions. Once -this problem is solved, TAG ordering can be enabled. - -* Currently, no matter which ordered mode is used, there can be only -one barrier request in progress. All I/O barriers are held off by -block layer until the previous I/O barrier is complete. 
This doesn't -make any difference for DRAIN ordered devices, but, for TAG ordered -devices with very high command latency, passing multiple I/O barriers -to low level *might* be helpful if they are very frequent. Well, this -certainly is a non-issue. I'm writing this just to make clear that no -two I/O barrier is ever passed to low-level driver. - -* Completion order. Requests in ordered sequence are issued in order -but not required to finish in order. Barrier implementation can -handle out-of-order completion of ordered sequence. IOW, the requests -MUST be processed in order but the hardware/software completion paths -are allowed to reorder completion notifications - eg. current SCSI -midlayer doesn't preserve completion order during error handling. - -* Requeueing order. Low-level drivers are free to requeue any request -after they removed it from the request queue with -blkdev_dequeue_request(). As barrier sequence should be kept in order -when requeued, generic elevator code takes care of putting requests in -order around barrier. See blk_ordered_req_seq() and -ELEVATOR_INSERT_REQUEUE handling in __elv_add_request() for details. - -Note that block drivers must not requeue preceding requests while -completing latter requests in an ordered sequence. Currently, no -error checking is done against this. - -* Error handling. Currently, block layer will report error to upper -layer if any of requests in an ordered sequence fails. Unfortunately, -this doesn't seem to be enough. Look at the following request flow. -QUEUE_ORDERED_TAG_FLUSH is in use. - - [0] [1] [2] [3] [pre] [barrier] [post] < [4] [5] [6] ... > - still in elevator - -Let's say request [2], [3] are write requests to update file system -metadata (journal or whatever) and [barrier] is used to mark that -those updates are valid. Consider the following sequence. - - i. Requests [0] ~ [post] leaves the request queue and enters - low-level driver. - ii. After a while, unfortunately, something goes wrong and the - drive fails [2]. Note that any of [0], [1] and [3] could have - completed by this time, but [pre] couldn't have been finished - as the drive must process it in order and it failed before - processing that command. - iii. Error handling kicks in and determines that the error is - unrecoverable and fails [2], and resumes operation. - iv. [pre] [barrier] [post] gets processed. - v. *BOOM* power fails - -The problem here is that the barrier request is *supposed* to indicate -that filesystem update requests [2] and [3] made it safely to the -physical medium and, if the machine crashes after the barrier is -written, filesystem recovery code can depend on that. Sadly, that -isn't true in this case anymore. IOW, the success of a I/O barrier -should also be dependent on success of some of the preceding requests, -where only upper layer (filesystem) knows what 'some' is. - -This can be solved by implementing a way to tell the block layer which -requests affect the success of the following barrier request and -making lower lever drivers to resume operation on error only after -block layer tells it to do so. - -As the probability of this happening is very low and the drive should -be faulty, implementing the fix is probably an overkill. But, still, -it's there. - -* In previous drafts of barrier implementation, there was fallback -mechanism such that, if FUA or ordered TAG fails, less fancy ordered -mode can be selected and the failed barrier request is retried -automatically. 
The rationale for this feature was that as FUA is -pretty new in ATA world and ordered tag was never used widely, there -could be devices which report to support those features but choke when -actually given such requests. - - This was removed for two reasons 1. it's an overkill 2. it's -impossible to implement properly when TAG ordering is used as low -level drivers resume after an error automatically. If it's ever -needed adding it back and modifying low level drivers accordingly -shouldn't be difficult. diff --git a/Documentation/block/cfq-iosched.txt b/Documentation/block/cfq-iosched.txt new file mode 100644 index 000000000000..e578feed6d81 --- /dev/null +++ b/Documentation/block/cfq-iosched.txt @@ -0,0 +1,45 @@ +CFQ ioscheduler tunables +======================== + +slice_idle +---------- +This specifies how long CFQ should idle for next request on certain cfq queues +(for sequential workloads) and service trees (for random workloads) before +queue is expired and CFQ selects next queue to dispatch from. + +By default slice_idle is a non-zero value. That means by default we idle on +queues/service trees. This can be very helpful on highly seeky media like +single spindle SATA/SAS disks where we can cut down on overall number of +seeks and see improved throughput. + +Setting slice_idle to 0 will remove all the idling on queues/service tree +level and one should see an overall improved throughput on faster storage +devices like multiple SATA/SAS disks in hardware RAID configuration. The down +side is that isolation provided from WRITES also goes down and notion of +IO priority becomes weaker. + +So depending on storage and workload, it might be useful to set slice_idle=0. +In general I think for SATA/SAS disks and software RAID of SATA/SAS disks +keeping slice_idle enabled should be useful. For any configurations where +there are multiple spindles behind single LUN (Host based hardware RAID +controller or for storage arrays), setting slice_idle=0 might end up in better +throughput and acceptable latencies. + +CFQ IOPS Mode for group scheduling +=================================== +Basic CFQ design is to provide priority based time slices. Higher priority +process gets bigger time slice and lower priority process gets smaller time +slice. Measuring time becomes harder if storage is fast and supports NCQ and +it would be better to dispatch multiple requests from multiple cfq queues in +request queue at a time. In such scenario, it is not possible to measure time +consumed by single queue accurately. + +What is possible though is to measure number of requests dispatched from a +single queue and also allow dispatch from multiple cfq queue at the same time. +This effectively becomes the fairness in terms of IOPS (IO operations per +second). + +If one sets slice_idle=0 and if storage supports NCQ, CFQ internally switches +to IOPS mode and starts providing fairness in terms of number of requests +dispatched. Note that this mode switching takes effect only for group +scheduling. For non-cgroup users nothing should change. diff --git a/Documentation/block/writeback_cache_control.txt b/Documentation/block/writeback_cache_control.txt new file mode 100644 index 000000000000..83407d36630a --- /dev/null +++ b/Documentation/block/writeback_cache_control.txt @@ -0,0 +1,86 @@ + +Explicit volatile write back cache control +===================================== + +Introduction +------------ + +Many storage devices, especially in the consumer market, come with volatile +write back caches. 
That means the devices signal I/O completion to the +operating system before data actually has hit the non-volatile storage. This +behavior obviously speeds up various workloads, but it means the operating +system needs to force data out to the non-volatile storage when it performs +a data integrity operation like fsync, sync or an unmount. + +The Linux block layer provides two simple mechanisms that let filesystems +control the caching behavior of the storage device. These mechanisms are +a forced cache flush, and the Force Unit Access (FUA) flag for requests. + + +Explicit cache flushes +---------------------- + +The REQ_FLUSH flag can be OR ed into the r/w flags of a bio submitted from +the filesystem and will make sure the volatile cache of the storage device +has been flushed before the actual I/O operation is started. This explicitly +guarantees that previously completed write requests are on non-volatile +storage before the flagged bio starts. In addition the REQ_FLUSH flag can be +set on an otherwise empty bio structure, which causes only an explicit cache +flush without any dependent I/O. It is recommend to use +the blkdev_issue_flush() helper for a pure cache flush. + + +Forced Unit Access +----------------- + +The REQ_FUA flag can be OR ed into the r/w flags of a bio submitted from the +filesystem and will make sure that I/O completion for this request is only +signaled after the data has been committed to non-volatile storage. + + +Implementation details for filesystems +-------------------------------------- + +Filesystems can simply set the REQ_FLUSH and REQ_FUA bits and do not have to +worry if the underlying devices need any explicit cache flushing and how +the Forced Unit Access is implemented. The REQ_FLUSH and REQ_FUA flags +may both be set on a single bio. + + +Implementation details for make_request_fn based block drivers +-------------------------------------------------------------- + +These drivers will always see the REQ_FLUSH and REQ_FUA bits as they sit +directly below the submit_bio interface. For remapping drivers the REQ_FUA +bits need to be propagated to underlying devices, and a global flush needs +to be implemented for bios with the REQ_FLUSH bit set. For real device +drivers that do not have a volatile cache the REQ_FLUSH and REQ_FUA bits +on non-empty bios can simply be ignored, and REQ_FLUSH requests without +data can be completed successfully without doing any work. Drivers for +devices with volatile caches need to implement the support for these +flags themselves without any help from the block layer. + + +Implementation details for request_fn based block drivers +-------------------------------------------------------------- + +For devices that do not support volatile write caches there is no driver +support required, the block layer completes empty REQ_FLUSH requests before +entering the driver and strips off the REQ_FLUSH and REQ_FUA bits from +requests that have a payload. For devices with volatile write caches the +driver needs to tell the block layer that it supports flushing caches by +doing: + + blk_queue_flush(sdkp->disk->queue, REQ_FLUSH); + +and handle empty REQ_FLUSH requests in its prep_fn/request_fn. Note that +REQ_FLUSH requests with a payload are automatically turned into a sequence +of an empty REQ_FLUSH request followed by the actual write by the block +layer. 
For devices that also support the FUA bit the block layer needs +to be told to pass through the REQ_FUA bit using: + + blk_queue_flush(sdkp->disk->queue, REQ_FLUSH | REQ_FUA); + +and the driver must handle write requests that have the REQ_FUA bit set +in prep_fn/request_fn. If the FUA bit is not natively supported the block +layer turns it into an empty REQ_FLUSH request after the actual write. diff --git a/Documentation/cgroups/blkio-controller.txt b/Documentation/cgroups/blkio-controller.txt index 48e0b21b0059..d6da611f8f63 100644 --- a/Documentation/cgroups/blkio-controller.txt +++ b/Documentation/cgroups/blkio-controller.txt @@ -8,12 +8,17 @@ both at leaf nodes as well as at intermediate nodes in a storage hierarchy. Plan is to use the same cgroup based management interface for blkio controller and based on user options switch IO policies in the background. -In the first phase, this patchset implements proportional weight time based -division of disk policy. It is implemented in CFQ. Hence this policy takes -effect only on leaf nodes when CFQ is being used. +Currently two IO control policies are implemented. First one is proportional +weight time based division of disk policy. It is implemented in CFQ. Hence +this policy takes effect only on leaf nodes when CFQ is being used. The second +one is throttling policy which can be used to specify upper IO rate limits +on devices. This policy is implemented in generic block layer and can be +used on leaf nodes as well as higher level logical devices like device mapper. HOWTO ===== +Proportional Weight division of bandwidth +----------------------------------------- You can do a very simple testing of running two dd threads in two different cgroups. Here is what you can do. @@ -55,6 +60,35 @@ cgroups. Here is what you can do. group dispatched to the disk. We provide fairness in terms of disk time, so ideally io.disk_time of cgroups should be in proportion to the weight. +Throttling/Upper Limit policy +----------------------------- +- Enable Block IO controller + CONFIG_BLK_CGROUP=y + +- Enable throttling in block layer + CONFIG_BLK_DEV_THROTTLING=y + +- Mount blkio controller + mount -t cgroup -o blkio none /cgroup/blkio + +- Specify a bandwidth rate on particular device for root group. The format + for policy is "<major>:<minor> <byes_per_second>". + + echo "8:16 1048576" > /cgroup/blkio/blkio.read_bps_device + + Above will put a limit of 1MB/second on reads happening for root group + on device having major/minor number 8:16. + +- Run dd to read a file and see if rate is throttled to 1MB/s or not. + + # dd if=/mnt/common/zerofile of=/dev/null bs=4K count=1024 + # iflag=direct + 1024+0 records in + 1024+0 records out + 4194304 bytes (4.2 MB) copied, 4.0001 s, 1.0 MB/s + + Limits for writes can be put using blkio.write_bps_device file. + Various user visible config options =================================== CONFIG_BLK_CGROUP @@ -68,8 +102,13 @@ CONFIG_CFQ_GROUP_IOSCHED - Enables group scheduling in CFQ. Currently only 1 level of group creation is allowed. +CONFIG_BLK_DEV_THROTTLING + - Enable block device throttling support in block layer. + Details of cgroup files ======================= +Proportional weight policy files +-------------------------------- - blkio.weight - Specifies per cgroup weight. This is default weight of the group on all the devices until and unless overridden by per device rule. 
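
As a purely illustrative aside, a management daemon could set the per-cgroup
weight described above programmatically rather than with echo. The mount point
(/cgroup/blkio), the group name (test1) and the weight value (500) are
assumptions for this sketch only.

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/cgroup/blkio/test1/blkio.weight", "w");

	if (!f) {
		perror("blkio.weight");
		return 1;
	}
	fprintf(f, "500\n");	/* higher weight => larger share of disk time */
	return fclose(f) ? 1 : 0;
}
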
@@ -210,6 +249,67 @@ Details of cgroup files and minor number of the device and third field specifies the number of times a group was dequeued from a particular device. +Throttling/Upper limit policy files +----------------------------------- +- blkio.throttle.read_bps_device + - Specifies upper limit on READ rate from the device. IO rate is + specified in bytes per second. Rules are per deivce. Following is + the format. + + echo "<major>:<minor> <rate_bytes_per_second>" > /cgrp/blkio.read_bps_device + +- blkio.throttle.write_bps_device + - Specifies upper limit on WRITE rate to the device. IO rate is + specified in bytes per second. Rules are per deivce. Following is + the format. + + echo "<major>:<minor> <rate_bytes_per_second>" > /cgrp/blkio.write_bps_device + +- blkio.throttle.read_iops_device + - Specifies upper limit on READ rate from the device. IO rate is + specified in IO per second. Rules are per deivce. Following is + the format. + + echo "<major>:<minor> <rate_io_per_second>" > /cgrp/blkio.read_iops_device + +- blkio.throttle.write_iops_device + - Specifies upper limit on WRITE rate to the device. IO rate is + specified in io per second. Rules are per deivce. Following is + the format. + + echo "<major>:<minor> <rate_io_per_second>" > /cgrp/blkio.write_iops_device + +Note: If both BW and IOPS rules are specified for a device, then IO is + subjectd to both the constraints. + +- blkio.throttle.io_serviced + - Number of IOs (bio) completed to/from the disk by the group (as + seen by throttling policy). These are further divided by the type + of operation - read or write, sync or async. First two fields specify + the major and minor number of the device, third field specifies the + operation type and the fourth field specifies the number of IOs. + + blkio.io_serviced does accounting as seen by CFQ and counts are in + number of requests (struct request). On the other hand, + blkio.throttle.io_serviced counts number of IO in terms of number + of bios as seen by throttling policy. These bios can later be + merged by elevator and total number of requests completed can be + lesser. + +- blkio.throttle.io_service_bytes + - Number of bytes transferred to/from the disk by the group. These + are further divided by the type of operation - read or write, sync + or async. First two fields specify the major and minor number of the + device, third field specifies the operation type and the fourth field + specifies the number of bytes. + + These numbers should roughly be same as blkio.io_service_bytes as + updated by CFQ. The difference between two is that + blkio.io_service_bytes will not be updated if CFQ is not operating + on request queue. + +Common files among various policies +----------------------------------- - blkio.reset_stats - Writing an int to this file will result in resetting all the stats for that cgroup. @@ -217,6 +317,7 @@ Details of cgroup files CFQ sysfs tunable ================= /sys/block/<disk>/queue/iosched/group_isolation +----------------------------------------------- If group_isolation=1, it provides stronger isolation between groups at the expense of throughput. By default group_isolation is 0. In general that @@ -243,6 +344,33 @@ By default one should run with group_isolation=0. If that is not sufficient and one wants stronger isolation between groups, then set group_isolation=1 but this will come at cost of reduced throughput. 
+/sys/block/<disk>/queue/iosched/slice_idle +------------------------------------------ +On a faster hardware CFQ can be slow, especially with sequential workload. +This happens because CFQ idles on a single queue and single queue might not +drive deeper request queue depths to keep the storage busy. In such scenarios +one can try setting slice_idle=0 and that would switch CFQ to IOPS +(IO operations per second) mode on NCQ supporting hardware. + +That means CFQ will not idle between cfq queues of a cfq group and hence be +able to driver higher queue depth and achieve better throughput. That also +means that cfq provides fairness among groups in terms of IOPS and not in +terms of disk time. + +/sys/block/<disk>/queue/iosched/group_idle +------------------------------------------ +If one disables idling on individual cfq queues and cfq service trees by +setting slice_idle=0, group_idle kicks in. That means CFQ will still idle +on the group in an attempt to provide fairness among groups. + +By default group_idle is same as slice_idle and does not do anything if +slice_idle is enabled. + +One can experience an overall throughput drop if you have created multiple +groups and put applications in that group which are not driving enough +IO to keep disk busy. In that case set group_idle=0, and CFQ will not idle +on individual groups and throughput should improve. + What works ========== - Currently only sync IO queues are support. All the buffered writes are diff --git a/Documentation/cputopology.txt b/Documentation/cputopology.txt index f1c5c4bccd3e..902d3151f527 100644 --- a/Documentation/cputopology.txt +++ b/Documentation/cputopology.txt @@ -14,25 +14,39 @@ to /proc/cpuinfo. identifier (rather than the kernel's). The actual value is architecture and platform dependent. -3) /sys/devices/system/cpu/cpuX/topology/thread_siblings: +3) /sys/devices/system/cpu/cpuX/topology/book_id: + + the book ID of cpuX. Typically it is the hardware platform's + identifier (rather than the kernel's). The actual value is + architecture and platform dependent. + +4) /sys/devices/system/cpu/cpuX/topology/thread_siblings: internel kernel map of cpuX's hardware threads within the same core as cpuX -4) /sys/devices/system/cpu/cpuX/topology/core_siblings: +5) /sys/devices/system/cpu/cpuX/topology/core_siblings: internal kernel map of cpuX's hardware threads within the same physical_package_id. +6) /sys/devices/system/cpu/cpuX/topology/book_siblings: + + internal kernel map of cpuX's hardware threads within the same + book_id. + To implement it in an architecture-neutral way, a new source file, -drivers/base/topology.c, is to export the 4 attributes. +drivers/base/topology.c, is to export the 4 or 6 attributes. The two book +related sysfs files will only be created if CONFIG_SCHED_BOOK is selected. For an architecture to support this feature, it must define some of these macros in include/asm-XXX/topology.h: #define topology_physical_package_id(cpu) #define topology_core_id(cpu) +#define topology_book_id(cpu) #define topology_thread_cpumask(cpu) #define topology_core_cpumask(cpu) +#define topology_book_cpumask(cpu) The type of **_id is int. The type of siblings is (const) struct cpumask *. @@ -45,6 +59,9 @@ not defined by include/asm-XXX/topology.h: 3) thread_siblings: just the given CPU 4) core_siblings: just the given CPU +For architectures that don't support books (CONFIG_SCHED_BOOK) there are no +default definitions for topology_book_id() and topology_book_cpumask(). 
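
For illustration only, the optional book-level macros mentioned above might be
defined along these lines in an architecture's include/asm-XXX/topology.h. The
cpu_topology[] array and its field names are invented for this sketch; the
actual definitions on an architecture selecting CONFIG_SCHED_BOOK may differ.

#define topology_book_id(cpu)		(cpu_topology[cpu].book_id)
#define topology_book_cpumask(cpu)	(&cpu_topology[cpu].book_mask)
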
+ Additionally, CPU topology information is provided under /sys/devices/system/cpu and includes these files. The internal source for the output is in brackets ("[]"). diff --git a/Documentation/devices.txt b/Documentation/devices.txt index d0d1df6cb5de..c58abf1ccc71 100644 --- a/Documentation/devices.txt +++ b/Documentation/devices.txt @@ -239,6 +239,7 @@ Your cooperation is appreciated. 0 = /dev/tty Current TTY device 1 = /dev/console System console 2 = /dev/ptmx PTY master multiplex + 3 = /dev/ttyprintk User messages via printk TTY device 64 = /dev/cua0 Callout device for ttyS0 ... 255 = /dev/cua191 Callout device for ttyS191 @@ -2553,7 +2554,10 @@ Your cooperation is appreciated. 175 = /dev/usb/legousbtower15 16th USB Legotower device 176 = /dev/usb/usbtmc1 First USB TMC device ... - 192 = /dev/usb/usbtmc16 16th USB TMC device + 191 = /dev/usb/usbtmc16 16th USB TMC device + 192 = /dev/usb/yurex1 First USB Yurex device + ... + 209 = /dev/usb/yurex16 16th USB Yurex device 240 = /dev/usb/dabusb0 First daubusb device ... 243 = /dev/usb/dabusb3 Fourth dabusb device diff --git a/Documentation/dynamic-debug-howto.txt b/Documentation/dynamic-debug-howto.txt index 674c5663d346..58ea64a96165 100644 --- a/Documentation/dynamic-debug-howto.txt +++ b/Documentation/dynamic-debug-howto.txt @@ -24,7 +24,7 @@ Dynamic debug has even more useful features: read to display the complete list of known debug statements, to help guide you Controlling dynamic debug Behaviour -=============================== +=================================== The behaviour of pr_debug()/dev_debug()s are controlled via writing to a control file in the 'debugfs' filesystem. Thus, you must first mount the debugfs @@ -212,6 +212,26 @@ Note the regexp ^[-+=][scp]+$ matches a flags specification. Note also that there is no convenient syntax to remove all the flags at once, you need to use "-psc". + +Debug messages during boot process +================================== + +To be able to activate debug messages during the boot process, +even before userspace and debugfs exists, use the boot parameter: +ddebug_query="QUERY" + +QUERY follows the syntax described above, but must not exceed 1023 +characters. The enablement of debug messages is done as an arch_initcall. +Thus you can enable debug messages in all code processed after this +arch_initcall via this boot parameter. +On an x86 system for example ACPI enablement is a subsys_initcall and +ddebug_query="file ec.c +p" +will show early Embedded Controller transactions during ACPI setup if +your machine (typically a laptop) has an Embedded Controller. +PCI (or other devices) initialization also is a hot candidate for using +this boot parameter for debugging purposes. + + Examples ======== diff --git a/Documentation/feature-removal-schedule.txt b/Documentation/feature-removal-schedule.txt index 842aa9de84a6..e833c8c81e69 100644 --- a/Documentation/feature-removal-schedule.txt +++ b/Documentation/feature-removal-schedule.txt @@ -386,34 +386,6 @@ Who: Tejun Heo <tj@kernel.org> ---------------------------- -What: Support for VMware's guest paravirtuliazation technique [VMI] will be - dropped. -When: 2.6.37 or earlier. -Why: With the recent innovations in CPU hardware acceleration technologies - from Intel and AMD, VMware ran a few experiments to compare these - techniques to guest paravirtualization technique on VMware's platform. - These hardware assisted virtualization techniques have outperformed the - performance benefits provided by VMI in most of the workloads. 
VMware - expects that these hardware features will be ubiquitous in a couple of - years, as a result, VMware has started a phased retirement of this - feature from the hypervisor. We will be removing this feature from the - Kernel too. Right now we are targeting 2.6.37 but can retire earlier if - technical reasons (read opportunity to remove major chunk of pvops) - arise. - - Please note that VMI has always been an optimization and non-VMI kernels - still work fine on VMware's platform. - Latest versions of VMware's product which support VMI are, - Workstation 7.0 and VSphere 4.0 on ESX side, future maintainence - releases for these products will continue supporting VMI. - - For more details about VMI retirement take a look at this, - http://blogs.vmware.com/guestosguide/2009/09/vmi-retirement.html - -Who: Alok N Kataria <akataria@vmware.com> - ----------------------------- - What: Support for lcd_switch and display_get in asus-laptop driver When: March 2010 Why: These two features use non-standard interfaces. There are the @@ -530,16 +502,6 @@ Who: Thomas Gleixner <tglx@linutronix.de> ---------------------------- -What: old ieee1394 subsystem (CONFIG_IEEE1394) -When: 2.6.37 -Files: drivers/ieee1394/ except init_ohci1394_dma.c -Why: superseded by drivers/firewire/ (CONFIG_FIREWIRE) which offers more - features, better performance, and better security, all with smaller - and more modern code base -Who: Stefan Richter <stefanr@s5r6.in-berlin.de> - ----------------------------- - What: The acpi_sleep=s4_nonvs command line option When: 2.6.37 Files: arch/x86/kernel/acpi/sleep.c @@ -564,3 +526,12 @@ Who: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> ---------------------------- +What: iwlwifi disable_hw_scan module parameters +When: 2.6.40 +Why: Hareware scan is the prefer method for iwlwifi devices for + scanning operation. Remove software scan support for all the + iwlwifi devices. + +Who: Wey-Yi Guy <wey-yi.w.guy@intel.com> + +---------------------------- diff --git a/Documentation/filesystems/ocfs2.txt b/Documentation/filesystems/ocfs2.txt index 1f7ae144f6d8..5393e6611691 100644 --- a/Documentation/filesystems/ocfs2.txt +++ b/Documentation/filesystems/ocfs2.txt @@ -87,3 +87,10 @@ dir_resv_level= (*) By default, directory reservations will scale with file reservations - users should rarely need to change this value. If allocation reservations are turned off, this option will have no effect. +coherency=full (*) Disallow concurrent O_DIRECT writes, cluster inode + lock will be taken to force other nodes drop cache, + therefore full cluster coherency is guaranteed even + for O_DIRECT writes. +coherency=buffered Allow concurrent O_DIRECT writes without EX lock among + nodes, which gains high performance at risk of getting + stale data on other nodes. diff --git a/Documentation/gpio.txt b/Documentation/gpio.txt index d96a6dba5748..9633da01ff46 100644 --- a/Documentation/gpio.txt +++ b/Documentation/gpio.txt @@ -109,17 +109,19 @@ use numbers 2000-2063 to identify GPIOs in a bank of I2C GPIO expanders. If you want to initialize a structure with an invalid GPIO number, use some negative number (perhaps "-EINVAL"); that will never be valid. To -test if a number could reference a GPIO, you may use this predicate: +test if such number from such a structure could reference a GPIO, you +may use this predicate: int gpio_is_valid(int number); A number that's not valid will be rejected by calls which may request or free GPIOs (see below). 
Other numbers may also be rejected; for -example, a number might be valid but unused on a given board. - -Whether a platform supports multiple GPIO controllers is currently a -platform-specific implementation issue. +example, a number might be valid but temporarily unused on a given board. +Whether a platform supports multiple GPIO controllers is a platform-specific +implementation issue, as are whether that support can leave "holes" in the space +of GPIO numbers, and whether new controllers can be added at runtime. Such issues +can affect things including whether adjacent GPIO numbers are both valid. Using GPIOs ----------- @@ -480,12 +482,16 @@ To support this framework, a platform's Kconfig will "select" either ARCH_REQUIRE_GPIOLIB or ARCH_WANT_OPTIONAL_GPIOLIB and arrange that its <asm/gpio.h> includes <asm-generic/gpio.h> and defines three functions: gpio_get_value(), gpio_set_value(), and gpio_cansleep(). -They may also want to provide a custom value for ARCH_NR_GPIOS. -ARCH_REQUIRE_GPIOLIB means that the gpio-lib code will always get compiled +It may also provide a custom value for ARCH_NR_GPIOS, so that it better +reflects the number of GPIOs in actual use on that platform, without +wasting static table space. (It should count both built-in/SoC GPIOs and +also ones on GPIO expanders. + +ARCH_REQUIRE_GPIOLIB means that the gpiolib code will always get compiled into the kernel on that architecture. -ARCH_WANT_OPTIONAL_GPIOLIB means the gpio-lib code defaults to off and the user +ARCH_WANT_OPTIONAL_GPIOLIB means the gpiolib code defaults to off and the user can enable it and build it into the kernel optionally. If neither of these options are selected, the platform does not support diff --git a/Documentation/hwmon/sysfs-interface b/Documentation/hwmon/sysfs-interface index ff45d1f837c8..48ceabedf55d 100644 --- a/Documentation/hwmon/sysfs-interface +++ b/Documentation/hwmon/sysfs-interface @@ -91,12 +91,11 @@ name The chip name. I2C devices get this attribute created automatically. RO -update_rate The rate at which the chip will update readings. +update_interval The interval at which the chip will update readings. Unit: millisecond RW - Some devices have a variable update rate. This attribute - can be used to change the update rate to the desired - frequency. + Some devices have a variable update rate or interval. + This attribute can be used to change it to the desired value. ************ diff --git a/Documentation/input/ntrig.txt b/Documentation/input/ntrig.txt new file mode 100644 index 000000000000..be1fd981f73f --- /dev/null +++ b/Documentation/input/ntrig.txt @@ -0,0 +1,126 @@ +N-Trig touchscreen Driver +------------------------- + Copyright (c) 2008-2010 Rafi Rubin <rafi@seas.upenn.edu> + Copyright (c) 2009-2010 Stephane Chatty + +This driver provides support for N-Trig pen and multi-touch sensors. Single +and multi-touch events are translated to the appropriate protocols for +the hid and input systems. Pen events are sufficiently hid compliant and +are left to the hid core. The driver also provides additional filtering +and utility functions accessible with sysfs and module parameters. + +This driver has been reported to work properly with multiple N-Trig devices +attached. + + +Parameters +---------- + +Note: values set at load time are global and will apply to all applicable +devices. Adjusting parameters with sysfs will override the load time values, +but only for that one device. 
+
+The following parameters are used to configure filters to reduce noise:
+
+activate_slack		number of fingers to ignore before processing events
+
+activation_height	size threshold to activate immediately
+activation_width
+
+min_height		size threshold below which fingers are ignored
+min_width		both to decide activation and during activity
+
+deactivate_slack	the number of "no contact" frames to ignore before
+			propagating the end of activity events
+
+When the last finger is removed from the device, it sends a number of empty
+frames. By holding off on deactivation for a few frames we can tolerate
+erroneous disconnects, where the sensor may mistakenly not detect a finger that
+is still present. Thus deactivate_slack addresses problems where a user might
+see breaks in lines during drawing, or drop an object during a long drag.
+
+
+Additional sysfs items
+----------------------
+
+These nodes just provide easy access to the ranges reported by the device.
+sensor_logical_height	the range for positions reported during activity
+sensor_logical_width
+
+sensor_physical_height	internal ranges not used for normal events but
+sensor_physical_width	useful for tuning
+
+All N-Trig devices with product id of 1 report events in the ranges of
+X: 0-9600
+Y: 0-7200
+However not all of these devices have the same physical dimensions. Most
+seem to be 12" sensors (Dell Latitude XT and XT2 and the HP TX2), and
+at least one model (Dell Studio 17) has a 17" sensor. The ratio of physical
+to logical sizes is used to adjust the size-based filter parameters.
+
+
+Filtering
+---------
+
+With the release of the early multi-touch firmwares it became increasingly
+obvious that these sensors were prone to erroneous events. Users reported
+seeing both inappropriately dropped contact and ghosts, contacts reported
+where no finger was actually touching the screen.
+
+Deactivation slack helps prevent dropped contact for single touch use, but does
+not address the problem of dropping one or more contacts while other contacts
+are still active. Drops in the multi-touch context require additional
+processing and should be handled in tandem with tracking.
+
+As observed, ghost contacts are similar to actual use of the sensor, but they
+seem to have different profiles. Ghost activity typically shows up as small
+short-lived touches. As such, I assume that the longer the continuous stream
+of events the more likely those events are from a real contact, and that the
+larger the size of each contact the more likely it is real. Balancing the
+goals of preventing ghosts and accepting real events quickly (to minimize
+user-observable latency), the filter accumulates confidence for incoming
+events until it hits thresholds and begins propagating. In the interest of
+minimizing stored state as well as the cost of operations to make a decision,
+I've kept that decision simple.
+
+Time is measured in terms of the number of fingers reported, not frames, since
+the probability of multiple simultaneous ghosts is expected to drop off
+dramatically with increasing numbers. Rather than accumulate weight as a
+function of size, I just use it as a binary threshold. A sufficiently large
+contact immediately overrides the waiting period and leads to activation.
+
+Setting the activation size thresholds to large values will result in deciding
+primarily on activation slack.
If you see longer lived ghosts, turning up the +activation slack while reducing the size thresholds may suffice to eliminate +the ghosts while keeping the screen quite responsive to firm taps. + +Contacts continue to be filtered with min_height and min_width even after +the initial activation filter is satisfied. The intent is to provide +a mechanism for filtering out ghosts in the form of an extra finger while +you actually are using the screen. In practice this sort of ghost has +been far less problematic or relatively rare and I've left the defaults +set to 0 for both parameters, effectively turning off that filter. + +I don't know what the optimal values are for these filters. If the defaults +don't work for you, please play with the parameters. If you do find other +values more comfortable, I would appreciate feedback. + +The calibration of these devices does drift over time. If ghosts or contact +dropping worsen and interfere with the normal usage of your device, try +recalibrating it. + + +Calibration +----------- + +The N-Trig windows tools provide calibration and testing routines. Also an +unofficial unsupported set of user space tools including a calibrator is +available at: +http://code.launchpad.net/~rafi-seas/+junk/ntrig_calib + + +Tracking +-------- + +As of yet, all tested N-Trig firmwares do not track fingers. When multiple +contacts are active they seem to be sorted primarily by Y position. diff --git a/Documentation/kernel-doc-nano-HOWTO.txt b/Documentation/kernel-doc-nano-HOWTO.txt index 27a52b35d55b..3d8a97747f77 100644 --- a/Documentation/kernel-doc-nano-HOWTO.txt +++ b/Documentation/kernel-doc-nano-HOWTO.txt @@ -345,5 +345,10 @@ documentation, in <filename>, for the functions listed. section titled <section title> from <filename>. Spaces are allowed in <section title>; do not quote the <section title>. +!C<filename> is replaced by nothing, but makes the tools check that +all DOC: sections and documented functions, symbols, etc. are used. +This makes sense to use when you use !F/!P only and want to verify +that all documentation is included. + Tim. */ <twaugh@redhat.com> diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt index 0fe70b2c4194..4bc2f3c3da5b 100644 --- a/Documentation/kernel-parameters.txt +++ b/Documentation/kernel-parameters.txt @@ -43,10 +43,11 @@ parameter is applicable: AVR32 AVR32 architecture is enabled. AX25 Appropriate AX.25 support is enabled. BLACKFIN Blackfin architecture is enabled. - DRM Direct Rendering Management support is enabled. EDD BIOS Enhanced Disk Drive Services (EDD) is enabled EFI EFI Partitioning (GPT) is enabled EIDE EIDE/ATAPI support is enabled. + DRM Direct Rendering Management support is enabled. + DYNAMIC_DEBUG Build in debug messages and enable them at runtime FB The frame buffer device is enabled. GCOV GCOV profiling is enabled. HW Appropriate hardware is enabled. @@ -455,7 +456,7 @@ and is between 256 and 4096 characters. It is defined in the file [ARM] imx_timer1,OSTS,netx_timer,mpu_timer2, pxa_timer,timer3,32k_counter,timer0_1 [AVR32] avr32 - [X86-32] pit,hpet,tsc,vmi-timer; + [X86-32] pit,hpet,tsc; scx200_hrt on Geode; cyclone on IBM x440 [MIPS] MIPS [PARISC] cr16 @@ -570,6 +571,10 @@ and is between 256 and 4096 characters. It is defined in the file Format: <port#>,<type> See also Documentation/input/joystick-parport.txt + ddebug_query= [KNL,DYNAMIC_DEBUG] Enable debug messages at early boot + time. See Documentation/dynamic-debug-howto.txt for + details. 
+ debug [KNL] Enable kernel debugging (events log level). debug_locks_verbose= @@ -1126,9 +1131,13 @@ and is between 256 and 4096 characters. It is defined in the file kvm.oos_shadow= [KVM] Disable out-of-sync shadow paging. Default is 1 (enabled) - kvm-amd.nested= [KVM,AMD] Allow nested virtualization in KVM/SVM. + kvm.mmu_audit= [KVM] This is a R/W parameter which allows audit + KVM MMU at runtime. Default is 0 (off) + kvm-amd.nested= [KVM,AMD] Allow nested virtualization in KVM/SVM. + Default is 1 (enabled) + kvm-amd.npt= [KVM,AMD] Disable nested paging (virtualized MMU) for all guests. Default is 1 (enabled) if in 64bit or 32bit-PAE mode @@ -1696,6 +1705,8 @@ and is between 256 and 4096 characters. It is defined in the file nojitter [IA64] Disables jitter checking for ITC timers. + no-kvmclock [X86,KVM] Disable paravirtualized KVM clock driver + nolapic [X86-32,APIC] Do not enable or use the local APIC. nolapic_timer [X86-32,APIC] Do not use the local APIC timer. @@ -1716,7 +1727,7 @@ and is between 256 and 4096 characters. It is defined in the file norandmaps Don't use address space randomization. Equivalent to echo 0 > /proc/sys/kernel/randomize_va_space - noreplace-paravirt [X86-32,PV_OPS] Don't patch paravirt_ops + noreplace-paravirt [X86,IA-64,PV_OPS] Don't patch paravirt_ops noreplace-smp [X86-32,SMP] Don't replace SMP instructions with UP alternatives @@ -1977,15 +1988,18 @@ and is between 256 and 4096 characters. It is defined in the file force Enable ASPM even on devices that claim not to support it. WARNING: Forcing ASPM on may cause system lockups. + pcie_ports= [PCIE] PCIe ports handling: + auto Ask the BIOS whether or not to use native PCIe services + associated with PCIe ports (PME, hot-plug, AER). Use + them only if that is allowed by the BIOS. + native Use native PCIe services associated with PCIe ports + unconditionally. + compat Treat PCIe ports as PCI-to-PCI bridges, disable the PCIe + ports driver. + pcie_pme= [PCIE,PM] Native PCIe PME signaling options: - Format: {auto|force}[,nomsi] - auto Use native PCIe PME signaling if the BIOS allows the - kernel to control PCIe config registers of root ports. - force Use native PCIe PME signaling even if the BIOS refuses - to allow the kernel to control the relevant PCIe config - registers. nomsi Do not use MSI for native PCIe PME signaling (this makes - all PCIe root ports use INTx for everything). + all PCIe root ports use INTx for all services). pcmv= [HW,PCMCIA] BadgePAD 4 @@ -2153,6 +2167,11 @@ and is between 256 and 4096 characters. It is defined in the file Reserves a hole at the top of the kernel virtual address space. + reservelow= [X86] + Format: nn[K] + Set the amount of memory to reserve for BIOS at + the bottom of the address space. + reset_devices [KNL] Force drivers to reset the underlying device during initialization. @@ -2165,6 +2184,11 @@ and is between 256 and 4096 characters. It is defined in the file in <PAGE_SIZE> units (needed only for swap files). See Documentation/power/swsusp-and-swap-files.txt + hibernate= [HIBERNATION] + noresume Don't check if there's a hibernation image + present during boot. + nocompress Don't compress/decompress hibernation images. + retain_initrd [RAM] Keep initrd memory after extraction rhash_entries= [KNL,NET] @@ -2360,6 +2384,15 @@ and is between 256 and 4096 characters. It is defined in the file switches= [HW,M68k] + sysfs.deprecated=0|1 [KNL] + Enable/disable old style sysfs layout for old udev + on older distributions. 
When this option is enabled + very new udev will not work anymore. When this option + is disabled (or CONFIG_SYSFS_DEPRECATED not compiled) + in older udev will not work anymore. + Default depends on CONFIG_SYSFS_DEPRECATED_V2 set in + the kernel configuration. + sysrq_always_enabled [KNL] Ignore sysrq setting - this boot parameter will @@ -2408,7 +2441,7 @@ and is between 256 and 4096 characters. It is defined in the file topology informations if the hardware supports these. The scheduler will make use of these informations and e.g. base its process migration decisions on it. - Default is off. + Default is on. tp720= [HW,PS2] @@ -2435,6 +2468,10 @@ and is between 256 and 4096 characters. It is defined in the file disables clocksource verification at runtime. Used to enable high-resolution timer mode on older hardware, and in virtualized environment. + [x86] noirqtime: Do not use TSC to do irq accounting. + Used to run time disable IRQ_TIME_ACCOUNTING on any + platforms where RDTSC is slow and this accounting + can add overhead. turbografx.map[2|3]= [HW,JOY] TurboGraFX parallel port interface diff --git a/Documentation/kprobes.txt b/Documentation/kprobes.txt index 1762b81fcdf2..741fe66d6eca 100644 --- a/Documentation/kprobes.txt +++ b/Documentation/kprobes.txt @@ -542,9 +542,11 @@ Kprobes does not use mutexes or allocate memory except during registration and unregistration. Probe handlers are run with preemption disabled. Depending on the -architecture, handlers may also run with interrupts disabled. In any -case, your handler should not yield the CPU (e.g., by attempting to -acquire a semaphore). +architecture and optimization state, handlers may also run with +interrupts disabled (e.g., kretprobe handlers and optimized kprobe +handlers run without interrupt disabled on x86/x86-64). In any case, +your handler should not yield the CPU (e.g., by attempting to acquire +a semaphore). Since a return probe is implemented by replacing the return address with the trampoline's address, stack backtraces and calls diff --git a/Documentation/kvm/api.txt b/Documentation/kvm/api.txt index 5f5b64982b1a..b336266bea5e 100644 --- a/Documentation/kvm/api.txt +++ b/Documentation/kvm/api.txt @@ -320,13 +320,13 @@ struct kvm_translation { 4.15 KVM_INTERRUPT Capability: basic -Architectures: x86 +Architectures: x86, ppc Type: vcpu ioctl Parameters: struct kvm_interrupt (in) Returns: 0 on success, -1 on error Queues a hardware interrupt vector to be injected. This is only -useful if in-kernel local APIC is not used. +useful if in-kernel local APIC or equivalent is not used. /* for KVM_INTERRUPT */ struct kvm_interrupt { @@ -334,8 +334,37 @@ struct kvm_interrupt { __u32 irq; }; +X86: + Note 'irq' is an interrupt vector, not an interrupt pin or line. +PPC: + +Queues an external interrupt to be injected. This ioctl is overleaded +with 3 different irq values: + +a) KVM_INTERRUPT_SET + + This injects an edge type external interrupt into the guest once it's ready + to receive interrupts. When injected, the interrupt is done. + +b) KVM_INTERRUPT_UNSET + + This unsets any pending interrupt. + + Only available with KVM_CAP_PPC_UNSET_IRQ. + +c) KVM_INTERRUPT_SET_LEVEL + + This injects a level type external interrupt into the guest context. The + interrupt stays pending until a specific ioctl with KVM_INTERRUPT_UNSET + is triggered. + + Only available with KVM_CAP_PPC_IRQ_LEVEL. + +Note that any value for 'irq' other than the ones stated above is invalid +and incurs unexpected behavior. 
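
As an illustration, here is a minimal user-space sketch (not taken from the
KVM sources) of the PPC usage described above. It assumes an already-created
vcpu file descriptor and that KVM_CAP_PPC_UNSET_IRQ has been checked with
KVM_CHECK_EXTENSION; error handling is reduced to the bare minimum:

#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <stdio.h>

/* Inject an edge type external interrupt into the guest. */
static int ppc_set_external_irq(int vcpu_fd)
{
	struct kvm_interrupt irq = { .irq = KVM_INTERRUPT_SET };

	if (ioctl(vcpu_fd, KVM_INTERRUPT, &irq) < 0) {
		perror("KVM_INTERRUPT (set)");
		return -1;
	}
	return 0;
}

/* Clear any pending interrupt; requires KVM_CAP_PPC_UNSET_IRQ. */
static int ppc_clear_pending_irq(int vcpu_fd)
{
	struct kvm_interrupt irq = { .irq = KVM_INTERRUPT_UNSET };

	return ioctl(vcpu_fd, KVM_INTERRUPT, &irq);
}
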
+ 4.16 KVM_DEBUG_GUEST Capability: basic @@ -1013,8 +1042,9 @@ number is just right, the 'nent' field is adjusted to the number of valid entries in the 'entries' array, which is then filled. The entries returned are the host cpuid as returned by the cpuid instruction, -with unknown or unsupported features masked out. The fields in each entry -are defined as follows: +with unknown or unsupported features masked out. Some features (for example, +x2apic), may not be present in the host cpu, but are exposed by kvm if it can +emulate them efficiently. The fields in each entry are defined as follows: function: the eax value used to obtain the entry index: the ecx value used to obtain the entry (for entries that are @@ -1032,6 +1062,29 @@ are defined as follows: eax, ebx, ecx, edx: the values returned by the cpuid instruction for this function/index combination +4.46 KVM_PPC_GET_PVINFO + +Capability: KVM_CAP_PPC_GET_PVINFO +Architectures: ppc +Type: vm ioctl +Parameters: struct kvm_ppc_pvinfo (out) +Returns: 0 on success, !0 on error + +struct kvm_ppc_pvinfo { + __u32 flags; + __u32 hcall[4]; + __u8 pad[108]; +}; + +This ioctl fetches PV specific information that need to be passed to the guest +using the device tree or other means from vm context. + +For now the only implemented piece of information distributed here is an array +of 4 instructions that make up a hypercall. + +If any additional field gets added to this structure later on, a bit for that +additional piece of information will be set in the flags bitmap. + 5. The kvm_run structure Application code obtains a pointer to the kvm_run structure by diff --git a/Documentation/kvm/ppc-pv.txt b/Documentation/kvm/ppc-pv.txt new file mode 100644 index 000000000000..a7f2244b3be9 --- /dev/null +++ b/Documentation/kvm/ppc-pv.txt @@ -0,0 +1,196 @@ +The PPC KVM paravirtual interface +================================= + +The basic execution principle by which KVM on PowerPC works is to run all kernel +space code in PR=1 which is user space. This way we trap all privileged +instructions and can emulate them accordingly. + +Unfortunately that is also the downfall. There are quite some privileged +instructions that needlessly return us to the hypervisor even though they +could be handled differently. + +This is what the PPC PV interface helps with. It takes privileged instructions +and transforms them into unprivileged ones with some help from the hypervisor. +This cuts down virtualization costs by about 50% on some of my benchmarks. + +The code for that interface can be found in arch/powerpc/kernel/kvm* + +Querying for existence +====================== + +To find out if we're running on KVM or not, we leverage the device tree. When +Linux is running on KVM, a node /hypervisor exists. That node contains a +compatible property with the value "linux,kvm". + +Once you determined you're running under a PV capable KVM, you can now use +hypercalls as described below. + +KVM hypercalls +============== + +Inside the device tree's /hypervisor node there's a property called +'hypercall-instructions'. This property contains at most 4 opcodes that make +up the hypercall. To call a hypercall, just call these instructions. 
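
The detection and fetch steps can also be sketched from user space (an
illustrative example, not code from the kernel tree; it assumes the flattened
device tree is exposed under /proc/device-tree). Actually invoking the
fetched opcodes must follow the register convention given in the table below:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
	char compat[64] = { 0 };
	uint32_t insns[4] = { 0 };
	FILE *f;

	/* The /hypervisor node only exists when running on KVM. */
	f = fopen("/proc/device-tree/hypervisor/compatible", "r");
	if (!f)
		return 1;
	if (fread(compat, 1, sizeof(compat) - 1, f) == 0)
		compat[0] = '\0';
	fclose(f);
	if (strcmp(compat, "linux,kvm") != 0)
		return 1;

	/* At most four opcodes make up the hypercall sequence. */
	f = fopen("/proc/device-tree/hypervisor/hypercall-instructions", "r");
	if (!f)
		return 1;
	if (fread(insns, sizeof(insns[0]), 4, f) == 0) {
		fclose(f);
		return 1;
	}
	fclose(f);

	printf("PV-capable KVM detected, first hypercall opcode: 0x%08x\n",
	       insns[0]);
	return 0;
}
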
+ +The parameters are as follows: + + Register IN OUT + + r0 - volatile + r3 1st parameter Return code + r4 2nd parameter 1st output value + r5 3rd parameter 2nd output value + r6 4th parameter 3rd output value + r7 5th parameter 4th output value + r8 6th parameter 5th output value + r9 7th parameter 6th output value + r10 8th parameter 7th output value + r11 hypercall number 8th output value + r12 - volatile + +Hypercall definitions are shared in generic code, so the same hypercall numbers +apply for x86 and powerpc alike with the exception that each KVM hypercall +also needs to be ORed with the KVM vendor code which is (42 << 16). + +Return codes can be as follows: + + Code Meaning + + 0 Success + 12 Hypercall not implemented + <0 Error + +The magic page +============== + +To enable communication between the hypervisor and guest there is a new shared +page that contains parts of supervisor visible register state. The guest can +map this shared page using the KVM hypercall KVM_HC_PPC_MAP_MAGIC_PAGE. + +With this hypercall issued the guest always gets the magic page mapped at the +desired location in effective and physical address space. For now, we always +map the page to -4096. This way we can access it using absolute load and store +functions. The following instruction reads the first field of the magic page: + + ld rX, -4096(0) + +The interface is designed to be extensible should there be need later to add +additional registers to the magic page. If you add fields to the magic page, +also define a new hypercall feature to indicate that the host can give you more +registers. Only if the host supports the additional features, make use of them. + +The magic page has the following layout as described in +arch/powerpc/include/asm/kvm_para.h: + +struct kvm_vcpu_arch_shared { + __u64 scratch1; + __u64 scratch2; + __u64 scratch3; + __u64 critical; /* Guest may not get interrupts if == r1 */ + __u64 sprg0; + __u64 sprg1; + __u64 sprg2; + __u64 sprg3; + __u64 srr0; + __u64 srr1; + __u64 dar; + __u64 msr; + __u32 dsisr; + __u32 int_pending; /* Tells the guest if we have an interrupt */ +}; + +Additions to the page must only occur at the end. Struct fields are always 32 +or 64 bit aligned, depending on them being 32 or 64 bit wide respectively. + +Magic page features +=================== + +When mapping the magic page using the KVM hypercall KVM_HC_PPC_MAP_MAGIC_PAGE, +a second return value is passed to the guest. This second return value contains +a bitmap of available features inside the magic page. + +The following enhancements to the magic page are currently available: + + KVM_MAGIC_FEAT_SR Maps SR registers r/w in the magic page + +For enhanced features in the magic page, please check for the existence of the +feature before using them! + +MSR bits +======== + +The MSR contains bits that require hypervisor intervention and bits that do +not require direct hypervisor intervention because they only get interpreted +when entering the guest or don't have any impact on the hypervisor's behavior. + +The following bits are safe to be set inside the guest: + + MSR_EE + MSR_RI + MSR_CR + MSR_ME + +If any other bit changes in the MSR, please still use mtmsr(d). + +Patched instructions +==================== + +The "ld" and "std" instructions are transormed to "lwz" and "stw" instructions +respectively on 32 bit systems with an added offset of 4 to accomodate for big +endianness. + +The following is a list of mapping the Linux kernel performs when running as +guest. 
Implementing any of those mappings is optional, as the instruction traps +also act on the shared page. So calling privileged instructions still works as +before. + +From To +==== == + +mfmsr rX ld rX, magic_page->msr +mfsprg rX, 0 ld rX, magic_page->sprg0 +mfsprg rX, 1 ld rX, magic_page->sprg1 +mfsprg rX, 2 ld rX, magic_page->sprg2 +mfsprg rX, 3 ld rX, magic_page->sprg3 +mfsrr0 rX ld rX, magic_page->srr0 +mfsrr1 rX ld rX, magic_page->srr1 +mfdar rX ld rX, magic_page->dar +mfdsisr rX lwz rX, magic_page->dsisr + +mtmsr rX std rX, magic_page->msr +mtsprg 0, rX std rX, magic_page->sprg0 +mtsprg 1, rX std rX, magic_page->sprg1 +mtsprg 2, rX std rX, magic_page->sprg2 +mtsprg 3, rX std rX, magic_page->sprg3 +mtsrr0 rX std rX, magic_page->srr0 +mtsrr1 rX std rX, magic_page->srr1 +mtdar rX std rX, magic_page->dar +mtdsisr rX stw rX, magic_page->dsisr + +tlbsync nop + +mtmsrd rX, 0 b <special mtmsr section> +mtmsr rX b <special mtmsr section> + +mtmsrd rX, 1 b <special mtmsrd section> + +[Book3S only] +mtsrin rX, rY b <special mtsrin section> + +[BookE only] +wrteei [0|1] b <special wrteei section> + + +Some instructions require more logic to determine what's going on than a load +or store instruction can deliver. To enable patching of those, we keep some +RAM around where we can live translate instructions to. What happens is the +following: + + 1) copy emulation code to memory + 2) patch that code to fit the emulated instruction + 3) patch that code to return to the original pc + 4 + 4) patch the original instruction to branch to the new code + +That way we can inject an arbitrary amount of code as replacement for a single +instruction. This allows us to check for pending interrupts when setting EE=1 +for example. diff --git a/Documentation/kvm/timekeeping.txt b/Documentation/kvm/timekeeping.txt new file mode 100644 index 000000000000..0c5033a58c9e --- /dev/null +++ b/Documentation/kvm/timekeeping.txt @@ -0,0 +1,612 @@ + + Timekeeping Virtualization for X86-Based Architectures + + Zachary Amsden <zamsden@redhat.com> + Copyright (c) 2010, Red Hat. All rights reserved. + +1) Overview +2) Timing Devices +3) TSC Hardware +4) Virtualization Problems + +========================================================================= + +1) Overview + +One of the most complicated parts of the X86 platform, and specifically, +the virtualization of this platform is the plethora of timing devices available +and the complexity of emulating those devices. In addition, virtualization of +time introduces a new set of challenges because it introduces a multiplexed +division of time beyond the control of the guest CPU. + +First, we will describe the various timekeeping hardware available, then +present some of the problems which arise and solutions available, giving +specific recommendations for certain classes of KVM guests. + +The purpose of this document is to collect data and information relevant to +timekeeping which may be difficult to find elsewhere, specifically, +information relevant to KVM and hardware-based virtualization. + +========================================================================= + +2) Timing Devices + +First we discuss the basic hardware devices available. TSC and the related +KVM clock are special enough to warrant a full exposition and are described in +the following section. + +2.1) i8254 - PIT + +One of the first timer devices available is the programmable interrupt timer, +or PIT. 
The PIT has a fixed frequency 1.193182 MHz base clock and three +channels which can be programmed to deliver periodic or one-shot interrupts. +These three channels can be configured in different modes and have individual +counters. Channel 1 and 2 were not available for general use in the original +IBM PC, and historically were connected to control RAM refresh and the PC +speaker. Now the PIT is typically integrated as part of an emulated chipset +and a separate physical PIT is not used. + +The PIT uses I/O ports 0x40 - 0x43. Access to the 16-bit counters is done +using single or multiple byte access to the I/O ports. There are 6 modes +available, but not all modes are available to all timers, as only timer 2 +has a connected gate input, required for modes 1 and 5. The gate line is +controlled by port 61h, bit 0, as illustrated in the following diagram. + + -------------- ---------------- +| | | | +| 1.1932 MHz |---------->| CLOCK OUT | ---------> IRQ 0 +| Clock | | | | + -------------- | +->| GATE TIMER 0 | + | ---------------- + | + | ---------------- + | | | + |------>| CLOCK OUT | ---------> 66.3 KHZ DRAM + | | | (aka /dev/null) + | +->| GATE TIMER 1 | + | ---------------- + | + | ---------------- + | | | + |------>| CLOCK OUT | ---------> Port 61h, bit 5 + | | | +Port 61h, bit 0 ---------->| GATE TIMER 2 | \_.---- ____ + ---------------- _| )--|LPF|---Speaker + / *---- \___/ +Port 61h, bit 1 -----------------------------------/ + +The timer modes are now described. + +Mode 0: Single Timeout. This is a one-shot software timeout that counts down + when the gate is high (always true for timers 0 and 1). When the count + reaches zero, the output goes high. + +Mode 1: Triggered One-shot. The output is intially set high. When the gate + line is set high, a countdown is initiated (which does not stop if the gate is + lowered), during which the output is set low. When the count reaches zero, + the output goes high. + +Mode 2: Rate Generator. The output is initially set high. When the countdown + reaches 1, the output goes low for one count and then returns high. The value + is reloaded and the countdown automatically resumes. If the gate line goes + low, the count is halted. If the output is low when the gate is lowered, the + output automatically goes high (this only affects timer 2). + +Mode 3: Square Wave. This generates a high / low square wave. The count + determines the length of the pulse, which alternates between high and low + when zero is reached. The count only proceeds when gate is high and is + automatically reloaded on reaching zero. The count is decremented twice at + each clock to generate a full high / low cycle at the full periodic rate. + If the count is even, the clock remains high for N/2 counts and low for N/2 + counts; if the clock is odd, the clock is high for (N+1)/2 counts and low + for (N-1)/2 counts. Only even values are latched by the counter, so odd + values are not observed when reading. This is the intended mode for timer 2, + which generates sine-like tones by low-pass filtering the square wave output. + +Mode 4: Software Strobe. After programming this mode and loading the counter, + the output remains high until the counter reaches zero. Then the output + goes low for 1 clock cycle and returns high. The counter is not reloaded. + Counting only occurs when gate is high. + +Mode 5: Hardware Strobe. After programming and loading the counter, the + output remains high. 
When the gate is raised, a countdown is initiated + (which does not stop if the gate is lowered). When the counter reaches zero, + the output goes low for 1 clock cycle and then returns high. The counter is + not reloaded. + +In addition to normal binary counting, the PIT supports BCD counting. The +command port, 0x43 is used to set the counter and mode for each of the three +timers. + +PIT commands, issued to port 0x43, using the following bit encoding: + +Bit 7-4: Command (See table below) +Bit 3-1: Mode (000 = Mode 0, 101 = Mode 5, 11X = undefined) +Bit 0 : Binary (0) / BCD (1) + +Command table: + +0000 - Latch Timer 0 count for port 0x40 + sample and hold the count to be read in port 0x40; + additional commands ignored until counter is read; + mode bits ignored. + +0001 - Set Timer 0 LSB mode for port 0x40 + set timer to read LSB only and force MSB to zero; + mode bits set timer mode + +0010 - Set Timer 0 MSB mode for port 0x40 + set timer to read MSB only and force LSB to zero; + mode bits set timer mode + +0011 - Set Timer 0 16-bit mode for port 0x40 + set timer to read / write LSB first, then MSB; + mode bits set timer mode + +0100 - Latch Timer 1 count for port 0x41 - as described above +0101 - Set Timer 1 LSB mode for port 0x41 - as described above +0110 - Set Timer 1 MSB mode for port 0x41 - as described above +0111 - Set Timer 1 16-bit mode for port 0x41 - as described above + +1000 - Latch Timer 2 count for port 0x42 - as described above +1001 - Set Timer 2 LSB mode for port 0x42 - as described above +1010 - Set Timer 2 MSB mode for port 0x42 - as described above +1011 - Set Timer 2 16-bit mode for port 0x42 as described above + +1101 - General counter latch + Latch combination of counters into corresponding ports + Bit 3 = Counter 2 + Bit 2 = Counter 1 + Bit 1 = Counter 0 + Bit 0 = Unused + +1110 - Latch timer status + Latch combination of counter mode into corresponding ports + Bit 3 = Counter 2 + Bit 2 = Counter 1 + Bit 1 = Counter 0 + + The output of ports 0x40-0x42 following this command will be: + + Bit 7 = Output pin + Bit 6 = Count loaded (0 if timer has expired) + Bit 5-4 = Read / Write mode + 01 = MSB only + 10 = LSB only + 11 = LSB / MSB (16-bit) + Bit 3-1 = Mode + Bit 0 = Binary (0) / BCD mode (1) + +2.2) RTC + +The second device which was available in the original PC was the MC146818 real +time clock. The original device is now obsolete, and usually emulated by the +system chipset, sometimes by an HPET and some frankenstein IRQ routing. + +The RTC is accessed through CMOS variables, which uses an index register to +control which bytes are read. Since there is only one index register, read +of the CMOS and read of the RTC require lock protection (in addition, it is +dangerous to allow userspace utilities such as hwclock to have direct RTC +access, as they could corrupt kernel reads and writes of CMOS memory). + +The RTC generates an interrupt which is usually routed to IRQ 8. The interrupt +can function as a periodic timer, an additional once a day alarm, and can issue +interrupts after an update of the CMOS registers by the MC146818 is complete. +The type of interrupt is signalled in the RTC status registers. + +The RTC will update the current time fields by battery power even while the +system is off. The current time fields should not be read while an update is +in progress, as indicated in the status register. + +The clock uses a 32.768kHz crystal, so bits 6-4 of register A should be +programmed to a 32kHz divider if the RTC is to count seconds. 
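
As a concrete (and deliberately simplified) illustration of the access
pattern described above, the following x86 user-space sketch reads the
current-seconds byte through the CMOS index/data ports while honoring the
update-in-progress bit. It needs root for ioperm(), should be built with
optimization (e.g. gcc -O2) so the port helpers from <sys/io.h> are inlined,
and is no substitute for the kernel's own locked RTC access:

#include <sys/io.h>
#include <stdio.h>

static unsigned char cmos_read(unsigned char reg)
{
	outb(reg, 0x70);	/* select the CMOS register... */
	return inb(0x71);	/* ...and read its value */
}

int main(void)
{
	if (ioperm(0x70, 2, 1))		/* access to ports 0x70 and 0x71 */
		return 1;

	while (cmos_read(0x0A) & 0x80)	/* Register A, bit 7: update in progress */
		;

	/* Offset 00h: current second, normally BCD encoded. */
	printf("RTC seconds (BCD): %02x\n", cmos_read(0x00));
	return 0;
}
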
+ +This is the RAM map originally used for the RTC/CMOS: + +Location Size Description +------------------------------------------ +00h byte Current second (BCD) +01h byte Seconds alarm (BCD) +02h byte Current minute (BCD) +03h byte Minutes alarm (BCD) +04h byte Current hour (BCD) +05h byte Hours alarm (BCD) +06h byte Current day of week (BCD) +07h byte Current day of month (BCD) +08h byte Current month (BCD) +09h byte Current year (BCD) +0Ah byte Register A + bit 7 = Update in progress + bit 6-4 = Divider for clock + 000 = 4.194 MHz + 001 = 1.049 MHz + 010 = 32 kHz + 10X = test modes + 110 = reset / disable + 111 = reset / disable + bit 3-0 = Rate selection for periodic interrupt + 000 = periodic timer disabled + 001 = 3.90625 uS + 010 = 7.8125 uS + 011 = .122070 mS + 100 = .244141 mS + ... + 1101 = 125 mS + 1110 = 250 mS + 1111 = 500 mS +0Bh byte Register B + bit 7 = Run (0) / Halt (1) + bit 6 = Periodic interrupt enable + bit 5 = Alarm interrupt enable + bit 4 = Update-ended interrupt enable + bit 3 = Square wave interrupt enable + bit 2 = BCD calendar (0) / Binary (1) + bit 1 = 12-hour mode (0) / 24-hour mode (1) + bit 0 = 0 (DST off) / 1 (DST enabled) +OCh byte Register C (read only) + bit 7 = interrupt request flag (IRQF) + bit 6 = periodic interrupt flag (PF) + bit 5 = alarm interrupt flag (AF) + bit 4 = update interrupt flag (UF) + bit 3-0 = reserved +ODh byte Register D (read only) + bit 7 = RTC has power + bit 6-0 = reserved +32h byte Current century BCD (*) + (*) location vendor specific and now determined from ACPI global tables + +2.3) APIC + +On Pentium and later processors, an on-board timer is available to each CPU +as part of the Advanced Programmable Interrupt Controller. The APIC is +accessed through memory-mapped registers and provides interrupt service to each +CPU, used for IPIs and local timer interrupts. + +Although in theory the APIC is a safe and stable source for local interrupts, +in practice, many bugs and glitches have occurred due to the special nature of +the APIC CPU-local memory-mapped hardware. Beware that CPU errata may affect +the use of the APIC and that workarounds may be required. In addition, some of +these workarounds pose unique constraints for virtualization - requiring either +extra overhead incurred from extra reads of memory-mapped I/O or additional +functionality that may be more computationally expensive to implement. + +Since the APIC is documented quite well in the Intel and AMD manuals, we will +avoid repetition of the detail here. It should be pointed out that the APIC +timer is programmed through the LVT (local vector timer) register, is capable +of one-shot or periodic operation, and is based on the bus clock divided down +by the programmable divider register. + +2.4) HPET + +HPET is quite complex, and was originally intended to replace the PIT / RTC +support of the X86 PC. It remains to be seen whether that will be the case, as +the de facto standard of PC hardware is to emulate these older devices. Some +systems designated as legacy free may support only the HPET as a hardware timer +device. + +The HPET spec is rather loose and vague, requiring at least 3 hardware timers, +but allowing implementation freedom to support many more. It also imposes no +fixed rate on the timer frequency, but does impose some extremal values on +frequency, error and slew. + +In general, the HPET is recommended as a high precision (compared to PIT /RTC) +time source which is independent of local variation (as there is only one HPET +in any given system). 
The HPET is also memory-mapped, and its presence is +indicated through ACPI tables by the BIOS. + +Detailed specification of the HPET is beyond the current scope of this +document, as it is also very well documented elsewhere. + +2.5) Offboard Timers + +Several cards, both proprietary (watchdog boards) and commonplace (e1000) have +timing chips built into the cards which may have registers which are accessible +to kernel or user drivers. To the author's knowledge, using these to generate +a clocksource for a Linux or other kernel has not yet been attempted and is in +general frowned upon as not playing by the agreed rules of the game. Such a +timer device would require additional support to be virtualized properly and is +not considered important at this time as no known operating system does this. + +========================================================================= + +3) TSC Hardware + +The TSC or time stamp counter is relatively simple in theory; it counts +instruction cycles issued by the processor, which can be used as a measure of +time. In practice, due to a number of problems, it is the most complicated +timekeeping device to use. + +The TSC is represented internally as a 64-bit MSR which can be read with the +RDMSR, RDTSC, or RDTSCP (when available) instructions. In the past, hardware +limitations made it possible to write the TSC, but generally on old hardware it +was only possible to write the low 32-bits of the 64-bit counter, and the upper +32-bits of the counter were cleared. Now, however, on Intel processors family +0Fh, for models 3, 4 and 6, and family 06h, models e and f, this restriction +has been lifted and all 64-bits are writable. On AMD systems, the ability to +write the TSC MSR is not an architectural guarantee. + +The TSC is accessible from CPL-0 and conditionally, for CPL > 0 software by +means of the CR4.TSD bit, which when enabled, disables CPL > 0 TSC access. + +Some vendors have implemented an additional instruction, RDTSCP, which returns +atomically not just the TSC, but an indicator which corresponds to the +processor number. This can be used to index into an array of TSC variables to +determine offset information in SMP systems where TSCs are not synchronized. +The presence of this instruction must be determined by consulting CPUID feature +bits. + +Both VMX and SVM provide extension fields in the virtualization hardware which +allows the guest visible TSC to be offset by a constant. Newer implementations +promise to allow the TSC to additionally be scaled, but this hardware is not +yet widely available. + +3.1) TSC synchronization + +The TSC is a CPU-local clock in most implementations. This means, on SMP +platforms, the TSCs of different CPUs may start at different times depending +on when the CPUs are powered on. Generally, CPUs on the same die will share +the same clock, however, this is not always the case. + +The BIOS may attempt to resynchronize the TSCs during the poweron process and +the operating system or other system software may attempt to do this as well. +Several hardware limitations make the problem worse - if it is not possible to +write the full 64-bits of the TSC, it may be impossible to match the TSC in +newly arriving CPUs to that of the rest of the system, resulting in +unsynchronized TSCs. 
This may be done by BIOS or system software, but in +practice, getting a perfectly synchronized TSC will not be possible unless all +values are read from the same clock, which generally only is possible on single +socket systems or those with special hardware support. + +3.2) TSC and CPU hotplug + +As touched on already, CPUs which arrive later than the boot time of the system +may not have a TSC value that is synchronized with the rest of the system. +Either system software, BIOS, or SMM code may actually try to establish the TSC +to a value matching the rest of the system, but a perfect match is usually not +a guarantee. This can have the effect of bringing a system from a state where +TSC is synchronized back to a state where TSC synchronization flaws, however +small, may be exposed to the OS and any virtualization environment. + +3.3) TSC and multi-socket / NUMA + +Multi-socket systems, especially large multi-socket systems are likely to have +individual clocksources rather than a single, universally distributed clock. +Since these clocks are driven by different crystals, they will not have +perfectly matched frequency, and temperature and electrical variations will +cause the CPU clocks, and thus the TSCs to drift over time. Depending on the +exact clock and bus design, the drift may or may not be fixed in absolute +error, and may accumulate over time. + +In addition, very large systems may deliberately slew the clocks of individual +cores. This technique, known as spread-spectrum clocking, reduces EMI at the +clock frequency and harmonics of it, which may be required to pass FCC +standards for telecommunications and computer equipment. + +It is recommended not to trust the TSCs to remain synchronized on NUMA or +multiple socket systems for these reasons. + +3.4) TSC and C-states + +C-states, or idling states of the processor, especially C1E and deeper sleep +states may be problematic for TSC as well. The TSC may stop advancing in such +a state, resulting in a TSC which is behind that of other CPUs when execution +is resumed. Such CPUs must be detected and flagged by the operating system +based on CPU and chipset identifications. + +The TSC in such a case may be corrected by catching it up to a known external +clocksource. + +3.5) TSC frequency change / P-states + +To make things slightly more interesting, some CPUs may change frequency. They +may or may not run the TSC at the same rate, and because the frequency change +may be staggered or slewed, at some points in time, the TSC rate may not be +known other than falling within a range of values. In this case, the TSC will +not be a stable time source, and must be calibrated against a known, stable, +external clock to be a usable source of time. + +Whether the TSC runs at a constant rate or scales with the P-state is model +dependent and must be determined by inspecting CPUID, chipset or vendor +specific MSR fields. + +In addition, some vendors have known bugs where the P-state is actually +compensated for properly during normal operation, but when the processor is +inactive, the P-state may be raised temporarily to service cache misses from +other processors. In such cases, the TSC on halted CPUs could advance faster +than that of non-halted processors. AMD Turion processors are known to have +this problem. + +3.6) TSC and STPCLK / T-states + +External signals given to the processor may also have the effect of stopping +the TSC. 
This is typically done for thermal emergency power control to prevent +an overheating condition, and typically, there is no way to detect that this +condition has happened. + +3.7) TSC virtualization - VMX + +VMX provides conditional trapping of RDTSC, RDMSR, WRMSR and RDTSCP +instructions, which is enough for full virtualization of TSC in any manner. In +addition, VMX allows passing through the host TSC plus an additional TSC_OFFSET +field specified in the VMCS. Special instructions must be used to read and +write the VMCS field. + +3.8) TSC virtualization - SVM + +SVM provides conditional trapping of RDTSC, RDMSR, WRMSR and RDTSCP +instructions, which is enough for full virtualization of TSC in any manner. In +addition, SVM allows passing through the host TSC plus an additional offset +field specified in the SVM control block. + +3.9) TSC feature bits in Linux + +In summary, there is no way to guarantee the TSC remains in perfect +synchronization unless it is explicitly guaranteed by the architecture. Even +if so, the TSCs in multi-sockets or NUMA systems may still run independently +despite being locally consistent. + +The following feature bits are used by Linux to signal various TSC attributes, +but they can only be taken to be meaningful for UP or single node systems. + +X86_FEATURE_TSC : The TSC is available in hardware +X86_FEATURE_RDTSCP : The RDTSCP instruction is available +X86_FEATURE_CONSTANT_TSC : The TSC rate is unchanged with P-states +X86_FEATURE_NONSTOP_TSC : The TSC does not stop in C-states +X86_FEATURE_TSC_RELIABLE : TSC sync checks are skipped (VMware) + +4) Virtualization Problems + +Timekeeping is especially problematic for virtualization because a number of +challenges arise. The most obvious problem is that time is now shared between +the host and, potentially, a number of virtual machines. Thus the virtual +operating system does not run with 100% usage of the CPU, despite the fact that +it may very well make that assumption. It may expect it to remain true to very +exacting bounds when interrupt sources are disabled, but in reality only its +virtual interrupt sources are disabled, and the machine may still be preempted +at any time. This causes problems as the passage of real time, the injection +of machine interrupts and the associated clock sources are no longer completely +synchronized with real time. + +This same problem can occur on native harware to a degree, as SMM mode may +steal cycles from the naturally on X86 systems when SMM mode is used by the +BIOS, but not in such an extreme fashion. However, the fact that SMM mode may +cause similar problems to virtualization makes it a good justification for +solving many of these problems on bare metal. + +4.1) Interrupt clocking + +One of the most immediate problems that occurs with legacy operating systems +is that the system timekeeping routines are often designed to keep track of +time by counting periodic interrupts. These interrupts may come from the PIT +or the RTC, but the problem is the same: the host virtualization engine may not +be able to deliver the proper number of interrupts per second, and so guest +time may fall behind. This is especially problematic if a high interrupt rate +is selected, such as 1000 HZ, which is unfortunately the default for many Linux +guests. + +There are three approaches to solving this problem; first, it may be possible +to simply ignore it. 
Guests which have a separate time source for tracking +'wall clock' or 'real time' may not need any adjustment of their interrupts to +maintain proper time. If this is not sufficient, it may be necessary to inject +additional interrupts into the guest in order to increase the effective +interrupt rate. This approach leads to complications in extreme conditions, +where host load or guest lag is too much to compensate for, and thus another +solution to the problem has risen: the guest may need to become aware of lost +ticks and compensate for them internally. Although promising in theory, the +implementation of this policy in Linux has been extremely error prone, and a +number of buggy variants of lost tick compensation are distributed across +commonly used Linux systems. + +Windows uses periodic RTC clocking as a means of keeping time internally, and +thus requires interrupt slewing to keep proper time. It does use a low enough +rate (ed: is it 18.2 Hz?) however that it has not yet been a problem in +practice. + +4.2) TSC sampling and serialization + +As the highest precision time source available, the cycle counter of the CPU +has aroused much interest from developers. As explained above, this timer has +many problems unique to its nature as a local, potentially unstable and +potentially unsynchronized source. One issue which is not unique to the TSC, +but is highlighted because of its very precise nature is sampling delay. By +definition, the counter, once read is already old. However, it is also +possible for the counter to be read ahead of the actual use of the result. +This is a consequence of the superscalar execution of the instruction stream, +which may execute instructions out of order. Such execution is called +non-serialized. Forcing serialized execution is necessary for precise +measurement with the TSC, and requires a serializing instruction, such as CPUID +or an MSR read. + +Since CPUID may actually be virtualized by a trap and emulate mechanism, this +serialization can pose a performance issue for hardware virtualization. An +accurate time stamp counter reading may therefore not always be available, and +it may be necessary for an implementation to guard against "backwards" reads of +the TSC as seen from other CPUs, even in an otherwise perfectly synchronized +system. + +4.3) Timespec aliasing + +Additionally, this lack of serialization from the TSC poses another challenge +when using results of the TSC when measured against another time source. As +the TSC is much higher precision, many possible values of the TSC may be read +while another clock is still expressing the same value. + +That is, you may read (T,T+10) while external clock C maintains the same value. +Due to non-serialized reads, you may actually end up with a range which +fluctuates - from (T-1.. T+10). Thus, any time calculated from a TSC, but +calibrated against an external value may have a range of valid values. +Re-calibrating this computation may actually cause time, as computed after the +calibration, to go backwards, compared with time computed before the +calibration. + +This problem is particularly pronounced with an internal time source in Linux, +the kernel time, which is expressed in the theoretically high resolution +timespec - but which advances in much larger granularity intervals, sometimes +at the rate of jiffies, and possibly in catchup modes, at a much larger step. 
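
One half of coping with this is making sure the TSC read itself is ordered,
as described in section 4.2. A hedged x86-64 sketch of such a serialized
read, using CPUID (as suggested above) as the serializing instruction ahead
of RDTSC, could look like this:

#include <stdint.h>

static inline uint64_t rdtsc_serialized(void)
{
	uint32_t lo, hi;
	uint32_t eax = 0, ebx, ecx = 0, edx;

	/* CPUID serializes execution, so the RDTSC below cannot be hoisted
	 * ahead of the work being timed (see 4.2). */
	asm volatile("cpuid"
		     : "+a"(eax), "=b"(ebx), "+c"(ecx), "=d"(edx)
		     :
		     : "memory");
	asm volatile("rdtsc" : "=a"(lo), "=d"(hi));

	return ((uint64_t)hi << 32) | lo;
}

Note that under hardware virtualization the CPUID itself may trap to the
hypervisor, which is exactly the performance concern raised in 4.2.
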
+ +This aliasing requires care in the computation and recalibration of kvmclock +and any other values derived from TSC computation (such as TSC virtualization +itself). + +4.4) Migration + +Migration of a virtual machine raises problems for timekeeping in two ways. +First, the migration itself may take time, during which interrupts cannot be +delivered, and after which, the guest time may need to be caught up. NTP may +be able to help to some degree here, as the clock correction required is +typically small enough to fall in the NTP-correctable window. + +An additional concern is that timers based off the TSC (or HPET, if the raw bus +clock is exposed) may now be running at different rates, requiring compensation +in some way in the hypervisor by virtualizing these timers. In addition, +migrating to a faster machine may preclude the use of a passthrough TSC, as a +faster clock cannot be made visible to a guest without the potential of time +advancing faster than usual. A slower clock is less of a problem, as it can +always be caught up to the original rate. KVM clock avoids these problems by +simply storing multipliers and offsets against the TSC for the guest to convert +back into nanosecond resolution values. + +4.5) Scheduling + +Since scheduling may be based on precise timing and firing of interrupts, the +scheduling algorithms of an operating system may be adversely affected by +virtualization. In theory, the effect is random and should be universally +distributed, but in contrived as well as real scenarios (guest device access, +causes of virtualization exits, possible context switch), this may not always +be the case. The effect of this has not been well studied. + +In an attempt to work around this, several implementations have provided a +paravirtualized scheduler clock, which reveals the true amount of CPU time for +which a virtual machine has been running. + +4.6) Watchdogs + +Watchdog timers, such as the lock detector in Linux may fire accidentally when +running under hardware virtualization due to timer interrupts being delayed or +misinterpretation of the passage of real time. Usually, these warnings are +spurious and can be ignored, but in some circumstances it may be necessary to +disable such detection. + +4.7) Delays and precision timing + +Precise timing and delays may not be possible in a virtualized system. This +can happen if the system is controlling physical hardware, or issues delays to +compensate for slower I/O to and from devices. The first issue is not solvable +in general for a virtualized system; hardware control software can't be +adequately virtualized without a full real-time operating system, which would +require an RT aware virtualization platform. + +The second issue may cause performance problems, but this is unlikely to be a +significant issue. In many cases these delays may be eliminated through +configuration or paravirtualization. + +4.8) Covert channels and leaks + +In addition to the above problems, time information will inevitably leak to the +guest about the host in anything but a perfect implementation of virtualized +time. This may allow the guest to infer the presence of a hypervisor (as in a +red-pill type detection), and it may allow information to leak between guests +by using CPU utilization itself as a signalling channel. Preventing such +problems would require completely isolated virtual time which may not track +real time any longer. 
This may be useful in certain security or QA contexts, +but in general isn't recommended for real-world deployment scenarios. diff --git a/Documentation/lguest/lguest.c b/Documentation/lguest/lguest.c index 8a6a8c6d4980..dc73bc54cc4e 100644 --- a/Documentation/lguest/lguest.c +++ b/Documentation/lguest/lguest.c @@ -1640,15 +1640,6 @@ static void blk_request(struct virtqueue *vq) off = out->sector * 512; /* - * The block device implements "barriers", where the Guest indicates - * that it wants all previous writes to occur before this write. We - * don't have a way of asking our kernel to do a barrier, so we just - * synchronize all the data in the file. Pretty poor, no? - */ - if (out->type & VIRTIO_BLK_T_BARRIER) - fdatasync(vblk->fd); - - /* * In general the virtio block driver is allowed to try SCSI commands. * It'd be nice if we supported eject, for example, but we don't. */ @@ -1680,6 +1671,13 @@ static void blk_request(struct virtqueue *vq) /* Die, bad Guest, die. */ errx(1, "Write past end %llu+%u", off, ret); } + + wlen = sizeof(*in); + *in = (ret >= 0 ? VIRTIO_BLK_S_OK : VIRTIO_BLK_S_IOERR); + } else if (out->type & VIRTIO_BLK_T_FLUSH) { + /* Flush */ + ret = fdatasync(vblk->fd); + verbose("FLUSH fdatasync: %i\n", ret); wlen = sizeof(*in); *in = (ret >= 0 ? VIRTIO_BLK_S_OK : VIRTIO_BLK_S_IOERR); } else { @@ -1703,15 +1701,6 @@ static void blk_request(struct virtqueue *vq) } } - /* - * OK, so we noted that it was pretty poor to use an fdatasync as a - * barrier. But Christoph Hellwig points out that we need a sync - * *afterwards* as well: "Barriers specify no reordering to the front - * or the back." And Jens Axboe confirmed it, so here we are: - */ - if (out->type & VIRTIO_BLK_T_BARRIER) - fdatasync(vblk->fd); - /* Finished that request. */ add_used(vq, head, wlen); } @@ -1736,8 +1725,8 @@ static void setup_block_file(const char *filename) vblk->fd = open_or_die(filename, O_RDWR|O_LARGEFILE); vblk->len = lseek64(vblk->fd, 0, SEEK_END); - /* We support barriers. */ - add_feature(dev, VIRTIO_BLK_F_BARRIER); + /* We support FLUSH. */ + add_feature(dev, VIRTIO_BLK_F_FLUSH); /* Tell Guest how many sectors this device has. */ conf.capacity = cpu_to_le64(vblk->len / 512); diff --git a/Documentation/mutex-design.txt b/Documentation/mutex-design.txt index c91ccc0720fa..38c10fd7f411 100644 --- a/Documentation/mutex-design.txt +++ b/Documentation/mutex-design.txt @@ -9,7 +9,7 @@ firstly, there's nothing wrong with semaphores. But if the simpler mutex semantics are sufficient for your code, then there are a couple of advantages of mutexes: - - 'struct mutex' is smaller on most architectures: .e.g on x86, + - 'struct mutex' is smaller on most architectures: E.g. on x86, 'struct semaphore' is 20 bytes, 'struct mutex' is 16 bytes. A smaller structure size means less RAM footprint, and better CPU-cache utilization. @@ -136,3 +136,4 @@ the APIs of 'struct mutex' have been streamlined: void mutex_lock_nested(struct mutex *lock, unsigned int subclass); int mutex_lock_interruptible_nested(struct mutex *lock, unsigned int subclass); + int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock); diff --git a/Documentation/networking/bonding.txt b/Documentation/networking/bonding.txt index d2b62b71b617..5dc638791d97 100644 --- a/Documentation/networking/bonding.txt +++ b/Documentation/networking/bonding.txt @@ -765,6 +765,14 @@ xmit_hash_policy does not exist, and the layer2 policy is the only policy. The layer2+3 value was added for bonding version 3.2.2. 
+resend_igmp + + Specifies the number of IGMP membership reports to be issued after + a failover event. One membership report is issued immediately after + the failover, subsequent packets are sent in each 200ms interval. + + The valid range is 0 - 255; the default value is 1. This option + was added for bonding version 3.7.0. 3. Configuring Bonding Devices ============================== diff --git a/Documentation/networking/can.txt b/Documentation/networking/can.txt index cd79735013f9..5b04b67ddca2 100644 --- a/Documentation/networking/can.txt +++ b/Documentation/networking/can.txt @@ -22,6 +22,7 @@ This file contains 4.1.2 RAW socket option CAN_RAW_ERR_FILTER 4.1.3 RAW socket option CAN_RAW_LOOPBACK 4.1.4 RAW socket option CAN_RAW_RECV_OWN_MSGS + 4.1.5 RAW socket returned message flags 4.2 Broadcast Manager protocol sockets (SOCK_DGRAM) 4.3 connected transport protocols (SOCK_SEQPACKET) 4.4 unconnected transport protocols (SOCK_DGRAM) @@ -471,6 +472,17 @@ solution for a couple of reasons: setsockopt(s, SOL_CAN_RAW, CAN_RAW_RECV_OWN_MSGS, &recv_own_msgs, sizeof(recv_own_msgs)); + 4.1.5 RAW socket returned message flags + + When using recvmsg() call, the msg->msg_flags may contain following flags: + + MSG_DONTROUTE: set when the received frame was created on the local host. + + MSG_CONFIRM: set when the frame was sent via the socket it is received on. + This flag can be interpreted as a 'transmission confirmation' when the + CAN driver supports the echo of frames on driver level, see 3.2 and 6.2. + In order to receive such messages, CAN_RAW_RECV_OWN_MSGS must be set. + 4.2 Broadcast Manager protocol sockets (SOCK_DGRAM) 4.3 connected transport protocols (SOCK_SEQPACKET) 4.4 unconnected transport protocols (SOCK_DGRAM) diff --git a/Documentation/networking/dccp.txt b/Documentation/networking/dccp.txt index a62fdf7a6bff..271d524a4c8d 100644 --- a/Documentation/networking/dccp.txt +++ b/Documentation/networking/dccp.txt @@ -1,18 +1,20 @@ DCCP protocol -============ +============= Contents ======== - - Introduction - Missing features - Socket options +- Sysctl variables +- IOCTLs +- Other tunables - Notes + Introduction ============ - Datagram Congestion Control Protocol (DCCP) is an unreliable, connection oriented protocol designed to solve issues present in UDP and TCP, particularly for real-time and multimedia (streaming) traffic. @@ -29,9 +31,9 @@ It has a base protocol and pluggable congestion control IDs (CCIDs). DCCP is a Proposed Standard (RFC 2026), and the homepage for DCCP as a protocol is at http://www.ietf.org/html.charters/dccp-charter.html + Missing features ================ - The Linux DCCP implementation does not currently support all the features that are specified in RFCs 4340...42. @@ -45,7 +47,6 @@ http://linux-net.osdl.org/index.php/DCCP_Testing#Experimental_DCCP_source_tree Socket options ============== - DCCP_SOCKOPT_SERVICE sets the service. The specification mandates use of service codes (RFC 4340, sec. 8.1.2); if this socket option is not set, the socket will fall back to 0 (which means that no meaningful service code @@ -112,6 +113,7 @@ DCCP_SOCKOPT_CCID_TX_INFO On unidirectional connections it is useful to close the unused half-connection via shutdown (SHUT_WR or SHUT_RD): this will reduce per-packet processing costs. + Sysctl variables ================ Several DCCP default parameters can be managed by the following sysctls @@ -155,15 +157,30 @@ sync_ratelimit = 125 ms sequence-invalid packets on the same socket (RFC 4340, 7.5.4). 
The unit of this parameter is milliseconds; a value of 0 disables rate-limiting. + IOCTLS ====== FIONREAD Works as in udp(7): returns in the `int' argument pointer the size of the next pending datagram in bytes, or 0 when no datagram is pending. + +Other tunables +============== +Per-route rto_min support + CCID-2 supports the RTAX_RTO_MIN per-route setting for the minimum value + of the RTO timer. This setting can be modified via the 'rto_min' option + of iproute2; for example: + > ip route change 10.0.0.0/24 rto_min 250j dev wlan0 + > ip route add 10.0.0.254/32 rto_min 800j dev wlan0 + > ip route show dev wlan0 + CCID-3 also supports the rto_min setting: it is used to define the lower + bound for the expiry of the nofeedback timer. This can be useful on LANs + with very low RTTs (e.g., loopback, Gbit ethernet). + + Notes ===== - DCCP does not travel through NAT successfully at present on many boxes. This is because the checksum covers the pseudo-header as per TCP and UDP. Linux NAT support for DCCP has been added. diff --git a/Documentation/networking/e1000.txt b/Documentation/networking/e1000.txt index 2df71861e578..d9271e74e488 100644 --- a/Documentation/networking/e1000.txt +++ b/Documentation/networking/e1000.txt @@ -1,82 +1,35 @@ Linux* Base Driver for the Intel(R) PRO/1000 Family of Adapters =============================================================== -September 26, 2006 - +Intel Gigabit Linux driver. +Copyright(c) 1999 - 2010 Intel Corporation. Contents ======== -- In This Release - Identifying Your Adapter -- Building and Installation - Command Line Parameters - Speed and Duplex Configuration - Additional Configurations -- Known Issues - Support - -In This Release -=============== - -This file describes the Linux* Base Driver for the Intel(R) PRO/1000 Family -of Adapters. This driver includes support for Itanium(R)2-based systems. - -For questions related to hardware requirements, refer to the documentation -supplied with your Intel PRO/1000 adapter. All hardware requirements listed -apply to use with Linux. - -The following features are now available in supported kernels: - - Native VLANs - - Channel Bonding (teaming) - - SNMP - -Channel Bonding documentation can be found in the Linux kernel source: -/Documentation/networking/bonding.txt - -The driver information previously displayed in the /proc filesystem is not -supported in this release. Alternatively, you can use ethtool (version 1.6 -or later), lspci, and ifconfig to obtain the same information. - -Instructions on updating ethtool can be found in the section "Additional -Configurations" later in this document. - -NOTE: The Intel(R) 82562v 10/100 Network Connection only provides 10/100 -support. - - Identifying Your Adapter ======================== For more information on how to identify your adapter, go to the Adapter & Driver ID Guide at: - http://support.intel.com/support/network/adapter/pro100/21397.htm + http://support.intel.com/support/go/network/adapter/idguide.htm For the latest Intel network drivers for Linux, refer to the following website. 
In the search field, enter your adapter name or type, or use the networking link on the left to search for your adapter: - http://downloadfinder.intel.com/scripts-df/support_intel.asp - + http://support.intel.com/support/go/network/adapter/home.htm Command Line Parameters ======================= -If the driver is built as a module, the following optional parameters -are used by entering them on the command line with the modprobe command -using this syntax: - - modprobe e1000 [<option>=<VAL1>,<VAL2>,...] - -For example, with two PRO/1000 PCI adapters, entering: - - modprobe e1000 TxDescriptors=80,128 - -loads the e1000 driver with 80 TX descriptors for the first adapter and -128 TX descriptors for the second adapter. - The default value for each parameter is generally the recommended setting, unless otherwise noted. @@ -89,10 +42,6 @@ NOTES: For more information about the AutoNeg, Duplex, and Speed parameters, see the application note at: http://www.intel.com/design/network/applnots/ap450.htm - A descriptor describes a data buffer and attributes related to - the data buffer. This information is accessed by the hardware. - - AutoNeg ------- (Supported only on adapters with copper connections) @@ -106,7 +55,6 @@ Duplex parameters must not be specified. NOTE: Refer to the Speed and Duplex section of this readme for more information on the AutoNeg parameter. - Duplex ------ (Supported only on adapters with copper connections) @@ -119,7 +67,6 @@ set to auto-negotiate, the board auto-detects the correct duplex. If the link partner is forced (either full or half), Duplex defaults to half- duplex. - FlowControl ----------- Valid Range: 0-3 (0=none, 1=Rx only, 2=Tx only, 3=Rx&Tx) @@ -128,16 +75,16 @@ Default Value: Reads flow control settings from the EEPROM This parameter controls the automatic generation(Tx) and response(Rx) to Ethernet PAUSE frames. - InterruptThrottleRate --------------------- (not supported on Intel(R) 82542, 82543 or 82544-based adapters) -Valid Range: 0,1,3,100-100000 (0=off, 1=dynamic, 3=dynamic conservative) +Valid Range: 0,1,3,4,100-100000 (0=off, 1=dynamic, 3=dynamic conservative, + 4=simplified balancing) Default Value: 3 The driver can limit the amount of interrupts per second that the adapter -will generate for incoming packets. It does this by writing a value to the -adapter that is based on the maximum amount of interrupts that the adapter +will generate for incoming packets. It does this by writing a value to the +adapter that is based on the maximum amount of interrupts that the adapter will generate per second. Setting InterruptThrottleRate to a value greater or equal to 100 @@ -146,37 +93,43 @@ per second, even if more packets have come in. This reduces interrupt load on the system and can lower CPU utilization under heavy load, but will increase latency as packets are not processed as quickly. -The default behaviour of the driver previously assumed a static -InterruptThrottleRate value of 8000, providing a good fallback value for -all traffic types,but lacking in small packet performance and latency. -The hardware can handle many more small packets per second however, and +The default behaviour of the driver previously assumed a static +InterruptThrottleRate value of 8000, providing a good fallback value for +all traffic types,but lacking in small packet performance and latency. +The hardware can handle many more small packets per second however, and for this reason an adaptive interrupt moderation algorithm was implemented. 
Since 7.3.x, the driver has two adaptive modes (setting 1 or 3) in which -it dynamically adjusts the InterruptThrottleRate value based on the traffic +it dynamically adjusts the InterruptThrottleRate value based on the traffic that it receives. After determining the type of incoming traffic in the last -timeframe, it will adjust the InterruptThrottleRate to an appropriate value +timeframe, it will adjust the InterruptThrottleRate to an appropriate value for that traffic. The algorithm classifies the incoming traffic every interval into -classes. Once the class is determined, the InterruptThrottleRate value is -adjusted to suit that traffic type the best. There are three classes defined: +classes. Once the class is determined, the InterruptThrottleRate value is +adjusted to suit that traffic type the best. There are three classes defined: "Bulk traffic", for large amounts of packets of normal size; "Low latency", for small amounts of traffic and/or a significant percentage of small -packets; and "Lowest latency", for almost completely small packets or +packets; and "Lowest latency", for almost completely small packets or minimal traffic. -In dynamic conservative mode, the InterruptThrottleRate value is set to 4000 -for traffic that falls in class "Bulk traffic". If traffic falls in the "Low -latency" or "Lowest latency" class, the InterruptThrottleRate is increased +In dynamic conservative mode, the InterruptThrottleRate value is set to 4000 +for traffic that falls in class "Bulk traffic". If traffic falls in the "Low +latency" or "Lowest latency" class, the InterruptThrottleRate is increased stepwise to 20000. This default mode is suitable for most applications. For situations where low latency is vital such as cluster or grid computing, the algorithm can reduce latency even more when InterruptThrottleRate is set to mode 1. In this mode, which operates -the same as mode 3, the InterruptThrottleRate will be increased stepwise to +the same as mode 3, the InterruptThrottleRate will be increased stepwise to 70000 for traffic in class "Lowest latency". +In simplified mode the interrupt rate is based on the ratio of Tx and +Rx traffic. If the bytes per second rate is approximately equal, the +interrupt rate will drop as low as 2000 interrupts per second. If the +traffic is mostly transmit or mostly receive, the interrupt rate could +be as high as 8000. + Setting InterruptThrottleRate to 0 turns off any interrupt moderation and may improve small packet latency, but is generally not suitable for bulk throughput traffic. @@ -212,8 +165,6 @@ NOTE: When e1000 is loaded with default settings and multiple adapters be platform-specific. If CPU utilization is not a concern, use RX_POLLING (NAPI) and default driver settings. - - RxDescriptors ------------- Valid Range: 80-256 for 82542 and 82543-based adapters @@ -225,15 +176,14 @@ by the driver. Increasing this value allows the driver to buffer more incoming packets, at the expense of increased system memory utilization. Each descriptor is 16 bytes. A receive buffer is also allocated for each -descriptor and can be either 2048, 4096, 8192, or 16384 bytes, depending +descriptor and can be either 2048, 4096, 8192, or 16384 bytes, depending on the MTU setting. The maximum MTU size is 16110. -NOTE: MTU designates the frame size. It only needs to be set for Jumbo - Frames. Depending on the available system resources, the request - for a higher number of receive descriptors may be denied. In this +NOTE: MTU designates the frame size. 
It only needs to be set for Jumbo + Frames. Depending on the available system resources, the request + for a higher number of receive descriptors may be denied. In this case, use a lower number. - RxIntDelay ---------- Valid Range: 0-65535 (0=off) @@ -254,7 +204,6 @@ CAUTION: When setting RxIntDelay to a value other than 0, adapters may restoring the network connection. To eliminate the potential for the hang ensure that RxIntDelay is set to 0. - RxAbsIntDelay ------------- (This parameter is supported only on 82540, 82545 and later adapters.) @@ -268,7 +217,6 @@ packet is received within the set amount of time. Proper tuning, along with RxIntDelay, may improve traffic throughput in specific network conditions. - Speed ----- (This parameter is supported only on adapters with copper connections.) @@ -280,7 +228,6 @@ Speed forces the line speed to the specified value in megabits per second partner is set to auto-negotiate, the board will auto-detect the correct speed. Duplex should also be set when Speed is set to either 10 or 100. - TxDescriptors ------------- Valid Range: 80-256 for 82542 and 82543-based adapters @@ -295,6 +242,36 @@ NOTE: Depending on the available system resources, the request for a higher number of transmit descriptors may be denied. In this case, use a lower number. +TxDescriptorStep +---------------- +Valid Range: 1 (use every Tx Descriptor) + 4 (use every 4th Tx Descriptor) + +Default Value: 1 (use every Tx Descriptor) + +On certain non-Intel architectures, it has been observed that intense TX +traffic bursts of short packets may result in an improper descriptor +writeback. If this occurs, the driver will report a "TX Timeout" and reset +the adapter, after which the transmit flow will restart, though data may +have stalled for as much as 10 seconds before it resumes. + +The improper writeback does not occur on the first descriptor in a system +memory cache-line, which is typically 32 bytes, or 4 descriptors long. + +Setting TxDescriptorStep to a value of 4 will ensure that all TX descriptors +are aligned to the start of a system memory cache line, and so this problem +will not occur. + +NOTES: Setting TxDescriptorStep to 4 effectively reduces the number of + TxDescriptors available for transmits to 1/4 of the normal allocation. + This has a possible negative performance impact, which may be + compensated for by allocating more descriptors using the TxDescriptors + module parameter. + + There are other conditions which may result in "TX Timeout", which will + not be resolved by the use of the TxDescriptorStep parameter. As the + issue addressed by this parameter has never been observed on Intel + Architecture platforms, it should not be used on Intel platforms. TxIntDelay ---------- @@ -307,7 +284,6 @@ efficiency if properly tuned for specific network traffic. If the system is reporting dropped transmits, this value may be set too high causing the driver to run out of available transmit descriptors. - TxAbsIntDelay ------------- (This parameter is supported only on 82540, 82545 and later adapters.) @@ -330,6 +306,35 @@ Default Value: 1 A value of '1' indicates that the driver should enable IP checksum offload for received packets (both UDP and TCP) to the adapter hardware. +Copybreak +--------- +Valid Range: 0-xxxxxxx (0=off) +Default Value: 256 +Usage: insmod e1000.ko copybreak=128 + +Driver copies all packets below or equaling this size to a fresh Rx +buffer before handing it up the stack. 
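As a small illustration (the value 128 is arbitrary), the same threshold shown in the Usage line above can also be changed while the driver is loaded, through the sysfs path noted in the next paragraph:

       echo 128 > /sys/module/e1000/parameters/copybreak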
+ +This parameter is different than other parameters, in that it is a +single (not 1,1,1 etc.) parameter applied to all driver instances and +it is also available during runtime at +/sys/module/e1000/parameters/copybreak + +SmartPowerDownEnable +-------------------- +Valid Range: 0-1 +Default Value: 0 (disabled) + +Allows PHY to turn off in lower power states. The user can turn off +this parameter in supported chipsets. + +KumeranLockLoss +--------------- +Valid Range: 0-1 +Default Value: 1 (enabled) + +This workaround skips resetting the PHY at shutdown for the initial +silicon releases of ICH8 systems. Speed and Duplex Configuration ============================== @@ -385,40 +390,9 @@ If the link partner is forced to a specific speed and duplex, then this parameter should not be used. Instead, use the Speed and Duplex parameters previously mentioned to force the adapter to the same speed and duplex. - Additional Configurations ========================= - Configuring the Driver on Different Distributions - ------------------------------------------------- - Configuring a network driver to load properly when the system is started - is distribution dependent. Typically, the configuration process involves - adding an alias line to /etc/modules.conf or /etc/modprobe.conf as well - as editing other system startup scripts and/or configuration files. Many - popular Linux distributions ship with tools to make these changes for you. - To learn the proper way to configure a network device for your system, - refer to your distribution documentation. If during this process you are - asked for the driver or module name, the name for the Linux Base Driver - for the Intel(R) PRO/1000 Family of Adapters is e1000. - - As an example, if you install the e1000 driver for two PRO/1000 adapters - (eth0 and eth1) and set the speed and duplex to 10full and 100half, add - the following to modules.conf or or modprobe.conf: - - alias eth0 e1000 - alias eth1 e1000 - options e1000 Speed=10,100 Duplex=2,1 - - Viewing Link Messages - --------------------- - Link messages will not be displayed to the console if the distribution is - restricting system messages. In order to see network driver link messages - on your console, set dmesg to eight by entering the following: - - dmesg -n 8 - - NOTE: This setting is not saved across reboots. - Jumbo Frames ------------ Jumbo Frames support is enabled by changing the MTU to a value larger than @@ -437,9 +411,11 @@ Additional Configurations setting in a different location. Notes: - - - To enable Jumbo Frames, increase the MTU size on the interface beyond - 1500. + Degradation in throughput performance may be observed in some Jumbo frames + environments. If this is observed, increasing the application's socket buffer + size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values may help. + See the specific application manual and /usr/src/linux*/Documentation/ + networking/ip-sysctl.txt for more details. - The maximum MTU setting for Jumbo Frames is 16110. This value coincides with the maximum Jumbo Frames size of 16128. @@ -447,40 +423,11 @@ Additional Configurations - Using Jumbo Frames at 10 or 100 Mbps may result in poor performance or loss of link. - - Some Intel gigabit adapters that support Jumbo Frames have a frame size - limit of 9238 bytes, with a corresponding MTU size limit of 9216 bytes. - The adapters with this limitation are based on the Intel(R) 82571EB, - 82572EI, 82573L and 80003ES2LAN controller. 
These correspond to the - following product names: - Intel(R) PRO/1000 PT Server Adapter - Intel(R) PRO/1000 PT Desktop Adapter - Intel(R) PRO/1000 PT Network Connection - Intel(R) PRO/1000 PT Dual Port Server Adapter - Intel(R) PRO/1000 PT Dual Port Network Connection - Intel(R) PRO/1000 PF Server Adapter - Intel(R) PRO/1000 PF Network Connection - Intel(R) PRO/1000 PF Dual Port Server Adapter - Intel(R) PRO/1000 PB Server Connection - Intel(R) PRO/1000 PL Network Connection - Intel(R) PRO/1000 EB Network Connection with I/O Acceleration - Intel(R) PRO/1000 EB Backplane Connection with I/O Acceleration - Intel(R) PRO/1000 PT Quad Port Server Adapter - - Adapters based on the Intel(R) 82542 and 82573V/E controller do not support Jumbo Frames. These correspond to the following product names: Intel(R) PRO/1000 Gigabit Server Adapter Intel(R) PRO/1000 PM Network Connection - - The following adapters do not support Jumbo Frames: - Intel(R) 82562V 10/100 Network Connection - Intel(R) 82566DM Gigabit Network Connection - Intel(R) 82566DC Gigabit Network Connection - Intel(R) 82566MM Gigabit Network Connection - Intel(R) 82566MC Gigabit Network Connection - Intel(R) 82562GT 10/100 Network Connection - Intel(R) 82562G 10/100 Network Connection - - Ethtool ------- The driver utilizes the ethtool interface for driver configuration and @@ -490,142 +437,14 @@ Additional Configurations The latest release of ethtool can be found from http://sourceforge.net/projects/gkernel. - NOTE: Ethtool 1.6 only supports a limited set of ethtool options. Support - for a more complete ethtool feature set can be enabled by upgrading - ethtool to ethtool-1.8.1. - Enabling Wake on LAN* (WoL) --------------------------- - WoL is configured through the Ethtool* utility. Ethtool is included with - all versions of Red Hat after Red Hat 7.2. For other Linux distributions, - download and install Ethtool from the following website: - http://sourceforge.net/projects/gkernel. - - For instructions on enabling WoL with Ethtool, refer to the website listed - above. + WoL is configured through the Ethtool* utility. WoL will be enabled on the system during the next shut down or reboot. For this driver version, in order to enable WoL, the e1000 driver must be loaded when shutting down or rebooting the system. - Wake On LAN is only supported on port A for the following devices: - Intel(R) PRO/1000 PT Dual Port Network Connection - Intel(R) PRO/1000 PT Dual Port Server Connection - Intel(R) PRO/1000 PT Dual Port Server Adapter - Intel(R) PRO/1000 PF Dual Port Server Adapter - Intel(R) PRO/1000 PT Quad Port Server Adapter - - NAPI - ---- - NAPI (Rx polling mode) is enabled in the e1000 driver. - - See www.cyberus.ca/~hadi/usenix-paper.tgz for more information on NAPI. - - -Known Issues -============ - -Dropped Receive Packets on Half-duplex 10/100 Networks ------------------------------------------------------- -If you have an Intel PCI Express adapter running at 10mbps or 100mbps, half- -duplex, you may observe occasional dropped receive packets. There are no -workarounds for this problem in this network configuration. The network must -be updated to operate in full-duplex, and/or 1000mbps only. - -Jumbo Frames System Requirement -------------------------------- -Memory allocation failures have been observed on Linux systems with 64 MB -of RAM or less that are running Jumbo Frames. If you are using Jumbo -Frames, your system may require more than the advertised minimum -requirement of 64 MB of system memory. 
- -Performance Degradation with Jumbo Frames ------------------------------------------ -Degradation in throughput performance may be observed in some Jumbo frames -environments. If this is observed, increasing the application's socket -buffer size and/or increasing the /proc/sys/net/ipv4/tcp_*mem entry values -may help. See the specific application manual and -/usr/src/linux*/Documentation/ -networking/ip-sysctl.txt for more details. - -Jumbo Frames on Foundry BigIron 8000 switch -------------------------------------------- -There is a known issue using Jumbo frames when connected to a Foundry -BigIron 8000 switch. This is a 3rd party limitation. If you experience -loss of packets, lower the MTU size. - -Allocating Rx Buffers when Using Jumbo Frames ---------------------------------------------- -Allocating Rx buffers when using Jumbo Frames on 2.6.x kernels may fail if -the available memory is heavily fragmented. This issue may be seen with PCI-X -adapters or with packet split disabled. This can be reduced or eliminated -by changing the amount of available memory for receive buffer allocation, by -increasing /proc/sys/vm/min_free_kbytes. - -Multiple Interfaces on Same Ethernet Broadcast Network ------------------------------------------------------- -Due to the default ARP behavior on Linux, it is not possible to have -one system on two IP networks in the same Ethernet broadcast domain -(non-partitioned switch) behave as expected. All Ethernet interfaces -will respond to IP traffic for any IP address assigned to the system. -This results in unbalanced receive traffic. - -If you have multiple interfaces in a server, either turn on ARP -filtering by entering: - - echo 1 > /proc/sys/net/ipv4/conf/all/arp_filter -(this only works if your kernel's version is higher than 2.4.5), - -NOTE: This setting is not saved across reboots. The configuration -change can be made permanent by adding the line: - net.ipv4.conf.all.arp_filter = 1 -to the file /etc/sysctl.conf - - or, - -install the interfaces in separate broadcast domains (either in -different switches or in a switch partitioned to VLANs). - -82541/82547 can't link or are slow to link with some link partners ------------------------------------------------------------------ -There is a known compatibility issue with 82541/82547 and some -low-end switches where the link will not be established, or will -be slow to establish. In particular, these switches are known to -be incompatible with 82541/82547: - - Planex FXG-08TE - I-O Data ETG-SH8 - -To workaround this issue, the driver can be compiled with an override -of the PHY's master/slave setting. Forcing master or forcing slave -mode will improve time-to-link. - - # make CFLAGS_EXTRA=-DE1000_MASTER_SLAVE=<n> - -Where <n> is: - - 0 = Hardware default - 1 = Master mode - 2 = Slave mode - 3 = Auto master/slave - -Disable rx flow control with ethtool ------------------------------------- -In order to disable receive flow control using ethtool, you must turn -off auto-negotiation on the same command line. - -For example: - - ethtool -A eth? autoneg off rx off - -Unplugging network cable while ethtool -p is running ----------------------------------------------------- -In kernel versions 2.5.50 and later (including 2.6 kernel), unplugging -the network cable while ethtool -p is running will cause the system to -become unresponsive to keyboard commands, except for control-alt-delete. -Restarting the system appears to be the only remedy. 
- - Support ======= diff --git a/Documentation/networking/e1000e.txt b/Documentation/networking/e1000e.txt new file mode 100644 index 000000000000..6aa048badf32 --- /dev/null +++ b/Documentation/networking/e1000e.txt @@ -0,0 +1,302 @@ +Linux* Driver for Intel(R) Network Connection +=============================================================== + +Intel Gigabit Linux driver. +Copyright(c) 1999 - 2010 Intel Corporation. + +Contents +======== + +- Identifying Your Adapter +- Command Line Parameters +- Additional Configurations +- Support + +Identifying Your Adapter +======================== + +The e1000e driver supports all PCI Express Intel(R) Gigabit Network +Connections, except those that are 82575, 82576 and 82580-based*. + +* NOTE: The Intel(R) PRO/1000 P Dual Port Server Adapter is supported by + the e1000 driver, not the e1000e driver due to the 82546 part being used + behind a PCI Express bridge. + +For more information on how to identify your adapter, go to the Adapter & +Driver ID Guide at: + + http://support.intel.com/support/go/network/adapter/idguide.htm + +For the latest Intel network drivers for Linux, refer to the following +website. In the search field, enter your adapter name or type, or use the +networking link on the left to search for your adapter: + + http://support.intel.com/support/go/network/adapter/home.htm + +Command Line Parameters +======================= + +The default value for each parameter is generally the recommended setting, +unless otherwise noted. + +NOTES: For more information about the InterruptThrottleRate, + RxIntDelay, TxIntDelay, RxAbsIntDelay, and TxAbsIntDelay + parameters, see the application note at: + http://www.intel.com/design/network/applnots/ap450.htm + +InterruptThrottleRate +--------------------- +Valid Range: 0,1,3,4,100-100000 (0=off, 1=dynamic, 3=dynamic conservative, + 4=simplified balancing) +Default Value: 3 + +The driver can limit the amount of interrupts per second that the adapter +will generate for incoming packets. It does this by writing a value to the +adapter that is based on the maximum amount of interrupts that the adapter +will generate per second. + +Setting InterruptThrottleRate to a value greater or equal to 100 +will program the adapter to send out a maximum of that many interrupts +per second, even if more packets have come in. This reduces interrupt +load on the system and can lower CPU utilization under heavy load, +but will increase latency as packets are not processed as quickly. + +The driver has two adaptive modes (setting 1 or 3) in which +it dynamically adjusts the InterruptThrottleRate value based on the traffic +that it receives. After determining the type of incoming traffic in the last +timeframe, it will adjust the InterruptThrottleRate to an appropriate value +for that traffic. + +The algorithm classifies the incoming traffic every interval into +classes. Once the class is determined, the InterruptThrottleRate value is +adjusted to suit that traffic type the best. There are three classes defined: +"Bulk traffic", for large amounts of packets of normal size; "Low latency", +for small amounts of traffic and/or a significant percentage of small +packets; and "Lowest latency", for almost completely small packets or +minimal traffic. + +In dynamic conservative mode, the InterruptThrottleRate value is set to 4000 +for traffic that falls in class "Bulk traffic". If traffic falls in the "Low +latency" or "Lowest latency" class, the InterruptThrottleRate is increased +stepwise to 20000. 
This default mode is suitable for most applications. + +For situations where low latency is vital such as cluster or +grid computing, the algorithm can reduce latency even more when +InterruptThrottleRate is set to mode 1. In this mode, which operates +the same as mode 3, the InterruptThrottleRate will be increased stepwise to +70000 for traffic in class "Lowest latency". + +In simplified mode the interrupt rate is based on the ratio of Tx and +Rx traffic. If the bytes per second rate is approximately equal the +interrupt rate will drop as low as 2000 interrupts per second. If the +traffic is mostly transmit or mostly receive, the interrupt rate could +be as high as 8000. + +Setting InterruptThrottleRate to 0 turns off any interrupt moderation +and may improve small packet latency, but is generally not suitable +for bulk throughput traffic. + +NOTE: InterruptThrottleRate takes precedence over the TxAbsIntDelay and + RxAbsIntDelay parameters. In other words, minimizing the receive + and/or transmit absolute delays does not force the controller to + generate more interrupts than what the Interrupt Throttle Rate + allows. + +NOTE: When e1000e is loaded with default settings and multiple adapters + are in use simultaneously, the CPU utilization may increase non- + linearly. In order to limit the CPU utilization without impacting + the overall throughput, we recommend that you load the driver as + follows: + + modprobe e1000e InterruptThrottleRate=3000,3000,3000 + + This sets the InterruptThrottleRate to 3000 interrupts/sec for + the first, second, and third instances of the driver. The range + of 2000 to 3000 interrupts per second works on a majority of + systems and is a good starting point, but the optimal value will + be platform-specific. If CPU utilization is not a concern, use + RX_POLLING (NAPI) and default driver settings. + +RxIntDelay +---------- +Valid Range: 0-65535 (0=off) +Default Value: 0 + +This value delays the generation of receive interrupts in units of 1.024 +microseconds. Receive interrupt reduction can improve CPU efficiency if +properly tuned for specific network traffic. Increasing this value adds +extra latency to frame reception and can end up decreasing the throughput +of TCP traffic. If the system is reporting dropped receives, this value +may be set too high, causing the driver to run out of available receive +descriptors. + +CAUTION: When setting RxIntDelay to a value other than 0, adapters may + hang (stop transmitting) under certain network conditions. If + this occurs a NETDEV WATCHDOG message is logged in the system + event log. In addition, the controller is automatically reset, + restoring the network connection. To eliminate the potential + for the hang ensure that RxIntDelay is set to 0. + +RxAbsIntDelay +------------- +Valid Range: 0-65535 (0=off) +Default Value: 8 + +This value, in units of 1.024 microseconds, limits the delay in which a +receive interrupt is generated. Useful only if RxIntDelay is non-zero, +this value ensures that an interrupt is generated after the initial +packet is received within the set amount of time. Proper tuning, +along with RxIntDelay, may improve traffic throughput in specific network +conditions. + +TxIntDelay +---------- +Valid Range: 0-65535 (0=off) +Default Value: 8 + +This value delays the generation of transmit interrupts in units of +1.024 microseconds. Transmit interrupt reduction can improve CPU +efficiency if properly tuned for specific network traffic. 
If the +system is reporting dropped transmits, this value may be set too high +causing the driver to run out of available transmit descriptors. + +TxAbsIntDelay +------------- +Valid Range: 0-65535 (0=off) +Default Value: 32 + +This value, in units of 1.024 microseconds, limits the delay in which a +transmit interrupt is generated. Useful only if TxIntDelay is non-zero, +this value ensures that an interrupt is generated after the initial +packet is sent on the wire within the set amount of time. Proper tuning, +along with TxIntDelay, may improve traffic throughput in specific +network conditions. + +Copybreak +--------- +Valid Range: 0-xxxxxxx (0=off) +Default Value: 256 + +Driver copies all packets below or equaling this size to a fresh Rx +buffer before handing it up the stack. + +This parameter is different than other parameters, in that it is a +single (not 1,1,1 etc.) parameter applied to all driver instances and +it is also available during runtime at +/sys/module/e1000e/parameters/copybreak + +SmartPowerDownEnable +-------------------- +Valid Range: 0-1 +Default Value: 0 (disabled) + +Allows PHY to turn off in lower power states. The user can set this parameter +in supported chipsets. + +KumeranLockLoss +--------------- +Valid Range: 0-1 +Default Value: 1 (enabled) + +This workaround skips resetting the PHY at shutdown for the initial +silicon releases of ICH8 systems. + +IntMode +------- +Valid Range: 0-2 (0=legacy, 1=MSI, 2=MSI-X) +Default Value: 2 + +Allows changing the interrupt mode at module load time, without requiring a +recompile. If the driver load fails to enable a specific interrupt mode, the +driver will try other interrupt modes, from least to most compatible. The +interrupt order is MSI-X, MSI, Legacy. If specifying MSI (IntMode=1) +interrupts, only MSI and Legacy will be attempted. + +CrcStripping +------------ +Valid Range: 0-1 +Default Value: 1 (enabled) + +Strip the CRC from received packets before sending up the network stack. If +you have a machine with a BMC enabled but cannot receive IPMI traffic after +loading or enabling the driver, try disabling this feature. + +WriteProtectNVM +--------------- +Valid Range: 0-1 +Default Value: 1 (enabled) + +Set the hardware to ignore all write/erase cycles to the GbE region in the +ICHx NVM (non-volatile memory). This feature can be disabled by the +WriteProtectNVM module parameter (enabled by default) only after a hardware +reset, but the machine must be power cycled before trying to enable writes. + +Note: the kernel boot option iomem=relaxed may need to be set if the kernel +config option CONFIG_STRICT_DEVMEM=y, if the root user wants to write the +NVM from user space via ethtool. + +Additional Configurations +========================= + + Jumbo Frames + ------------ + Jumbo Frames support is enabled by changing the MTU to a value larger than + the default of 1500. Use the ifconfig command to increase the MTU size. + For example: + + ifconfig eth<x> mtu 9000 up + + This setting is not saved across reboots. + + Notes: + + - The maximum MTU setting for Jumbo Frames is 9216. This value coincides + with the maximum Jumbo Frames size of 9234 bytes. + + - Using Jumbo Frames at 10 or 100 Mbps is not supported and may result in + poor performance or loss of link. + + - Some adapters limit Jumbo Frames sized packets to a maximum of + 4096 bytes and some adapters do not support Jumbo Frames. 
+ + + Ethtool + ------- + The driver utilizes the ethtool interface for driver configuration and + diagnostics, as well as displaying statistical information. We + strongly recommend downloading the latest version of Ethtool at: + + http://sourceforge.net/projects/gkernel. + + Speed and Duplex + ---------------- + Speed and Duplex are configured through the Ethtool* utility. For + instructions, refer to the Ethtool man page. + + Enabling Wake on LAN* (WoL) + --------------------------- + WoL is configured through the Ethtool* utility. For instructions on + enabling WoL with Ethtool, refer to the Ethtool man page. + + WoL will be enabled on the system during the next shut down or reboot. + For this driver version, in order to enable WoL, the e1000e driver must be + loaded when shutting down or rebooting the system. + + In most cases Wake On LAN is only supported on port A for multiple port + adapters. To verify if a port supports Wake on LAN run ethtool eth<X>. + + +Support +======= + +For general information, go to the Intel support website at: + + www.intel.com/support/ + +or the Intel Wired Networking project hosted by Sourceforge at: + + http://sourceforge.net/projects/e1000 + +If an issue is identified with the released source code on the supported +kernel with a supported adapter, email the specific information related +to the issue to e1000-devel@lists.sf.net diff --git a/Documentation/networking/ip-sysctl.txt b/Documentation/networking/ip-sysctl.txt index f350c69b2bb4..c7165f4cb792 100644 --- a/Documentation/networking/ip-sysctl.txt +++ b/Documentation/networking/ip-sysctl.txt @@ -1014,6 +1014,12 @@ conf/interface/*: accept_ra - BOOLEAN Accept Router Advertisements; autoconfigure using them. + Possible values are: + 0 Do not accept Router Advertisements. + 1 Accept Router Advertisements if forwarding is disabled. + 2 Overrule forwarding behaviour. Accept Router Advertisements + even if forwarding is enabled. + Functional default: enabled if local forwarding is disabled. disabled if local forwarding is enabled. @@ -1075,7 +1081,12 @@ forwarding - BOOLEAN Note: It is recommended to have the same setting on all interfaces; mixed router/host scenarios are rather uncommon. - FALSE: + Possible values are: + 0 Forwarding disabled + 1 Forwarding enabled + 2 Forwarding enabled (Hybrid Mode) + + FALSE (0): By default, Host behaviour is assumed. This means: @@ -1085,18 +1096,24 @@ forwarding - BOOLEAN Advertisements (and do autoconfiguration). 4. If accept_redirects is TRUE (default), accept Redirects. - TRUE: + TRUE (1): If local forwarding is enabled, Router behaviour is assumed. This means exactly the reverse from the above: 1. IsRouter flag is set in Neighbour Advertisements. 2. Router Solicitations are not sent. - 3. Router Advertisements are ignored. + 3. Router Advertisements are ignored unless accept_ra is 2. 4. Redirects are ignored. - Default: FALSE if global forwarding is disabled (default), - otherwise TRUE. + TRUE (2): + + Hybrid mode. Same behaviour as TRUE, except for: + + 2. Router Solicitations are being sent when necessary. + + Default: 0 (disabled) if global forwarding is disabled (default), + otherwise 1 (enabled). hop_limit - INTEGER Default Hop Limit to set. 
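As an illustration of the accept_ra and forwarding values described above (eth0 is only an example interface name), a forwarding interface that should still accept Router Advertisements can be configured with:

	echo 1 > /proc/sys/net/ipv6/conf/eth0/forwarding
	echo 2 > /proc/sys/net/ipv6/conf/eth0/accept_ra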
diff --git a/Documentation/networking/ixgbevf.txt b/Documentation/networking/ixgbevf.txt index 19015de6725f..21dd5d15b6b4 100755..100644 --- a/Documentation/networking/ixgbevf.txt +++ b/Documentation/networking/ixgbevf.txt @@ -1,19 +1,16 @@ Linux* Base Driver for Intel(R) Network Connection ================================================== -November 24, 2009 +Intel Gigabit Linux driver. +Copyright(c) 1999 - 2010 Intel Corporation. Contents ======== -- In This Release - Identifying Your Adapter - Known Issues/Troubleshooting - Support -In This Release -=============== - This file describes the ixgbevf Linux* Base Driver for Intel Network Connection. @@ -33,7 +30,7 @@ Identifying Your Adapter For more information on how to identify your adapter, go to the Adapter & Driver ID Guide at: - http://support.intel.com/support/network/sb/CS-008441.htm + http://support.intel.com/support/go/network/adapter/idguide.htm Known Issues/Troubleshooting ============================ @@ -57,34 +54,3 @@ or the Intel Wired Networking project hosted by Sourceforge at: If an issue is identified with the released source code on the supported kernel with a supported adapter, email the specific information related to the issue to e1000-devel@lists.sf.net - -License -======= - -Intel 10 Gigabit Linux driver. -Copyright(c) 1999 - 2009 Intel Corporation. - -This program is free software; you can redistribute it and/or modify it -under the terms and conditions of the GNU General Public License, -version 2, as published by the Free Software Foundation. - -This program is distributed in the hope it will be useful, but WITHOUT -ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or -FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for -more details. - -You should have received a copy of the GNU General Public License along with -this program; if not, write to the Free Software Foundation, Inc., -51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA. - -The full GNU General Public License is included in this distribution in -the file called "COPYING". - -Trademarks -========== - -Intel, Itanium, and Pentium are trademarks or registered trademarks of -Intel Corporation or its subsidiaries in the United States and other -countries. - -* Other names and brands may be claimed as the property of others. diff --git a/Documentation/networking/phonet.txt b/Documentation/networking/phonet.txt index 6e8ce09f9c73..24ad2adba6e5 100644 --- a/Documentation/networking/phonet.txt +++ b/Documentation/networking/phonet.txt @@ -112,6 +112,22 @@ However, connect() and getpeername() are not supported, as they did not seem useful with Phonet usages (could be added easily). +Resource subscription +--------------------- + +A Phonet datagram socket can be subscribed to any number of 8-bits +Phonet resources, as follow: + + uint32_t res = 0xXX; + ioctl(fd, SIOCPNADDRESOURCE, &res); + +Subscription is similarly cancelled using the SIOCPNDELRESOURCE I/O +control request, or when the socket is closed. + +Note that no more than one socket can be subcribed to any given +resource at a time. If not, ioctl() will return EBUSY. + + Phonet Pipe protocol -------------------- @@ -166,6 +182,46 @@ The pipe protocol provides two socket options at the SOL_PNPIPE level: or zero if encapsulation is off. +Phonet Pipe-controller Implementation +------------------------------------- + +Phonet Pipe-controller is enabled by selecting the CONFIG_PHONET_PIPECTRLR Kconfig +option. 
It is useful when communicating with those Nokia Modems which do not +implement Pipe controller in them e.g. Nokia Slim Modem used in ST-Ericsson +U8500 platform. + +The implementation is based on the Data Connection Establishment Sequence +depicted in 'Nokia Wireless Modem API - Wireless_modem_user_guide.pdf' +document. + +It allows a phonet sequenced socket (host-pep) to initiate a Pipe connection +between itself and a remote pipe-end point (e.g. modem). + +The implementation adds socket options at SOL_PNPIPE level: + + PNPIPE_PIPE_HANDLE + It accepts an integer argument for setting value of pipe handle. + + PNPIPE_ENABLE accepts one integer value (int). If set to zero, the pipe + is disabled. If the value is non-zero, the pipe is enabled. If the pipe + is not (yet) connected, ENOTCONN is error is returned. + +The implementation also adds socket 'connect'. On calling the 'connect', pipe +will be created between the source socket and the destination, and the pipe +state will be set to PIPE_DISABLED. + +After a pipe has been created and enabled successfully, the Pipe data can be +exchanged between the host-pep and remote-pep (modem). + +User-space would typically follow below sequence with Pipe controller:- +-socket +-bind +-setsockopt for PNPIPE_PIPE_HANDLE +-connect +-setsockopt for PNPIPE_ENCAP_IP +-setsockopt for PNPIPE_ENABLE + + Authors ------- diff --git a/Documentation/networking/timestamping.txt b/Documentation/networking/timestamping.txt index e8c8f4f06c67..98097d8cb910 100644 --- a/Documentation/networking/timestamping.txt +++ b/Documentation/networking/timestamping.txt @@ -172,15 +172,19 @@ struct skb_shared_hwtstamps { }; Time stamps for outgoing packets are to be generated as follows: -- In hard_start_xmit(), check if skb_tx(skb)->hardware is set no-zero. - If yes, then the driver is expected to do hardware time stamping. +- In hard_start_xmit(), check if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) + is set no-zero. If yes, then the driver is expected to do hardware time + stamping. - If this is possible for the skb and requested, then declare - that the driver is doing the time stamping by setting the field - skb_tx(skb)->in_progress non-zero. You might want to keep a pointer - to the associated skb for the next step and not free the skb. A driver - not supporting hardware time stamping doesn't do that. A driver must - never touch sk_buff::tstamp! It is used to store software generated - time stamps by the network subsystem. + that the driver is doing the time stamping by setting the flag + SKBTX_IN_PROGRESS in skb_shinfo(skb)->tx_flags , e.g. with + + skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS; + + You might want to keep a pointer to the associated skb for the next step + and not free the skb. A driver not supporting hardware time stamping doesn't + do that. A driver must never touch sk_buff::tstamp! It is used to store + software generated time stamps by the network subsystem. - As soon as the driver has sent the packet and/or obtained a hardware time stamp for it, it passes the time stamp back by calling skb_hwtstamp_tx() with the original skb, the raw @@ -191,6 +195,6 @@ Time stamps for outgoing packets are to be generated as follows: this would occur at a later time in the processing pipeline than other software time stamping and therefore could lead to unexpected deltas between time stamps. 
-- If the driver did not call set skb_tx(skb)->in_progress, then +- If the driver did not set the SKBTX_IN_PROGRESS flag (see above), then dev_hard_start_xmit() checks whether software time stamping is wanted as fallback and potentially generates the time stamp. diff --git a/Documentation/pcmcia/driver-changes.txt b/Documentation/pcmcia/driver-changes.txt index 26c0f9c00545..dd04361dd361 100644 --- a/Documentation/pcmcia/driver-changes.txt +++ b/Documentation/pcmcia/driver-changes.txt @@ -1,4 +1,29 @@ This file details changes in 2.6 which affect PCMCIA card driver authors: +* pcmcia_loop_config() and autoconfiguration (as of 2.6.36) + If struct pcmcia_device *p_dev->config_flags is set accordingly, + pcmcia_loop_config() now sets up certain configuration values + automatically, though the driver may still override the settings + in the callback function. The following autoconfiguration options + are provided at the moment: + CONF_AUTO_CHECK_VCC : check for matching Vcc + CONF_AUTO_SET_VPP : set Vpp + CONF_AUTO_AUDIO : auto-enable audio line, if required + CONF_AUTO_SET_IO : set ioport resources (->resource[0,1]) + CONF_AUTO_SET_IOMEM : set first iomem resource (->resource[2]) + +* pcmcia_request_configuration -> pcmcia_enable_device (as of 2.6.36) + pcmcia_request_configuration() got renamed to pcmcia_enable_device(), + as it mirrors pcmcia_disable_device(). Configuration settings are now + stored in struct pcmcia_device, e.g. in the fields config_flags, + config_index, config_base, vpp. + +* pcmcia_request_window changes (as of 2.6.36) + Instead of win_req_t, drivers are now requested to fill out + struct pcmcia_device *p_dev->resource[2,3,4,5] for up to four ioport + ranges. After a call to pcmcia_request_window(), the regions found there + are reserved and may be used immediately -- until pcmcia_release_window() + is called. + * pcmcia_request_io changes (as of 2.6.36) Instead of io_req_t, drivers are now requested to fill out struct pcmcia_device *p_dev->resource[0,1] for up to two ioport diff --git a/Documentation/power/00-INDEX b/Documentation/power/00-INDEX index fb742c213c9e..45e9d4a91284 100644 --- a/Documentation/power/00-INDEX +++ b/Documentation/power/00-INDEX @@ -14,6 +14,8 @@ interface.txt - Power management user interface in /sys/power notifiers.txt - Registering suspend notifiers in device drivers +opp.txt + - Operating Performance Point library pci.txt - How the PCI Subsystem Does Power Management pm_qos_interface.txt diff --git a/Documentation/power/interface.txt b/Documentation/power/interface.txt index e67211fe0ee2..c537834af005 100644 --- a/Documentation/power/interface.txt +++ b/Documentation/power/interface.txt @@ -57,7 +57,7 @@ smallest image possible. In particular, if "0" is written to this file, the suspend image will be as small as possible. Reading from this file will display the current image size limit, which -is set to 500 MB by default. +is set to 2/5 of available RAM by default. /sys/power/pm_trace controls the code which saves the last PM event point in the RTC across reboots, so that you can debug a machine that just hangs diff --git a/Documentation/power/opp.txt b/Documentation/power/opp.txt new file mode 100644 index 000000000000..44d87ad3cea9 --- /dev/null +++ b/Documentation/power/opp.txt @@ -0,0 +1,375 @@ +*=============* +* OPP Library * +*=============* + +(C) 2009-2010 Nishanth Menon <nm@ti.com>, Texas Instruments Incorporated + +Contents +-------- +1. Introduction +2. Initial OPP List Registration +3. OPP Search Functions +4. 
OPP Availability Control Functions +5. OPP Data Retrieval Functions +6. Cpufreq Table Generation +7. Data Structures + +1. Introduction +=============== +Today's complex SoCs consist of multiple sub-modules working in conjunction. +In an operational system executing varied use cases, not all modules in the SoC +need to function at their highest performing frequency all the time. To +facilitate this, sub-modules in a SoC are grouped into domains, allowing some +domains to run at lower voltage and frequency while other domains are loaded +more. The set of discrete tuples consisting of frequency and voltage pairs that +the device will support per domain is called Operating Performance Points or +OPPs.
+ +The OPP library provides a set of helper functions to organize and query the OPP +information. The library is located in drivers/base/power/opp.c and the header +is located in include/linux/opp.h. The OPP library can be enabled by enabling +CONFIG_PM_OPP from the power management menuconfig menu. The OPP library depends on +CONFIG_PM, as certain SoC frameworks, such as Texas Instruments' OMAP framework, allow +the system to optionally boot at a certain OPP without needing cpufreq.
+ +Typical usage of the OPP library is as follows: +(users) -> registers a set of default OPPs -> (library) +SoC framework -> modifies on required cases certain OPPs -> OPP layer + -> queries to search/retrieve information ->
+ +The OPP layer expects each domain to be represented by a unique device pointer. The SoC +framework registers a set of initial OPPs per device with the OPP layer. This +list is expected to be optimally small, typically around 5 entries per device. +This initial list contains a set of OPPs that the framework expects to be safely +enabled by default in the system.
+ +Note on OPP Availability: +------------------------ +As the system proceeds to operate, the SoC framework may choose to make certain +OPPs available or not available on each device based on various external +factors. Example usage: thermal management or other exceptional situations where +the SoC framework might choose to disable a higher frequency OPP to safely continue +operations until that OPP can be re-enabled, if possible.
+ +The OPP library facilitates this concept in its implementation. The following +operational functions operate only on available opps: +opp_find_freq_{ceil, floor}, opp_get_voltage, opp_get_freq, opp_get_opp_count +and opp_init_cpufreq_table.
+ +opp_find_freq_exact is meant to be used to find the opp pointer which can then +be used with the opp_enable/disable functions to make an opp available as required.
+ +WARNING: Users of the OPP library should refresh their availability count using +opp_get_opp_count if the opp_enable/disable functions are invoked for a device; the +exact mechanism to trigger these, or the notification mechanism to other +dependent subsystems such as cpufreq, is left to the discretion of the SoC +specific framework which uses the OPP library. Similar care needs to be taken +to refresh the cpufreq table after these operations.
+ +WARNING on OPP List locking mechanism: +------------------------------------------------- +The OPP library uses RCU for exclusivity. RCU allows the query functions to operate +in multiple contexts, and this synchronization mechanism is optimal for the read +intensive operations on the data structures that the OPP library caters to.
+ +To ensure that the data retrieved are sane, users such as the SoC framework +should ensure that the sections of code operating on OPP queries are locked +using RCU read locks.
The opp_find_freq_{exact,ceil,floor}, +opp_get_{voltage, freq, opp_count} functions fall into this category.
+ +opp_{add,enable,disable} are updaters which use a mutex and implement their own +RCU locking mechanisms. opp_init_cpufreq_table acts as an updater and uses a +mutex to implement the RCU updater strategy. These functions should *NOT* be called +under RCU locks or in other contexts that prevent blocking functions in RCU or +mutex operations from working.
+ +2. Initial OPP List Registration +================================ +The SoC implementation calls the opp_add function iteratively to add OPPs per +device. It is expected that the SoC framework will register the OPP entries +optimally - typical numbers are expected to be less than 5. The list generated by +registering the OPPs is maintained by the OPP library throughout the device's +operation. The SoC framework can subsequently control the availability of the +OPPs dynamically using the opp_enable / opp_disable functions.
+ +opp_add - Add a new OPP for a specific domain represented by the device pointer. + The OPP is defined using the frequency and voltage. Once added, the OPP + is assumed to be available, and control of its availability can be done + with the opp_enable/disable functions. The OPP library internally stores + and manages this information in the opp struct. This function may be + used by the SoC framework to define an optimal list as per the demands of + the SoC usage environment.
+ + WARNING: Do not use this function in interrupt context. + + Example: + soc_pm_init() + { + /* Do things */ + r = opp_add(mpu_dev, 1000000, 900000); + if (r) { + pr_err("%s: unable to register mpu opp(%d)\n", __func__, r); + goto no_cpufreq; + } + /* Do cpufreq things */ + no_cpufreq: + /* Do remaining things */ + }
+ +3. OPP Search Functions +======================= +High level frameworks such as cpufreq operate on frequencies. To map a +frequency back to the corresponding OPP, the OPP library provides handy functions +to search the OPP list that it internally manages. These search +functions return the matching pointer representing the opp if a match is +found, else they return an error pointer. These errors are expected to be handled by standard +error checks such as IS_ERR() and appropriate actions taken by the caller.
+ +opp_find_freq_exact - Search for an OPP based on an *exact* frequency and + availability. This function is especially useful to enable an OPP which + is not available by default. + Example: In a case when the SoC framework detects a situation where a + higher frequency could be made available, it can use this function to + find the OPP prior to calling opp_enable to actually make it available. + rcu_read_lock(); + opp = opp_find_freq_exact(dev, 1000000000, false); + rcu_read_unlock(); + /* don't operate on the pointer, just do a sanity check */ + if (IS_ERR(opp)) { + pr_err("frequency not disabled!\n"); + /* trigger appropriate actions.. */ + } else { + opp_enable(dev, 1000000000); + }
+ + NOTE: This is the only search function that operates on OPPs which are + not available.
+ +opp_find_freq_floor - Search for an available OPP which is *at most* the + provided frequency. This function is useful while searching for a lesser + match OR operating on OPP information in the order of decreasing + frequency. + Example: To find the highest opp for a device: + freq = ULONG_MAX; + rcu_read_lock(); + opp_find_freq_floor(dev, &freq); + rcu_read_unlock();
+ +opp_find_freq_ceil - Search for an available OPP which is *at least* the + provided frequency.
This function is useful while searching for a + higher match OR operating on OPP information in the order of increasing + frequency. + Example 1: To find the lowest opp for a device: + freq = 0; + rcu_read_lock(); + opp_find_freq_ceil(dev, &freq); + rcu_read_unlock(); + Example 2: A simplified implementation of a SoC cpufreq_driver->target: + soc_cpufreq_target(..) + { + /* Do stuff like policy checks etc. */ + /* Find the best frequency match for the req */ + rcu_read_lock(); + opp = opp_find_freq_ceil(dev, &freq); + rcu_read_unlock(); + if (!IS_ERR(opp)) + soc_switch_to_freq_voltage(freq); + else + /* do something when we cant satisfy the req */ + /* do other stuff */ + } + +4. OPP Availability Control Functions +===================================== +A default OPP list registered with the OPP library may not cater to all possible +situation. The OPP library provides a set of functions to modify the +availability of a OPP within the OPP list. This allows SoC frameworks to have +fine grained dynamic control of which sets of OPPs are operationally available. +These functions are intended to *temporarily* remove an OPP in conditions such +as thermal considerations (e.g. don't use OPPx until the temperature drops). + +WARNING: Do not use these functions in interrupt context. + +opp_enable - Make a OPP available for operation. + Example: Lets say that 1GHz OPP is to be made available only if the + SoC temperature is lower than a certain threshold. The SoC framework + implementation might choose to do something as follows: + if (cur_temp < temp_low_thresh) { + /* Enable 1GHz if it was disabled */ + rcu_read_lock(); + opp = opp_find_freq_exact(dev, 1000000000, false); + rcu_read_unlock(); + /* just error check */ + if (!IS_ERR(opp)) + ret = opp_enable(dev, 1000000000); + else + goto try_something_else; + } + +opp_disable - Make an OPP to be not available for operation + Example: Lets say that 1GHz OPP is to be disabled if the temperature + exceeds a threshold value. The SoC framework implementation might + choose to do something as follows: + if (cur_temp > temp_high_thresh) { + /* Disable 1GHz if it was enabled */ + rcu_read_lock(); + opp = opp_find_freq_exact(dev, 1000000000, true); + rcu_read_unlock(); + /* just error check */ + if (!IS_ERR(opp)) + ret = opp_disable(dev, 1000000000); + else + goto try_something_else; + } + +5. OPP Data Retrieval Functions +=============================== +Since OPP library abstracts away the OPP information, a set of functions to pull +information from the OPP structure is necessary. Once an OPP pointer is +retrieved using the search functions, the following functions can be used by SoC +framework to retrieve the information represented inside the OPP layer. + +opp_get_voltage - Retrieve the voltage represented by the opp pointer. + Example: At a cpufreq transition to a different frequency, SoC + framework requires to set the voltage represented by the OPP using + the regulator framework to the Power Management chip providing the + voltage. + soc_switch_to_freq_voltage(freq) + { + /* do things */ + rcu_read_lock(); + opp = opp_find_freq_ceil(dev, &freq); + v = opp_get_voltage(opp); + rcu_read_unlock(); + if (v) + regulator_set_voltage(.., v); + /* do other things */ + } + +opp_get_freq - Retrieve the freq represented by the opp pointer. + Example: Lets say the SoC framework uses a couple of helper functions + we could pass opp pointers instead of doing additional parameters to + handle quiet a bit of data parameters. + soc_cpufreq_target(..) 
+ { + /* do things.. */ + max_freq = ULONG_MAX; + rcu_read_lock(); + max_opp = opp_find_freq_floor(dev,&max_freq); + requested_opp = opp_find_freq_ceil(dev,&freq); + if (!IS_ERR(max_opp) && !IS_ERR(requested_opp)) + r = soc_test_validity(max_opp, requested_opp); + rcu_read_unlock(); + /* do other things */ + } + soc_test_validity(..) + { + if(opp_get_voltage(max_opp) < opp_get_voltage(requested_opp)) + return -EINVAL; + if(opp_get_freq(max_opp) < opp_get_freq(requested_opp)) + return -EINVAL; + /* do things.. */ + } + +opp_get_opp_count - Retrieve the number of available opps for a device + Example: Lets say a co-processor in the SoC needs to know the available + frequencies in a table, the main processor can notify as following: + soc_notify_coproc_available_frequencies() + { + /* Do things */ + rcu_read_lock(); + num_available = opp_get_opp_count(dev); + speeds = kzalloc(sizeof(u32) * num_available, GFP_KERNEL); + /* populate the table in increasing order */ + freq = 0; + while (!IS_ERR(opp = opp_find_freq_ceil(dev, &freq))) { + speeds[i] = freq; + freq++; + i++; + } + rcu_read_unlock(); + + soc_notify_coproc(AVAILABLE_FREQs, speeds, num_available); + /* Do other things */ + } + +6. Cpufreq Table Generation +=========================== +opp_init_cpufreq_table - cpufreq framework typically is initialized with + cpufreq_frequency_table_cpuinfo which is provided with the list of + frequencies that are available for operation. This function provides + a ready to use conversion routine to translate the OPP layer's internal + information about the available frequencies into a format readily + providable to cpufreq. + + WARNING: Do not use this function in interrupt context. + + Example: + soc_pm_init() + { + /* Do things */ + r = opp_init_cpufreq_table(dev, &freq_table); + if (!r) + cpufreq_frequency_table_cpuinfo(policy, freq_table); + /* Do other things */ + } + + NOTE: This function is available only if CONFIG_CPU_FREQ is enabled in + addition to CONFIG_PM as power management feature is required to + dynamically scale voltage and frequency in a system. + +7. Data Structures +================== +Typically an SoC contains multiple voltage domains which are variable. Each +domain is represented by a device pointer. The relationship to OPP can be +represented as follows: +SoC + |- device 1 + | |- opp 1 (availability, freq, voltage) + | |- opp 2 .. + ... ... + | `- opp n .. + |- device 2 + ... + `- device m + +OPP library maintains a internal list that the SoC framework populates and +accessed by various functions as described above. However, the structures +representing the actual OPPs and domains are internal to the OPP library itself +to allow for suitable abstraction reusable across systems. + +struct opp - The internal data structure of OPP library which is used to + represent an OPP. In addition to the freq, voltage, availability + information, it also contains internal book keeping information required + for the OPP library to operate on. Pointer to this structure is + provided back to the users such as SoC framework to be used as a + identifier for OPP in the interactions with OPP layer. + + WARNING: The struct opp pointer should not be parsed or modified by the + users. The defaults of for an instance is populated by opp_add, but the + availability of the OPP can be modified by opp_enable/disable functions. + +struct device - This is used to identify a domain to the OPP layer. 
The + nature of the device and it's implementation is left to the user of + OPP library such as the SoC framework. + +Overall, in a simplistic view, the data structure operations is represented as +following: + +Initialization / modification: + +-----+ /- opp_enable +opp_add --> | opp | <------- + | +-----+ \- opp_disable + \-------> domain_info(device) + +Search functions: + /-- opp_find_freq_ceil ---\ +-----+ +domain_info<---- opp_find_freq_exact -----> | opp | + \-- opp_find_freq_floor ---/ +-----+ + +Retrieval functions: ++-----+ /- opp_get_voltage +| opp | <--- ++-----+ \- opp_get_freq + +domain_info <- opp_get_opp_count diff --git a/Documentation/power/regulator/overview.txt b/Documentation/power/regulator/overview.txt index 9363e056188a..8ed17587a74b 100644 --- a/Documentation/power/regulator/overview.txt +++ b/Documentation/power/regulator/overview.txt @@ -13,7 +13,7 @@ regulators (where voltage output is controllable) and current sinks (where current limit is controllable). (C) 2008 Wolfson Microelectronics PLC. -Author: Liam Girdwood <lg@opensource.wolfsonmicro.com> +Author: Liam Girdwood <lrg@slimlogic.co.uk> Nomenclature diff --git a/Documentation/power/runtime_pm.txt b/Documentation/power/runtime_pm.txt index 55b859b3bc72..489e9bacd165 100644 --- a/Documentation/power/runtime_pm.txt +++ b/Documentation/power/runtime_pm.txt @@ -1,6 +1,7 @@ Run-time Power Management Framework for I/O Devices (C) 2009 Rafael J. Wysocki <rjw@sisk.pl>, Novell Inc. +(C) 2010 Alan Stern <stern@rowland.harvard.edu> 1. Introduction @@ -157,7 +158,8 @@ rules: to execute it, the other callbacks will not be executed for the same device. * A request to execute ->runtime_resume() will cancel any pending or - scheduled requests to execute the other callbacks for the same device. + scheduled requests to execute the other callbacks for the same device, + except for scheduled autosuspends. 3. Run-time PM Device Fields @@ -165,7 +167,7 @@ The following device run-time PM fields are present in 'struct dev_pm_info', as defined in include/linux/pm.h: struct timer_list suspend_timer; - - timer used for scheduling (delayed) suspend request + - timer used for scheduling (delayed) suspend and autosuspend requests unsigned long timer_expires; - timer expiration time, in jiffies (if this is different from zero, the @@ -230,6 +232,28 @@ defined in include/linux/pm.h: interface; it may only be modified with the help of the pm_runtime_allow() and pm_runtime_forbid() helper functions + unsigned int no_callbacks; + - indicates that the device does not use the run-time PM callbacks (see + Section 8); it may be modified only by the pm_runtime_no_callbacks() + helper function + + unsigned int use_autosuspend; + - indicates that the device's driver supports delayed autosuspend (see + Section 9); it may be modified only by the + pm_runtime{_dont}_use_autosuspend() helper functions + + unsigned int timer_autosuspends; + - indicates that the PM core should attempt to carry out an autosuspend + when the timer expires rather than a normal suspend + + int autosuspend_delay; + - the delay time (in milliseconds) to be used for autosuspend + + unsigned long last_busy; + - the time (in jiffies) when the pm_runtime_mark_last_busy() helper + function was last called for this device; used in calculating inactivity + periods for autosuspend + All of the above fields are members of the 'power' member of 'struct device'. 4. 
Run-time PM Device Helper Functions @@ -255,6 +279,12 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h: error code on failure, where -EAGAIN or -EBUSY means it is safe to attempt to suspend the device again in future + int pm_runtime_autosuspend(struct device *dev); + - same as pm_runtime_suspend() except that the autosuspend delay is taken + into account; if pm_runtime_autosuspend_expiration() says the delay has + not yet expired then an autosuspend is scheduled for the appropriate time + and 0 is returned + int pm_runtime_resume(struct device *dev); - execute the subsystem-level resume callback for the device; returns 0 on success, 1 if the device's run-time PM status was already 'active' or @@ -267,6 +297,11 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h: device (the request is represented by a work item in pm_wq); returns 0 on success or error code if the request has not been queued up + int pm_request_autosuspend(struct device *dev); + - schedule the execution of the subsystem-level suspend callback for the + device when the autosuspend delay has expired; if the delay has already + expired then the work item is queued up immediately + int pm_schedule_suspend(struct device *dev, unsigned int delay); - schedule the execution of the subsystem-level suspend callback for the device in future, where 'delay' is the time to wait before queuing up a @@ -298,12 +333,20 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h: - decrement the device's usage counter int pm_runtime_put(struct device *dev); - - decrement the device's usage counter, run pm_request_idle(dev) and return - its result + - decrement the device's usage counter; if the result is 0 then run + pm_request_idle(dev) and return its result + + int pm_runtime_put_autosuspend(struct device *dev); + - decrement the device's usage counter; if the result is 0 then run + pm_request_autosuspend(dev) and return its result int pm_runtime_put_sync(struct device *dev); - - decrement the device's usage counter, run pm_runtime_idle(dev) and return - its result + - decrement the device's usage counter; if the result is 0 then run + pm_runtime_idle(dev) and return its result + + int pm_runtime_put_sync_autosuspend(struct device *dev); + - decrement the device's usage counter; if the result is 0 then run + pm_runtime_autosuspend(dev) and return its result void pm_runtime_enable(struct device *dev); - enable the run-time PM helper functions to run the device bus type's @@ -349,19 +392,51 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h: counter (used by the /sys/devices/.../power/control interface to effectively prevent the device from being power managed at run time) + void pm_runtime_no_callbacks(struct device *dev); + - set the power.no_callbacks flag for the device and remove the run-time + PM attributes from /sys/devices/.../power (or prevent them from being + added when the device is registered) + + void pm_runtime_mark_last_busy(struct device *dev); + - set the power.last_busy field to the current time + + void pm_runtime_use_autosuspend(struct device *dev); + - set the power.use_autosuspend flag, enabling autosuspend delays + + void pm_runtime_dont_use_autosuspend(struct device *dev); + - clear the power.use_autosuspend flag, disabling autosuspend delays + + void pm_runtime_set_autosuspend_delay(struct device *dev, int delay); + - set the power.autosuspend_delay value to 'delay' (expressed in + milliseconds); if 'delay' is negative then run-time suspends are + prevented + + unsigned long 
pm_runtime_autosuspend_expiration(struct device *dev); + - calculate the time when the current autosuspend delay period will expire, + based on power.last_busy and power.autosuspend_delay; if the delay time + is 1000 ms or larger then the expiration time is rounded up to the + nearest second; returns 0 if the delay period has already expired or + power.use_autosuspend isn't set, otherwise returns the expiration time + in jiffies + It is safe to execute the following helper functions from interrupt context: pm_request_idle() +pm_request_autosuspend() pm_schedule_suspend() pm_request_resume() pm_runtime_get_noresume() pm_runtime_get() pm_runtime_put_noidle() pm_runtime_put() +pm_runtime_put_autosuspend() +pm_runtime_enable() pm_suspend_ignore_children() pm_runtime_set_active() pm_runtime_set_suspended() -pm_runtime_enable() +pm_runtime_suspended() +pm_runtime_mark_last_busy() +pm_runtime_autosuspend_expiration() 5. Run-time PM Initialization, Device Probing and Removal @@ -524,3 +599,141 @@ poweroff and run-time suspend callback, and similarly for system resume, thaw, restore, and run-time resume, can achieve this with the help of the UNIVERSAL_DEV_PM_OPS macro defined in include/linux/pm.h (possibly setting its last argument to NULL). + +8. "No-Callback" Devices + +Some "devices" are only logical sub-devices of their parent and cannot be +power-managed on their own. (The prototype example is a USB interface. Entire +USB devices can go into low-power mode or send wake-up requests, but neither is +possible for individual interfaces.) The drivers for these devices have no +need of run-time PM callbacks; if the callbacks did exist, ->runtime_suspend() +and ->runtime_resume() would always return 0 without doing anything else and +->runtime_idle() would always call pm_runtime_suspend(). + +Subsystems can tell the PM core about these devices by calling +pm_runtime_no_callbacks(). This should be done after the device structure is +initialized and before it is registered (although after device registration is +also okay). The routine will set the device's power.no_callbacks flag and +prevent the non-debugging run-time PM sysfs attributes from being created. + +When power.no_callbacks is set, the PM core will not invoke the +->runtime_idle(), ->runtime_suspend(), or ->runtime_resume() callbacks. +Instead it will assume that suspends and resumes always succeed and that idle +devices should be suspended. + +As a consequence, the PM core will never directly inform the device's subsystem +or driver about run-time power changes. Instead, the driver for the device's +parent must take responsibility for telling the device's driver when the +parent's power state changes. + +9. Autosuspend, or automatically-delayed suspends + +Changing a device's power state isn't free; it requires both time and energy. +A device should be put in a low-power state only when there's some reason to +think it will remain in that state for a substantial time. A common heuristic +says that a device which hasn't been used for a while is liable to remain +unused; following this advice, drivers should not allow devices to be suspended +at run-time until they have been inactive for some minimum period. Even when +the heuristic ends up being non-optimal, it will still prevent devices from +"bouncing" too rapidly between low-power and full-power states. + +The term "autosuspend" is an historical remnant. 
It doesn't mean that the +device is automatically suspended (the subsystem or driver still has to call +the appropriate PM routines); rather it means that run-time suspends will +automatically be delayed until the desired period of inactivity has elapsed. + +Inactivity is determined based on the power.last_busy field. Drivers should +call pm_runtime_mark_last_busy() to update this field after carrying out I/O, +typically just before calling pm_runtime_put_autosuspend(). The desired length +of the inactivity period is a matter of policy. Subsystems can set this length +initially by calling pm_runtime_set_autosuspend_delay(), but after device +registration the length should be controlled by user space, using the +/sys/devices/.../power/autosuspend_delay_ms attribute. + +In order to use autosuspend, subsystems or drivers must call +pm_runtime_use_autosuspend() (preferably before registering the device), and +thereafter they should use the various *_autosuspend() helper functions instead +of the non-autosuspend counterparts: + + Instead of: pm_runtime_suspend use: pm_runtime_autosuspend; + Instead of: pm_schedule_suspend use: pm_request_autosuspend; + Instead of: pm_runtime_put use: pm_runtime_put_autosuspend; + Instead of: pm_runtime_put_sync use: pm_runtime_put_sync_autosuspend. + +Drivers may also continue to use the non-autosuspend helper functions; they +will behave normally, not taking the autosuspend delay into account. +Similarly, if the power.use_autosuspend field isn't set then the autosuspend +helper functions will behave just like the non-autosuspend counterparts. + +The implementation is well suited for asynchronous use in interrupt contexts. +However such use inevitably involves races, because the PM core can't +synchronize ->runtime_suspend() callbacks with the arrival of I/O requests. +This synchronization must be handled by the driver, using its private lock. +Here is a schematic pseudo-code example: + + foo_read_or_write(struct foo_priv *foo, void *data) + { + lock(&foo->private_lock); + add_request_to_io_queue(foo, data); + if (foo->num_pending_requests++ == 0) + pm_runtime_get(&foo->dev); + if (!foo->is_suspended) + foo_process_next_request(foo); + unlock(&foo->private_lock); + } + + foo_io_completion(struct foo_priv *foo, void *req) + { + lock(&foo->private_lock); + if (--foo->num_pending_requests == 0) { + pm_runtime_mark_last_busy(&foo->dev); + pm_runtime_put_autosuspend(&foo->dev); + } else { + foo_process_next_request(foo); + } + unlock(&foo->private_lock); + /* Send req result back to the user ... */ + } + + int foo_runtime_suspend(struct device *dev) + { + struct foo_priv foo = container_of(dev, ...); + int ret = 0; + + lock(&foo->private_lock); + if (foo->num_pending_requests > 0) { + ret = -EBUSY; + } else { + /* ... suspend the device ... */ + foo->is_suspended = 1; + } + unlock(&foo->private_lock); + return ret; + } + + int foo_runtime_resume(struct device *dev) + { + struct foo_priv foo = container_of(dev, ...); + + lock(&foo->private_lock); + /* ... resume the device ... */ + foo->is_suspended = 0; + pm_runtime_mark_last_busy(&foo->dev); + if (foo->num_pending_requests > 0) + foo_process_requests(foo); + unlock(&foo->private_lock); + return 0; + } + +The important point is that after foo_io_completion() asks for an autosuspend, +the foo_runtime_suspend() callback may race with foo_read_or_write(). +Therefore foo_runtime_suspend() has to check whether there are any pending I/O +requests (while holding the private lock) before allowing the suspend to +proceed. 
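To make the setup step described above concrete, a driver might enable autosuspend at probe
time roughly as in the following sketch. The foo_probe() routine and its platform_device
binding are hypothetical, the 2000 ms delay is an arbitrary example value, and error handling
is omitted; only the pm_runtime_*() helpers documented earlier are assumed:

	#include <linux/platform_device.h>
	#include <linux/pm_runtime.h>

	static int foo_probe(struct platform_device *pdev)
	{
		struct device *dev = &pdev->dev;

		/* ... allocate driver data, map resources, power up the hardware ... */

		/* Hold a usage reference while setup finishes. */
		pm_runtime_get_noresume(dev);

		/* Opt in to autosuspend with an (arbitrary) 2 second inactivity delay. */
		pm_runtime_set_autosuspend_delay(dev, 2000);
		pm_runtime_use_autosuspend(dev);

		/* Tell the PM core the device is already active, then enable run-time PM. */
		pm_runtime_set_active(dev);
		pm_runtime_enable(dev);

		/* Record activity and drop the reference via the autosuspend path. */
		pm_runtime_mark_last_busy(dev);
		pm_runtime_put_autosuspend(dev);

		return 0;
	}

A subsystem could equally make the pm_runtime_use_autosuspend() and
pm_runtime_set_autosuspend_delay() calls before the device is registered, as recommended
above; the corresponding remove path would normally undo this with
pm_runtime_dont_use_autosuspend() and pm_runtime_disable().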
+ +In addition, the power.autosuspend_delay field can be changed by user space at +any time. If a driver cares about this, it can call +pm_runtime_autosuspend_expiration() from within the ->runtime_suspend() +callback while holding its private lock. If the function returns a nonzero +value then the delay has not yet expired and the callback should return +-EAGAIN. diff --git a/Documentation/power/s2ram.txt b/Documentation/power/s2ram.txt index 514b94fc931e..1bdfa0443773 100644 --- a/Documentation/power/s2ram.txt +++ b/Documentation/power/s2ram.txt @@ -49,6 +49,13 @@ machine that doesn't boot) is: device (lspci and /sys/devices/pci* is your friend), and see if you can fix it, disable it, or trace into its resume function. + If no device matches the hash (or any matches appear to be false positives), + the culprit may be a device from a loadable kernel module that is not loaded + until after the hash is checked. You can check the hash against the current + devices again after more modules are loaded using sysfs: + + cat /sys/power/pm_trace_dev_match + For example, the above happens to be the VGA device on my EVO, which I used to run with "radeonfb" (it's an ATI Radeon mobility). It turns out that "radeonfb" simply cannot resume that device - it tries to set the diff --git a/Documentation/power/swsusp.txt b/Documentation/power/swsusp.txt index 9d60ab717a7b..ea718891a665 100644 --- a/Documentation/power/swsusp.txt +++ b/Documentation/power/swsusp.txt @@ -66,7 +66,8 @@ swsusp saves the state of the machine into active swaps and then reboots or powerdowns. You must explicitly specify the swap partition to resume from with ``resume='' kernel option. If signature is found it loads and restores saved state. If the option ``noresume'' is specified as a boot parameter, it skips -the resuming. +the resuming. If the option ``hibernate=nocompress'' is specified as a boot +parameter, it saves hibernation image without compression. In the meantime while the system is suspended you should not add/remove any of the hardware, write to the filesystems, etc. diff --git a/Documentation/powerpc/dts-bindings/fsl/spi.txt b/Documentation/powerpc/dts-bindings/fsl/spi.txt index 80510c018eea..777abd7399d5 100644 --- a/Documentation/powerpc/dts-bindings/fsl/spi.txt +++ b/Documentation/powerpc/dts-bindings/fsl/spi.txt @@ -1,7 +1,9 @@ * SPI (Serial Peripheral Interface) Required properties: -- cell-index : SPI controller index. +- cell-index : QE SPI subblock index. + 0: QE subblock SPI1 + 1: QE subblock SPI2 - compatible : should be "fsl,spi". - mode : the SPI operation mode, it can be "cpu" or "cpu-qe". - reg : Offset and length of the register set for the device @@ -29,3 +31,23 @@ Example: gpios = <&gpio 18 1 // device reg=<0> &gpio 19 1>; // device reg=<1> }; + + +* eSPI (Enhanced Serial Peripheral Interface) + +Required properties: +- compatible : should be "fsl,mpc8536-espi". +- reg : Offset and length of the register set for the device. +- interrupts : should contain eSPI interrupt, the device has one interrupt. +- fsl,espi-num-chipselects : the number of the chipselect signals. 
+ + Example: + spi@110000 { + #address-cells = <1>; + #size-cells = <0>; + compatible = "fsl,mpc8536-espi"; + reg = <0x110000 0x1000>; + interrupts = <53 0x2>; + interrupt-parent = <&mpic>; + fsl,espi-num-chipselects = <4>; + }; diff --git a/Documentation/powerpc/dts-bindings/fsl/usb.txt b/Documentation/powerpc/dts-bindings/fsl/usb.txt index b00152402694..bd5723f0b67e 100644 --- a/Documentation/powerpc/dts-bindings/fsl/usb.txt +++ b/Documentation/powerpc/dts-bindings/fsl/usb.txt @@ -8,6 +8,7 @@ and additions : Required properties : - compatible : Should be "fsl-usb2-mph" for multi port host USB controllers, or "fsl-usb2-dr" for dual role USB controllers + or "fsl,mpc5121-usb2-dr" for dual role USB controllers of MPC5121 - phy_type : For multi port host USB controllers, should be one of "ulpi", or "serial". For dual role USB controllers, should be one of "ulpi", "utmi", "utmi_wide", or "serial". @@ -33,6 +34,12 @@ Recommended properties : - interrupt-parent : the phandle for the interrupt controller that services interrupts for this device. +Optional properties : + - fsl,invert-drvvbus : boolean; for MPC5121 USB0 only. Indicates the + port power polarity of internal PHY signal DRVVBUS is inverted. + - fsl,invert-pwr-fault : boolean; for MPC5121 USB0 only. Indicates + the PWR_FAULT signal polarity is inverted. + Example multi port host USB controller device node : usb@22000 { compatible = "fsl-usb2-mph"; @@ -57,3 +64,18 @@ Example dual role USB controller device node : dr_mode = "otg"; phy = "ulpi"; }; + +Example dual role USB controller device node for MPC5121ADS: + + usb@4000 { + compatible = "fsl,mpc5121-usb2-dr"; + reg = <0x4000 0x1000>; + #address-cells = <1>; + #size-cells = <0>; + interrupt-parent = < &ipic >; + interrupts = <44 0x8>; + dr_mode = "otg"; + phy_type = "utmi_wide"; + fsl,invert-drvvbus; + fsl,invert-pwr-fault; + }; diff --git a/Documentation/scsi/st.txt b/Documentation/scsi/st.txt index 40752602c050..691ca292c24d 100644 --- a/Documentation/scsi/st.txt +++ b/Documentation/scsi/st.txt @@ -2,7 +2,7 @@ This file contains brief information about the SCSI tape driver. The driver is currently maintained by Kai Mäkisara (email Kai.Makisara@kolumbus.fi) -Last modified: Sun Feb 24 21:59:07 2008 by kai.makisara +Last modified: Sun Aug 29 18:25:47 2010 by kai.makisara BASICS @@ -85,6 +85,17 @@ writing and the last operation has been a write. Two filemarks can be optionally written. In both cases end of data is signified by returning zero bytes for two consecutive reads. +Writing filemarks without the immediate bit set in the SCSI command block acts +as a synchronization point, i.e., all remaining data from the drive buffers is +written to tape before the command returns. This makes sure that write errors +are caught at that point, but this takes time. In some applications, several +consecutive files must be written fast. The MTWEOFI operation can be used to +write the filemarks without flushing the drive buffer. Writing a filemark at +close() always flushes the drive buffers. However, if the previous +operation is MTWEOFI, close() does not write a filemark. This can be used if +the program wants to close/open the tape device between files and wants to +skip waiting. + If rewind, offline, bsf, or seek is done and previous tape operation was write, a filemark is written before moving tape. @@ -301,6 +312,8 @@ MTBSR Space backward over count records. MTFSS Space forward over count setmarks. MTBSS Space backward over count setmarks. MTWEOF Write count filemarks.
+MTWEOFI Write count filemarks with immediate bit set (i.e., does not + wait until data is on tape) MTWSM Write count setmarks. MTREW Rewind tape. MTOFFL Set device off line (often rewind plus eject). diff --git a/Documentation/sound/alsa/ALSA-Configuration.txt b/Documentation/sound/alsa/ALSA-Configuration.txt index 7f4dcebda9c6..d0eb696d32e8 100644 --- a/Documentation/sound/alsa/ALSA-Configuration.txt +++ b/Documentation/sound/alsa/ALSA-Configuration.txt @@ -300,6 +300,74 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed. control correctly. If you have problems regarding this, try another ALSA compliant mixer (alsamixer works). + Module snd-azt1605 + ------------------ + + Module for Aztech Sound Galaxy soundcards based on the Aztech AZT1605 + chipset. + + port - port # for BASE (0x220,0x240,0x260,0x280) + wss_port - port # for WSS (0x530,0x604,0xe80,0xf40) + irq - IRQ # for WSS (7,9,10,11) + dma1 - DMA # for WSS playback (0,1,3) + dma2 - DMA # for WSS capture (0,1), -1 = disabled (default) + mpu_port - port # for MPU-401 UART (0x300,0x330), -1 = disabled (default) + mpu_irq - IRQ # for MPU-401 UART (3,5,7,9), -1 = disabled (default) + fm_port - port # for OPL3 (0x388), -1 = disabled (default) + + This module supports multiple cards. It does not support autoprobe: port, + wss_port, irq and dma1 have to be specified. The other values are + optional. + + "port" needs to match the BASE ADDRESS jumper on the card (0x220 or 0x240) + or the value stored in the card's EEPROM for cards that have an EEPROM and + their "CONFIG MODE" jumper set to "EEPROM SETTING". The other values can + be choosen freely from the options enumerated above. + + If dma2 is specified and different from dma1, the card will operate in + full-duplex mode. When dma1=3, only dma2=0 is valid and the only way to + enable capture since only channels 0 and 1 are available for capture. + + Generic settings are "port=0x220 wss_port=0x530 irq=10 dma1=1 dma2=0 + mpu_port=0x330 mpu_irq=9 fm_port=0x388". + + Whatever IRQ and DMA channels you pick, be sure to reserve them for + legacy ISA in your BIOS. + + Module snd-azt2316 + ------------------ + + Module for Aztech Sound Galaxy soundcards based on the Aztech AZT2316 + chipset. + + port - port # for BASE (0x220,0x240,0x260,0x280) + wss_port - port # for WSS (0x530,0x604,0xe80,0xf40) + irq - IRQ # for WSS (7,9,10,11) + dma1 - DMA # for WSS playback (0,1,3) + dma2 - DMA # for WSS capture (0,1), -1 = disabled (default) + mpu_port - port # for MPU-401 UART (0x300,0x330), -1 = disabled (default) + mpu_irq - IRQ # for MPU-401 UART (5,7,9,10), -1 = disabled (default) + fm_port - port # for OPL3 (0x388), -1 = disabled (default) + + This module supports multiple cards. It does not support autoprobe: port, + wss_port, irq and dma1 have to be specified. The other values are + optional. + + "port" needs to match the BASE ADDRESS jumper on the card (0x220 or 0x240) + or the value stored in the card's EEPROM for cards that have an EEPROM and + their "CONFIG MODE" jumper set to "EEPROM SETTING". The other values can + be choosen freely from the options enumerated above. + + If dma2 is specified and different from dma1, the card will operate in + full-duplex mode. When dma1=3, only dma2=0 is valid and the only way to + enable capture since only channels 0 and 1 are available for capture. + + Generic settings are "port=0x220 wss_port=0x530 irq=10 dma1=1 dma2=0 + mpu_port=0x330 mpu_irq=9 fm_port=0x388". 
+ + Whatever IRQ and DMA channels you pick, be sure to reserve them for + legacy ISA in your BIOS. + Module snd-aw2 -------------- @@ -1641,20 +1709,6 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed. This card is also known as Audio Excel DSP 16 or Zoltrix AV302. - Module snd-sgalaxy - ------------------ - - Module for Aztech Sound Galaxy sound card. - - sbport - Port # for SB16 interface (0x220,0x240) - wssport - Port # for WSS interface (0x530,0xe80,0xf40,0x604) - irq - IRQ # (7,9,10,11) - dma1 - DMA # - - This module supports multiple cards. - - The power-management is supported. - Module snd-sscape ----------------- diff --git a/Documentation/sound/alsa/HD-Audio-Models.txt b/Documentation/sound/alsa/HD-Audio-Models.txt index ce46fa1e643e..37c6aad5e590 100644 --- a/Documentation/sound/alsa/HD-Audio-Models.txt +++ b/Documentation/sound/alsa/HD-Audio-Models.txt @@ -296,6 +296,7 @@ Conexant 5051 Conexant 5066 ============= laptop Basic Laptop config (default) + hp-laptop HP laptops, e g G60 dell-laptop Dell laptops dell-vostro Dell Vostro olpc-xo-1_5 OLPC XO 1.5 diff --git a/Documentation/sound/alsa/HD-Audio.txt b/Documentation/sound/alsa/HD-Audio.txt index 278cc2122ea0..c82beb007634 100644 --- a/Documentation/sound/alsa/HD-Audio.txt +++ b/Documentation/sound/alsa/HD-Audio.txt @@ -57,9 +57,11 @@ dead. However, this detection isn't perfect on some devices. In such a case, you can change the default method via `position_fix` option. `position_fix=1` means to use LPIB method explicitly. -`position_fix=2` means to use the position-buffer. 0 is the default -value, the automatic check and fallback to LPIB as described in the -above. If you get a problem of repeated sounds, this option might +`position_fix=2` means to use the position-buffer. +`position_fix=3` means to use a combination of both methods, needed +for some VIA and ATI controllers. 0 is the default value for all other +controllers, the automatic check and fallback to LPIB as described in +the above. If you get a problem of repeated sounds, this option might help. In addition to that, every controller is known to be broken regarding diff --git a/Documentation/usb/proc_usb_info.txt b/Documentation/usb/proc_usb_info.txt index fafcd4723260..afe596d5f201 100644 --- a/Documentation/usb/proc_usb_info.txt +++ b/Documentation/usb/proc_usb_info.txt @@ -1,12 +1,17 @@ /proc/bus/usb filesystem output =============================== -(version 2003.05.30) +(version 2010.09.13) The usbfs filesystem for USB devices is traditionally mounted at /proc/bus/usb. It provides the /proc/bus/usb/devices file, as well as the /proc/bus/usb/BBB/DDD files. +In many modern systems the usbfs filsystem isn't used at all. Instead +USB device nodes are created under /dev/usb/ or someplace similar. The +"devices" file is available in debugfs, typically as +/sys/kernel/debug/usb/devices. 
+ **NOTE**: If /proc/bus/usb appears empty, and a host controller driver has been linked, then you need to mount the @@ -106,8 +111,8 @@ Legend: Topology info: -T: Bus=dd Lev=dd Prnt=dd Port=dd Cnt=dd Dev#=ddd Spd=ddd MxCh=dd -| | | | | | | | |__MaxChildren +T: Bus=dd Lev=dd Prnt=dd Port=dd Cnt=dd Dev#=ddd Spd=dddd MxCh=dd +| | | | | | | | |__MaxChildren | | | | | | | |__Device Speed in Mbps | | | | | | |__DeviceNumber | | | | | |__Count of devices at this level @@ -120,8 +125,13 @@ T: Bus=dd Lev=dd Prnt=dd Port=dd Cnt=dd Dev#=ddd Spd=ddd MxCh=dd Speed may be: 1.5 Mbit/s for low speed USB 12 Mbit/s for full speed USB - 480 Mbit/s for high speed USB (added for USB 2.0) + 480 Mbit/s for high speed USB (added for USB 2.0); + also used for Wireless USB, which has no fixed speed + 5000 Mbit/s for SuperSpeed USB (added for USB 3.0) + For reasons lost in the mists of time, the Port number is always + too low by 1. For example, a device plugged into port 4 will + show up with "Port=03". Bandwidth info: B: Alloc=ddd/ddd us (xx%), #Int=ddd, #Iso=ddd @@ -291,7 +301,7 @@ Here's an example, from a system which has a UHCI root hub, an external hub connected to the root hub, and a mouse and a serial converter connected to the external hub. -T: Bus=00 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=12 MxCh= 2 +T: Bus=00 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=12 MxCh= 2 B: Alloc= 28/900 us ( 3%), #Int= 2, #Iso= 0 D: Ver= 1.00 Cls=09(hub ) Sub=00 Prot=00 MxPS= 8 #Cfgs= 1 P: Vendor=0000 ProdID=0000 Rev= 0.00 @@ -301,21 +311,21 @@ C:* #Ifs= 1 Cfg#= 1 Atr=40 MxPwr= 0mA I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub E: Ad=81(I) Atr=03(Int.) MxPS= 8 Ivl=255ms -T: Bus=00 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 2 Spd=12 MxCh= 4 +T: Bus=00 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 2 Spd=12 MxCh= 4 D: Ver= 1.00 Cls=09(hub ) Sub=00 Prot=00 MxPS= 8 #Cfgs= 1 P: Vendor=0451 ProdID=1446 Rev= 1.00 C:* #Ifs= 1 Cfg#= 1 Atr=e0 MxPwr=100mA I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub E: Ad=81(I) Atr=03(Int.) MxPS= 1 Ivl=255ms -T: Bus=00 Lev=02 Prnt=02 Port=00 Cnt=01 Dev#= 3 Spd=1.5 MxCh= 0 +T: Bus=00 Lev=02 Prnt=02 Port=00 Cnt=01 Dev#= 3 Spd=1.5 MxCh= 0 D: Ver= 1.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS= 8 #Cfgs= 1 P: Vendor=04b4 ProdID=0001 Rev= 0.00 C:* #Ifs= 1 Cfg#= 1 Atr=80 MxPwr=100mA I: If#= 0 Alt= 0 #EPs= 1 Cls=03(HID ) Sub=01 Prot=02 Driver=mouse E: Ad=81(I) Atr=03(Int.) MxPS= 3 Ivl= 10ms -T: Bus=00 Lev=02 Prnt=02 Port=02 Cnt=02 Dev#= 4 Spd=12 MxCh= 0 +T: Bus=00 Lev=02 Prnt=02 Port=02 Cnt=02 Dev#= 4 Spd=12 MxCh= 0 D: Ver= 1.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS= 8 #Cfgs= 1 P: Vendor=0565 ProdID=0001 Rev= 1.08 S: Manufacturer=Peracom Networks, Inc. @@ -330,12 +340,12 @@ E: Ad=82(I) Atr=03(Int.) 
MxPS= 8 Ivl= 8ms Selecting only the "T:" and "I:" lines from this (for example, by using "procusb ti"), we have: -T: Bus=00 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=12 MxCh= 2 -T: Bus=00 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 2 Spd=12 MxCh= 4 +T: Bus=00 Lev=00 Prnt=00 Port=00 Cnt=00 Dev#= 1 Spd=12 MxCh= 2 +T: Bus=00 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 2 Spd=12 MxCh= 4 I: If#= 0 Alt= 0 #EPs= 1 Cls=09(hub ) Sub=00 Prot=00 Driver=hub -T: Bus=00 Lev=02 Prnt=02 Port=00 Cnt=01 Dev#= 3 Spd=1.5 MxCh= 0 +T: Bus=00 Lev=02 Prnt=02 Port=00 Cnt=01 Dev#= 3 Spd=1.5 MxCh= 0 I: If#= 0 Alt= 0 #EPs= 1 Cls=03(HID ) Sub=01 Prot=02 Driver=mouse -T: Bus=00 Lev=02 Prnt=02 Port=02 Cnt=02 Dev#= 4 Spd=12 MxCh= 0 +T: Bus=00 Lev=02 Prnt=02 Port=02 Cnt=02 Dev#= 4 Spd=12 MxCh= 0 I: If#= 0 Alt= 0 #EPs= 3 Cls=00(>ifc ) Sub=00 Prot=00 Driver=serial diff --git a/Documentation/vm/numa_memory_policy.txt b/Documentation/vm/numa_memory_policy.txt index 6690fc34ef6d..4e7da6543424 100644 --- a/Documentation/vm/numa_memory_policy.txt +++ b/Documentation/vm/numa_memory_policy.txt @@ -424,7 +424,7 @@ a command line tool, numactl(8), exists that allows one to: + set the shared policy for a shared memory segment via mbind(2) -The numactl(8) tool is packages with the run-time version of the library +The numactl(8) tool is packaged with the run-time version of the library containing the memory policy system call wrappers. Some distributions package the headers and compile-time libraries in a separate development package. diff --git a/Documentation/vm/page-types.c b/Documentation/vm/page-types.c index ccd951fa94ee..cc96ee2666f2 100644 --- a/Documentation/vm/page-types.c +++ b/Documentation/vm/page-types.c @@ -478,7 +478,7 @@ static void prepare_hwpoison_fd(void) } if (opt_unpoison && !hwpoison_forget_fd) { - sprintf(buf, "%s/renew-pfn", hwpoison_debug_fs); + sprintf(buf, "%s/unpoison-pfn", hwpoison_debug_fs); hwpoison_forget_fd = checked_open(buf, O_WRONLY); } } diff --git a/Documentation/workqueue.txt b/Documentation/workqueue.txt new file mode 100644 index 000000000000..996a27d9b8db --- /dev/null +++ b/Documentation/workqueue.txt @@ -0,0 +1,381 @@ + +Concurrency Managed Workqueue (cmwq) + +September, 2010 Tejun Heo <tj@kernel.org> + Florian Mickler <florian@mickler.org> + +CONTENTS + +1. Introduction +2. Why cmwq? +3. The Design +4. Application Programming Interface (API) +5. Example Execution Scenarios +6. Guidelines + + +1. Introduction + +There are many cases where an asynchronous process execution context +is needed and the workqueue (wq) API is the most commonly used +mechanism for such cases. + +When such an asynchronous execution context is needed, a work item +describing which function to execute is put on a queue. An +independent thread serves as the asynchronous execution context. The +queue is called workqueue and the thread is called worker. + +While there are work items on the workqueue the worker executes the +functions associated with the work items one after the other. When +there is no work item left on the workqueue the worker becomes idle. +When a new work item gets queued, the worker begins executing again. + + +2. Why cmwq? + +In the original wq implementation, a multi threaded (MT) wq had one +worker thread per CPU and a single threaded (ST) wq had one worker +thread system-wide. A single MT wq needed to keep around the same +number of workers as the number of CPUs. 
The kernel grew a lot of MT +wq users over the years and with the number of CPU cores continuously +rising, some systems saturated the default 32k PID space just booting +up. + +Although MT wq wasted a lot of resources, the level of concurrency +provided was unsatisfactory. The limitation was common to both ST and +MT wq, albeit less severe on MT. Each wq maintained its own separate +worker pool. A MT wq could provide only one execution context per CPU, +while a ST wq provided only one for the whole system. Work items had to compete for +those very limited execution contexts, leading to various problems +including proneness to deadlocks around the single execution context. + +The tension between the provided level of concurrency and resource +usage also forced its users to make unnecessary tradeoffs, like libata +choosing to use ST wq for polling PIOs and accepting an unnecessary +limitation that no two polling PIOs can progress at the same time. Because +MT wq did not provide much better concurrency, users which required a +higher level of concurrency, like async or fscache, had to implement +their own thread pools. + +Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with +a focus on the following goals. + +* Maintain compatibility with the original workqueue API. + +* Use per-CPU unified worker pools shared by all wq to provide + a flexible level of concurrency on demand without wasting a lot of + resources. + +* Automatically regulate the worker pool and level of concurrency so that + API users don't need to worry about such details. + + +3. The Design + +In order to ease the asynchronous execution of functions, a new +abstraction, the work item, is introduced. + +A work item is a simple struct that holds a pointer to the function +that is to be executed asynchronously. Whenever a driver or subsystem +wants a function to be executed asynchronously it has to set up a work +item pointing to that function and queue that work item on a +workqueue. + +Special-purpose threads, called worker threads, execute the functions +off of the queue, one after the other. If no work is queued, the +worker threads become idle. These worker threads are managed in +so-called thread-pools. + +The cmwq design differentiates between the user-facing workqueues that +subsystems and drivers queue work items on and the backend mechanism +which manages the thread-pools and processes the queued work items. + +The backend is called gcwq. There is one gcwq for each possible CPU +and one gcwq to serve work items queued on unbound workqueues. + +Subsystems and drivers can create and queue work items through special +workqueue API functions as they see fit. They can influence some +aspects of the way the work items are executed by setting flags on the +workqueue they are putting the work item on. These flags include +things like CPU locality, reentrancy, concurrency limits and more. To +get a detailed overview refer to the API description of +alloc_workqueue() below. + +When a work item is queued to a workqueue, the target gcwq is +determined according to the queue parameters and workqueue attributes +and appended to the shared worklist of the gcwq. For example, unless +specifically overridden, a work item of a bound workqueue will be +queued on the worklist of exactly that gcwq that is associated with the +CPU the issuer is running on. + +For any worker pool implementation, managing the concurrency level +(how many execution contexts are active) is an important issue. cmwq +tries to keep the concurrency at a minimal but sufficient level.
+Minimal to save resources and sufficient in that the system is used at +its full capacity. + +Each gcwq bound to an actual CPU implements concurrency management by +hooking into the scheduler. The gcwq is notified whenever an active +worker wakes up or sleeps and keeps track of the number of the +currently runnable workers. Generally, work items are not expected to +hog a CPU and consume many cycles. That means maintaining just enough +concurrency to prevent work processing from stalling should be +optimal. As long as there are one or more runnable workers on the +CPU, the gcwq doesn't start execution of a new work, but, when the +last running worker goes to sleep, it immediately schedules a new +worker so that the CPU doesn't sit idle while there are pending work +items. This allows using a minimal number of workers without losing +execution bandwidth. + +Keeping idle workers around doesn't cost other than the memory space +for kthreads, so cmwq holds onto idle ones for a while before killing +them. + +For an unbound wq, the above concurrency management doesn't apply and +the gcwq for the pseudo unbound CPU tries to start executing all work +items as soon as possible. The responsibility of regulating +concurrency level is on the users. There is also a flag to mark a +bound wq to ignore the concurrency management. Please refer to the +API section for details. + +Forward progress guarantee relies on that workers can be created when +more execution contexts are necessary, which in turn is guaranteed +through the use of rescue workers. All work items which might be used +on code paths that handle memory reclaim are required to be queued on +wq's that have a rescue-worker reserved for execution under memory +pressure. Else it is possible that the thread-pool deadlocks waiting +for execution contexts to free up. + + +4. Application Programming Interface (API) + +alloc_workqueue() allocates a wq. The original create_*workqueue() +functions are deprecated and scheduled for removal. alloc_workqueue() +takes three arguments - @name, @flags and @max_active. @name is the +name of the wq and also used as the name of the rescuer thread if +there is one. + +A wq no longer manages execution resources but serves as a domain for +forward progress guarantee, flush and work item attributes. @flags +and @max_active control how work items are assigned execution +resources, scheduled and executed. + +@flags: + + WQ_NON_REENTRANT + + By default, a wq guarantees non-reentrance only on the same + CPU. A work item may not be executed concurrently on the same + CPU by multiple workers but is allowed to be executed + concurrently on multiple CPUs. This flag makes sure + non-reentrance is enforced across all CPUs. Work items queued + to a non-reentrant wq are guaranteed to be executed by at most + one worker system-wide at any given time. + + WQ_UNBOUND + + Work items queued to an unbound wq are served by a special + gcwq which hosts workers which are not bound to any specific + CPU. This makes the wq behave as a simple execution context + provider without concurrency management. The unbound gcwq + tries to start execution of work items as soon as possible. + Unbound wq sacrifices locality but is useful for the following + cases. + + * Wide fluctuation in the concurrency level requirement is + expected and using bound wq may end up creating large number + of mostly unused workers across different CPUs as the issuer + hops through different CPUs. 
+ + * Long running CPU intensive workloads which can be better + managed by the system scheduler. + + WQ_FREEZEABLE + + A freezeable wq participates in the freeze phase of the system + suspend operations. Work items on the wq are drained and no + new work item starts execution until thawed. + + WQ_MEM_RECLAIM + + All wq which might be used in the memory reclaim paths _MUST_ + have this flag set. The wq is guaranteed to have at least one + execution context regardless of memory pressure. + + WQ_HIGHPRI + + Work items of a highpri wq are queued at the head of the + worklist of the target gcwq and start execution regardless of + the current concurrency level. In other words, highpri work + items will always start execution as soon as execution + resource is available. + + Ordering among highpri work items is preserved - a highpri + work item queued after another highpri work item will start + execution after the earlier highpri work item starts. + + Although highpri work items are not held back by other + runnable work items, they still contribute to the concurrency + level. Highpri work items in runnable state will prevent + non-highpri work items from starting execution. + + This flag is meaningless for unbound wq. + + WQ_CPU_INTENSIVE + + Work items of a CPU intensive wq do not contribute to the + concurrency level. In other words, runnable CPU intensive + work items will not prevent other work items from starting + execution. This is useful for bound work items which are + expected to hog CPU cycles so that their execution is + regulated by the system scheduler. + + Although CPU intensive work items don't contribute to the + concurrency level, start of their executions is still + regulated by the concurrency management and runnable + non-CPU-intensive work items can delay execution of CPU + intensive work items. + + This flag is meaningless for unbound wq. + + WQ_HIGHPRI | WQ_CPU_INTENSIVE + + This combination makes the wq avoid interaction with + concurrency management completely and behave as a simple + per-CPU execution context provider. Work items queued on a + highpri CPU-intensive wq start execution as soon as resources + are available and don't affect execution of other work items. + +@max_active: + +@max_active determines the maximum number of execution contexts per +CPU which can be assigned to the work items of a wq. For example, +with @max_active of 16, at most 16 work items of the wq can be +executing at the same time per CPU. + +Currently, for a bound wq, the maximum limit for @max_active is 512 +and the default value used when 0 is specified is 256. For an unbound +wq, the limit is higher of 512 and 4 * num_possible_cpus(). These +values are chosen sufficiently high such that they are not the +limiting factor while providing protection in runaway cases. + +The number of active work items of a wq is usually regulated by the +users of the wq, more specifically, by how many work items the users +may queue at the same time. Unless there is a specific need for +throttling the number of active work items, specifying '0' is +recommended. + +Some users depend on the strict execution ordering of ST wq. The +combination of @max_active of 1 and WQ_UNBOUND is used to achieve this +behavior. Work items on such wq are always queued to the unbound gcwq +and only one work item can be active at any given time thus achieving +the same ordering property as ST wq. + + +5. 
Example Execution Scenarios + +The following example execution scenarios try to illustrate how cmwq +behave under different configurations. + + Work items w0, w1, w2 are queued to a bound wq q0 on the same CPU. + w0 burns CPU for 5ms then sleeps for 10ms then burns CPU for 5ms + again before finishing. w1 and w2 burn CPU for 5ms then sleep for + 10ms. + +Ignoring all other tasks, works and processing overhead, and assuming +simple FIFO scheduling, the following is one highly simplified version +of possible sequences of events with the original wq. + + TIME IN MSECS EVENT + 0 w0 starts and burns CPU + 5 w0 sleeps + 15 w0 wakes up and burns CPU + 20 w0 finishes + 20 w1 starts and burns CPU + 25 w1 sleeps + 35 w1 wakes up and finishes + 35 w2 starts and burns CPU + 40 w2 sleeps + 50 w2 wakes up and finishes + +And with cmwq with @max_active >= 3, + + TIME IN MSECS EVENT + 0 w0 starts and burns CPU + 5 w0 sleeps + 5 w1 starts and burns CPU + 10 w1 sleeps + 10 w2 starts and burns CPU + 15 w2 sleeps + 15 w0 wakes up and burns CPU + 20 w0 finishes + 20 w1 wakes up and finishes + 25 w2 wakes up and finishes + +If @max_active == 2, + + TIME IN MSECS EVENT + 0 w0 starts and burns CPU + 5 w0 sleeps + 5 w1 starts and burns CPU + 10 w1 sleeps + 15 w0 wakes up and burns CPU + 20 w0 finishes + 20 w1 wakes up and finishes + 20 w2 starts and burns CPU + 25 w2 sleeps + 35 w2 wakes up and finishes + +Now, let's assume w1 and w2 are queued to a different wq q1 which has +WQ_HIGHPRI set, + + TIME IN MSECS EVENT + 0 w1 and w2 start and burn CPU + 5 w1 sleeps + 10 w2 sleeps + 10 w0 starts and burns CPU + 15 w0 sleeps + 15 w1 wakes up and finishes + 20 w2 wakes up and finishes + 25 w0 wakes up and burns CPU + 30 w0 finishes + +If q1 has WQ_CPU_INTENSIVE set, + + TIME IN MSECS EVENT + 0 w0 starts and burns CPU + 5 w0 sleeps + 5 w1 and w2 start and burn CPU + 10 w1 sleeps + 15 w2 sleeps + 15 w0 wakes up and burns CPU + 20 w0 finishes + 20 w1 wakes up and finishes + 25 w2 wakes up and finishes + + +6. Guidelines + +* Do not forget to use WQ_MEM_RECLAIM if a wq may process work items + which are used during memory reclaim. Each wq with WQ_MEM_RECLAIM + set has an execution context reserved for it. If there is + dependency among multiple work items used during memory reclaim, + they should be queued to separate wq each with WQ_MEM_RECLAIM. + +* Unless strict ordering is required, there is no need to use ST wq. + +* Unless there is a specific need, using 0 for @max_active is + recommended. In most use cases, concurrency level usually stays + well under the default limit. + +* A wq serves as a domain for forward progress guarantee + (WQ_MEM_RECLAIM, flush and work item attributes. Work items which + are not involved in memory reclaim and don't need to be flushed as a + part of a group of work items, and don't require any special + attribute, can use one of the system wq. There is no difference in + execution characteristics between using a dedicated wq and a system + wq. + +* Unless work items are expected to consume a huge amount of CPU + cycles, using a bound wq is usually beneficial due to the increased + level of locality in wq operations and work item execution. diff --git a/Documentation/x86/x86_64/kernel-stacks b/Documentation/x86/x86_64/kernel-stacks index 5ad65d51fb95..a01eec5d1d0b 100644 --- a/Documentation/x86/x86_64/kernel-stacks +++ b/Documentation/x86/x86_64/kernel-stacks @@ -18,9 +18,9 @@ specialized stacks contain no useful data. The main CPU stacks are: Used for external hardware interrupts. 
If this is the first external hardware interrupt (i.e. not a nested hardware interrupt) then the kernel switches from the current task to the interrupt stack. Like - the split thread and interrupt stacks on i386 (with CONFIG_4KSTACKS), - this gives more room for kernel interrupt processing without having - to increase the size of every per thread stack. + the split thread and interrupt stacks on i386, this gives more room + for kernel interrupt processing without having to increase the size + of every per thread stack. The interrupt stack is also used when processing a softirq. |